
Brain-Stem Software Strategy™

by Cesar A. Brea

March 31, 2002


Brain-Stem Strategy Questions

  • What activities or problems do many people find expensive or painful enough to be on their radar screens?

  • What technologies will soon perform well enough, or become cheap enough to help?

  • How can these technologies be applied to lower costs
    and ease pain? What cool new things become possible that will create
    new revenue opportunities for customers? Who can afford them? What else
    becomes a “must-have” at the same time?

  • Who else sees this? What are they doing about it? Are they ahead, or do they have enough money and talent that we should worry?

  • What’s our plan for beating them to the punch, and making money at it?

Let’s consider an example and then generalize from there.

Markets Worth Serving: A Case Study of Web-Based CRM

Take customer service. It has both implicit and explicit costs.
Try to skimp on it, and you get higher product returns, warranty
expenses, and customer defections, plus a damaged reputation that
keeps new customers from coming on board. As customer expectations for
quality and service have risen over the last couple of decades, so has
spending on customer service. Building and running call centers became
a big expense in some industries. Finding ways to cut this cost
while improving service became a worthwhile problem to work on.

Along comes the web. At first, not many people were hooked up
to it, and those who were mostly had slow connections. While that
didn’t make for a great browsing experience, it was perfect for email.
Compared with wading through an IVR (interactive voice response)
menu, giving up, and then holding for one of the handful of operators
left after the rest were fired to pay for the IVR, email is a great
substitute for 80% of customer service transactions.

Next thing, managers come into work to find the
“customer_service@XYZCorp.com” mailbox full. Sorting through messages
to figure out what each complaint is about and to whom to route it
becomes painful and expensive itself. Into the market rush new ISVs
with “Bayesian Inference Engines” to read and classify messages, and
workflow applications to route and track their resolution. CRM is born.
At one point, Kana, one of the early leaders, reaches a market
capitalization approaching $25 billion (roughly two and a half
times GM’s at the time).
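
To make the “Bayesian Inference Engine” idea concrete, here is a minimal
sketch, in Python, of the core technique those products wrapped: a naive
Bayes classifier that routes an incoming message to a service queue based
on word frequencies in previously routed mail. The queue names and training
messages are invented for illustration; real products added workflow,
confidence thresholds, and a great deal more engineering around the same idea.

    # Minimal naive Bayes email router: a hedged sketch of the
    # "Bayesian inference engine" idea, not any vendor's actual product.
    import math
    from collections import Counter, defaultdict

    def tokenize(text):
        return text.lower().split()

    class NaiveBayesRouter:
        def __init__(self):
            self.doc_counts = Counter()              # messages seen per queue
            self.word_counts = defaultdict(Counter)  # word counts per queue
            self.vocab = set()

        def train(self, text, queue):
            self.doc_counts[queue] += 1
            for word in tokenize(text):
                self.word_counts[queue][word] += 1
                self.vocab.add(word)

        def classify(self, text):
            total = sum(self.doc_counts.values())
            best_queue, best_score = None, float("-inf")
            for queue, n_docs in self.doc_counts.items():
                # log P(queue) + sum of log P(word | queue), with add-one
                # smoothing so unseen words don't zero out a queue
                score = math.log(n_docs / total)
                n_words = sum(self.word_counts[queue].values())
                for word in tokenize(text):
                    count = self.word_counts[queue][word] + 1
                    score += math.log(count / (n_words + len(self.vocab)))
                if score > best_score:
                    best_queue, best_score = queue, score
            return best_queue

    router = NaiveBayesRouter()
    router.train("my invoice shows the wrong amount", "billing")
    router.train("the unit arrived broken and I want a refund", "returns")
    router.train("how do I reset my password", "support")
    print(router.classify("the invoice amount looks wrong"))  # -> billing

Even this toy makes the business problem visible: accuracy depends entirely
on how well past routing decisions predict future mail, which is part of why
the early engines disappointed.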

But the bloom comes off the CRM rose quickly. Email’s easy for
customers, so they send lots of it and get mad when it isn’t answered
quickly. Unfortunately, CRM vendors’ technologies and applications are
hyped ahead of what they can actually support. It turns out the
automated email readers aren’t very accurate, so loads of messages
still have to be read, sorted, and answered manually. And early
workflow capabilities are insufficiently flexible to fit target
business processes and handle exceptions. From a competitive angle, big
ERP vendors slap web UIs on their old client-server CRM versions and
push their way into the market, crushing prices as they turn a product
category into a feature of their broader suites.

As web connections proliferate and speed up, attention turns
to “self-service”. Search-engine-based dot-coms figure this out and
start licensing their software to corporations for their customer
service “portals”. For a while, search is huge. But soon disappointment
sets in. Search isn’t very accurate, and less than half the answers are
documented and accessible. To address the latter problem, content and
document management vendors move out of their respective early niches
and become ubiquitous tools for employees corporate-wide. As employees
publish and reveal themselves to be experts, intranet applications that
help them interact with each other become popular. But these are
initially deployed with an “if you build it, they will come” mentality.
ROI sucks, because in an empty bboard no one can hear your screams.
This is where ArsDigita came in (which is why I picked this example).
But that’s a story for a different paper.

As these waves play themselves out, CIOs ramp up the pace at
which they pull their hair out. Remember, somebody has to make all of
these things talk to each other. Somebody’s got to fix them when they
break (almost never, of course). Somebody’s got to customize them to
fit what the business needs to do. And somebody has to train people to
use them. A handy rule of thumb, borne out in my experience, is that
lifetime costs (that is, over 3-4 years) of deploying and supporting an
application can run to 10x original license costs.

Hindsight’s surely 20-20. But there is a certain Homer-Simpsonesque “d’oh!”
to all of this, one that could have been vaguely guessed at in advance by
thinking a couple of steps ahead (without, IMHO, any of that fancy
scenario-modeling software people hawk; remember, if you can’t put it in
your brain-stem…). Of course, not all of this happens perfectly
sequentially. These categories emerged in parallel in many cases.
However, their relative “hotness” does seem to have followed a pattern
that the above questions can help to puzzle out. Use caution though —
it hasn’t made me rich yet.

OK, let’s generalize a bit.

First, enterprise software can help businesses realize value in four different ways:

  • present information
  • support analysis
  • enable coordination
  • provide automation

To identify these opportunities, you can ask questions like:

  • What’s going on that’s worth knowing about? (How could software help the user find out, or find out faster?)
  • What does this information mean? (How could software help a user interpret information more easily and usefully?)
  • What should the user do in consequence? (How could software
    help get the right people working on a problem, in the right order, and
    at the right times?)
  • Must a user get involved at all? (Can software perform a task better, cheaper, faster?)

Second, as we discussed, CIOs have to make all of this work, and so they have needs as well:

  • easy integration
  • easy maintenance
  • easy training
  • easy customization

Again, some clarifying questions:

  • What systems are worth hooking up? (And consequently, what integration solutions make sense?)
  • Who’s got to look after things, and what corollary requirements
    emerge? (e.g., security becomes a big deal in a web-based world; what
    does the maintainer’s profile suggest would be valuable technical
    aids?)
  • Who’s going to use the solution? (What do they need to do, what do they already know?)
  • What extensions are likely, and who will build them? (What APIs make sense?)

Depending on where we are in a particular technology cycle, answers
to all these questions may come more or less easily. A few years ago,
when the Web was young (at least as most business types would think of
it), everyone had an opinion and a business plan for it, but no one
really had a clue (remember public B2B exchanges?). In the midst of
this confusion, vendors hatched schemes and placed bets based on their
own vested interests and hunches about how technologies would evolve
(remember HP’s e-Speak?). As experience has sifted the more valuable
business ideas from the turn-of-the-millennium chaff, so have
technologies been winnowed to a few remaining options, and along with
them, to a still-consolidating roster of vendors.

Let’s turn to assessing competition.

Fear and Greed: Competition and the Role of Standards

“The ground means the location, the place of pitched
battle — gain the advantage and you live, lose the advantage and you
die…”

— Sun Tzu, “The Art of War”, Chapter One, page 1.

The software playing field is most usefully thought of as a
“technology stack”, with each layer using the services of the layer
beneath it and enabling the services of the layer above it:

Applications
Middleware
Databases
Operating Systems
Hardware

(Note: vaguely inspired by the “ArsDigita Layer Cake”)
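
As a toy illustration of the layering idea, here is a sketch in Python,
with every class and method name invented for illustration, in which each
layer exposes a service built on the layer beneath it:

    # Toy model of the technology stack: each layer consumes the services
    # of the layer below and offers services to the layer above.
    class Hardware:
        def execute(self, op):
            return "hw:" + op

    class OperatingSystem:
        def __init__(self, hw):
            self.hw = hw
        def syscall(self, op):
            return self.hw.execute(op)

    class Database:
        def __init__(self, os):
            self.os = os
        def query(self, q):
            return self.os.syscall("read(" + q + ")")

    class Middleware:
        def __init__(self, db):
            self.db = db
        def transaction(self, q):
            return self.db.query(q)

    class Application:
        def __init__(self, mw):
            self.mw = mw
        def serve(self, request):
            return self.mw.transaction(request)

    stack = Application(Middleware(Database(OperatingSystem(Hardware()))))
    print(stack.serve("orders for customer 42"))  # flows down all five layers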

When the technologies in each layer aren’t changing much, firms
that commercialize those technologies focus on increasingly narrow
niches of problems to solve with them in a bid to distinguish
themselves from each other.

Periodically, new technologies sweep through and shake up the
competitive landscape in their respective layers (for example, what
optical technologies are doing to networking gear, as shipping plain
old light waves down fiber is eclipsed by “wavelength-division
multiplexing”, and now “dense wavelength-division multiplexing”).
Sometimes the feature,
cost, and performance improvements are so great that they revolutionize
adjacent layers as well (as the advent of browser applications
ultimately did for everything in the layers below them).

When these waves wash through, vendors jockey for advantage (and, more often, survival). They do this in one of three ways:

  • they develop and promote the disruptive technology (or alternative variants) themselves
  • they “surf” the new technology by extending it with enabling
    tools. Sometimes this is symbiotic, and sometimes it’s an attempt to
    “embrace and extend” (see below)
  • they acquiesce, shifting their business focus from being a
    vendor of the superseded technology to being more of an “agnostic”
    solutions provider for the new sheriff in town.

(A good example of the first approach is what BEA did with
application servers, and what Microsoft is doing with web services. The
last approach is exemplified by what IBM has done over the last decade
with Global Services.)

As technology waves pass, and once-whizzy must-haves attract
competition, price points collapse — often by a factor of ten on a
price/performance basis, in as little as a year or two (just ask any
app server vendor’s sales guys). To continue to grow, the latest
leaders at each layer tend to creep their product offerings “up the
stack” into the next layer. This desire to move up creates significant
opportunities for smaller players in the next layer up. By tailoring
their products to work especially well with those of the big guys
coming up, they position themselves at minimum as attractive to the
encroaching firms’ sales forces, as well as for possible acquisition.

Since revolutionizing a layer can lead to huge profits, there
are frequently multiple aspirants, each with its own flavor of the core
technology, competing to lead the revolution. But since a fractured
landscape in a given layer diminishes the appeal of the new technology
to the layer above, these aspirants have to agree on standards for how
their technology will connect with adjoining layers. Of course,
depending on how these standards evolve, they can significantly favor
one player over another.

Competitors are constantly engaged in a tense game of
“co-opetition”, where they balance efforts to establish their
individual technologies and associated standards tweaks as dominant,
against the need to agree on a single scheme so that adoption by the
layer above is accelerated. In short, it’s the age-old
size-versus-share-of-the-pie, on steroids. High fixed costs make for
very high marginal returns, so the macro-level brinksmanship and
field-level competition for customers, talent, and buzz are
extraordinarily aggressive.

If all of this seems pointy-headed to you, just look at the
way web application development tools have unfolded. We started out
with scripting languages. Remember JavaScript? John Gage, Sun’s
chief researcher, once told me the story of how Tcl (a server-side
web-scripting language) was nearly selected and advanced in
JavaScript’s place. Early versions of the ArsDigita Community System,
along with other vendors’ products like Vignette’s, were built with
Tcl. We got hammered over the fact that Tcl was a dead language, and
our transition to Java was tortuous, expensive, and lengthy enough that
it cost us most of an important market window.
decisions in the backroom of a technical advisory board, there might be
a lot more Tcl developers in the world today.

We weren’t the only ones to lose at the standards game.
Consider once-white-hot Art Technology Group (ATG). ATG soared
initially because of its early support of Java for web development. But
ATG rode its non-J2EE-compliant Dynamo Server product too long and got
eclipsed by BEA, which evangelized EJBs running on its WebLogic
product. Now the J2EE camp is struggling to catch up with Microsoft,
which has established a very likely insurmountable early lead in web
services. Standards are the playing fields on which the hand-to-hand
battles of modern software wars are fought. Companies are made and
crushed there. You ignore their evolution at your peril.

Standards become and stay popular because their sponsors make
them really useful. Let’s say I speak English really well. It’s in my
interest to get everyone else to speak English, because I’ll then have
a communications advantage over my less-articulate peers. Being
Shakespeare himself won’t help me in a world that speaks Swahili. If I
want the world to speak English, I’ll publish primers and write lots
of appealing stories that force people to learn English to fully
understand them.

J2EE is popular not because of “write once, run anywhere” (an
empty promise in practice), but because Sun stimulated and supported a
lot of really useful software development using the Java language and
rolled that into the J2EE specification (for example, the Pet Store
reference implementation). Similarly, the vendors behind the W3C
have been sufficiently supportive (some far-sightedly, most because
they know they have no choice) that they have stimulated the
contribution of a lot of really useful free software.

For vendors this is a double-edged sword. While supporting and
conforming to a popular standard can ease market entry, it can be
expensive, erode differentiation, and make it harder to lock up a
customer base. Big players sometimes have the clout to try to “embrace
and extend” standards, as Sun did with Unix in the early 1980s and
Microsoft did with Java in the mid-1990s. But small guys fear,
correctly, that they have no choice but to hew to the lingua franca
of the day, even when doing so marginalizes the value of what
they’ve built on top of it. For customers, this is actually
healthy, because it means vendors must distinguish themselves on how
well they have solved a business problem, and not on how tightly they
have locked up their customers with proprietary technology.

(“Embrace and extend” means supporting a popular
standard to reap the benefit of having people think that your products
will run software written on that standard, but extending the standard
in a way that allows software written to your variant to work only with
your products. So for example, applications written with Microsoft’s
version of Java only run on Windows. The big J2EE application server
vendors have done the same thing in a less public way: getting your
J2EE app to run on BEA’s WebLogic is no guarantee it will run on IBM’s
WebSphere.)
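
To see the mechanics in miniature, here is a hedged sketch in Python, with
every name invented and nothing modeled on a real product: the vendor’s
runtime “embraces” the standard interface but “extends” it with a
proprietary call, and any application that uses the extension quietly
loses its portability.

    # Hypothetical sketch of "embrace and extend".
    class StandardRuntime:
        """Stands in for an agreed-on industry standard."""
        def run(self, app):
            return app(self)

    class VendorRuntime(StandardRuntime):
        """One vendor's implementation: the standard, plus a tempting extra."""
        def run_fast(self, app):
            return app(self)  # imagine this is faster or more convenient

    def portable_app(runtime):
        return "runs anywhere the standard is implemented"

    def locked_in_app(runtime):
        runtime.run_fast(lambda r: None)  # depends on the vendor-only call
        return "runs only on VendorRuntime"

    print(StandardRuntime().run(portable_app))  # fine on any implementation
    print(VendorRuntime().run(locked_in_app))   # fine, but only here
    try:
        StandardRuntime().run(locked_in_app)    # portability quietly broken
    except AttributeError as err:
        print("lock-in in action:", err)

The WebLogic-versus-WebSphere point above is the same pattern at enterprise
scale: the standard covers most of what an application needs, and the
vendor-specific remainder is where the switching costs live.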

Again, today’s software business is built on a model that can be
uniquely profitable for vendors that can establish large installed
bases. Marginal profit on software licensing can be over 90%, and the
speed of technical evolution has numbed customers into accepting
incomplete and buggy products to start with, and then paying for pricey
“maintenance” contracts so they can get (only some of) the bug fixes and
new features they need. Although there’s some grumbling about this,
customers have yet to mount any serious response to the imbalance. If
your software is a near de facto standard, say, like Oracle among
high-end RDBMSs, it raises the cost
of switching even higher, and extends the window for above-average
profits.

Accordingly, razor-and-blade product structure and pricing is
a common tactic in the software business. Most vendors today
give developer kits away for free, or for a very modest charge, to
encourage adoption. Bill Gates priced DOS licenses for ubiquity in the
early days. Getting DOS out there gave him a big market for Microsoft’s
applications, which themselves then became a standard (reinforced by
locking users into a unique interface and proprietary document types).
With their establishment as a standard, he could then raise prices on
Windows, since you now have to have the MS-proprietary OS to run the
apps you’re hooked on. Sun promoted Java to not only consolidate the
fractured Unix world against inroads from Microsoft VB-on-Windows, but
also to extend its reach into its Unix competitors’ market shares:
“Write the app for your big HPUX server in Java, and it’s much easier
to swap out for a shiny Sun/Solaris box when the time comes.” It even
represented an offensive threat to Microsoft’s server customer base
for the same reason, which partially motivated Microsoft’s “embrace and
extend” response.

(Scott McNealy thought he had Bill Gates on the
run, but Bill cheated by breaking the Java license and playing
“rope-a-dope” in court. Microsoft eventually settled for a sum that
paled beside the benefit to Microsoft of blunting the Java threat. Sun,
with greatly eroded market power, can’t do much today to prevent the
fracturing of the J2EE camp as app server vendors try to lock up their
customer bases with proprietary extensions that need “tweaked” JVMs to
work. And Microsoft has now mounted its own offensive, trying to
establish .Net as the de facto standard for implementing web
services. Whether Microsoft’s nakedly self-serving Passport component
will erode .Net’s prospects as a standard enough for alternatives like
Sun ONE and the Liberty Alliance to gain ground remains to be seen, but
I’m not betting against the folks in Redmond.)

A constant tension in playing the game is how much of the goods
to give away to tilt the field in your favor and drive adoption, versus
how much to hold back until people start writing checks. Historically,
software companies have found hype to be a cheap but effective
substitute for samples, but this may be changing as the stakes for
using substandard software go up and pressure emerges from newer
quarters, like the open-source movement. Microsoft’s relative
generosity with .Net tools and education to date may be one
example of this trend.

So What’s Your Plan?

Let’s review the options again:

You can try to lead a technical revolution. But it’s guaranteed
that “if you build it, they won’t come” on their own. You’ve got to
bridge people’s adoption to your new widget. This may mean
razor-and-blade product structuring and pricing. It will certainly mean
major usability investments. Finally, be prepared to work with
competitors on common gateways (APIs, for example) to each of your
respective products.

If you can’t have or be a standard yourself, there are two other less profitable but also less risky ways to make money.

“Plan A”: do a great job of solving a particular set of customer
problems, and be standards-agnostic. At ArsDigita, we used to say that
ACS, our software, was “data-model-plus-page-flow”, and that the tools
and standards on which it was built were just artifacts of what was
convenient to use for our circumstances at a given point in time. What
really counted was our experience in building highly-scalable
collaborative applications to support online communities. This was
great and we made a lot of money for a while (we grew to $20 million in
annual revenues within a year, funded by internal profits alone). We managed
this even with open-source distribution of our software — designed, of
course, to establish us as a standard.

But after a while, it was clear a bus had left the station,
and it wasn’t the Tcl/AOLserver one we were sitting in, even if we did
run on the Oracle line. For a time, sales pitches I did had a
depressingly similar pattern. The business folks loved us, but the tech
folks stopped us dead in our tracks when we said “Tcl” instead of
“Java”. Their bottom line: “we support Java, and only do business with
vendors with the same commitment.”

So “Plan B” is to build a
product or offer a service that may not be a direct solution for end
customers, but does a great job of making a standard technology more
usable. Examples here are visual development environments for the
popular languages, or more esoterically some of the caching
enhancements Oracle has made to Apache in its 9iAS product. At
ArsDigita, ACS for Java took a while to build, and by the time we did,
others had stolen a march on us, eroding our position as leaders in
solving the end customer’s problem and relegating us to a high-end niche.
But as a very sophisticated application development framework for the
Java world, it had significant appeal for all of the major J2EE
application server vendors. Even Microsoft approached us early on (in
.Net’s evolution) to encourage us to port it to .Net to speed adoption
of that infrastructure.

Software technologies evolve in predictable patterns that can
help you figure out which of these strategies might make more sense for
you. When technologies are young, people are still figuring out which
solutions built on them will end up being valuable. So on
the margin they prefer toolkits and frameworks that permit them to
explore over packaged applications that box them in. Vendors of young
technologies will be inordinately attracted to complementary
vendors that make their toolkits more usable and higher-performing.
Later, as the technologies mature and better ways of solving problems
with them become apparent, there’s a premium on being first with packages
that ship a good solution to as many end users as possible. The degree
of consensus and support for a given technology’s standards can be a
good indicator for which way to go and where the specific opportunities
might lie.
