The Virtues of Inefficiency

One of the Internet’s chief virtues is inefficiency.

“Best effort” packet routing – as Jonathan Zittrain describes it, the “bucket brigade” where each link in the network tries to pass packets to the next hop, but without guarantees – is less efficient than a protocol that seeks to guarantee transmission and thereby minimize the bandwidth used to communicate. Stateless protocols, such as HTTP, can be less efficient too: the server keeps no information about my client or its state, so, by default, each request is a new session. For those of us with a penchant for law and economics as an analytical tool, this state of affairs seems initially sub-optimal.
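To make the statelessness concrete, here is a minimal sketch; the host is just a placeholder, and the point is only that each request stands alone:

```python
# A minimal illustration of HTTP's statelessness; example.com is a placeholder.
import http.client

for path in ("/first", "/second"):
    # Each request opens fresh and carries all the context the server needs;
    # by default, nothing about the client persists between requests.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", path)
    print(path, conn.getresponse().status)
    conn.close()  # the "session" ends here; the server forgets us entirely
```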

Yet inefficiency means that data is redundant – cached, preserved, more readily accessible. Years ago, in my life as a sysadmin, I managed to delete my department’s primary database while “cleaning up” our servers. Fortunately, under the Lotus Notes replication model, databases are typically “replicated” (copied and synchronized) widely within a network; I managed to find a nearly up-to-date replica on a server in Singapore. Inefficiency removed a single point of failure. Many arguments favoring network neutrality emphasize inefficiency’s benefits: rather than tune transmission for high-priority or low-latency applications, “stupid networks” preserve flexibility.
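As a toy illustration of why replication saved me (server names and sync times are invented, and Notes’ actual replication is far richer), recovery amounts to finding the most recently synchronized copy:

```python
# Toy replica selection: when the primary copy is gone, restore from the
# most recently synchronized replica. All values are invented for illustration.
replicas = [
    {"server": "boston",    "last_sync": "2006-05-01T02:00"},
    {"server": "singapore", "last_sync": "2006-05-03T02:00"},
    {"server": "london",    "last_sync": "2006-04-28T02:00"},
]

# ISO-8601 timestamps compare correctly as strings.
freshest = max(replicas, key=lambda r: r["last_sync"])
print(f"restore from {freshest['server']} (synced {freshest['last_sync']})")
```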

Arguably, there are technical zones where the Internet should be less efficient. One of the characteristics of e-mail that makes spam cheap and potent is that a sender can transfer a single copy of a message intended for many recipients in a domain. The receiving mail server will helpfully copy that single message into each recipient’s mailbox. Require a less efficient model – for example, one message per recipient – and spam’s economics shift dramatically.
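A back-of-the-envelope sketch of that shift, using purely illustrative numbers:

```python
# How a one-message-per-recipient rule changes a spammer's transfer costs.
# All figures are illustrative assumptions, not measurements.
message_bytes = 10_000          # a small spam message
recipients_per_domain = 5_000   # mailboxes behind one receiving server

# Today: a single copy can be addressed to many recipients at a domain,
# so the sender transfers the message roughly once.
single_copy_cost = message_bytes

# Under the less efficient rule: one full transfer per recipient.
per_recipient_cost = message_bytes * recipients_per_domain

print(f"spammer's cost multiplier: {per_recipient_cost // single_copy_cost}x")
```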

My tentative conclusion here is that we should not set efficiency as a goal for legal or technical regulation of Internet-based information; in fact, we should be prepared, even eager, to embrace inefficiency. Hidden virtues are virtues nonetheless. But I’d love to get your thoughts (including, possibly, whether I’ve simply mangled the definition of efficiency!).

9 Responses to “The Virtues of Inefficiency”

  1. This is before my time, but wasn’t the whole idea behind the internet that a bomb could take out any number of routers, but the paths would still be open for data transfer? I’d hate to get rid of that given our expanded reliance on the network.

  2. Interesting thought… on an even more practical, real-life level, it is interesting to think about how many fun, interesting, beautiful people, places, and things you came across simply because you weren’t “efficient” enough (got lost, etc.). I also appreciated this for the thought of you running around at Lotus looking for any recent data backup.

  3. It’s not at all clear that best-effort packet delivery is less efficient than pre-reserving bandwidth. There are at least two inefficiencies associated with reservations. (1) Somebody reserves bandwidth but then doesn’t use it, so it goes to waste even though somebody else could have used it. (2) Optimizing a reservation system requires central decisionmaking, which creates new information flows that have to be accommodated, not to mention the political and economic drawbacks of putting one entity in charge. I think the consensus among network designers is that best-effort is generally more efficient in a large, decentralized network.
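    A toy model of Ed’s first inefficiency, with invented numbers: capacity that sits idle inside a reservation is wasted whenever best-effort traffic could have filled it.

    ```python
    # Toy model: reserved-but-unused capacity vs. unmet best-effort demand.
    # Capacities and demand ranges are invented for illustration.
    import random

    random.seed(1)
    capacity = 100      # link capacity, arbitrary units
    reserved = 40       # units set aside for one "priority" sender
    rounds = 10_000

    wasted = 0
    for _ in range(rounds):
        priority_demand = random.randint(0, 40)  # often below the reservation
        other_demand = random.randint(0, 100)    # background best-effort traffic
        idle_reserved = reserved - priority_demand
        unmet_other = max(0, other_demand - (capacity - reserved))
        # Capacity idle inside the reservation while other senders wanted it:
        wasted += min(idle_reserved, unmet_other)

    print(f"average capacity wasted per round: {wasted / rounds:.1f} of {capacity}")
    ```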

  4. Thanks, great comments!

    1. To Michael: You’re right that the ARPANET was designed to route around damage. I think it’s an urban myth that the Internet itself was designed to survive a nuclear attack, but I’ve read that this was partially a factor in the design of TCP/IP. Your point here — that redundancy is a virtue we should sacrifice only after the most serious consideration — strikes me as absolutely right. This is Jonathan Zittrain’s underlying worry in his Generativity piece, and it’s what makes things like the Slammer worm so scary.

    2. To Ed: thanks, this is fascinating. The overhead of coordinating delivery is something I hadn’t thought enough about. In terms of wasted bandwidth, might this be mitigated to a degree by a prioritization scheme that penalized sources that reserved bandwidth and then failed to use it? (Of course, this might lead to a new type of DoS attack, where a destination requested a priority transfer and then cancelled it, but perhaps recognizing and de-prioritizing such requests could be made the responsibility of the source – there’s a toy sketch of this penalty idea after these replies.) Your point is very well taken: I may have incorrectly assumed that best-effort is inefficient when, in fact, it is efficient once one expands the framework.

    3. To Becky: serendipity is one of the virtues of the Net – we’re in agreement here. This was Cass Sunstein’s worry in Republic.com — the “Daily Me,” where we all read only stuff we know about or agree with — but some of the data from the Pew Internet project seems to suggest that this isn’t the case. (I believe Yochai Benkler points out that about 15% of links in the blogosphere go to sources / opinions that are on the “other side” of a debate; whether this is remarkable or depressing depends entirely upon one’s perspective.) And the Lotus thing was one of the first, but by no means the last, times that I was pretty sure I was going to get fired. Let’s just say my eyes got real big.
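    On the penalty idea in the reply to Ed, here is the promised toy sketch; the structure and thresholds are invented for illustration, not a real protocol:

    ```python
    # Toy penalty scheme: track how much of its reservation each source actually
    # uses, and weight its future requests by that track record.
    usage_history = {}  # source -> list of (reserved, used) pairs

    def record(source, reserved, used):
        usage_history.setdefault(source, []).append((reserved, used))

    def priority(source):
        """Return a 0..1 weight; chronic over-reservers rank lower."""
        history = usage_history.get(source)
        if not history:
            return 1.0  # no track record: benefit of the doubt
        total_reserved = sum(r for r, _ in history)
        total_used = sum(u for _, u in history)
        return total_used / total_reserved if total_reserved else 1.0

    record("alice", reserved=100, used=95)
    record("bob", reserved=100, used=5)   # reserves heavily, barely uses it
    print(priority("alice"), priority("bob"))  # bob's future requests rank lower
    ```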

  5. The concept of network neutrality has been intentionally used by lobbyists for two endangered business species. The cable television operators are working to protect their subscriber and advertiser revenues, but they have made grave business errors by raising rates and blocking competitors from the multiple-dwelling-unit market. The media conglomerates that own newspapers and cable broadcasting systems are faced with the end of the daily newspaper. The next generation just does not read day-old news.
    The network innovations of AT&T U-Verse, Verizon FIOS, and many other smaller telcos that already offer broadband television and integrated internet access are truly disruptive technologies. Just as Voice over an IP network is viewed as a disruption to the old circuit-switched voice network model, high-speed broadband network access is disrupting the old cable TV model. It is absolutely true that a packet-based IP broadband connection is far more efficient than a dedicated digital or circuit-switched line. It is also absolutely true that teens and young adults are very much attuned to interactive media, and they are not amused by simply watching a long list of channels.
    So, as MySpace, Google, eBay, and others use the internet to make money, it is reasonable that these companies should pay for the traffic they generate.

  6. While we can argue about the relative technical efficiency of best effort packet routing vs guaranteed bandwidth or tiered levels of service in general, efficiency takes on a new context when applied to individual services. An obvious example is utilizing internet infrastructure for routing voice communications.

    In America and most countries with a history of reliable telephone switching infrastructure, there is an assumption on the part of consumers, businesses, government agencies, and emergency services that a traditional telephone call will nearly always go through, barring some sort of disaster. Aspects of society have been built around this guarantee of service, a guarantee that is highly inefficient, requiring, in the past, a huge build-out of spare capacity, the monopolization of an entire phone circuit for a single call (the majority of which was silence), and so on. In this case, efficiency is sacrificed for reliability.

    The internet, tied together by agreed-upon protocols but lacking any sort of oversight by a standards body or governing institution, makes a best effort at packet routing between networks of highly variable reliability, redundancy, speed, and quality only because people have agreed that it is in their best interests to do so. As a result of this system and the neutral network (a packet is a packet is a packet), we cannot rely on the internet, as we can on the telephone, to provide the same level of reliability. We cannot point at one actor (the phone company) as the responsible party and regulate it or punish it for failing to deliver service.

    I am still confused as to how some advocates of a neutral internet can at the same time suggest that the internet is also the future of more traditional services such as cable television, voice telephony, and emerging technologies for remote interaction and advanced telepresence such as that old standby example, remote surgery. In a neutral internet with no guarantees, a loose conglomeration of interests clustered around a set of agreed-upon (for now) protocols, and a pledge only to make a “best effort” at delivering data, can we really trust that services we have grown to rely on will be carried in the same band, on the same wires, with the same amount of chaos?

    The answer I’ve heard when posing this question to people around the Berkman Center is that more bandwidth will solve the problem. I’m not convinced. Real-time operating systems are designed to guarantee the completion of certain tasks within a bounded time, regardless of the circumstances. We trust such systems to control things like fly-by-wire jet controls or other critical systems because they are inefficient by design, such that they will always have the excess capacity necessary to do certain required tasks. The internet, while also inefficient by its neutral design, does not seem equipped to behave in the same way. That’s why controlled corporate networks that implement Voice over IP segment the traffic and use Quality of Service tagging to give it a higher priority (a minimal tagging sketch follows this comment). It is also why academic networks de-prioritize the huge amount of peer-to-peer traffic their students generate: they consider reliable and speedy email to be far more critical.

    Is the only answer to guaranteed QoS to keep such services off of the internet? If so, what does that say about the promise of the neutral net?
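    Here is the minimal tagging sketch mentioned above, assuming a Unix-like system and placeholder addresses: the application marks its packets with the DSCP “Expedited Forwarding” class commonly used for VoIP, and routers that honor DSCP can prioritize them; whether they do is entirely up to the network.

    ```python
    # Mark a UDP socket's packets with DSCP Expedited Forwarding (a common
    # VoIP marking). The address and port below are placeholders.
    import socket

    DSCP_EF = 46             # Expedited Forwarding code point
    TOS_EF = DSCP_EF << 2    # DSCP occupies the top six bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))
    ```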

  7. Hi Carl – you make some excellent points. I agree that distinctions between types of telco services – for example, voice versus cable TV versus broadcast TV versus e-mail – are increasingly irrelevant. If we’re moving towards a future where most of these types of information move from source to consumer over Internet Protocol (a huge assumption, to be clear), then all of these things are applications, and regulating or treating them differently based on the wires / spectrum over which they once traveled makes little sense.

    I’m not sure I agree about Yahoo!, MySpace, and such. The idea that they should pay for traffic they generate assumes that we’re in a situation of scarcity, where usage-based metering makes sense. That’s an empirical question, but as your point about FiOS suggests, I think it’s increasingly untrue. Moreover, what if we charged users for bandwidth consumed instead? Or do we think that, like the effects of FICA from a tax perspective, it’s irrelevant where we locate the burden since the final outcome is the same?

    If we are in a situation of scarcity, I’ve no quarrel with charging based on usage, but I would worry if (given the current “two-wire” situation for pathways to homes in the U.S.) providers did not offer common carriage – that is, charge the same amount for the same usage, regardless of whether it’s me using up bandwidth by streaming video or AOL doing so by streaming TimeWarner movies…

  8. Hi Danny – this is a really interesting point. I love that you’re raising a mixture of technical and policy reasons for being skeptical about both best effort and network neutrality!

    There are a few potential responses. First, as I believe Ed pointed out above, allocating bandwidth for high-priority applications may be grossly inefficient if those apps are rarely used. We dedicate a lot of spectrum, for example, to emergency services and the military, even though it’s not used nearly as often as other (arguably less important) communications. (Note that there are strong suggestions that smart radios and spread-spectrum communications make this type of reservation unnecessary, though of course they impose an adoption cost on users.) This is a classic policy tradeoff: we have to weigh wasted capacity against the benefits of having virtually guaranteed communication when it’s really needed.

    Second, perhaps the QoS-type features in IPv6 will mitigate these concerns, though the QoS bits are optional if I’m reading the spec right. (Corrections welcomed, please!)

    Finally, I wonder how “bad” the situation really is. I know a few people working on telemedicine in the stroke field, and they don’t have problems working with the Net as it currently exists. It’d be fascinating to try to validate / empirically judge the worries about remote surgery: what level of disruption / packet loss / blood loss would we expect on average? Would this be better or worse for patients than their current, geographically limited options? Is remote surgery important enough that we, as a society, should give up alternative communications for it?

    I don’t want to sound like a data geek, but as an old colleague of mine at Lotus liked to say, “If you can’t measure it, it doesn’t really exist.” I hope we’ll take a shot at measuring the concerns you raise as we make decisions about the tradeoffs you address – a rough sketch of one such measurement follows.
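    The promised sketch, with a placeholder host: send numbered UDP probes to an echo service and count the replies, as a crude estimate of loss on the path. A real study would need a cooperating endpoint and far more care.

    ```python
    # Crude packet-loss estimate: UDP probes to an echo service.
    # 192.0.2.20 is a placeholder; port 7 is the classic echo port.
    import socket

    HOST, PORT, PROBES = "192.0.2.20", 7, 100

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)

    received = 0
    for seq in range(PROBES):
        sock.sendto(str(seq).encode(), (HOST, PORT))
        try:
            sock.recvfrom(1024)
            received += 1
        except socket.timeout:
            pass  # count a missing reply as a lost packet

    print(f"loss estimate: {100 * (PROBES - received) / PROBES:.0f}%")
    ```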

  9. [...] Derek Bambauer is spending his last hours in Cambridge – literally – giving a final presentation at the Berkman Center. The lunch ends at 1:30pm, and he’s off to Detroit at 2pm to start a new career teaching intellectual property law at Wayne State law school. For his parting shot, he previews a paper he’s writing with Phil Malone on how the law currently limits – undesirably – software security research. He’s more clear, at this stage of his research, about the problem than about its potential solution. [...]