Round 2: Time Warner Gets It Wrong, and the French Follow the Model

Update: I should have read more carefully. Time Warner and Verizon have confirmed they're not going to block any Web sites; I've changed the text below to reflect that.

Yesterday, I posted a quick analysis (using the methodology I propose in a new draft paper) of the new policy undertaken by Sprint, Verizon, and Time Warner Cable at the behest of New York Attorney General Andrew Cuomo: they'll voluntarily block child porn. As more details emerge, though, I'm growing more skeptical about the plan. Initially, I held off assessing how narrow this filtering system would be (does it successfully block child porn, and only child porn?), since technical details were sketchy. But if the latest reports are to be believed, I'm ready to make a call: completely overbroad. Time Warner is going to eliminate all newsgroups by the end of the month. So, to block child porn, we'll wipe out TW subscribers' ability to talk about SCUBA diving, radio astronomy in India, or support for people with bipolar disorder? This is a great candidate for addition to the paper as an approach to filtering that is not narrow. It is, in fact, complete overkill.

Will Verizon and Sprint follow suit? Blogger Lauren Weinstein and News.com reporter Declan McCullagh say Verizon will cut off some groups (VZ is being unspecific), and Sprint will kill off all alt.* groups (so much for the 61 groups Google lists as pet-related). Both moves look unnecessarily broad, as the sketch below suggests.
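To make the overbreadth concrete: a hierarchy-wide rule like "drop everything under alt.*" is just a wildcard match on group names, and it cannot distinguish the targeted groups from legitimate ones. A minimal sketch of what such a rule amounts to (the group names below are illustrative, not drawn from any ISP's actual configuration):

```python
from fnmatch import fnmatch

# Sprint's reported rule, expressed as a wildcard pattern over group names.
BLOCKED_PATTERNS = ["alt.*"]

def is_blocked(group: str) -> bool:
    """True if the newsgroup name matches any blocked pattern."""
    return any(fnmatch(group, pattern) for pattern in BLOCKED_PATTERNS)

# The rule sweeps in everything under the hierarchy, pet groups included.
for group in ["alt.animals.dogs", "alt.support.diabetes", "alt.astronomy"]:
    print(group, "->", "blocked" if is_blocked(group) else "allowed")
```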

The strange part is that Weinstein and McCullagh state that the ISPs will scour their servers for sites hosting child porn, but won't engage in any filtering. (This NetworkWorld article implies the same thing; McCullagh's article confirms it for VZ and TWC.) This makes sense for one reason: cost. Filtering effectively is going to be expensive. Eliminating Usenet feeds and looking through their files (doubtless using a hash database of known unlawful images) is relatively cheap. So, there's both more (overblocking) and less (actual Web filtering) here than it first appears. It sounds like the initial press releases were a bit overhyped, and that AG Cuomo got 1) $1M+ in funding, 2) elimination of a lot of Usenet, and 3) a search of their servers from the ISPs. That'll help, but not much.
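For the curious, hash-based scanning of this sort is cheap because it is just a lookup: compute a digest of each file and check it against a set of digests of known images. A minimal sketch, assuming a newline-delimited file of known-bad hex digests; the file names and the choice of SHA-256 are my assumptions, not details from the ISPs' plans:

```python
import hashlib
from pathlib import Path

def load_known_hashes(path: str) -> set[str]:
    """Load one hex digest per line into a set for O(1) membership tests."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """SHA-256 a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str, known: set[str]):
    """Yield every file under root whose contents match a known hash."""
    for p in Path(root).rglob("*"):
        if p.is_file() and digest(p) in known:
            yield p

# Hypothetical usage:
#   known = load_known_hashes("known_hashes.txt")
#   for hit in scan("/var/spool/news", known):
#       print(hit)
```

The obvious limitation is also why it's cheap: an exact-hash match misses any file that has been altered by even a single byte.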

As for the French: the government and ISPs have agreed to block access to child porn, terrorism, and racial hatred sites. (Good luck on #2: defining terrorism is exceptionally difficult. It's hard even to arrive at a theoretical definition, let alone to decide among guerrilla movements, violent religious fundamentalist groups, and state-sponsored insurgents.) I suggested that this public-private partnership model is common among Western democracies, with a new "iron triangle" of ISPs, NGO watchdogs, and state regulators quietly agreeing on filtering via private or informal agreements. (The AP cites Britain, Sweden, Denmark, Norway, New Zealand, and Canada as employing similar structures.) The French system follows this model. According to a speech by Interior Minister Michèle Alliot-Marie, French users will be able to flag certain sites as falling into one of these categories, and the flagged sites will be compiled into a block list supplied to French ISPs. (This is a very interesting idea: open source projects such as the Open Directory Project and OpenDNS use the same approach, categorizing sites via volunteers who rate them. It reduces the workload for the government and ISPs, and can theoretically empower users to participate in decisionmaking.)
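Mechanically, this kind of crowd-flagging reduces to aggregation: count the flags each site receives and list a site once enough users have placed it in a category. A minimal sketch of that pipeline; the threshold value, and the absence of any human review step, are my simplifying assumptions rather than details of the French plan:

```python
from collections import Counter, defaultdict

# The three categories announced for the French scheme.
CATEGORIES = {"child_porn", "terrorism", "racial_hatred"}
FLAG_THRESHOLD = 25  # hypothetical: flags required before a site is listed

def compile_blocklist(flags):
    """flags: iterable of (url, category) pairs submitted by users.

    Returns {url: category} for every site whose most-flagged category
    meets the threshold. A real system would add review and appeals.
    """
    counts = defaultdict(Counter)
    for url, category in flags:
        if category in CATEGORIES:  # ignore malformed reports
            counts[url][category] += 1

    blocklist = {}
    for url, per_category in counts.items():
        category, n = per_category.most_common(1)[0]
        if n >= FLAG_THRESHOLD:
            blocklist[url] = category
    return blocklist
```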

Interesting times for Internet filtering.

4 Responses to “Round 2: Time Warner Gets It Wrong, and the French Follow the Model”

  1. Well, that does take care of the overbreadth argument. That's a pretty big net they're fishing with, and an astonishing tolerance for false positives. It's the sort of thing that will only work if there is no consumer exit or brand choice option (and I see no grounds for optimism on that front: the ISPs have, as you noted, a huge incentive to come up with functionally equivalent policies here… and that's for markets where there is more than one ISP option).

    So we are left with state action as the question for the constitutionality of this program.

    Two further thoughts:

    1. I agree that child porn is a hideous thing. But the regulatory responses here seem disproportionate. In this regard, the current round of filtering can be situated as part of a larger cultural narrative about smut on the Internet. The court in Mainstream Loudoun (overruled by ALA), for example, pointed to the state's failure to come up with more than two or three instances nationally of library patrons looking at pornography on publicly accessible internet terminals. Or, more generally, one might revisit the legislative history behind the CDA, where the (entirely bogus) Georgetown Law Journal paper by Marty Rimm got picked up by Time Magazine and fueled the hurried passage of the Act (for those who don't know the narrative, I can't figure out how to embed links as part of a comment, but the Wikipedia article on Rimm tells the basic story). I'm not sure exactly what cultural narrative needs to be told, but I'm sure there is one…

    2. The above point seems especially worth making in light of the following: the Washington Post is reporting this morning (http://www.washingtonpost.com/wp-dyn/content/article/2008/06/10/AR2008061002544.html) that a forthcoming report will question whether such tactics will reduce access to child porn as much as they will drive it underground and make it harder to track. The relevant cultural narrative here would probably refer back to the history of the RIAA and P2P downloading: they killed Napster, and decentralized P2P emerged in its place. Cyberlibertarians would conclude from this that the Internet is unregulable; I think a more interesting line of thought to pursue would be about the ways in which indirect regulation is more effective online than direct regulation (thus following Lessig and Zittrain). I also realize that it is odd to characterize a filtering regime as a direct regulation, but given the technical specifics (blocking entire newsgroups carte blanche), it might be appropriate.

    Of course, this has become my talking point on the subject of filters: filtering regimes invariably both overblock (first point above; Derek seems right that this is driven by the economics of the situation: fine-grained filtering is expensive) and simultaneously underblock (second point, although driving the material further underground isn't how that point is usually articulated).

  2. Gordon, your point about the narrative here is really interesting, and I think it is largely underdeveloped in Internet scholarship. The ways in which children's interactions with the Internet are perceived and portrayed are either overly optimistic (Wikipedia / One Laptop per Child will give every child the power to realize a top-notch education) or overly threatening (1 in 5 kids has been targeted by a sexual predator online; access to porn; etc.). My instinct is that this is part of a larger cultural narrative about children, where the Internet is simply one effective frame for a set of fears and hopes. But that's beyond my scope of expertise.

    I'm actually more optimistic than the Wash. Post article on the transaction / payment front. It's unrealistic to think that child porn can be choked off entirely, but increasing the transaction costs of accessing it is a good intermediate step. Jack Goldsmith has said that regulation is often about increasing the cost of information, and I think that's right. The analogy I explore briefly in the paper is access to pharmaceutical drugs online without a prescription: if you cut off the standard payment systems, you have placed a serious impediment in front of potential consumers.

    And I agree with you that filtering is direct regulation; it's just that "regulation" here carries Lessig's "four forces" connotation rather than the merely legal one.

  3. [...] for alternative enforcement regimes that are more effective. Consider that in New York, the state attorney general pushed major ISPs into dropping Usenet newsgroups over child pornography conc… while admitting that prosecuting those who produced and distributed the material was infeasible [...]

  4. [...] there will be more state-based filtering efforts, and soon. Pick your targeted material: a) child porn, b) terrorism materials, c) gambling, or d) “obscene” content. Any [...]