Great panel on filtering at CFP 2009 yesterday – we took up the question of whether John Gilmore is still right that the “Net interprets censorship as damage and routes around it.” Ian Brown talked about Cleanfeed and how filtering operates, from the most basic to the most sophisticated. TJ McIntyre described the bizarre public? / private? status of the Internet Watch Foundation. Catherine Crump talked about the ACLU’s litigation regarding Tennessee schools that selectively filter pro-GBLT sites and Washington libraries that refuse to disable filters for adult patrons. And Nicole Wong shared how Google approaches demands such as those from Turkey (Block YouTube videos we find offensive, everywhere in the world!), and how each day’s e-mailed list of countries where Google or YouTube is now blocked is better than Red Bull or coffee as a morning kick. Wendy Grossman kept us on time, no easy task…
I made a few points that I’ll share here. First, I think that Internet filtering has had three epochs:
- Filtering 1.0: filtering is technically impossible (Gilmore and the cyber-exceptionalists / cyber-libertarians)
- Filtering 2.0: filtering is possible, but only done by bad actors / authoritarian states (China, Iran, Saudi Arabia, etc.)
- Filtering 3.0: filtering becomes widespread, including in Western democracies, and we face hard questions about how to assess the practice’s legitimacy
Second, Australia is the beta for Filtering 3.0. The country is having useful, vehement disagreements over how filtering is implemented (what method is used? who pays? what trade-off in performance is acceptable?) and what gets blocked (who decides? why is certain content prohibited? how can one challenge censorship decisions?). The Rudd government, via Senator Conroy, seems to be backing down on two fronts – specifying that only Refused Classification (RC) material will be blocked in a mandatory fashion, and that a “voluntary” industry code to which all ISPs adhere could substitute for legislation – but a requirement to filter is still a government objective.
Finally, there’s a persistent myth that the U.S. is a filtering-free zone. I think the myth persists because what we block seems natural / inevitable / invisible. Google has to remove certain search results that link to infringing content to stay within the safe harbor of the Digital Millennium Copyright Act. This is like the dog that didn’t bark in Sherlock Holmes: how do you know what you’re missing? (To Google’s credit, the site includes a notification that it has filtered results, and links to Chilling Effects so you can read the DMCA take-down notice.) Linking to a site that you know posts DeCSS is unlawful. Napster had to institute filtering to satisfy the district court in California (which it failed to do). Americans think that prohibiting copyright infringement just makes sense – but Saudi Arabia thinks this about porn, France about hate speech, and Australia about euthanasia. We aren’t different, and that’s what makes Filtering 3.0 hard.
I’ve got an initial proposal for how to approach Filtering 3.0 (my paper Cybersieves, coming out this year in the Duke Law Journal) that looks at process rather than the content that’s banned. Filtering is coming: to Australia, to Germany, to Minnesota. Gilmore’s optimism no longer applies. We need to think about what comes next.