Australia’s Labor-led government won office promising to prevent access to unlawful content, such as child pornography, on the Internet. Now, the country is about to launch the second round of its filtering tests, amid reports that the trials will attempt to block peer-to-peer (P2P) and BitTorrent traffic along with child pornography and other sensitive content. The filtering plan has been highly controversial, in part due to the ever-expanding scope of material deemed off-limits.
I’ve written a short draft paper (available on SSRN) that analyzes the political, legal, and technical aspects of the proposed filtering, and that offers an initial assessment of the program’s legitimacy. My conclusions:
Australia is moving to censor the Internet because the Labor Party won office partly on a promise to do so. The country will likely become the first Western democracy to block access to on-line material through legislative mandate, creating a natural experiment. However, this experiment raises concerns. The government has not been clear about what material will be blocked, or why. The censorship system’s accountability to citizens could be undercut by the combined effects of coalition government, outsourced content classification, and filtering’s inevitable transfer of power to those who design and implement its technology. Results from the first test of filtering in Tasmania should serve as a cautionary tale, not only guiding the technical deployment of censorship but also highlighting the political, social, and Internet policy issues that must be (and are being) vigorously debated.
Australia’s decision to censor Internet content pre-emptively is likely further evidence that the debate over filtering has shifted, from whether it should occur to how it should work. Cyberlibertarianism is alive and well, as the discussions in Australia’s press and Parliament prove, but it is no longer ascendant. This shift disguises an important change in focus for regulating information. Filtering looks easy and cheap, and calls to block access to material that is almost universally condemned – such as child pornography, extreme violence, or incitements to terrorism – are hard to resist. But this focus confuses means with ends. The key question is what set of measures best achieves the end, or combats the evil, at issue – and how tolerable their countervailing drawbacks will be. Democratic governance is well-positioned to debate these tradeoffs, and indeed Australia’s move is less worrisome than filtering in, for example, Great Britain, which implemented censorship through “voluntary” agreements between ISPs and government. The concern is that, as filtering is increasingly adopted in Western democracies, censorship that blocks access to material, rather than legal measures that punish access after the fact, will increasingly be seen as normal rather than problematic. As this essay and other work on filtering by groups such as the OpenNet Initiative demonstrate, filtering carries considerable costs in overblocking, transparency, and accountability that may not be evident initially. Censorship can be an effective tool, but it is a dangerous one. Australia’s example will have much to teach about both aspects.
The paper is a working draft; your comments, suggestions, and thoughts are most welcome. I will update it as the situation in Australia evolves.