Can Crowdsourcing Beat Academic Peer Review?

Interesting piece yesterday at the Chronicle of Higher Ed: Blog Comments and Peer Review Go Head to Head to See Which Makes a Book Better. UCSD communications professor Noah Wardrip-Fruin has written a 300-page book entitled Expressive Processing: Digital Fictions, Computer Games, and Software Studies, to be published by MIT Press. MIT Press, in keeping with standard scholarly publishing practice, is having the manuscript peer-reviewed by Professor Wardrip-Fruin’s colleagues in academia. But alongside that traditional peer review project, Professor Wardrip-Fruin has thrown open the gates for a parallel crowdsourced review track: he’s posting the complete manuscript, a few sections at a time, to the Grand Text Auto group blog. Readers of the blog can post their own comments on the draft, keyed to the numbered paragraphs of Wardrip-Fruin’s text.

Experiments in crowdsourced editing of academic work have precedents. Larry Lessig, one of the giants of the cyberlaw/IP field, turned to the Internet for help updating his landmark book Code and Other Laws of Cyberspace. The text of the first edition was posted to a wiki, where users could add their own contributions. A year and a half later, the result of the process was Code Version 2.0, which was published both online and in hard copy. In one respect, Professor Wardrip-Fruin is taking a more conservative route than Professor Lessig, allowing Grand Text Auto’s readers to comment on, but not to edit, his text.

What’s novel and exciting about Professor Wardrip-Fruin’s experiment, though, is how it puts crowdsourced editing in (presumably friendly) competition with the traditional process of academic peer review. It should prove fascinating to hear, once the experiment is complete, how Professor Wardrip-Fruin rates the relative quality of the feedback he receives through each mechanism.

[More at Voir Dire and TaxProf Blog.]

5 Responses to “Can Crowdsourcing Beat Academic Peer Review?”

  1. On a tangent here: Not so long ago, I wrote a review for a fledgling online journal, Plagiary, of a book about collaboration and the digital economy published by MIT Press. The book, whose main title, like Lessig’s first, is CODE, was a collection of articles by scholars from a range of disciplines, and the editor claims to have taken an “‘open source’ approach” to his task, by which he means he was not concerned to impose a uniform tone or lexicon on the final text.

    My closing remarks took the publisher to task for its miserable editorial work, manifest as a multitude of typos and a pathetically spare index. So I’m wondering whether the novelty and excitement surrounding crowdsourced editing, like an “‘open source’ approach” to editing, will distract from the more mundane business of producing a useful, at least ordinarily refined end product. (Granted, editing for style and nit-picky grammatical glitches is distinct from peer-reviewing for substance and attribution…but it’s not wholly distinct.)

  2. Great observation, Dean. Open-source guru Eric Raymond wrote that you can’t let the crowds do everything; even open-source projects need a “benevolent dictator,” someone sitting at the top of the heap to provide quality checks and assure uniformity. I’m sure that Wardrip-Fruin will be filling that role for his own book, as Lessig himself did with Code Version 2.0. The lack of such a “benevolent dictator” guiding the project and ensuring accuracy is at the root of a lot of the criticisms of Wikipedia, which seems to have consciously decided not to follow Raymond’s advice on this point.

  3. I really have to wonder whether maintaining quality requires top-down control, or just a more homogeneous pool of talent working together. Perhaps there’s something between full-out Wikipedia style and open-source dictators (which in the worst cases leads to serious fragmentation of the code base). For example, instead of just letting people comment, or fully letting people edit, let people comment until the “dictator” has seen enough from a contributor to know that they are competent to add to the dialogue via the editing process… just my thoughts after watching several of these projects go through the wringer and back out again.

  4. [...] My journal is starting up for the semester, which means I’ll be spending quality time with this paper and wondering if there are better ways to do this… [...]

  5. [...] see this project sparking discussion elsewhere, as summarized in blog posts from MIT Press (citing Info/Law, ReadWriteWeb, Scholarly Communication, Sources and Methods, and Voir Dire) and the Chronicle’s [...]