Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and the Berkman Center’s Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory developed in evolutionary biology that has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).
The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book, in German.)
Among the central questions of signaling theory is the puzzle of what keeps social signals reliable: what mechanisms have we developed to ensure the “honesty” of signals? These questions are obviously highly relevant from an Internet governance perspective – especially (but not only) given the enormous scale of online fraud and identity theft. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in which contexts we have an interest in signal reliability and honest signaling, and in which we do not. This question is somewhat counterintuitive because we tend to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations – that I, as a male player, can engage in role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in contexts as diverse as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, and where not?
My second point: Once we have defined the contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches for increasing reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals whose underlying quality can be assessed simply by observing the signal itself. User-generated content in virtual worlds might be an example of a context where assessment signals play an increasingly important role (e.g. the richness of virtual items produced by a player as a signal of the user’s skills, wealth, and available time).
However, cyberspace is certainly an environment dominated by conventional signals – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – by lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler. A few examples to illustrate this point:
- Technology/code/design: increasing the sender’s punishment costs by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals whose reliability is often easier to assess; see also here, p. 9).
- Market-based approach: certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increase signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we hope for when introducing them – e.g. they might signal “I can afford this” rather than the desired “I’m willing to pay for the service because I have honest intentions.”]
- Law-based approach: law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules such as requiring online providers of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
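The cost/benefit reasoning behind these three levers can be sketched as a toy model (a minimal sketch of my own; the parameter names and the numbers are illustrative assumptions, not anything presented at the workshop): dishonest signaling pays only while its expected payoff – the reward if the signal is believed, minus the cost of producing the fake, minus the expected sanction (detection probability times penalty) – stays positive. Each regulatory mode above shifts one of these terms.

```python
from dataclasses import dataclass

@dataclass
class SignalingContext:
    """Toy parameters of a dishonest signaler's decision (illustrative only)."""
    reward: float           # payoff captured if the dishonest signal is believed
    production_cost: float  # cost of faking the signal (design/code lever)
    detection_prob: float   # chance the deception is caught (policing costs)
    penalty: float          # sanction if caught (law, norms, reputation loss)

    def dishonest_payoff(self) -> float:
        # Expected net payoff of sending a dishonest signal
        return self.reward - self.production_cost - self.detection_prob * self.penalty

    def dishonesty_pays(self) -> bool:
        return self.dishonest_payoff() > 0

# Baseline: cheap fakes, weak policing -> dishonesty pays (10 - 1 - 0.1*20 = 7)
baseline = SignalingContext(reward=10, production_cost=1, detection_prob=0.1, penalty=20)
print(baseline.dishonesty_pays())  # True

# Add a reputation system tied to persistent identities: detection and
# penalty both rise -> dishonesty no longer pays (10 - 1 - 0.5*30 = -6)
with_reputation = SignalingContext(reward=10, production_cost=1, detection_prob=0.5, penalty=30)
print(with_reputation.dishonesty_pays())  # False
```

The point of the sketch is only that the three approaches are substitutes at the margin: raising production costs, raising expected punishment, or lowering the reward all push the same expression below zero.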
In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility, and interplay when thinking about practical ways to increase signal reliability in cyberspace.
Finally, a third point in brief: Who will decide what degree of signal reliability is required in cyberspace? Who will choose among the different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If so, what is their legitimacy to make such design choices? Are users in power by voting with their feet – assuming that we will see the emergence of competition among different governance regimes, as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What is the role of governments, of law and regulation?
As always, comments appreciated.