
Archive for the 'web 2.0' Category

“Born Digital” and “Digital Natives” Project Presented at OECD-Canada Foresight Forum

Here in Ottawa, I had the pleasure to speak at the OECD Technology Foresight Forum of the Information, Computer and Communications Policy Committee (ICCP) on the participative web – a forum aimed at contributing to the OECD Ministerial Meeting “The Future of the Internet Economy” that will take place in Seoul, Korea, in June 2008.

My remarks (what follows is a summary; a full transcript is available, too) were based on our joint and ongoing Harvard–St. Gallen research project on Digital Natives and included some of the points my colleague and friend John Palfrey and I are making in our forthcoming book “Born Digital” (Basic Books, 2008).

I started with the observation that increased participation is one of the features at the very core of the lives of many Digital Natives. Since most of the speakers at the Forum were putting emphasis on creative expression (like making mash-ups, contributing to Wikipedia, or writing a blog), I tried to make the point that participation needs to be framed in a broad way and includes not only “semiotic democracy”, but also increased social participation (cyberspace is a social space, as Charlie Nesson has argued for years), increased opportunities for economic participation (young digital entrepreneurs), and new forms of political expression and activism.

Second, I argued that the challenges associated with the participative web go far beyond intellectual property rights and competition law issues – two of the dominant themes of the past years as well as at the Forum itself. I gave a brief overview of the three clusters we’re currently working on in the context of the Digital Natives project:

  • How does the participatory web change the very notion of identity, privacy, and security of Digital Natives?
  • What are its implications for creative expression by Digital Natives and the business of digital creativity?
  • How do Digital Natives navigate the participative web, and what are the challenges they face from an information standpoint (e.g. how to find relevant information, how to assess the quality of online information)?

The third argument, in essence, was that there is no (longer a) simple answer to the question “Who rules the Net?”. We argue in our book (and elsewhere) that the challenges we face can only be addressed if all stakeholders – Digital Natives themselves, peers, parents, teachers, coaches, companies, software providers, regulators, etc. – work together and make respective contributions. Given the purpose of the Forum, my remarks focused on the role of one particular stakeholder: governments.

While this is still research in progress, it seems plain to us that governments may play a very important role in one of the clusters mentioned above, but only a limited one in another. So what’s much needed is a case-by-case analysis. I briefly illustrated the different roles of governments in areas such as

  • online identity (currently no obvious need for government intervention, but “interoperability” among ID platforms on the “watch-list”);
  • information privacy (important role of government, though probably less in the form of new laws than of better implementation and enforcement as well as international coordination and standard-setting);
  • creativity and business of creativity (use power of market forces and bottom-up approaches in the first place, but role of governments at the margins, e.g. using leeway when legislating about DRM or law reform regarding limitations and exceptions to copyright law);
  • information quality and overload (only limited role of governments, e.g. by providing quality minima and/or a digital service public; emphasis on education, learning, media & information literacy programs for kids).

Based on these remarks, we identified some trends (e.g. multiple stakeholders shape our kids’ future online experiences, which creates the need for collaboration and coordination) and closed with some observations about the OECD’s role in such an environment, proposing four functions: awareness raising and agenda setting; knowledge creation (“think tank”); international coordination among various stakeholders; alternative forms of regulation, incl. best practice guides and recommendations.

Berkman Fellow Shenja van der Graaf was also speaking at the Forum (transcripts here), and Miriam Simun presented our research project at a stand.

Today and tomorrow, the OECD delegates are discussing the take-aways of the Forum behind closed doors. Given the broad range of issues covered at the Forum, it will be interesting to see which items finally make it onto the agenda of the Ministerial Conference (IPR, intermediary liability, and privacy are likely candidates.)

New OECD Must-Read: Policy Report On User-Created Content

The OECD has just released what – in my view – is the first thorough high-level policy report on user-created content. (Disclosure: I had the pleasure to comment on draft versions of the report.) From the introduction:

The concept of the ‘participative web’ is based on an Internet increasingly influenced by intelligent web services that empower the user to contribute to developing, rating, collaborating on and distributing Internet content and customising Internet applications. As the Internet is more embedded in people’s lives ‘users’ draw on new Internet applications to express themselves through ‘user-created content’ (UCC).

This study describes the rapid growth of UCC, its increasing role in worldwide communication and draws out implications for policy. Questions addressed include: What is user-created content? What are its key drivers, its scope and different forms? What are new value chains and business models? What are the extent and form of social, cultural and economic opportunities and impacts? What are associated challenges? Is there a government role and what form could it take?

No doubt, the latest OECD digital content report (see also earlier work in this context and my comments here) by Sacha Wunsch-Vincent and Graham Vickery of the OECD’s Directorate for Science, Technology and Industry is a must-read that provides plenty of “food for thought” – and probably for controversy as well, as one might assume.

Managing Corporate Risks in an E-Environment

My colleague Daniel Haeusermann and I just released a new paper entitled “E-Compliance: Towards a Roadmap for Effective Risk Management.” In the article, which is largely based on consulting work we’ve been doing, we argue that the widespread use of digital communication technology on the part of business organizations leads to new types of challenges when it comes to the management of risks at the intersection of law, technology, and the marketplace. In order to effectively manage these challenges and associated risks in diverse areas such as security, privacy, consumer protection, IP, and content governance, we call for an integrated and comprehensive compliance concept in response to the structural and substantive peculiarities of the digital environment in which corporations – both in and outside the dot-com industry – operate today. See also this post. The conclusion section of the paper reads as follows:

Through significant efforts, the legal system has adjusted to the changes in the information and communications technology of daily corporate life—changes at the intersection of the market, technology, and law. Organizations must make adjustments on their part as well in order to deal with the consequences resulting from these changes in the legal system. The observation that led to this essay was that these adjustments represent a greater challenge than the already decreasing entropy surrounding concepts such as “e-commerce law” or “cyberlaw” would suggest. Our initial foray into the concept, characteristics, responsibilities and organizational guiding principles of e-Compliance confirms this observation.

E-Compliance, as discussed in this article, is confronted with the phenomenon of a close interconnection between law and technology, a prominent dynamization of the law, massive internationalization of issues and legal problems, as well as a strong increase in the significance of soft law. These characteristics, which in part may also apply to traditional areas of compliance such as financial market regulation, call in their interplay for the further development of compliance concepts as well as adaptation of the affected aspects of corporate organization. Due to the increasing amalgamation of corporate organizational nexus and ICT, the symbiotic relations between traditional compliance and e-Compliance will be increasingly amplified. The view that e-Compliance represents merely a single risk area among the many of compliance is therefore outdated in our opinion. E-Compliance is actually a multidimensional and multidisciplinary task, although there are certainly areas of law that are particularly affected by digitization (or also which particularly impact digitization) and therefore are of particular importance for the field of e-Compliance.

Thus, in conclusion, the authors do not posit a special “e-Sphere” within or without existing compliance departments. Rather, we argue for an integrated and comprehensive compliance concept that appropriately makes allowance for the structural and substantive peculiarities of e-Compliance as outlined in this essay and stays abreast with the pace of digitization.

Please contact Daniel or me if you have comments.

Social Signaling Theory and Cyberspace

Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory developed in evolutionary biology that has also played an important role in economics. I had the pleasure to serve as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book [in German].)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable. And what are the mechanisms that we have developed to ensure “honesty” of signals? It is obvious that these questions are extremely relevant from an Internet governance perspective – especially (but not only) vis-à-vis the enormous scale of online fraud and identity theft that occurs in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in what contexts we have an interest in signal reliability and honest signaling, respectively, and where not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations, e.g., that I as a male player can engage in a role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in very diverse contexts such as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals might play an increasingly important role (e.g. richness of virtual items produced by a player as a signal for the user’s skills, wealth and available time.)

However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler. A few examples to illustrate this point (a toy cost/benefit sketch follows after the list):

  • Technology/code/design: Increasing the sender’s punishment costs by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users in order to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals, whose reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increasing signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” rather than the desired “I’m willing to pay for the service because I have honest intentions”.]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules such as requiring online providers of goods to publish test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of the question of intermediary liability.]
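
To make the cost/benefit reasoning above a bit more concrete, here is a minimal, purely illustrative sketch – my own toy model, not something presented at the workshop – of when dishonest conventional signaling “pays off”. The parameters (reward, production cost, detection probability, penalty) are hypothetical stand-ins for the levers mentioned above: reputation systems raise the detection probability, registration fees raise the production cost, and liability rules raise the penalty.

```python
# Toy expected-payoff model of dishonest conventional signaling.
# All names and numbers are illustrative assumptions, not workshop material.

def dishonest_signal_payoff(reward: float,
                            production_cost: float,
                            detection_prob: float,
                            penalty: float) -> float:
    """Expected net payoff of sending a dishonest signal in a single interaction."""
    expected_gain = (1 - detection_prob) * reward                # payoff if the bluff works
    expected_loss = production_cost + detection_prob * penalty   # signaling cost plus expected punishment
    return expected_gain - expected_loss


def signaling_stays_honest(reward: float, production_cost: float,
                           detection_prob: float, penalty: float) -> bool:
    """Stylized honesty condition: dishonesty does not pay off in expectation."""
    return dishonest_signal_payoff(reward, production_cost, detection_prob, penalty) <= 0


# Weak policing, no persistent reputation: cheating pays.
print(signaling_stays_honest(reward=100, production_cost=5, detection_prob=0.1, penalty=20))   # False

# A reputation system raises detection and punishment costs: cheating no longer pays.
print(signaling_stays_honest(reward=100, production_cost=5, detection_prob=0.6, penalty=80))   # True
```

Each of the four regulatory modes listed above can be read as a way of moving one of these parameters rather than as a wholly separate mechanism.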

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will make the choice among different reliability-enhancing mechanisms outlined above? Is it the platform designer, the Linden Labs of this world? If yes, what is their legitimacy to make such design choices? Are the users in power by voting with their feet – assuming that we’ll see the emergence of competition among different governance regimes as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What’s the role of governments, of law and regulation?

As always, comments appreciated.

The Mobile Identity Challenge – Some Observations from the SFO mID-Workshop

I’m currently in wonderful San Francisco, attending the Berkman Center’s Mobile Identity workshop – a so-called “unconference” — led by my colleagues Doc Searls, Mary Rundle, and John Clippinger. We’ve had very interesting discussions so far, covering various topics ranging from Vendor Relationship Management to mobile identity in developing countries.

In the context of digital identity in general and user-centric identity management systems in particular, I’m especially interested in the question of the extent to which the issues related to mobile ID are distinct from the issues we’ve been exploring in the browser-based, traditionally wired desktop environment. Here’s my initial take on it:

Although mobile identity is best understood as part of the generic concept of digital identity, and despite the fact that identity as such has some degree of mobility by definition, I would argue that mobile (digital) identity has certain characteristics that might (or should) have an impact on the ways we frame and address the identity challenges in this increasingly important part of the digitally networked environment. These characteristics, by and large, may be mapped onto four layers (a rough, purely illustrative data-model sketch follows after the list).

  • Hardware layer: First and most obviously, mobile devices are characterized by the fact that we carry them with us – from location to location. This physical dimension of mobility has a series of implications for identity management, especially at the logical and content layers (see below), but also with regard to vulnerabilities such as theft and loss. In addition, the devices themselves have distinct characteristics – relatively small screens and keyboards, limited computing power, and SIM cards, among other things – that might shape the design of identity management solutions.
  • Logical layer: One of the consequences of location-to-location mobility and multi-mode devices is that identity issues have to be managed in a heterogeneous wireless infrastructure environment, which includes multiple providers of different-generation cellular networks, public and private WiFi, Bluetooth, etc., using different technologies and standards and operating under different incentive structures. This links back to last week’s discussion about ICT interoperability.
  • Content layer: The characteristics of mobile devices have ramifications at the content layer. Users of mobile devices are limited in what they can do with them. Arguably, mobile device users tend to carry out rather specific information requests, transactions, tasks, or the like – as opposed to open-ended, vague, and time-consuming “browsing” activities. This demand has been met on the supply side by application and service providers offering location-based and context-specific content to mobile phone users. This development, in turn, has increased the exchange of location data and contextual information between users/mobile devices and application/service providers. Obviously, the increased relevance of such data adds another dimension to the digital ID and privacy discussion.
  • Behavioral layer: The previous remarks also make clear that the different dimensions of mobility and the characteristics of mobile devices lead to different uses when compared to desktop-like devices. The type and amount of personal information, for example, that is disclosed in a mobile setting is likely to be distinct from other online settings. Furthermore, portable devices are lost (or stolen) more often than non-portable ones. These “behavioral” characteristics might vary across cultural contexts – a fact that might add to the complexity of mobile identity management (Colin Maclay, for instance, pointed out that sharing cell phones is a common practice in low-income countries).
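
Purely for illustration, here is a rough sketch of how the four layers might be represented as a simple data model; all class and field names are my own hypothetical labels and simply mirror the characteristics listed above, not any actual identity management system discussed at the workshop.

```python
# Illustrative data model of the four layers of mobile identity sketched above.
# All class and field names are hypothetical assumptions for illustration only.

from dataclasses import dataclass, field
from typing import List


@dataclass
class HardwareLayer:
    device_type: str                     # e.g. "feature phone", "smartphone"
    has_sim_card: bool = True
    small_screen_and_keyboard: bool = True
    limited_computing_power: bool = True
    theft_or_loss_risk: str = "high"     # portable devices are lost or stolen more often


@dataclass
class LogicalLayer:
    networks: List[str] = field(default_factory=lambda: ["cellular", "public WiFi", "Bluetooth"])
    operators: List[str] = field(default_factory=list)   # heterogeneous providers, standards, incentives


@dataclass
class ContentLayer:
    location_based_services: bool = True
    context_data_shared: List[str] = field(default_factory=lambda: ["location", "time", "task context"])


@dataclass
class BehavioralLayer:
    typical_uses: List[str] = field(default_factory=lambda: ["specific lookups", "transactions"])
    device_shared_among_users: bool = False   # e.g. a common practice in some low-income countries


@dataclass
class MobileIdentityContext:
    """An identity management design would need to account for all four layers together."""
    hardware: HardwareLayer
    logical: LogicalLayer
    content: ContentLayer
    behavior: BehavioralLayer
```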

Today, I got the sense that the technologists in the room have a better understanding of how to deal with the characteristics of mobile devices when it comes to digital identity management. At least it appears that the technologists have identified both the opportunities and the challenges associated with these features. I’m not sure, however, whether we lawyers and policy people in the room have fully understood the implications of the above-mentioned characteristics, among others, for identity management and privacy. It seems plain that many of the questions we’ve been discussing in the digital ID context get even more complicated as we move towards ubiquitous computing. (One final note in this context: I’m not sure whether we focused too much on mobile phones at this workshop – ID-relevant components of the mobile space, such as RFID tags, have remained largely unaddressed, at least in the sessions I attended.)

What Is Web 2.0? Interviews with CEOs

Techcrunch’s Michael Arrington recently interviewed CEOs and executives of start-ups about Web 2.0. Participating in the discussion were Aaron Cohen (Bolt), Scott Milener and Steven Lurie (Browster), Keith Teare (edgeio), Steven Marder (Eurekster), Joe Kraus (JotSpot), Jeremy Verba (Piczo), Auren Hoffman (Rapleaf), Chris Alden (Rojo), Gautam Godhwani (Simply Hired), Jonathan Abrams (Socializr), David Sifry (Technorati), Matt Sanchez (Video Egg) and Michael Tanne (Wink). They explored questions such as: What is Web 2.0? Are we in a bubble? What are the business models that will work on the web today? What is the role of publishers in a user-generated world? How important and how big is the early adopter crowd?

The video is here.

Slater and McGuire on (Taste) Sharing

Mike McGuire and Derek Slater have released an interesting Gartner/Berkman Center report entitled “Consumer Taste Sharing Is Driving the Online Music Business and Democratizing Culture” that analyzes the extent of people’s use of consumer-to-consumer recommendation tools such as playlists. Here’s their prediction:

By 2010, 25 percent of online music store transactions will be driven directly from consumer-to-consumer taste-sharing applications, such as playlist publishing and ranking tools built into online music stores or external sites with links to stores.

Check also Derek Slater’s playlist on this topic, and his comments here.

Top 10 Sources Up and Runnin’

Check it out – Top10 Sources has launched today. It’s fascinating, it’s promising, and it’s also somewhat surprising since it reintroduces editors in the old sense in the age of decentralized information production. Here’s the brief description from the homepage:

Top10 Sources is a directory of sites that bring you the freshest, most relevant content on the Web. We know it’s impossible for anyone to keep track of the 20 million+ online sources of information. So our editors search Web 2.0 — blogs, podcasts, wikis, news sites, and every kind of syndicated sources online — by hand. Our Top10 lists are updated frequently as great new sources come online.

I’m very interested in the further development of Top 10 Sources, esp. against the backdrop of my information quality research. In any event, congrats – as well as good luck – to my friend John Palfrey and his team, including Wendy.

More on the Controversial OECD Music Report

Check out the Berkman Center’s website for reactions. It turns out that the entertainment industry still does not like the study. We’ve also made public our comments on the draft OECD report on digital music.

OECD Music Industry Report

Find here a terrific report by the OECD on the digital music industry (pre-release.) The report includes, inter alia, references to Terry Fisher’s seminal book Promises to Keep as well as to the Berkman Center’s iTunes case study.

The report concludes that online music distribution will grow significantly over the next few years, will force the music industry to reconsider its business models, and will continue to pose regulatory challenges to governments. The study includes a detailed impact analysis of digital music distribution on artists, consumers, the record industry, and new intermediaries.

The OECD underlines the positive potential of digital distribution, both as a new business model and a cultural phenomenon. Its report further concludes that Internet-based piracy may be reduced if licensed file-sharing and new forms of superdistribution evolve.

The study, part of the OECD Project on Digital Broadband Content, is the outcome of work involving a wide range of stakeholders, including many governments. It’s among the first roadmaps exploring how public policy should be re-evaluated in this space.

The Berkman Center’s Digital Media Team was invited to comment on a draft version of this report. Today, we congratulate the study’s authors on a thorough multi-stakeholder analysis, written in a challenging environment.

Stay tuned.

Update: The OECD report is also featured in the latest edition of The Economist (subscription required.) See also WIRED News with reactions from IFPI.
