Archive for the 'futurology' Category

How Can Law Foster Innovative Entrepreneurship? A Blueprint for a Research Program

I just got back from a conference on “Legal Institutions and Entrepreneurship” at Stanford, organized by the Gruter Institute for Law and Behavioral Science and the Kauffman Foundation. Experts from various disciplines, including biology, neuro-economics, zoology, and business studies, discussed the question of how innovative entrepreneurship (in the Schumpeterian sense) can be facilitated by legal institutions and by alternative institutional arrangements such as reputation systems.

In my contribution, I presented the idea of a “legal lab” analogous to, for instance, MIT’s Media Lab, which would be devoted to the study of innovations within the legal/regulatory system itself and would experiment with innovative institutional regimes (e.g. using virtual worlds such as Second Life as rich social environments). Together with my St. Gallen collaborators Herbert Burkert and Patrick Gruendler, as well as my colleagues and friends at the Berkman Center, John Palfrey and Colin Maclay, I’ve been working on this idea for some months, and I’m thrilled that several conference participants – including Judith Donath and Oliver Goodenough – will help us work towards a project proposal in the weeks to come.

In my formal presentation, I attempted to frame the main research topics at the heart of the law & entrepreneurship debate by offering an initial mental map, consisting of three related, but analytically distinct clusters of research.

1. The first cluster deals with a set of rather fundamental questions concerning the basic relationship between the legal system and entrepreneurship.

Traditionally – and in the US in particular – law has been perceived as a constraint on behavior. Entrepreneurs, in contrast, are in the rule-breaking business. Entrepreneurship is very much about creative anarchy, as Debora Spar has eloquently described it, and from this angle law is usually perceived as an obstacle to innovation and entrepreneurship. However, a number of scholars – most prominently Viktor Mayer-Schoenberger in a recent paper – have demonstrated that the relation and interaction between the legal system and entrepreneurship is more complex.

In my view, the relation is at least three-dimensional: (a) law can foster entrepreneurial innovation (e.g. by providing incentives for creativity, as IPR does), (b) it can stand in a neutral relationship to it, or (c) it may indeed hinder innovation (e.g. overly protective labor laws). Where law has a positive impact, it does so, as Mayer-Schoenberger argues, in its potential function as a leveler (e.g. lowering market entry barriers), protector (e.g. property rights, limitation of liability), or enforcer (esp. in the case of contractual arrangements).

2. A second area of research seeks to gain a deeper, much more granular understanding of the interactions among the legal system, innovation, and entrepreneurship.

Within this cluster, one might roughly distinguish between two research initiatives. First, there are attempts aimed at exploring the various elements of the legal ecosystem and their impact on entrepreneurship. Such attempts need to be sensitive to varying contexts, sectors, and cultures (e.g. the interplay among the elements differs between the ICT market and the biotech sector, and the picture may look very different in low-income vs. high-income countries).

One example in this category is an earlier Berkman project on digital entrepreneurship that focused on low-income countries. Based on case studies of national innovation policies and successful entrepreneurial projects, we identified the relevant elements and aspects of the legal ecosystem and evaluated their influence on entrepreneurship. We clustered the elements in two basic categories: substantive areas of law and legal process issues. Our big-picture take-away: When it comes to the impact of law on entrepreneurship, much depends on the specific economic, societal, and cultural circumstances.

The second debate within this research cluster relates to the different approaches and regulatory techniques that law can use – and their promises and limits when it comes to entrepreneurship. This includes research on different types and forms of regulation, such as direct vs. indirect regulation (e.g. regulation of capital markets), framework regulation, self-regulation, incentive-based regulation, command-and-control, etc. Cross-sectional challenges that occur when law seeks to regulate innovation and entrepreneurial activities also fall into this category, including questions such as the justification of legal intervention (e.g. fostering economic growth, encouraging spillover effects), prioritization (good legislation as a scarce resource!), timing, trade-offs (e.g. between innovation and risk prevention), and how to ensure that the legal system can learn.

3. The third cluster is less analytical and more design-oriented. Again, one can differentiate between two perspectives: On the one hand, how can existing legal institutions be optimized to foster entrepreneurship? On the other hand, what more radical innovations within the legal system itself might facilitate innovative entrepreneurship?

As far as the first aspect – optimization or improvement – is concerned, a number of law reform projects on both sides of the Atlantic are illustrative, all of which claim to facilitate entrepreneurship. Probably the hottest topic at the moment is the reform of the patent system in the U.S. Several tax reform projects in Europe are also linked to entrepreneurship. In corporate law, the creation of exemptions for smaller companies – aimed at reducing the regulatory burden, esp. in areas such as accounting and reporting obligations – is a further example.

But there’s a more fundamental design question lurking in the background: Are we working with the right assumptions when creating legal rules aimed at fostering entrepreneurship? Essentially, there are two black boxes when it comes to innovation and entrepreneurship:

(1) Regulators often have an over-simplified understanding of the creative processes that lead to innovation. A case in point is certainly the digitally networked economy, with the prominent phenomenon of collaborative creativity and the innovative potential of networks. Behavioral law & economics is particularly important in this context when we seek to understand the underlying mechanisms, and the findings are relevant, for instance, to IPR systems (with their traditional single inventor/author paradigm and linear innovation as archetype), but also to corporate law (e.g. providing fora for new, highly dynamic, network-based forms of collaboration).

(2) We don’t understand the entrepreneur’s calculus very well. Mayer-Schoenberger has made this point in the paper mentioned above: How important are predictability and legal certainty? How does risk evaluation really work in the case of innovative entrepreneurs? How can law shape these processes? This research cluster is less about substantive areas of law than about key variables, such as “incentives,” “risks,” and “flexibility,” which may be shaped by using different legal tools (ranging from safe harbor provisions to innovative licensing schemes).

4. Looking forward and in conclusion, I propose building an international network of researchers who work on the three clusters mentioned above. As a first step, it would be important to take stock of and share existing findings, on the basis of which a shared research agenda can be developed.

From a legal/regulatory perspective, a research agenda could focus on three tasks and topics, respectively:

  • First, drafting a number of case studies, based on which the interactions between legal institutions and entrepreneurship can be studied in greater detail across different settings and cultures. Macro-level case studies on national legislative programs and policies (e.g. Singapore, Hong Kong) would be supplemented by micro-level case studies about successful entrepreneurs and their projects, firms, etc.
  • Second, based on this research, the network could work towards a theory of law, innovation, and entrepreneurship, which would include both normative and analytical/methodological components.
  • Third, the research network could establish a “legal lab” that deals with innovation within the legal system itself (see above). Virtual worlds like SL could be used for experiments with alternative institutional designs and to measure their impact on innovation in complex environments.

The Future of Books in the Digital Age: Conference Report

Today, I attended a small but really interesting conference chaired by my colleagues Professor Werner Wunderlich and Prof. Beat Schmid from the Institute for Media and Communication Management, our sister institute here at the Univ. of St. Gallen. The conference was on “The Future of the Gutenberg Galaxy” and looked at trends and perspectives of the medium “book.” I learned a great deal today about the current state of the book market and future scenarios from a terrific line-up of speakers. It was a particular pleasure, for instance, to meet Prof. Wulf D. von Lucius, who teaches at the Univ. of Hohenheim but is also the Chairman of the Board of Carl Hanser Verlag, which will be publishing the German version of our forthcoming book Born Digital.

We covered a lot of terrain, ranging from definitional questions (what is a book? Here is a legal definition under Swiss VAT law, for starters) to open access issues. The focus of the conversation, though, was on the question of how digitization shapes the book market and, ultimately, whether the Internet will change the concept of the “book” as such. A broad consensus emerged among the participants (a) that digitization has a profound impact on the book industry, but that it’s still too early to tell what it means in detail, and (b) that the traditional book is very unlikely to be replaced by electronic formats (partly with reference to the superiority-of-design argument that Umberto Eco made some time ago).

I was the last speaker at the forum and faced the challenge of talking about the future of books from a legal perspective. Based on the insights we gained in the context of our Digital Media Project and the discussion at the forum, I came up with the following four observations, or theses:

Technological innovations – digitization in tandem with network computing – have changed the information ecosystem. From what we’ve learned so far, it’s safe to say that at least some of the changes are tectonic in nature. These structural shifts in the way in which we create, disseminate, access, and (re-)use information, knowledge, and entertainment have both direct and indirect effects on the medium “book” and the corresponding subsystem.

Some examples and precursors in this context: collaborative and evolutionary production of books (see Lessig’s Code 2.0); e-Books and online book stores (see ciando or Amazon.com); online access to books (see, e.g., libreka, Google Book Search, digital libraries); creative re-uses such as fan fiction, podcasts, and the like (see, e.g., LibriVox, Project Gutenberg, www.harrypotterfanfiction.com).

Law is responding to the disruptive changes in the information environment. It not only reacts to innovations related to digitization and networks, but has also the power to actively shape the outcome of these transformative processes. However, law is not the only regulatory force, and to gain a deeper understanding of the interplay among these forces is crucial when considering the future of books.

While fleshing out this second thesis, I argued that the reactions to innovations in the book sector may follow the pattern of ICT innovation described by Debora Spar in her book Ruling the Waves (Innovation – Commercialization – Creative Anarchy – Rules and Regulations). I used the ongoing digitization of books and libraries by Google Book Search as a mini-case study to illustrate the phases. With regard to the different regulatory forces, I referred to Lessig’s framework and used book-relevant examples such as DRM-protected eBooks (“code”), the use of collaborative creativity (“norms”), and book-price fixing (“markets”) to illustrate it. I also tried to emphasize that the law has the power to shape each of the forces mentioned above in one way or another (I used examples such as anti-circumvention legislation, the legal ban on book-price fixing, and mandatory copyright provisions that preempt certain contractual provisions).

The legal “hot-spots” when it comes to the future of the book in the digital age are the questions of distribution, access, and – potentially – creative re-use. The areas of law that are particularly relevant in this context are contracts, copyright/trademark law, and competition law.

Based on the discussion at the forum, I tried to map some of the past, current, and emerging conflicts among the different stakeholders of the ecosystem “book”. In the area of contract law, I focused on the relationship between authors and increasingly powerful book publishers that are tempted to use their unequal bargaining power to impose standard contracts on authors and transfer as many rights as possible (e.g. “buy out” contracts).

With regard to copyright law, I touched upon a small, but representative selection of conflicts, e.g. the relation between right holders and increasingly active users (referring to the recent hp-lexicon print-version controversy); the tensions between right holders and (new) Internet intermediaries (e.g. liability of platforms for infringements of their users in case of early leakage of bestsellers; e.g. interpretation of copyright limitations and exemptions in case of full-text book searches without permission of right holders); the tension between publishers and libraries (e.g. positive externalities of “remote access” to digital libraries vs. lack of exemptions in national and international copyright legislation – a topic my colleague Silke Ernst is working on); and the tension between right holders and educational institutions (with reference to this report).

As far as competition law is concerned, I sketched a scenario in which Google Book Search would reach a dominant market position with strong user lock-in due to network effects and would decline to digitize and index certain books or book programs, for instance due to operational reasons. Based on this scenario, I speculated about a possible response by competition law authorities (European authorities in mind) and raised the question whether Google Book Search could be regarded, at some point, as an essential facility. (In the subsequent panel discussion, Google’s Jens Redmer and I had a friendly back-and-forth on this issue.)

Not all of the recent legal conflicts involving the medium “book” are related to the transition from an analog/offline to a digital/online environment. Law continues to address book-relevant issues that are not new, but rather variations on traditional doctrinal themes.

I used the Michael Baigent et al. v. Random House Group decision by London’s High Court of Justice as one example (did the author of The Da Vinci Code infringe copyright by “borrowing” a theme from the earlier book Holy Blood, Holy Grail?), and the recent Esra decision by the German BVerfG as a second one (an author’s freedom of expression vs. the privacy rights of a person, in a case where it was all too obvious that a figure in a novel was a real and identifiable person and where intimate details of that person were disclosed in the book).

Unfortunately, we didn’t have much time to discuss several other interesting issues that were brought up relating to the generation born digital and its use of books – and the consequences of kids’ changed media usage in a changed media environment, e.g. with regard to information overload and the quality of information. These are topics, to be sure, that John Palfrey and I are addressing in our forthcoming book.

In sum, an intense, but very inspiring conference day.

Update: Dr. David Weinberger, among the smartest people I’ve ever met, has just released a great article on ebooks and libraries.

Social Signaling Theory and Cyberspace

Yesterday, I attended a Berkman workshop on “Authority and Authentication in Physical and Digital Worlds: An Evolutionary Social Signaling Perspective.” Professor Judith Donath from the MIT Media Lab and Berkman Center Senior Fellow Dr. John Clippinger presented fascinating research on trust, reputation, and digital identities from the perspective of signaling theory – a theory that was developed in evolutionary biology and has also played an important role in economics. I had the pleasure of serving as a respondent. Here are the three points I tried to make (building upon fruitful prior exchanges with Nadine Blaettler at our Center).

The starting point is the observation that social signals – aimed at (a) indicating a certain hidden quality (e.g. “would you be a good friend?,” “are you smart?”) and (b) changing the beliefs or actions of their recipients – play a vital role in defining social relations and structuring societies. Viewed from that angle, social signals are dynamic building blocks of what we might call a “governance system” of social spaces, both offline and online. (In the context of evolutionary biology, Peter Kappeler provides an excellent overview of this topic in his recent book, in German.)

Among the central questions of signaling theory is the puzzle of what keeps social signals reliable, and of what mechanisms we have developed to ensure the “honesty” of signals. These questions are obviously extremely relevant from an Internet governance perspective – especially (but not only) given the enormous scale of online fraud and identity theft that occurs in cyberspace. However, when applying insights from social signaling theory to cyberspace governance issues, it is important to sort out in which contexts we have an interest in signal reliability and honest signaling, and in which we do not. This question is somewhat counterintuitive because we seem to assume that honest signals are always desirable from a societal viewpoint. But take the example of virtual worlds like Second Life. Isn’t it one of the great advantages of such worlds that we can experiment with our online representations – e.g. that I as a male player can engage in role-play and experience my (second) life as a girl (female avatar)? In fact, we might have a normative interest in low signal reliability if it serves goals such as equal participation and non-discrimination. So, my first point is that we face an important normative question when applying insights from social signaling theory to cyberspace: What degree of signal reliability is desirable in contexts as diverse as dating sites, social networking sites, virtual worlds, auction web sites, blogs, tagging sites, online stores, online banking, health, etc.? Where do we as stakeholders (users, social networks, business partners, intermediaries) and as a society at large care about reliability, and where not?

My second point: Once we have defined contexts in which we have an interest in high degrees of signal reliability, we should consider the full range of strategies and approaches to increase reliability. Here, much more research needs to be done. Mapping different approaches, one might start with the basic distinction between assessment signals and conventional signals. One strategy might be to design spaces and tools that allow for the expression of assessment signals, i.e. signals where the quality they represent can be assessed simply by observing the signal. User-generated content in virtual worlds might be an example of a context where assessment signals might play an increasingly important role (e.g. the richness of virtual items produced by a player as a signal for the user’s skills, wealth, and available time).

However, cyberspace is certainly an environment where conventional signals dominate – a type of signal that lacks an inherent connection between the signal and the quality it represents and is therefore much less reliable than an assessment signal. Here, social signaling theory suggests that the reliability of conventional signals can be increased by making dishonest signaling more expensive (e.g. by increasing the sender’s production costs and/or minimizing the rewards for dishonest signaling, or – conversely – lowering the recipient’s policing/monitoring costs). In order to map different strategies, Lessig’s model of four modes of regulation might be helpful. Arguably, each ideal-type approach – technology, social norms, markets, and law – could be used to shape the cost/benefit equilibrium of a dishonest signaler. A few examples to illustrate this point:

  • Technology/code/design: Increasing the sender’s punishment costs by building efficient reputation systems based on persistent digital identities; using aggregation and syndication tools to collect and “pool” experiences among many users to lower policing costs; lowering the transaction costs of match-making between a user who provides a certain level of reliability and a transaction partner who seeks that level of reliability (see, e.g., Clippinger’s idea of an ID Rights Engine, or consider search engines on social networking sites that allow users to search for “common ground” signals whose reliability is often easier to assess; see also here, p. 9).
  • Market-based approach: Certification might be a successful signaling strategy – see, e.g., this study on the online comic book market. Registration costs, e.g. for social networking or online dating sites (see here, p. 74, for an example), might be another market-based approach to increasing signal reliability (a variation on it: the creation of economic incentives for new intermediaries – “YouTrust” – that would guarantee certain degrees of signal reliability). [During the discussion, Judith made the excellent point that registration costs might not signal what we would hope for when introducing them – e.g. they might signal “I can afford this” rather than the desired “I’m willing to pay for the service because I have honest intentions.”]
  • Law-based approach: Law can also have an impact on the cost/benefit equilibrium of the interacting parties. Consider, e.g., disclosure rules, such as requiring the online provider of goods to provide test results, product specifications, financial statements, etc.; warranties and liability rules; or trademark law in the case of online identity (see Professor Beth Noveck’s paper on this topic). Similarly, the legal system might change the incentives of platform providers (e.g. MySpace, YouTube) to ensure a certain degree of signal reliability. [Professor John Palfrey pointed to this decision as a good illustration of this question of intermediary liability.]
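The cost/benefit framing behind these strategies can be made concrete with a toy model (a purely illustrative sketch of my own, not part of the original talk; all names and numbers are hypothetical): a rational dishonest signaler cheats only when the expected payoff of the dishonest signal is positive, and each regulatory modality – code, norms, markets, law – shifts one of the terms in that calculation.

```python
# Toy model of a dishonest signaler's calculus (illustrative only).
# Reliability-enhancing interventions work by pushing expected_payoff <= 0.

from dataclasses import dataclass


@dataclass
class Signaler:
    reward_if_believed: float  # payoff from a successful dishonest signal
    production_cost: float     # cost of producing the signal at all
    detection_prob: float      # chance recipients detect the deception
    punishment_cost: float     # reputational/legal penalty if detected

    def expected_payoff(self) -> float:
        # Expected value of sending a dishonest signal.
        return ((1 - self.detection_prob) * self.reward_if_believed
                - self.production_cost
                - self.detection_prob * self.punishment_cost)

    def will_cheat(self) -> bool:
        return self.expected_payoff() > 0


# Baseline: cheap, rarely detected dishonest signal -> cheating pays.
baseline = Signaler(reward_if_believed=10, production_cost=1,
                    detection_prob=0.1, punishment_cost=5)
assert baseline.will_cheat()  # expected payoff: 9 - 1 - 0.5 = 7.5

# A code-based intervention (e.g. a persistent reputation system) raises
# both detection probability and punishment cost -> cheating stops paying.
with_reputation = Signaler(reward_if_believed=10, production_cost=1,
                           detection_prob=0.6, punishment_cost=15)
assert not with_reputation.will_cheat()  # expected payoff: 4 - 1 - 9 = -6
```

The point of the sketch is merely that the four modalities act on different variables: design and law mostly raise `detection_prob` and `punishment_cost`, while market mechanisms such as registration fees raise `production_cost`.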

In sum, my second point is that we should start mapping the different strategies, approaches, and tools and discuss their characteristics (pros/cons), feasibility and interplay when thinking about practical ways to increase signal reliability in cyberspace.

Finally, a third point in brief: Who will make the decisions about the degrees of required signal reliability in cyberspace? Who will choose among the different reliability-enhancing mechanisms outlined above? Is it the platform designers, the Linden Labs of this world? If so, what is their legitimacy to make such design choices? Are users in power by voting with their feet – assuming that we’ll see the emergence of competition among different governance regimes, as KSG Professor Viktor Mayer-Schoenberger has argued in the context of virtual world platform providers? What’s the role of governments, of law and regulation?

As always, comments appreciated.

Supreme Futurology

In yesterday’s New York Times Magazine, Jeffrey Rosen published a terrific piece on John G. Roberts’s confirmation hearings, suggesting that the Senate should ask the questions that will matter in 2015. The scenarios outlined by Rosen include: brain fingerprinting and the future of privacy rights; genetic screening and the future of personal autonomy; DNA and the future of affirmative action; and – in our context particularly interesting – “property, free expression and the right to tinker,” quoting Larry Lessig and Ed Felten, among others.

New Berkman Report on Digital Media Industry

The Berkman Center’s Digital Media Project team has released an in-depth analysis of the impact of policy choices on emerging business models in the music and film industries. Here’s the link to the paper, along with the abstract:

Content and Control: Assessing the Impact of Policy Choices on Potential Online Business Models in the Music and Film Industries

The online environment and new digital technologies threaten the viability of the music and film industries’ traditional business models. The industries have responded by seeking government intervention, among other means, to protect their traditional models as well as by developing new models specifically adapted to the online market. Industry activity and public debate have focused on three key policy areas related to copyright holders’ control of content: technical interference with and potential liability of P2P services; copyright infringers’ civil and criminal liability; and legal reinforcement of digital rights management technologies (DRM).

This paper seeks to support policymakers’ decision making by delineating the potential consequences of policy actions in these areas. To do so, it assesses how such action would impact relevant social values and four business models representative of current and emerging attempts to generate viable revenues from digital media. The authors caution that government intervention is currently premature because it is unlikely to strike an appropriate balance between achieving industry goals while supporting other social values, such as consumer rights, the diversity of available content, and technological innovation.

Special thanks — and congratulations — to Derek Slater and Meg Smith of the Berkman team for their work.

Palfrey on Cyberlaw & Digital Media

Berkman Center Executive Director John Palfrey lectured earlier today at Cornell University’s Computer Policy and Law Program. In the first session, he made a strong case for why it does, in fact, make sense to teach “cyberlaw” rather than the “law of the horse.” John started with an analysis of three contemporary legal and regulatory issues that are Internet-specific: spam, the digital media crisis, and VoIP. From there, he moved to a more abstract level and discussed some of the basic characteristics – phenomena such as large-scale infringements, uncertainty surrounding the applicability of traditional legal doctrines such as fair use, high costs of enforcement and coordination, and the global reach of the medium, among others – which make the law of the Internet (at least in part) different from other areas of law. John also used variations on Lessig’s theme of the four modalities of regulation to illustrate what makes Internet law special.

In the second lecture, John Palfrey offered a thoughtful and comprehensive overview of the current digital media crisis. Starting with the Napster saga, he moved forward to the current state of affairs, discussing from a comparative law perspective, among other things, the Berkman Center’s iTunes case study and recent case law at the intersection of copyright and contract law as well as technological protection measures. Finally, John discussed possible scenarios for the future of digital media.

Both lectures provide a great opportunity to get an expert’s overview of where cyberlaw stands and what some of today’s hottest topics are; highly recommended, also to audiences abroad. And even if you are a scholar working in the same field, you’ll enjoy Palfrey’s presentations, since they offer one of the increasingly rare occasions to re-think some of the fundamental assumptions and concepts of cyberlaw. Thanks, John!
