
Archive for the 'digital media' Category

Born Digital: How To Deal With Online Aggression?


Almost simultaneously with the release of Born Digital in the U.S., the Swiss conservative party CVP has made headlines with a position paper that outlines actions to proactively deal with the problems associated with online aggression in Switzerland. The strategy proposed by the party focuses on youth and addresses Internet violence (including cyberbullying) in general and violent games in particular. Among the measures suggested in the position paper are:

  • Introduction of a nation-wide, harmonized rating and classification system for movies, games and MMS services, analogous to the Dutch NICAM model;
  • Amendment of the Swiss Penal Code, sec. 135, in order to ban the sale and making available of games with violent or (other) adult content to children and teenagers;
  • Establishment of a federal Media Competence Center for electronic media that would administer the classification system, run information and prevention campaigns to educate parents and teachers, and study online addiction, among other things;
  • Commission and release of a study on cyberbullying by the Swiss Federal Council;
  • Formalized collaboration among the Swiss cantons in order to protect youth from violent content;
  • Mandatory inclusion of media literacy classes in the curriculum at public schools (including sessions on the effects of extensive media use);
  • Information campaign to educate parents and teachers;
  • Conversations between teachers and parents in cases where schoolchildren underperform due to excessive media use.

We’ve discussed several of these strategies in Born Digital, chapter 9. The summary paragraph of our analysis reads as follows:

The best regulators of violence in our society, whether online or not, are parents and teachers, because they are the people closest to Digital Natives themselves. Parents and teachers have the most time with kids—and, ideally, their trust. As in other contexts, parents and teachers need to start by understanding what their Digital Natives are up to. From there, it’s important to set limits, especially for young children, on gaming and exposure to violent activities. Parents and educators can and should work overtime to channel the interest of Digital Natives in interactive media into positive directions. But companies need to step up, too, and to exercise restraint in terms of what they offer kids. And despite the hard free-speech questions implicated by these kinds of interventions, the government needs to be involved, too. As we’ve emphasized throughout the book, the answer isn’t to shut down the technologies or reactively to blame video games for every tragedy, but rather to teach our kids how to navigate the complex, fluid environments in which they are growing up. That’s easier said than done, but we don’t have much choice but to take this problem head on. The stakes could not be higher.

With regard to the role of governments – the most controversial issue in the current debate about the Swiss party’s position paper as well – we write:

Governments can play a role through educational efforts, whether via schools or at the level of general public awareness. Governments can also help to foster collaborative efforts by public and private parties to work to reduce unwanted exposure by young kids to extreme violence. The Safer Internet Plus program, sponsored by the European Commission, is one such initiative that combines a series of helpful educational and leadership functions by governments. If all else fails, governments should restrict the production and dissemination of certain types of violent content in combination with instituting mandatory, government-based ratings of these materials. The production and distribution of extreme types of violent content—including, for instance, so-called snuff movies, in which people are filmed being killed—can and should be banned by law. Similar restrictions on access to such materials, based on age ratings, are in place in Germany, the United Kingdom, Canada, and Australia, among other places. These types of controls must be very narrowly tailored to pass constitutional muster in the United States, appropriately enough, given the force and breadth of First Amendment protections. We already have most of the legal tools needed to mitigate the effects of this problem, but rarely are these tools used effectively across the relevant platforms that mediate kids’ exposure.

Interestingly, the position paper presented by the Swiss CVP (disclosure: I am not a member) comes pretty close to what we have envisioned in Born Digital. Obviously, though, the devil is in the details, and the CVP’s proposal will have to be analyzed much more closely over the weeks and months to come. In any event, the CVP certainly deserves credit for starting a public conversation about violence in the digital society and for making a strong case that we all share responsibility.

Study Released: ICT Interoperability and eInnovation


Today in Washington, D.C., John Palfrey and I released a White Paper and three case studies on ICT Interoperability and eInnovation (project homepage here). The papers are the result of a joint project between Harvard’s Berkman Center and the Research Center for Information Law at St. Gallen, sponsored by Microsoft. Our research focused on three case studies in which the issues of interoperability and innovation are front and center: digital rights management in online and offline music distribution models; various models of digital identity systems (how computing systems identify users to provide the correct level of access and security); and web services (in which computer applications or programs connect with each other over the Internet to provide specific services to customers).
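
Of the three, web services are the easiest to make concrete in a few lines of code. Below is a minimal sketch of the pattern – one program requesting a specific service from another over the Internet through a shared HTTP/JSON interface. The endpoint URL and response fields are hypothetical illustrations, not drawn from the case study itself:

```python
# Minimal sketch of the web-services pattern: one application calling
# another over the Internet via a shared HTTP/JSON interface.
# The endpoint and field names below are hypothetical.
import json
import urllib.request

def fetch_track_metadata(track_id: str) -> dict:
    """Ask a (hypothetical) music service for metadata about a track."""
    url = f"https://api.example-music.com/v1/tracks/{track_id}"
    with urllib.request.urlopen(url) as response:
        # Interoperability here rests on shared conventions alone:
        # HTTP as transport, JSON as data format, a documented URL scheme.
        return json.load(response)

if __name__ == "__main__":
    metadata = fetch_track_metadata("12345")
    print(metadata.get("title"), "-", metadata.get("artist"))
```

The point of the sketch is that neither program needs to know anything about the other’s internals; agreement on the interface alone is what makes the two systems interoperable.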

The core finding is that increased levels of ICT interoperability generally foster innovation. But interoperability also contributes to other socially desirable outcomes. In our three case studies, we observed its positive impact on consumer choice, ease of use, access to content, and diversity, among other things.

The investigation reached other, more nuanced conclusions:

  • Interoperability does not mean the same thing in every context and, as such, is not always good for everyone all the time. For example, if one wants completely secure software, that software should probably have limited interoperability. In other words, there is no one-size-fits-all way to achieve interoperability in the ICT context.
  • Interoperability can be achieved by multiple means, including the licensing of intellectual property, product design, collaboration with partners, development of standards, and governmental intervention. The easiest way to make a product from one company work well with a product from another company, for instance, may be for the companies to cross-license their technologies. But in a different situation, another approach (collaboration or open standards) may be more effective and efficient.
  • The best path to interoperability depends greatly upon context and on which subsidiary goals matter most, such as prompting further innovation, providing consumer choice or ease of use, or spurring competition in the field.
  • The private sector generally should lead interoperability efforts. The public sector should stand by either to lend a supportive hand or to determine if its involvement is warranted.

In the White Paper, we propose a process constructed around a set of guidelines to help businesses and governments determine the best way to achieve interoperability in a given situation. This approach may have policy implications for governments.

  • Identify what the actual end goal or goals are. The goal is not interoperability per se, but rather something to which interoperability can lead, such as innovation or consumer choice.
  • Consider the facts of the situation. The key variables to consider include time, the maturity of the relevant technologies and markets, and user practices and norms.
  • In light of these goals and facts of the situation, consider possible options against the benchmarks proposed by the study: effectiveness, efficiency and flexibility.
  • Remain open to the possibility of one or more approaches to interoperability, which may also be combined with one another to accomplish interoperability that drives innovation.
  • In some instances, it may be possible to convene all relevant stakeholders to participate in a collaborative, open standards process. In other instances, the relevant facts may suggest that a single firm can drive innovation by offering others the chance to collaborate through an open API, as Facebook’s recent success in permitting third-party applications to run on its platform illustrates (a simplified code sketch of this pattern follows after this list). But long-term sustainability may be an issue where a single firm makes an open API available according to a contract that it can change at any time.
  • In the vast majority of cases, the private sector can and does accomplish a high level of interoperability on its own. The state may help by playing a convening role, or even in mandating a standard on which there is widespread agreement within industry after a collaborative process. The state may need to play a role after the fact to ensure that market actors do not abuse their positions.
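
To make the single-firm open-API scenario mentioned in the list above a bit more tangible, here is a minimal sketch, with all names hypothetical and not taken from any real platform: the platform publishes one documented entry point, third parties build against it, and the platform alone controls the contract.

```python
# Minimal sketch of the "single firm with an open API" pattern: the platform
# exposes a documented registration entry point for third-party applications.
# All names are hypothetical.
from typing import Callable, Dict

class Platform:
    """A platform that lets third-party applications register against its API."""

    API_VERSION = "1.0"  # the platform can change this unilaterally

    def __init__(self) -> None:
        self._apps: Dict[str, Callable[[dict], dict]] = {}

    def register_app(self, name: str, handler: Callable[[dict], dict]) -> None:
        # Third parties interoperate with the platform only through this
        # published entry point, never through the platform's internals.
        self._apps[name] = handler

    def dispatch(self, name: str, user_profile: dict) -> dict:
        # The platform invokes the third-party application on behalf of a user.
        return self._apps[name](user_profile)

# A third-party developer builds a small app on top of the open API:
platform = Platform()
platform.register_app("greeter", lambda profile: {"message": f"Hello, {profile['name']}!"})
print(platform.dispatch("greeter", {"name": "Ada"}))  # {'message': 'Hello, Ada!'}
```

The sketch also makes the sustainability concern concrete: nothing stops the platform from changing API_VERSION, or the registration contract itself, at any time – precisely the long-term risk flagged above.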

While many questions remain open and a lot of research needs to be done (including empirical studies!), we hope to have made a contribution to the ongoing interoperability debate. Huge thanks to the wonderful research teams on both sides of the Atlantic, especially Richard Staeuber, David Russcol, Daniel Haeusermann, and Sally Walkerman. Thanks also to the many advisors, contributors, and commentators on earlier drafts of our reports.

The Future of Books in the Digital Age: Conference Report


Today, I attended a small but really interesting conference chaired by my colleagues Prof. Werner Wunderlich and Prof. Beat Schmid from the Institute for Media and Communication Management, our sister institute here at the Univ. of St. Gallen. The conference was on “The Future of the Gutenberg Galaxy” and looked at trends and perspectives of the medium “book”. I learned a great deal today about the current state of the book market and future scenarios from a terrific line-up of speakers. It was a particular pleasure, for instance, to meet Prof. Wulf D. von Lucius, who teaches at the Univ. of Hohenheim and is also Chairman of the Board of Carl Hanser Verlag, which will be publishing the German version of our forthcoming book Born Digital.

We covered a lot of terrain, ranging from definitional questions (what is a book? Here is a legal definition under Swiss VAT law, for starters) to open access issues. The focus of the conversation, though, was on the question of how digitization shapes the book market and, ultimately, whether the Internet will change the concept “book” as such. A broad consensus emerged among the participants (a) that digitization has a profound impact on the book industry, but that it’s still too early to tell what it means in detail, and (b) that the traditional book is very unlikely to be replaced by electronic formats (partly with reference to the superiority-of-design argument that Umberto Eco made some time ago).

I was the last speaker at the forum and faced the challenge of talking about the future of books from a legal perspective. Based on the insights we have gained in the context of our Digital Media Project and the discussion at the forum, I came up with the following four observations and theses:

Technological innovations – digitization in tandem with network computing – have changed the information ecosystem. From what we’ve learned so far, it’s safe to say that at least some of the changes are tectonic in nature. These structural shifts in the way in which we create, disseminate, access, and (re-)use information, knowledge, and entertainment have both direct and indirect effects on the medium “book” and the corresponding subsystem.

Some examples and precursors in this context: collaborative and evolutionary production of books (see Lessig’s Code 2.0); e-Books and online book stores (see ciando or Amazon.com); online access to books (see, e.g., libreka, Google Book Search, digital libraries); creative re-uses such as fan fiction, podcasts, and the like (see, e.g., LibriVox, Project Gutenberg, www.harrypotterfanfiction.com).

Law is responding to the disruptive changes in the information environment. It not only reacts to innovations related to digitization and networks, but also has the power to actively shape the outcome of these transformative processes. However, law is not the only regulatory force, and gaining a deeper understanding of the interplay among these forces is crucial when considering the future of books.

While fleshing out this second thesis, I argued that the reactions to innovations in the book sector may follow the pattern of ICT innovation described by Debora Spar in her book Ruling the Waves (Innovation – Commercialization – Creative Anarchy – Rules and Regulations). I used the ongoing digitization of books and libraries by Google Book Search as a mini-case study to illustrate the phases. With regard to the different regulatory forces, I referred to Lessig’s framework and used book-relevant examples such as DRM-protected eBooks (“code”), the use of collaborative creativity (“norms”), and book-price fixing (“markets”) to illustrate it. I also tried to emphasize that the law has the power to shape each of the forces mentioned above in one way or another (I used examples such as anti-circumvention legislation, the legal ban on book-price fixing, and mandatory copyright provisions that preempt certain contractual provisions).

The legal “hot spots” when it comes to the future of the book in the digital age are the questions of distribution, access, and – potentially – creative re-use. The areas of law particularly relevant in this context are contract law, copyright/trademark law, and competition law.

Based on the discussion at the forum, I tried to map some of the past, current, and emerging conflicts among the different stakeholders of the ecosystem “book”. In the area of contract law, I focused on the relationship between authors and increasingly powerful book publishers that are tempted to use their superior bargaining power to impose standard contracts on authors and to acquire as many rights as possible (e.g. “buy-out” contracts).

With regard to copyright law, I touched upon a small but representative selection of conflicts: the relationship between right holders and increasingly active users (referring to the recent controversy over the print version of the hp-lexicon); the tensions between right holders and (new) Internet intermediaries (e.g. the liability of platforms for infringements by their users in cases of early leakage of bestsellers, or the interpretation of copyright limitations and exemptions in the case of full-text book searches conducted without the permission of right holders); the tension between publishers and libraries (e.g. positive externalities of “remote access” to digital libraries vs. the lack of exemptions in national and international copyright legislation – a topic my colleague Silke Ernst is working on); and the tension between right holders and educational institutions (with reference to this report).

As far as competition law is concerned, I sketched a scenario in which Google Book Search reaches a dominant market position with strong user lock-in due to network effects and declines to digitize and index certain books or book programs, for instance for operational reasons. Based on this scenario, I speculated about a possible response by competition authorities (with European authorities in mind) and raised the question of whether Google Book Search could be regarded, at some point, as an essential facility. (In the subsequent panel discussion, Google’s Jens Redmer and I had a friendly back-and-forth on this issue.)

Not all of the recent legal conflicts involving the medium “book” are related to the transition from an analog/offline to a digital/online environment. Law continues to address book-relevant issues that are not new, but rather variations on traditional doctrinal themes.

I used the Michael Baigent et al. v. Random House Group decision by London’s High Court of Justice as one example (did the author of The Da Vinci Code infringe copyright by “borrowing” a theme from the earlier book Holy Blood, Holy Grail?), and the recent Esra decision by the German BVerfG as a second one (an author’s freedom of expression vs. the privacy rights of a person, in a case where it was all too obvious that a character in a novel was a real and identifiable person and where intimate details of that person were disclosed in the book).

Unfortunately, we didn’t have much time to discuss several other interesting issues that were brought up relating to the generation born digital and its use of books – and the consequences of kids’ changed media usage in a changed media environment, e.g. with regard to information overload and the quality of information. These are topics, to be sure, that John Palfrey and I address in our forthcoming book.

In sum, an intense, but very inspiring conference day.

Update: Dr. David Weinberger, among the smartest people I’ve ever met, has just released a great article on ebooks and libraries.

Hong Kong Conversations: Digital Natives, Media Literacy, Rights and Responsibilities


Today in Hong Kong, I had the pleasure of catching up with some of my colleagues and friends who are living and working in Asia. The conversation with Rebecca MacKinnon, my former fellow Berkman Fellow and now assistant professor at the University of Hong Kong’s Journalism and Media Studies Center, resonates in particular. We touched upon several themes and topics in which we share an interest, ranging from Chinese culture and U.S. foreign policy to corporate social responsibility, among many others. We then started talking about the digital natives project(s) and youth and new media research questions (Rebecca actually teaches “new media” at HKU). Starting from different places and looking from different perspectives, we concluded that two (related) sets of questions will likely end up on our shared research agenda for the months to come.

  • First, media literacy and the education of digital natives. While media education in the digital environment has become an important topic, especially in the U.K. through the work of Ofcom and experts like Professor David Buckingham and Professor Sonia Livingstone, it’s still in its infancy in many other parts of the world. From all I’ve learned in the context of our digital natives project – and from what I know about the current state of neuroscience with regard to cognitive and emotional development – it seems crucial to start media education at the pre-school or primary school level at the latest. If anyone has pointers to good web resources, case studies, and/or curricula in this area, please drop me a note.
  • Second, users’ rights and responsibilities in the digital environment. This issue is obviously related to the first one and concerns the extent to which our societies provide mechanisms for a discourse about our rights, but also our responsibilities (and that’s where it gets tricky from a political perspective), as empowered users in the digitally networked environment. While great work has been done with regard to the “rights” part of the discussion – largely driven by NGOs and consumer protection organizations (see here for a recent example) – we may need to figure out in the near future how to also address the new responsibilities as “speakers” that are associated with the fundamental shift from passive consumers to active users. Interestingly, the role of citizens as producers of information has reportedly been addressed in a (if I recall correctly, still unpublished) draft of an information freedom act in an Eastern European country. Legislation, however, is most likely not the right starting place for such a discussion, I would argue.

In short, more food for thought – and additional research tasks for our digital native team. (Thanks, Rebecca, for a great conversation.)

Second Berkman/St. Gallen Workshop on ICT Interoperability


Over the past two days, I had the pleasure of co-moderating, with my colleagues and friends Prof. John Palfrey and Colin Maclay, the second Berkman/St. Gallen Workshop on ICT Interoperability and eInnovation. While we received wonderful initial inputs at the first workshop in January, which took place in Weissbad, Switzerland, this time we had the opportunity to present our draft case studies and preliminary findings here in Cambridge. The invited group of 20 experts from various disciplines and industries provided detailed feedback on our drafts, covering important methodological questions as well as substantive issues in areas such as DRM interoperability, digital ID, and web services/mash-ups.

As at the January workshop, the discussion got heated when we explored the possible roles of governments regarding ICT interoperability. Government involvement may take many forms and can be roughly grouped into two categories: ex ante and ex post approaches. Ex post approaches include, for example, interventions based on general competition law (e.g. in cases of refusal to license a core technology by a dominant market player) or an adjustment of the IP regime (e.g. broadening existing reverse-engineering provisions). Ex ante strategies also span a broad range of possible interventions, among them mandating standards (to start with the most intrusive), requiring the disclosure of interoperability information, labeling/transparency requirements, and using public procurement power, but also fostering frameworks for cooperation among private actors.

There was broad consensus in the room that governmental interventions, especially in the form of intrusive ex ante interventions, should be a means of last resort. However, it was disputed what the relevant scenarios (market failures) in which governmental intervention is justified might look like. A complicating factor in the analysis is the rapidly changing technological environment, which makes it hard to predict whether market forces just need more time to address a particular interoperability problem or whether the market has failed to do so.

In the last session of the workshop, we discussed a chart we had drafted that suggests the steps and issues that governments would have to take into consideration when making policy choices about ICT interoperability (according to our understanding of public policy, the government could also reach the conclusion that it should not intervene and instead let the self-regulatory forces of the market take care of a particular issue). While details remain to be discussed, the majority of the participants seemed to agree that the following elements should be part of the chart:

  1. precise description of the perceived interoperability problem (as specific as possible);
  2. clarification of the government’s responsibility regarding the perceived problem;
  3. in-depth analysis of the problem (based on empirical data where available);
  4. assessment of the need for intervention vis-à-vis dynamic market forces (incl. the “timing” issue);
  5. exploration of the full range of approaches available, as portrayed, for example, in our case studies and reports (both self-regulatory and regulation-based approaches, including a discussion of drawbacks/costs);
  6. definition of the policy goal to be achieved (also for benchmarking purposes), e.g. increasing competition, fostering innovation, ensuring security, etc.

Discussion (and research!) to be continued over the weeks and months to come.

New OECD Must-Read: Policy Report On User-Created Content


The OECD has just released what is – in my view – the first thorough high-level policy report on user-created content. (Disclosure: I had the pleasure of commenting on draft versions of the report.) From the introduction:

The concept of the ‘participative web’ is based on an Internet increasingly influenced by intelligent web services that empower the user to contribute to developing, rating, collaborating on and distributing Internet content and customising Internet applications. As the Internet is more embedded in people’s lives ‘users’ draw on new Internet applications to express themselves through ‘user-created content’ (UCC).

This study describes the rapid growth of UCC, its increasing role in worldwide communication and draws out implications for policy. Questions addressed include: What is user-created content? What are its key drivers, its scope and different forms? What are new value chains and business models? What are the extent and form of social, cultural and economic opportunities and impacts? What are associated challenges? Is there a government role and what form could it take?

No doubt, the latest OECD digital content report (see also earlier work in this context and my comments here) by Sacha Wunsch-Vincent and Graham Vickery of the OECD’s Directorate for Science, Technology and Industry is a must-read that provides plenty of “food for thought” – and, one might assume, plenty of controversy as well.

Law, Economics, and Business of IPR in the Digital Age: St. Gallen Curriculum (with help from Berkman)


The University of St. Gallen was the first Swiss university to implement the principles and standards set forth in the so-called Bologna Declaration, which aims at harmonizing the European higher education system (more on the Bologna process here). As a result, the St. Gallen law school offers two Master’s programs for law students: a Master of Arts in Legal Studies and a Master of Arts in Law and Economics.

Recently, I have been heavily involved in the law and economics program (I should mention that St. Gallen doesn’t follow the rather traditional approach to law and economics that is predominant among U.S. law schools; click here for a brief description of the St. Gallen interpretation of law and economics). Today is a special day for the program’s faculty and staff, because the first generation of students enters the final 10th semester of the Bologna-compatible Master’s program. Arguably, this 10th semester is rather unique as far as structure and content are concerned. Instead of providing the usual selection of courses for graduate students, we have designed what we call an “integrating semester” in which all students are required to take three (but only three) full-semester courses aimed at “integrating” the knowledge, skills, and methods they have acquired over the past few years. All three seminars – together worth 30 credits – are designed and taught by an interdisciplinary group of faculty members from the University of St. Gallen and beyond, including legal scholars, economists, business school professors, technologists, etc. The first seminar, led by Professors Peter Nobel, Thomas Berndt, Miriam Meckel, and Markus Ruffner, is entitled Law and Economics of Enterprises and deals with risk and risk management of multinational corporations. The second seminar, led by Professor Beat Schmid and me, concerns legal, economic, and business aspects of intellectual property rights in the digital age. Professors Hauser, Waldburger, and van Aaken, finally, teach the third seminar, entitled Law and Economics of Globalization, addressing issues such as world market integration of low-income countries, foreign investments, global taxation, and regulation of multinational enterprises.

My seminar on the law and economics of IPR in the digital age starts with a discussion of basic concepts of the economic analysis of intellectual property law and a stock-taking of the main IPR problems associated with the shift from an analog/offline to a digital/online environment. This is followed by a module in which we will explore three key topics in greater detail: digital copyright, software and business method patents, and trademarks/domain names. Towards the end of the semester, we will try to tie all the elements together and develop a cross-sectional framework for the economic analysis and assessment of IPR-related questions in the digitally networked environment. In this context, we will also be visiting the Swiss Federal Institute of Intellectual Property (charged, among other things, with working on IP legislation in Switzerland), where we will discuss the promises and limits of the economic analysis of IP law with the Institute’s senior legal advisor and senior economic advisors.

Clearly, we have a very ambitious semester ahead. I’m particularly thrilled that a wonderful group of colleagues from Europe and abroad is helping me do the heavy lifting (of course, my wonderful St. Gallen team is very involved, too, as usual). My colleague and friend John Palfrey, Clinical Professor of Law at Harvard Law School, executive director of the Berkman Center, and a member of the board of our St. Gallen Research Center for Information Law, will be discussing thorny digital copyright issues and future scenarios for digital media with us. Klaus Schubert, a partner at WilmerHale Berlin, will guide us through the discussion of software and business method patents. Last but not least, Professor Philippe Gillieron from the University of Lausanne will speak about trademark law in the digital age, focusing on domain name disputes.

All sessions will (hopefully) be highly interactive. The students will contribute, among other things, discussion papers, term papers, and group presentations, and will participate in mock trials (one on Google’s recent copyright case in Europe), Oxford-style debates, and the like. Unfortunately, the Univ. of St. Gallen still uses a closed online teaching system called StudyNet, but if you’re interested in the syllabus, check it out here. Comments, thoughts, suggestions, etc. are most welcome!

ICT Interoperability and Innovation – Berkman/St.Gallen Workshop


We have teamed up with the Berkman Center on an ambitious transatlantic research project on ICT interoperability and e-innovation. Today we hosted a first meeting to discuss some of our research hypotheses and initial findings. Professor John Palfrey describes the challenge as follows:

This workshop is one in a series of such small-group conversations intended both to foster discussion and to inform our own work in this area of interoperability and its relationship to innovation in the field that we study. This is among the hardest, most complex topics that I’ve ever taken up in a serious way.

As with many of the other interesting topics in our field, interop makes clear the difficulty of truly understanding what is going on without having 1) skill in a variety of disciplines, or, absent a super-person who has all these skills in one mind, an interdisciplinary group of people who can bring these skills to bear together; 2) knowledge of multiple factual settings; and 3) perspectives from different places and cultures. While we’ve committed to a transatlantic dialogue on this topic, we realize that even in so doing we are still ignoring the vast majority of the world, where people no doubt also have something to say about interop. This need for breadth and depth is at once fascinating and painful.

As expected, the diverse group of 20 experts disagreed significantly on many of the key issues, especially with regard to the role that governments may play in the ICT interoperability ecosystem, which was characterized earlier today by Dr. Mira Burri Nenova, nccr trade regulation, as a complex adaptive system. In the wrap-up session, I tested – switching from a substantive to a procedural approach – the following tentative framework (to be refined in the weeks to come), which might be helpful to policy-makers dealing with ICT interoperability issues:

  1. In what area and context do we want to achieve interoperability? At what level and to what degree? For what purpose (policy goals such as innovation) and at what cost?
  2. What is the appropriate approach (e.g. IP licensing, technical collaboration, standards) to achieve the desired level of interoperability in the identified context? Is ex ante or ex post regulation necessary, or do we leave it to market forces?
  3. If we decide to pursue a market-driven approach, are there any specific areas of concern or problems that we – from a public policy perspective – might still want to address (e.g. disclosure rules aimed at ensuring transparency)?
  4. If we decide to pursue a market-based approach to interoperability, is there a proactive role for governments to support private sector attempts aimed at achieving interoperability (e.g. promotion of development of industry standards)?
  5. If we decide to intervene (either by constraining, leveling, or enabling legislation and/or regulation), what should be the guiding principles (e.g. technological neutrality; minimum regulatory burden; etc.)?

As always, comments are welcome. Last, but not least, thanks to Richard Staeuber and Daniel Haeusermann for their excellent preparation of this workshop.
