
Archive for the 'innovation' Category

Testifying on Swiss DRM-Protection Bill


Earlier this week, I had the opportunity to provide expert testimony before the Legal Affairs Committee of the Swiss Council of States (roughly equivalent to the U.S. Senate) regarding Switzerland’s implementation of the WIPO Internet Treaties and the corresponding revision of its copyright act. It comes as no surprise that the bill is hotly debated among different stakeholders, and the committee members confirmed that they had received many letters and e-mails in the run-up to the hearing.

Right after a presentation by Apple’s iTMS Switzerland Managing Director, I testified about alternative business models for the distribution of digital content that don’t (primarily) rely on DRM protection. Of course, I also talked about the Berkman Center’s Digital Media Exchange Project. After the presentations, the committee members asked a series of excellent questions about the technological, economic, and legal aspects of DRM. Since the debates are traditionally confidential, I can’t go into details here. Instead, I would like to point to some of the characteristics of the bill that I find particularly commendable:

  • The bill only prohibits the circumvention of effective technological protection measures aimed at protecting copyrighted materials.
  • The bill includes a definition of the effectiveness criterion.
  • The ban cannot be enforced against individuals who circumvent TPMs in order to make use of the work in a way that is traditionally permitted by the copyright act (e.g. making a private copy).
  • In contrast to the EUCD, all the exceptions and limitations also apply to on-demand services.
  • Although the bill creates civil and criminal liability, it adheres to the principle of proportionality with regard to sanctions and penalties. In the context of criminal sanctions in the case of circumvention of TPMs, intent (“Absicht”) is required.

On the other hand, several areas of concern remain (see here and here for background information):

  • It’s unclear to what extent the beneficiaries of a copyright exception can make use of it vis-à-vis TPMs. An earlier draft created an innovative and powerful enforcement mechanism (see former draft art. 39b and art. 62, translated here), but the revised draft before the parliament now proposes the establishment of an oversight body (“Beobachtungsstelle”) that facilitates discussion among the stakeholders and might have the power, upon authorization by the Swiss Federal Council, to intervene (e.g. by way of recommendations) in cases of DRM misuse if the “public interest” so requires.
  • The encryption exception has been mentioned in materials, but not in the bill itself.
  • The ban of trafficking in circumvention devices is absolute.
  • The bill doesn’t address transparency and interoperability issues – although I agree that the copyright act is not the best place to deal with these issues.

Besides these TPM-related issues, it is noteworthy that downloading files from P2P services remains legal (under the private copying exception) in the current version of the bill. In this context, it is also worth noting that the bill doesn’t seem to build on the (contested) assumption that DRM and anti-circumvention laws will reduce piracy. Here, as in all other areas, it will be interesting to observe – given the lobbying efforts by the copyright industry – how the draft legislation evolves once it is debated in public by our national law-makers.

Power of Search Engines: Some Highlights of Berlin Workshop


I’ve spent the past two days here in Berlin, attending an expert workshop on the rising power of search engines organized by Professor Marcel Machill and hosted by the Friedrich Ebert Stiftung, and a public conference on the same topic.

I much enjoyed yesterday’s presentations by a terrific group of scholars and practitioners from various countries and with different backgrounds, ranging from informatics, journalism, economics, and education to law and policy. The extended abstracts of the presentations are available here. I presented my recent paper on search engine law and policy. Among the workshop’s highlights (small selection only):

* Wolfgang Schulz and Thomas Held (Hans Bredow Institute, Univ. of Hamburg) discussed the differences between search-based filtering in China versus search engine content regulation in Germany. In essence, Schulz and Held argued that procedural safeguards (including independent review), transparency, and the requirement that legal filtering presupposes that the respective piece of content is “immediately and directly harmful” make the German system radically different from the Chinese censorship regime.

* Dag Elgesem (Univ. of Bergen, Department of information science) made an interesting argument about how we (as scholars) perceive users as online searchers. While the shift from passive consumers to active users has been debated in the context of the creation/production of information, knowledge, and entertainment (one of my favorite topics, as many of you know), Dag argues that online searchers, too, have become “active users” in Benkler’s sense. In contrast, Dag argued, much of our search engine policy discussion has assumed a rather passive user who just types in a search term and uses whatever he gets in response to the query. Evidently, the underlying conception of users in their role as online searchers matters because it shapes the analysis of whether regulatory interventions are necessary (e.g. with regard to transparency, market power, and “Meinungsmacht” of search engines).

* Boris Rotenberg (DG Joint Research Center, European Commission, Sevilla) drew an interesting link between the search engine user’s privacy – as an expression of informational autonomy – and the user’s freedom of expression and information. He argues, in essence, that the increased use of personal data by search engine operators in the course of their attempts to personalize search might have a negative impact on freedom of information in at least three regards. First, extensive use of personal data may lead to user-side filtering (the Republic.com scenario). Second, it might produce chilling effects by restricting “curious searches”. Third, personalization tends to create strong ties to a particular (personalized) search engine, hindering the user from switching to alternative engines (the “stickiness” argument).

* Benjamin Peters (Columbia University) used the Mohammed cartoon controversy to explore three questions: (1) To what extent do search engines eliminate the role of traditional editors? (2) Do algorithms have any sort of built-in ethics? (Benjamin’s answer, based on David Weinberger’s notion of links as acts of generosity: yes, they do.) (3) What are the elements of a “search engine democracy”?

* Dirk Lewandowski (Department of information science, Heinrich-Heine Univ.) provided a framework for assessing a search engine’s quality. He argues that the traditional measurement “precision” – as part of retrieval quality – is not a particularly useful criterion for evaluating and comparing search engines’ quality, because the major search engines produce almost the same score on the precision scale (as Dirk empirically demonstrated; a minimal sketch of the metric follows after this list of highlights). Dirk’s current empirical research focuses on a search engine’s index quality, including elements such as reach (e.g. geographic reach), size of the index, and freshness/frequency of updates.

* Nadine Schmidt-Maenz (Univ. of Karlsruhe, Institute for Decision Theory and Management Science) presented the tentative results of an empirical long-term study on search queries. Nadine and her team have automatically observed and analyzed the live tickers of three different search engines and clustered over 29 million search terms. The results are fascinating, and the idea of topic detection, tracking, and – even more interestingly – topic prediction (!) is highly relevant for the search engine industry, from both a technological and a business perspective (a toy sketch of the clustering idea also appears after this list). From a different angle, we also discussed the potential impact of reliable topic forecasting on agenda-setting and journalism.

* Ben Edelman (Department of Economics, Harvard Univ.) empirically demonstrated that search engines are at least in part responsible for the spread of spyware, viruses, pop-up ads, and spam, but that they have taken only limited steps to avoid sending users to hostile websites. He also offered potential solutions to these problems, including safety labeling of individual search results by the search engine providers and changes in the legal framework (liability rules) to create the right incentive structure for search engine operators to contribute to overall web safety.
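To make Dirk’s point about precision a bit more concrete, here is a minimal sketch of how precision@k – the share of relevant results among the top k hits – could be computed for two engines against a shared set of relevance judgments. The function and the data are hypothetical illustrations only, not Dirk’s actual study design; in a real evaluation, the relevance judgments would come from human assessors.

```python
# Minimal sketch of precision@k for comparing search engines.
# Illustrative only -- not the actual methodology of the study discussed above.

def precision_at_k(results, relevant, k=10):
    """Fraction of the top-k results that were judged relevant."""
    top_k = results[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for url in top_k if url in relevant)
    return hits / len(top_k)

# Hypothetical example: two engines answer the same query; the set of
# relevant documents was judged independently.
relevant_docs = {"url-a", "url-b", "url-c", "url-d"}
engine_1 = ["url-a", "url-x", "url-b", "url-y", "url-c"]
engine_2 = ["url-b", "url-a", "url-z", "url-c", "url-w"]

for name, results in [("engine 1", engine_1), ("engine 2", engine_2)]:
    print(name, precision_at_k(results, relevant_docs, k=5))

# Both engines score 0.6 here -- similar precision despite different rankings,
# which is the kind of convergence that makes precision alone a weak criterion
# for distinguishing the major engines.
```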
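And to illustrate Nadine’s approach in the roughest possible terms, here is a toy sketch of grouping search queries into topical clusters. It stands in for the general idea of topic detection on query streams, not the Karlsruhe team’s actual methodology or data; the queries, the cluster count, and the use of TF-IDF plus k-means are all assumptions made for the example.

```python
# Toy sketch: grouping search queries into topical clusters.
# Illustrative only -- not the actual study design or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical queries as they might scroll by on a search engine's live ticker.
queries = [
    "world cup schedule", "world cup tickets", "world cup germany 2006",
    "cheap flights berlin", "berlin hotel deals", "flights to berlin",
    "download firefox", "firefox extensions", "firefox browser update",
]

vectorizer = TfidfVectorizer()            # queries are short texts
X = vectorizer.fit_transform(queries)     # sparse term-weight matrix

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, query in sorted(zip(kmeans.labels_, queries)):
    print(label, query)

# Queries about the same emerging topic tend to fall into the same cluster;
# tracking how cluster sizes change over time is one simple way to detect
# (and attempt to predict) trending topics.
```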

Lots of food for thought. What I’d like to explore in greater detail is Dag’s argument that users as online searchers, too, have become highly (inter-)active – probably not only in the sense of active information retrievers, but increasingly also as active producers of information while engaged in search activities (e.g. by reporting on search experiences, contributing to social search networks, etc.).

YJoLT-Paper on Search Engine Regulation


The Yale Journal of Law and Technology just published my article on search engine regulation. Here’s the extended abstract:

The use of search engines has become almost as important as e-mail as a primary online activity. Arguably, search engines are among the most important gatekeepers in today’s digitally networked environment. Thus, it does not come as a surprise that the evolution of search technology and the diffusion of search engines have been accompanied by a series of conflicts among stakeholders such as search operators, content creators, consumers/users, activists, and governments. This paper outlines the history of the technological evolution of search engines and explores the responses of the U.S. legal system to the search engine phenomenon in terms of both litigation and legislative action. The analysis reveals an emerging “law of search engines.” As the various conflicts over online search intensify, heterogeneous policy debates have arisen concerning what forms this emerging law should ultimately take. This paper offers a typology of the respective policy debates, sets out a number of challenges facing policy-makers in formulating search engine regulation, and concludes by offering a series of normative principles which should guide policy-makers in this endeavor.

As always, comments are welcome.

In the same volume, see also Eric Goldman’s Search Engine Bias and the Demise of Search Engine Utopianism.

New OECD Reports on Digital Media Policy


Two new documents from the OECD on digital media policy. The first report is the official summary of the OECD – Italy MIT Conference on the Future Digital Economy: Digital Content, Access and Distribution (see Terry Fisher’s main conclusions and the interesting policy items at the end – the monopoly of search engines, DRM, user-created content).

The second report is an OECD study on Digital Broadband Content: Digital Content Strategies and Policies. As a complement to the above conference, this OECD study identifies and discusses six groups of business and public policy issues and illustrates them with existing and potential digital content strategies and policies.

Some Highlights of Yale’s A2K Conference


Our colleagues and friends from the Information Society Project at Yale Law School have organized a landmark conference on Access to Knowledge, taking place this weekend at Yale Law School, that brings together leading thinkers and activists on A2K policy from North and South and aims at generating concrete research agendas and policy solutions for the next decade. The impressive program, with close to 20 plenary sessions and workshops, is available here. Also check the resources page and the conference wiki (with session notes).

Here are some of Friday’s and yesterday’s conference highlights in newsflash-format:

  • Jack Balkin’s framework outlining core themes of the A2K discourse. The three main elements of a theory of A2K: (1) A2K is a demand of justice; (2) A2K is an issue of economic development as well as an issue of individual participation and human liberty; (3) A2K is about IP, but it is also about far more than that. Balkin’s speech is posted here.
  • Joel Mokyr’s lecture on three core questions of A2K: (a) Access to what kind of knowledge (propositional vs. prescriptive)? (b) Access by how many users? Direct or indirect access? (question of access intermediaries and the control of their quality) (c) Access at what costs? (Does a piece of knowledge that I need exist? If yes, where; who has it? How to get it? Verification of its trustworthiness.)
  • Yochai Benkler’s fast-paced presentation on the idea of A2K as a response to four long-term trends (decolonization->increased integration; rapid industrialization->information/knowledge economy; mass media monopolies->networked society; communism and other –isms->human dignity), the reasons why we should care about it (justice and freedom), the sources of the A2K movement as a response to these four trends (incl. access to medicine, the internet freedom movement, information commons, FOSS, the human genome project, spectrum commons, open access publications, digital libraries, …), and the current moment of opportunity in areas such as the regulation of information production and telecommunication policy.
  • Eric von Hippel’s discussion of norm-based IP systems and a recent study on cultural norms shared among Michelin-starred French chefs that regulate – as a substitute to copyright law – how they protect ownership of their recipes.
  • Keith Maskus’ lecture on the interplay between trade liberalization and increased IP protection of technologies and an overview of econometric studies regarding key IPR claims in this zone (transparent and enforceable IP regimes do seem to encourage increase in IT investments and associated export growth, both at the aggregate and micro-level; however, claim is conditioned, i.e., holds in middle-income countries, but no evidence for low income developing countries).
  • Eli Noam’s talk on the evolution of firms from the pre-industrial age to today’s digitally networked environment, in which organizations are increasingly defined by information. More on the McLuhanization of the firm here.
  • Suzanne Scotchmer’s presentation on the design of incentive systems to manage possible conflicts among incentive goals such as the promotion of R&D, the promotion of its use, and trade policy goals. Scotchmer’s lecture was based on her book Innovation and Incentives.
  • Michael Geist’s overview of the current controversies surrounding the idea of a two-tiered Internet – hot topics include, among others, VoIP, content control, traffic shaping, the public vs. private internet, and website premiums – and his discussion of the core policy questions (Is legal protection from Internet tiering required? Is tiering needed for network building and management? Is it a North-South issue?).
  • Susan Crawford’s discussion of the different perspectives of the Bellheads versus the Netheads and the clash of these world views in the Net neutrality debate. Susan’s key arguments are further discussed in this paper.
  • Pam Samuelson’s lecture on the history of the WIPO Internet Treaties, the battles surrounding the DMCA and the EUCD, the fight against database protection in the U.S., and the lessons we can learn from these earlier tussles with regard to the A2K movement (first of all, don’t be polemical – engage in thorough research). [Update: excellent notes on Pam’s lecture taken by Susan Crawford.]
  • Jamie Love’s action points for the A2K movement, including the following (see here): (1) Stop, resist or modify the setting of bad norms; (2) change, regulate, and resist bad business practices; (3) create new modes of production (commercial and non-commercial) of knowledge goods; (4) create global frameworks and norms that promote A2K.
  • Natali Helberger’s discussion of the proposed French provision on interoperability (Art. 7 of the IP Act) as an expression of cultural policy and national interests.

Professor Fisher Presents Conclusions on OECD Digital Content Conference


Professor Terry Fisher has the difficult job, as the Day 1 Rapporteur, of presenting the OECD conference conclusions in 10 minutes. Here are the main points he made a few minutes ago:

A. Points of agreement (or at least substantial consensus)

(a) Descriptive level:
o We’re entering a participatory culture, active users, explosion of blogs; differences in web usage.

(b) Predictive level:
o Consensus that we’ll see a variety of applications that will flourish; the shift to biz models that include internet distribution will have long-tail effects and increase diversity

(c) Level of aspiration:
o We should aim for a harmonized, global Internet – single, harmonized global approach (vs. competing legal/regulatory frameworks)
o Governments should stay out, but broad consensus on 6 areas where governmental intervention is desirable: (1) stimulating broadband; (2) fostering universal access (bridging the digital divide); (3) educating consumers; (4) protecting consumers against fraud and spam; (5) fostering competition; (6) promoting IP to achieve an optimal balance
o We should attempt to achieve “biz model neutrality” (TF’s personal comment: appealing idea, but infeasible, there’s no way to achieve it.)

B. Points of disagreement

(a) Descriptive level
o Whether IP currently strikes an optimal balance (positions ranged across a spectrum from yes through middle ground to no)

(b) Predictive level
o Which biz strategy will prevail: pay-per-view, subscription, or a free, advertising-based model?

(c) Level of aspiration:
o Network neutrality: required or not as a matter of policy
o TPM: Majority: yes, smaller group: no; intermediate group: only under certain conditions.
o Should governments be in the biz of interoperability?
o Using government power to move towards open doc format?
o Government intervention to achieve an Internet that is open vs. variations of a walled-gardened net?

On Grokster, Finally


Late, very late, but hopefully not too late: some thoughts on Grokster by Harvard Law School Clinical Professor John G. Palfrey, Jr. and me are finally available online. It’s a piece written for a non-U.S., non-IP-law audience with a general interest in the topic. Here’s the abstract:

In summer 2005, the United States Supreme Court issued a decision which is surely destined to play a significant role in the interrelation between law and technology in the coming years. The case, Metro-Goldwyn-Mayer Studios Inc., et al. v. Grokster, Ltd., et al., pitted copyright holders against the operators of certain peer-to-peer online file-sharing services and was awaited by many in both the legal and technology communities as a referendum on the landmark legal precedent set in the “Sony-Betamax” case. The Sony case came to represent the legal standard for determining when manufacturers of “dual-use technology”—technology capable of both legally noninfringing and infringing uses—should be given a safe harbor from liability for acts on the part of their consumers which violated copyright law.

Surprisingly, the Supreme Court’s decision did not center on an affirmation or rejection of the Sony ruling; rather, the Court based its opinion on a common law principle which, it held, was not preempted by the holding in Sony. The “inducement” to infringe copyright, although not a completely novel cause of action, has been perceived by some commentators to introduce a change in the legal landscape of secondary liability for copyright infringement. In this article, we provide an extensive exposition of the Court’s decision and discuss its disposition, including the implications of the two concurring opinions. We also speculate on the impact that the Court’s decision will have on the technology sector and on technological innovation in particular. Ultimately, we grapple with the new questions which the decision has presented for industry and for the continued existence of peer-to-peer file-sharing.

Regulating Search? Call for a Second Look


Here is my second position paper (find the first one here) in preparation for the upcoming Regulating Search? conference at ISP Yale. It provides a rough outline of a paper I will write together with my friend and colleague Ivan Reidel. The Yale conference on search has led to great discussions on this side of the Atlantic. Thanks to the FIR team, especially Herbert Burkert and James Thurman, to Mike McGuire, and to Sacha Wunsch-Vincent for the continuing debate.

Regulating Search? Call for a Second Look

1. The use of search engines has become almost as important as email as a primary online activity on any given day, according to a recent PEW survey. According to another survey, 87% of search engine users state that they have successful search experiences most of the time, while 68% of users say that search engines are a fair and unbiased source of information. This data, combined with the fact that the Internet ranks even higher than TV, radio, and newspapers as an important source of information among very experienced users, illustrates the enormous importance of search engines from a demand-side perspective, both in terms of actual information practices and with regard to users’ psychological acceptance.

2. The data also suggests that the transition from an analog/offline to a digital/online information environment has been accompanied by the emergence of new intermediaries. While traditional intermediaries between senders and receivers of information—most of them related to the production and dissemination of information (e.g. editorial boards, TV production centers, etc.)—have diminished, new ones such as search engines have entered the arena. Arguably, search engines have become the primary gatekeepers in the digitally networked environment. In fact, they can effectively control access to information by deciding on the listing of any given website in their search results. But search engines not only shape the flow of digital information by controlling access; they also at least indirectly engage in the construction of messages and meaning by shaping the categories and concepts users use to search the Internet. In other words, search engines have the power to influence agenda setting.

3. The power of search engines in the digitally networked environment, with the corresponding misuse scenarios, is likely to attract increasing attention from policy- and lawmakers. However, it is important to note that search engines are not unregulated under the current regime. Markets for search engines regulate their behavior, although the regulatory effects of competition might be relatively weak because the search engine market is rather concentrated and centralized; a recent global user survey suggests that Google’s global usage share has reached 57.2%. In addition, not all search engines use their own technology; instead, they rely on other search providers for listings. However, search engines are also regulated by existing law and regulations, including consumer protection laws, copyright law, unfair competition laws, and—at the intersection of market-based regulation and law-based regulation—antitrust law or (in the European terminology) competition law.

4. Against this backdrop, the initial question for policymakers must concern the extent to which existing laws and regulations can feasibly address potential regulatory problems that emerge from search engines in the online environment. Only where existing legislation and regulation fails due to inadequacy, enforcement issues, or the like should new, specific, and narrowly tailored regulation be considered. In order to analyze existing laws and regulations with regard to their ability to manage problems associated with search engines, one might be well-advised to take a case-by-case approach, looking at each concrete problem or emerging regulatory issue (“scenario”) on the one hand and discussing the relevant incumbent legal/regulatory mechanisms aimed at addressing conflicts of that sort on the other.

5. Antitrust law might serve as an illustration of such an approach. While the case law on unilateral refusals to deal is still one of the most problematic and contested areas of current antitrust analysis, the emergence of litigation applying this analytical framework to search engines seems very likely. Although most firms’ unilateral refusals to deal with other firms are generally regarded as legal, a firm’s refusal to deal with competitors can give rise to antitrust liability if the firm possesses monopoly power and the refusal is part of a scheme designed to maintain or achieve further monopoly power. In the past, successful competitors like Aspen Skiing Co. and, more recently, Microsoft have been forced to collaborate with competitors and punished for actions that smaller companies could probably have gotten away with. In this sense, search engines might be the next arena where antitrust rules on unilateral refusals to deal are tested. In addition to the scenario just described, the question arises whether search engines could be held liable for refusing to include particular businesses in their listings. Where a market giant such as Google has a “don’t be evil” policy and declines to feature certain sites in its PageRank results because it deems those sites to be “evil,” there is an issue of whether Google is essentially shutting the site provider out of the online market through the exercise of its own position in the market for information. Likewise, the refusal to include certain books in the Google Print project would present troubling censorship-like issues. It is also important to note that Google’s editorial discretion with regard to its PageRank results was deemed to be protected by the First Amendment in the SearchKing case.

6. In conclusion, this paper suggests a cautious approach to rapid legislation and regulation of search engines. It is one of the lessons learned that one should not overestimate the need for new law to deal with apparently new phenomena emerging from new technologies. Rather, policy- and lawmakers would be well-advised to carefully evaluate the extent to which general and existing laws may address regulatory problems related to search and which issues exactly call for additional, specific legislation.

Regulating Search? Discussion Paper I


I have the pleasure of participating in a terrific conference on “Regulating Search?” organized and hosted by our friends at the Information Society Project at Yale Law School. Here is my first discussion paper; I will post a second one later on:

Regulating Search?
Sketching a Normative Framework for Assessing Regulatory Proposals

1. The question of this symposium – Regulating Search? – can be approached from various angles and at different levels. In any event, one might expect, inter alia, that several proposals for legal and/or regulatory action aimed at regulating search engines – ranging from consumer protection laws to IP reform – will be up for discussion. Presumably, the respective proposals will pursue different policy goals and use different regulatory techniques.

2. In a later phase, proposals like these are likely to enter into competition with one another. Lawmaking and regulation are costly processes, requiring that choices about goals and means be made. Against this backdrop, systematic comparison and individual evaluation of regulatory proposals become essential in order to make well-informed and sustainable decisions. A look back at the history of what has been termed “cyberlaw,” however, reveals a prevalent lack of thorough assessment of legislative and/or regulatory actions, in part because such an assessment requires an open discussion and shared understanding of what fundamental policy objectives should underlie today’s information society in the first place. This failure should not be repeated with regard to a potential regulation of “search.”

3. I would like to suggest three core values (or policy goals) of a democratic information ecosystem that may serve as the benchmarks for assessing proposals aimed at regulating search engines in particular and search more generally: Autonomy, diversity, and quality. Informational autonomy includes at least three elements. First, an individual must have the freedom to make choices among alternative sets of information, ideas, opinions, and the like. This includes the freedom to decide what information someone wants to receive and process. Second, informational autonomy as an aspect of individual liberty necessitates that everyone has the right to express her own beliefs and opinions. Third, autonomy in the digitally networked environment arguably requires that every user can participate in the creation of information, knowledge, and entertainment.

4. The development of an individual’s own personality and self-fulfillment intersects with a second core value of the digitally networked ecosystem: its diversity. Diversity in the sense of a wide distribution of information from a great variety of competing sources can either be seen as a valuable mechanism to attain truth, or as a crucial instrument for protecting democratic process and deliberation. In the digital environment, however, the diversity of information, knowledge, and entertainment is an important aspect of the broader concept of cultural diversity.

5. As individuals, groups, and societies, we heavily depend in our decision-making processes on information, which is increasingly acquired over the Internet. In order to make good decisions, we depend on quality information, i.e., information that meets the functional, cognitive, aesthetic, and ethical requirements of different stakeholders such as users, creators, experts, and administrators. Consequently, legal and regulatory regimes should contribute to the creation and further development of a high-quality information ecosystem.

6. Each proposal that seeks to regulate search in general and search engines in particular can be evaluated against these normative criteria. Even with this normative framework in place, however, the assessment of alternative governance regimes gets complicated, since the three policy goals of “autonomy,” “diversity,” and “quality” are not necessarily always aligned. Unleashed diversity in the digitally networked environment, for instance, might have negative feedback effects on user autonomy because it increases an individual’s risk of being exposed to undesired information. A regulatory approach aimed at ensuring high-quality information, by contrast, might be in tension with informational autonomy, because it may impose a quality requirement leading to a level of quality that does not meet an individual’s informational needs.

7. As a consequence, governance proposals for search engines and their environments face the challenge of achieving a balance among three policy goals that are not perfectly aligned. In the case of search engine regulation, this problem is accentuated by the fact that search engines simultaneously affect all three aspects. For example, since search engine users often do not know in advance what specific piece of information they are looking for, the quality of the information that users get depends to a great extent on search engines. Consequently, the quality of information is intertwined with the quality of the search engine that defines which information becomes available based on any given query. Similarly, search engines have effects on autonomy and diversity in the digitally networked environment. Against this backdrop, regulation of search (engines) is a particularly complex task because each regulatory intervention focusing on one issue almost certainly affects another element of the normative framework.

8. In conclusion, this discussion paper calls not only for a careful design of legal or regulatory actions aimed at governing “search,” but also for a thorough assessment of legislative and/or regulatory proposals and their potential effects against the backdrop of core values of a democratic digital environment (system of “moving elements”). In that sense, the paper also advocates for a systemic view of “search” regulation, where “search” is understood as one element that interacts with other elements of the digitally networked environment, including decentralized content production and peer-to-peer distribution of digital content.

Comments welcome.

Breaking News: EU Parliament rejects Software Patents Directive


The European Parliament has voted by a massive majority (648 votes to 14, with 18 abstentions) to reject the software patents directive. More here, here, and here.
