Copyright failure: terms are much much much too long; solution needed
David Gerard recently pointed out that despite recent expansion of the global commons of “freely-licensed knowledge”, all license terms still last for much too long. “Free licenses” still rely on copyright laws which impose restrictions on reuse for unreasonably long terms: currently “life of the author + 70 years” in most countries — roughly 10-50x as long as the average commercial lifespan of a new work.
Economists and researchers studying copyright have often noted that copyright terms have been extended with little justification, always at the request of the publishing industry, since the first copyright term (14 years) was set centuries ago — and that there is no data to suggest that longer copyright terms are good for society or useful in encouraging creative work.
The social memes of “free culture” and “free knowledge” have been shaped in large part by a community that bought into the idea of copyleft over the past decades: a use of copyright law in which authors exercise their copyrights in a way that lets others reuse their work, as long as the result is released under the same license.
We should figure out a reasonable maximum term for the sort of rights that are currently covered by copyright – say, something no more than 14 years – and embed that term into the most-recommended free culture licenses. That includes all Creative Commons licenses, as well as other free-culture and FOSS licenses. All of these licenses should explicitly transition to the Public Domain before the ultralong default term enshrined in international law.
(In practice this could mean automatically switching to a CC0 license at the end of the shorter term.)
Related discussions about license reform
David’s comments started a recent discussion on the Wikimedia-l mailing list, about whether Wikimedians should help push for a saner copyright term. Mike Linksvayer noted similar discussions on the Creative Commons licenses list from last December – part of brainstorming how to improve those licenses.
Two people made comments along these lines: “Shortening the copyright term is totally infeasible in the near term; instead we should encourage people to switch to free licenses.”
This misses two key points. Firstly, free culture groups are now some of the largest around; they include major content providers and platforms; and Creative Commons itself is a powerful global brand. Secondly, while convincing slow, conservative national governments to change their laws is hard, almost everyone who is not working or lobbying for content publishers — including the vast majority of content creators — feels copyright terms are too long. So this is an obvious place for citizen innovation to come first, and legislation second.
A few publishers are already adopting limited terms. O’Reilly Books uses a license that switches to CC-BY after 14 years.
Some free culture groups have taken a position here as well: Sweden’s Pirate Party advocates for a maximum term of 5 years. Richard Stallman of the FSF recommends a maximum of 5 or 10 years (though only for society as a whole, and only if it comes with open source requirements for proprietary software).
What can we do? Won’t this make free licenses harder to use?
Adding an explicit term after which works become PD should not complicate the “opt-in commons”, to use Mike’s term. This could be implemented with a few simple changes (I am imagining how CC could implement this, as they have great authority to recommend licensing norms):
- Define “PD-friendly” licenses as those which become PD in at most N years.
- Define the PD-date of a composite work as the latest PD-date among its component sources.
- Ask people to use a PD-friendly license.
Within that framework, people can use terms that make sense to them; some may want a license with a fixed PD date, so that a large group can collaborate on a shared work which is set to become PD in 2020. Ongoing collaborations like Wikipedia could use a license set to become PD after 8 years – so the latest version of a project would always be under a CC-SA license, but a version from today would become PD in 2020.
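The PD-date rule above is simple enough to sketch in a few lines. This is a hypothetical illustration only — the function name and the 14-year default term are my assumptions, not part of any actual license text:

```python
def composite_pd_year(component_years, term_years=14):
    """Year a composite work would enter the public domain under a
    hypothetical PD-friendly license: the latest component source's
    publication year, plus the license term."""
    return max(component_years) + term_years

# A remix of sources published in 2004, 2008, and 2012
# would become PD in 2026 under a 14-year term:
print(composite_pd_year([2004, 2008, 2012]))  # 2026

# An 8-year term on a 2012 Wikipedia snapshot gives the
# 2020 PD date described above:
print(composite_pd_year([2012], term_years=8))  # 2020
```

The point of the “latest component” rule is that a composite work can never free its parts earlier than their own licenses allow.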
Creative Commons and others could then promote the use of PD-friendly licenses. Collaboratives like Wikimedia communities, and publishers like O’Reilly, could switch to those licenses for their projects and works. Together we would return to building a true intellectual and artistic Commons — something which in the US has been starved of almost all works produced in the past 35 years.
How will YOU use 12M bibliographic records?
Harvard Libraries recently released bibliographic data from their collections – 12 million records in all – under a CC0 license, which lets other sites and researchers reuse that data in any way.
This is the biggest release of bibliographic data of its kind — four times the size of a similar release by the British Library in late 2010. (Without an explicit release under a free license, such collections of metadata are covered by ‘database rights’.)
How would you reuse these records in your own work and dreams? Some quick ideas:
- Wikipedia or Wikisource could create 12 million stubs from those records
- Open Library will improve and update its own metadata collection, which was built from scraped subsets of such data
- We can write scripts that autogenerate “lists of works” for authors, or categories for works
- We can automatically find mismatches between our person-data and title-data and those in MARC
- We can publicly clean up mistakes in the MARC catalog and suggest updates
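The mismatch-finding idea above can be sketched with simplified data. Everything here is hypothetical: real MARC records would need a proper parser (such as the pymarc library), and the title-to-author dictionaries stand in for whatever structured form the two catalogs actually take:

```python
def find_mismatches(local, marc):
    """Compare a local title->author mapping against one derived
    from MARC data; return the titles present in both whose
    author strings disagree, with both versions for review."""
    return {
        title: {"local": local[title], "marc": marc[title]}
        for title in local.keys() & marc.keys()
        if local[title] != marc[title]
    }

local_data = {
    "Walden": "Thoreau, Henry David",
    "Leaves of Grass": "Whitman, Walt",
}
marc_data = {
    "Walden": "Thoreau, H. D.",
    "Leaves of Grass": "Whitman, Walt",
}
print(find_mismatches(local_data, marc_data))
```

Running this flags only “Walden”, where the two author strings differ; a real comparison would normalize names before comparing, but the shape of the task is the same.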
Aaron Swartz vs. United States
(echoes of a broken system)
UPDATE: Aaron committed suicide on January 11, 2013.(!) More on his life here.
Aaron Swartz is a friend and Cambridge-area polymath whose projects focus on access to knowledge, open government, and an informed civil society. He has worked as a software architect, digital archivist, social analyst, Wikipedia analyst, and political organizer. Last year he co-founded the Progressive Change Campaign Committee and the non-profit political advocacy group Demand Progress.
He is also currently charged with computer fraud by the US Attorney’s office, in what appears to be the latest example of “a sweeping expansion of federal criminal jurisdiction” based on the broad applicability of wire fraud and computer fraud statutes. An overview:
Aaron has studied institutional influence and ways to work with large datasets. In 2008, he founded watchdog.net, “the good government site with teeth“, to aggregate and visualize data about politicians – including where their money comes from. That year he also worked with Shireen Barday at Stanford Law School to assess “problems with remunerated research” in law review articles (i.e., articles funded by corporations, sometimes to help them in ongoing legal battles), by downloading and analyzing over 400,000 law review articles to determine the source of their funding. The results were published in the Stanford Law Review. Most recently, he served for 10 months as a Fellow at Harvard’s Safra Center for Ethics, in their Lab on Institutional Corruption.
He contributed to the field of digital archiving, designing and implementing the Open Library, which serves as a global digital resource today, and as a foundation for any digital libraries in the future. And he collected 2 million public-domain court decisions from the US PACER system — a system that nominally makes all such decisions available to the public, but in practice keeps them hidden behind a paywall – to add to Carl Malamud’s collection at resource.org. (That work in turn gave rise to the crowdsourced RECAP project.)
The Case of the Over-Downloader
Last week, Aaron was charged by a grand jury with computer fraud, for allegedly downloading millions of academic articles hosted by the journal archive JSTOR, and exceeding authorization on MIT and JSTOR servers to do so.
JSTOR claims no interest in pursuing a legal case. However, they are not a party to the prosecution, and Aaron faces a possible fine and up to 35 years in prison, with trial set for September. You can support his legal efforts online.
The Association of College and Research Libraries notes that both the prosecution and Swartz’s supporters have characterized the trial with “superficial, and deeply incorrect, messages about libraries and licensed content“.
So how did this come to pass, and what does it mean for the Internet?
Details of the case and public reactions it inspired, after the jump.
How I became a Wikipedian
I had forgotten the long essay I wrote about this transition here on my blog… or rather, on my first law school blog, when blogs.law was new and cuddly. My transition to the current WordPress skin made it more visible; that new-found visibility online made it a repeated spam target, and I rediscovered it today. So spam has done something good for me. Thanks, spam king!
For those of you who missed it the first time around in early 2004 (written before I knew how Wikipedia worked, or even that it was community owned and run), here it is again: On Multilingual Encyclopedia and Dictionary (public domain).