
⚐ Cyber Dialogue 2014 – Working group on “Cyber war & international diplomacy issues”


I am just back from the CyberDialogue conference, an event presented by the Canada Centre for Global Security Studies at the Munk School of Global Affairs, University of Toronto, that convenes actors "from government, civil society, academia and private enterprise to participate in a series of facilitated public plenary conversations and working groups around cyberspace security and governance". The conference was frankly awesome – I've always been a fan of the Citizen Lab, and they deserve major applause for putting together a very productive and unique conference. They did a truly impressive job of convening actors with very different backgrounds and points of view – something that is much needed in this space, as conversations unfortunately tend to happen within echo chambers of like-minded experts.

I was lucky to be among an impressive group of folks for a working group on cyber war and diplomacy. We had a whole day to talk through differences in perspectives, to identify priorities and fault lines and to come up with a short statement of things we could agree on – then point to things we couldn’t reach consensus on.

Below are three points we managed to agree on. It reads a bit mild, because it had to be common ground and consensual, yet it's interesting because it stresses the necessity to address cyber conflict norms in times of peace.

“1. There exists a need to establish viable norms for state behavior in peacetime regarding cyber security.  Much attention has been dedicated to the threshold for cyber action regarding the Law of Armed Conflict, but less about the steady state beneath armed conflict.

2. Attention should be paid to the development of humanitarian principles for cybersecurity.  For example, are Computer Emergency/Incident Response Teams (CERTs) off limits for targeting?  Can International Committee of the Red Cross principles – neutral, impartial, and independent – be applied to the first responders of cyberspace?

3. International activity on cybersecurity, such as the UN Group of Governmental Experts on cyber and other multi-national initiatives, should include mechanisms for inclusionary participation and distributed responsibility beyond government and industry.”

Many thanks to our group moderator, Chris Bronk from Rice University, for compiling these remarks. I am pasting the framing notes of the working group below, along with the conference topic for this year. To keep groups small enough to function, the initial working group topic had been separated into "Surveillance and accountability" and "Cyber war and international diplomacy issues". On one hand, that clearly made us more effective; on the other hand, I would have loved to finally see a conversation on how cyberwar institutions and legal frameworks feed into the dynamics of governmental surveillance. The "domestic" surveillance conversation is often held separately from the "cyberwar" international one, which obscures how deeply they are linked.

More on this later: if more details of the discussion are posted (it was held under the Chatham House Rule), I'll share links. In the meantime:

Conference website: http://www.cyberdialogue.ca/2014-agenda/

The 2013 Cyber Dialogue video gives a clear idea of what the conference feels like: http://vimeo.com/77650794

Citizen Lab website is filled with great research: http://citizenlab.org/

Bits of conference wisdom can be found on Twitter hashtag: https://twitter.com/search?q=%23cd2014

Bonus Twitter feed: https://twitter.com/cyberdiaDoge

✎ WORKING GROUP TOPIC: From Surveillance to Cyber War: What are the Limits and Impacts?

Moderators: Jan Kleijssen (Council of Europe) and Chris Bronk (Rice University)

Description: The Snowden revelations have touched off widespread criticism and alarm over government-organized mass surveillance and computer network exploitation and attacks. Yet even liberal democratic governments require well-equipped law enforcement, intelligence, and armed forces to enforce the law and secure themselves from threats abroad. The world can be a nasty place, and we have to live in it. Both mass and targeted surveillance, including computer network exploitation and attacks, are likely going to be a part of that world for the foreseeable future. What are the proper limits and safeguards of lawful intercept? Do we need new forms of oversight and accountability? How do we reconcile the seemingly conflicting missions of agencies charged to protect domestic critical infrastructure from attack while developing ways to compromise networks abroad? Is there an arms race in cyberspace? How do we control it? Can we develop norms to limit global cyber espionage?

CONFERENCE TOPIC: After Snowden, Whither Internet Freedom?

A recent stream of documents leaked by former NSA contractor Edward Snowden has shed light on an otherwise highly secretive world of cyber surveillance. Among the revelations — which include details on mass domestic intercepts and covert efforts to shape and weaken global encryption standards — perhaps the most important for the future of global cyberspace are those concerning the way the U.S. government compelled the secret cooperation of American telecommunications, Internet, and social media companies with signals intelligence programs.

For American citizens, the NSA story has touched off soul-searching discussions about the legality of mass surveillance programs, whether they violate the Fourth and Fifth Amendments of the U.S. Constitution, and whether proper oversight and accountability exist to protect American citizens’ rights. But for the rest of the world, the revelations lay bare an enormous “homefield advantage” enjoyed by the United States — a function of the fact that AT&T, Verizon, Google, Facebook, Twitter, Yahoo!, and many other brand-name giants are headquartered in the United States.

Prior to the Snowden revelations, global governance of cyberspace was already at a breaking point. The vast majority of Internet users — now and into the future — are coming from the world’s global South, from regions like Africa, Asia, Latin America, and the Middle East. Of the six billion mobile phones on the planet, four billion are already located in the developing world. Notably, many of the fastest rates of connectivity to cyberspace are among the world’s most fragile states and/or autocratic regimes, or in countries where religion plays a major role in public life. Meanwhile, countries like Russia, China, Saudi Arabia, Indonesia, India, and others have been pushing for greater sovereign controls in cyberspace. While a US-led alliance of countries, known as the Freedom Online Coalition, was able to resist these pressures at the Dubai ITU summit and other forums like it, the Snowden revelations will certainly call into question the sincerity of this coalition. Already some world leaders, such as Brazil’s President Rousseff, have argued for a reordering of governance of global cyberspace away from U.S. controls.

For the fourth annual Cyber Dialogue, we are inviting a selected group of participants to address the question, “After Snowden, Whither Internet Freedom?” What are the likely reactions to the Snowden revelations going to be among countries of the global South? How will the Freedom Online Coalition respond? What is the future of the “multi-stakeholder” model of Internet governance? Does the “Internet Freedom” agenda still carry any legitimacy? What do we know about “other NSAs” out there? What are the likely implications for rights, security, and openness in cyberspace of post-Snowden nationalization efforts, like Brazil’s?

As in previous Cyber Dialogues, participants will be drawn from a cross-section of government (including law enforcement, defence, and intelligence), the private sector, and civil society. In order to canvass worldwide reaction to the Snowden revelations, this year’s Cyber Dialogue will include an emphasis on thought leaders from the global South, including Africa, Asia, Latin America, and the Middle East.

⚐ Harvard International Law Journal Symposium: “Sovereignty in Cyberspace” Panel


March 7th 2014, Cambridge, MA

The Harvard International Law Journal kindly invited me to participate in a symposium panel entitled "Beyond State Boundaries: Challenges of International Law in Cyberspace".

We discussed the blurred lines between cyber-crime, cyber-terrorism, cyber-espionage and cyber-war, and what was at stake in our ability to provide clear answers to navigate these questions both domestically and internationally (hint: democracy). We also addressed the cyber-security concerns of businesses in this context.

I've learned a lot from my co-panelists, and we have decided to keep this conversation going beyond our panel. Their expertise and remarks are very helpful in shaping the cyberpeace framework: more soon on the follow-up.

The Symposium website is here; the panel description and panelists' bios are pasted below. I am grateful to Michelle Ha & Sarah Lee, who invited me to the event and did a rockstar job of putting the symposium together.

Panel: Beyond State Boundaries: Challenges of International Law in Cyberspace

This panel will address the implications for the notion of sovereignty of the evolution of the Internet, and of technology in general, which are thought to defy states' ability to control their territorial boundaries and the laws and policies within them. From cyber war to cyber surveillance, from the NSA's practices and policies to ongoing online trade espionage, the panelists will draw on recent events to explore the existing laws and their potential shortcomings in governing the online sphere. Panel sponsored by Sullivan & Cromwell.

Susan Brenner (NCR Distinguished Professor of Law and Technology, University of Dayton School of Law)

Jeffrey Carr (Founder and CEO of Taia Global Inc., Author of “Inside Cyber Warfare: Mapping the Cyber Underworld”)

Leo Clarke (Senior Vice President and General Counsel at Washington Federal, Expert on cyber risk management)

John Evangelakos (Partner, Sullivan & Cromwell)

Camille François (Fellow at the Berkman Center for Internet and Society)

Moderator: Charles R. Nesson (William F. Weld Professor of Law, Harvard Law School)

– * –

Susan W. Brenner is NCR Distinguished Professor of Law and Technology at the University of Dayton School of Law. She specializes in two distinct areas of law: grand jury practice and cyberconflict, i.e., cybercrime, cyberterrorism and cyberwarfare. In 1996, Professor Brenner and then Assistant U.S. Attorney Gregory Lockhart published Federal Grand Jury: A Guide to Law and Practice, a treatise addressing the various aspects of grand jury practice in the federal system. Professor Brenner has spoken at numerous events, including two Interpol Cybercrime Conferences, the Middle East IT Security Conference, the American Bar Association’s National Cybercrime Conference and the Yale Law School Conference on Cybercrime. She was a member of the European Union’s CTOSE project on digital evidence and served on two Department of Justice digital evidence initiatives. She also chaired a Working Group in an American Bar Association project that developed the ITU Toolkit for Cybercrime Legislation for the United Nations’ International Telecommunication Union. She is a senior principal for Global CyberRisk, LLC. Professor Brenner is a member of the American Academy of Forensic Sciences. In 2010, Praeger published her most recent book, Cybercrime: Criminal Threats from Cyberspace.

Jeffrey Carr, founder and CEO of Taia Global Inc., is the author of “Inside Cyber Warfare: Mapping the Cyber Underworld” (O’Reilly Media, 2009; 2nd edition 2011). His book has been endorsed by General Chilton, former Commander of USSTRATCOM, and the foreword to the second edition was written by former Secretary of Homeland Security Michael Chertoff. Jeffrey is an adjunct professor at George Washington University, where he has taught a course on cyber conflict. He has spoken at over 100 conferences and seminars (e.g. the US Army War College, the Air Force Institute of Technology, the Chief of Naval Operations Strategic Study Group, the Defense Intelligence Agency, the CIA’s Open Source Center).

Leo Clarke has been counseling clients on cyber risk management since 1998. He has published fifteen scholarly and practice-oriented works on cyber-related issues and has spoken at dozens of conferences throughout the U.S. and from Estonia to Dubai. Leo has also served as general counsel of regional banks, been counsel of record in complex lawsuits in twelve states, and taught at five law schools. He graduated with honors from Stanford University and UCLA Law School.

John Evangelakos is a Partner in Sullivan & Cromwell’s Mergers and Acquisitions Group. He is also co-head of the Firm’s Intellectual Property Group. He has led transactions in a wide variety of industries, but in recent years much of his work has been in the financial services sector. He was resident in the Firm’s Hong Kong office between 1994 and 1997. Evangelakos graduated from Harvard University in 1981 and the New York University School of Law in 1985. He also clerked for Hon. Jerre S. Williams, U.S. Court of Appeals, Fifth Circuit (1985 – 1986).

Camille François is a fellow at Harvard Law School’s Berkman Center, working on surveillance and cybersecurity issues, cyberwar and cyberpeace, and public policy issues in robotics (especially drones & self-driving cars). A Fulbright Fellow, she is also a Visiting Scholar at Columbia University’s Saltzman Institute for War and Peace Studies. She helped structure the School of International and Public Affairs’ program in Cybersecurity and worked for the US Defense Advanced Research Projects Agency (DARPA), organizing the Expert Workshop on Privacy in Cyberspace at the agency’s headquarters. In 2013, she won first place for Columbia at the Atlantic Council Cyber 9/12 National Challenge in Cyber Policy. She previously worked for Google in Europe, managing research on market insights and key policy and privacy trends. In her home country of France, she has worked mainly in politics, serving two years in the Parliament as a legislative aide and holding leadership positions in national and local campaigns. She also participated in the main research project on religious politics in the French suburbs, published by the think tank L’Institut Montaigne. She holds a Master’s degree in International Public Management from Sciences-Po Paris, and a Master’s degree in International Security from the Columbia School of International and Public Affairs. She completed her Bachelor’s degree at Sciences-Po Paris, with a year as a visiting student at Princeton University, and received legal education at Paris II – Sorbonne Universités.

Charles Nesson is the William F. Weld Professor of Law at Harvard Law School and joined the HLS faculty in 1966. He is the founder of the Berkman Center for Internet & Society at HLS and of the Global Poker Strategic Thinking Society. He authored Evidence, with Murray and Green, and has participated in several cases before the U.S. Supreme Court, including the landmark case Daubert v. Merrell Dow Pharmaceuticals. Professor Nesson has an A.B. from Harvard College and a J.D. from Harvard Law School.


⚐ RightsCon 2014: Cyberpeace Panel


I'm back from RightsCon 2014, where I discussed the need for a cyberpeace framework on a panel entitled "Cyberpeace: Moving Beyond A Narrative of Global Threats".


Panel framing

“The Internet security narrative is generally structured around threats and problems, and as a result, often leads to one-sided, top-down, control-oriented priorities. These lead to pervasive surveillance, siloed technology, balkanized networks, and other impediments to openness and a global, community Internet. We need to start climbing back to the top. In this session, we’ll talk about whether and how that could include a new doctrine of cyberpeace, to foster mutual peace, trust, and transparency online, and to reduce incentives and opportunities to build boxes and closed environments of control and conflict.”

Participants: 

Camille Francois, Fellow, Harvard Berkman Center for Internet & Society
Megan Garcia, Nuclear Security Initiative Program Officer, Hewlett Foundation
Tim Maurer, Research Fellow, Open Technology Institute, New America Foundation
Aaron Shull, Counsel & Corporate Secretary, The Centre for International Governance Innovation

Alex Fowler, Global Privacy and Public Policy Leader, Mozilla (Moderator) 

The cyberpeace framework was welcomed with enthusiasm & the audience made great points. More to come soon on the follow-up of that RightsCon panel.

Thanks

The awesome visualization at the top of this short blog post comes from Willow Brugh‘s magic viz skills: see original posting on her blog here.

I am deeply grateful to Mozilla for its support of this panel.


⚐ Talk at Columbia: “Augmented Humanity, Drones, Self-Driving Cars, Furbys and Robotic Politics: Freedom and Security in the Robotics Age”


This lunch talk was an introduction to robotics policy issues for SIPA: I bundled the different robotics policy issues into "phases" that also correspond to the chronological evolution of these concerns. For each phase, I chose an object that focused policy attention: the "Furby phase", the "Self-driving car phase", the "Drone phase" and the "Transhumanist phase".

Written notes to be posted soon!

Abstract – As we surround ourselves with robots, autonomous or not, from the ground to the sky, we are facing policy questions we thought pertained to the realm of science-fiction. We build drones for both war and investigative journalism, plan to put self-driving cars on the road and design social robots to care for elders: what are the implications for our freedom and security? What are the social, ethical and policy questions we must address? For instance, how does the current debate about privacy, data collection and surveillance play out in an age in which we surrender more and more of our autonomy to machines?

✏︎ http://www.siwps.com/events/augmented-humanity-drones-self-driving-cars-furbys.html

✏︎ http://www.columbia.edu/event/augmented-humanity-drones-self-driving-cars-furbys-robot-politics-freedom-and-security-robotic.html

◓ Cryptivism: Voluntary Botnet Bitcoin Mining Fundraising?


A theoretical question on my mind: has anyone ever tried putting a voluntary botnet to work to mine crypto-currencies for philanthropic fundraising purposes?

Botnets mining bitcoins (botcoins!) is no new idea. Usually, though, these are not voluntary botnets, and they mine for profit, which makes it a criminal activity. See ZeroAccess for instance. Or ESEA, the gaming company that got caught mining behind its users' backs ("serious gamers like ESEA's customers made excellent soldiers for a botnet army: Gaming machines have powerful graphical processing units that are pretty good at bitcoin mining"). They got sued for it in the States, which gives us a nice peek into a legal discussion around non-voluntary botnet bitcoin mining.

Bitcoin mining by voluntary botnet for for-profit purposes also seems to have been tried, but in more or less shady ways: see security researchers Brian Krebs and Xylitol on FeodalCash, which promised to put your computer to work in the botnet and give you shares of what was mined:

“Dear slave masters, check your wallets you should have received your shares now.
We are glad you’re working with us.
Regards, FeodalCash”

If you trust an organization enough that you would join their voluntary botnet, which would be like saying "Hey, I trust you, here is a little bit of my computer power, we can do a lot together", then theoretically this organization could mine its way through a successful fundraising campaign.

I wonder how many groups have access to voluntary botnets though: botnets that people have willingly joined. I can think of Anonymous’ Low Orbit Ion Cannon, LOIC, but I’m sure there are many smaller initiatives, like Computer Science labs whose students would have formed voluntary botnets (and who could mine for pizza?). 

I also wonder how profitable the operation would be. Bitcoin entrepreneur friends suggested that, in the current setting, putting a botnet of one million computers to work on the basis that they would mine when not otherwise used by their owners would bring in $50,000 a week. Research on botnets mining bitcoins (see this paper for instance) suggests that other sorts of cryptocurrencies would be more profitable to bot-mine. It seems very hard to model returns predictably, as the sketch below illustrates.
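Out of curiosity, the arithmetic behind such an estimate is easy to sketch, even though every input is a guess. Here is a minimal back-of-the-envelope model in Python; all the constants (per-machine hashrate, idle time, network hashrate, block reward, exchange rate) are assumptions I picked for illustration, not measured values:

```python
# Back-of-the-envelope model of voluntary-botnet mining revenue.
# Every constant below is an illustrative assumption, not a measurement.

MACHINES = 1_000_000            # hypothetical voluntary botnet size
HASHRATE_PER_MACHINE = 30e6     # hashes/sec per machine (assumed GPU-class)
IDLE_FRACTION = 0.5             # fraction of time a machine is idle and mining

NETWORK_HASHRATE = 25e15        # total Bitcoin network hashes/sec (assumed)
BLOCK_REWARD_BTC = 25.0         # BTC per block (2014-era reward)
BLOCKS_PER_WEEK = 6 * 24 * 7    # one block roughly every 10 minutes
BTC_PRICE_USD = 600.0           # assumed exchange rate

# Expected share of blocks is proportional to share of total hashrate.
pool_hashrate = MACHINES * HASHRATE_PER_MACHINE * IDLE_FRACTION
network_share = pool_hashrate / NETWORK_HASHRATE

btc_per_week = network_share * BLOCKS_PER_WEEK * BLOCK_REWARD_BTC
usd_per_week = btc_per_week * BTC_PRICE_USD

print(f"Share of network hashrate: {network_share:.4%}")
print(f"Expected revenue: {btc_per_week:.1f} BTC (~${usd_per_week:,.0f}) per week")
```

With these particular guesses the model lands near $9,000 a week rather than $50,000, and nudging the assumed per-machine or network hashrate by a factor of a few swings the result accordingly – which is exactly why returns are so hard to model predictably, and why the research cited above points to cryptocurrencies friendlier to ordinary CPUs and GPUs.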

There are a couple of other challenges on the NGO side: for instance, it's not always easy to accept bitcoin donations. See EFF's complicated bitcoin donation story. That being said, there is an impressive list of organizations that do accept bitcoin donations.

So there are a couple of challenges on the road, but it would be an interesting case of useful clicktivism (or cryptivism?)…


✏︎ A perspective on cyberwar for the BBC


January 29th, 2014

Tara McKelvey from the BBC, whom I met as she joined us at the Drone Conference last October to moderate a panel on “Life Under Drones”, called me today with a question about cyber threats. Her great article, “Hackers, spies, threats and the US spies’ budget” is online here.

“Are cyber threats overblown?” is a common question: it is the one we are forced to debate with each budget vote or threat review. The usual narrative goes: the US needs to protect itself against a “cyber Pearl Harbor” or a “cyber 9/11”. A quick Google search will point you to a myriad of articles debating whether these threats are a myth (one I really like here, from Henry Farrell) or a dire priority for us to address. I’m grateful to have been consulted on this point.

My work on cyberwar also operates in a slightly different dimension. Cyber threats are indeed real. Experts are therefore right to attempt to evaluate these threats with greater precision, and to budget the corresponding security spending. However, I'm looking at cyberwar from a different angle: not primarily as a threat (be it overblown or imminent) but as an ideological framework that is shaping both our institutional and legal reality and our public debate.


◓ Google & DeepMind: Society, too, must ask ethical questions


Google just bought a new Artificial Intelligence firm, DeepMind. Not a surprising move, but every step Google takes in the robots / AI direction makes the need to consider the ethical and legal implications of these activities more urgent.

In a nutshell, DeepMind is a Singularity-inspired (see co-founder Shane Legg's talk at the 2010 Singularity Summit), London-based AI firm. This is reported to be a talent acquisition.

DeepMind was founded by neuroscientists, and its goal is to create computers that can function as human brains do. Legg sees this happening around 2030 – of course, in the process of making intelligent machines, they also wonder about what exactly intelligence is, and how to measure it (see Legg's paper here).

In AI lingo, this is called "strong AI". The following short description stresses that strong AI is about replicating human intelligence in general, not solving specific problems (like: how can Google's search engine give you better ads based on what it already knows about you from your emails?):

Strong AI is a hypothetical artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that could successfully perform any intellectual task that a human being can.  It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as “artificial general intelligence” or as the ability to perform “general intelligent action.” Strong AI is associated with traits such as consciousness, sentience, sapience and self-awareness observed in living beings.

Some references emphasize a distinction between strong AI and “applied AI”: the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to simulate the full range of human cognitive abilities.

(Wikipedia)

Why does this matter? DeepMind's three top talents will join ranks with other brilliant AI inventors at Google, including Singularity pioneer Ray Kurzweil, who joined in 2012 as Engineering Director. Google has a secret lab on campus, Google X, that deals with moonshot ideas and has already given us the self-driving Google car. Google's Andy Rubin has carte blanche to create a robotics revolution. Google bought a firm that builds robots and that used to be primarily funded by grants from the US military (via DARPA): Boston Dynamics. Regina Dugan, DARPA's former Director, also joined in 2012. Google also bought a firm that builds cute humanoid robots, Meka. Since the acquisition, their website just says: "We have been acquired by Google and are busy building the robot revolution." These are a couple of items on a much longer list.

Are all of these people working together to build a giant Skynet-like organization? Probably not. Are these completely unrelated acquisitions by Google executives seeking to invest in tomorrow's exponential businesses? Even if Google has a track record of having people work in silos, it's tough to assume that there won't be synergies in these domains.

Google's intent with these acquisitions rather seems to mimic DARPA's core purpose: "To work in vigorous pursuit of [one] mission: making the pivotal early technology investments that ultimately create and prevent decisive surprise." (April 2013 DARPA letter from the Office of the Director). Except that with Google, it is not about preventing surprises "for U.S. National Security" but for Google's business. That makes things quite different.

It doesn’t have to go wrong, but the move raises legitimate ethical and legal concerns. This new ecosystem that Google is building, bringing together the best minds in robotics and AI and providing them enough budgetary leeway to make all fantasies come true, more or less behind closed and opaque doors, deserves an open debate.

The most interesting comment on Google's acquisition of DeepMind is that DeepMind has reportedly asked for an Ethics Board to be set up within Google in order to evaluate how Google could/should work on AI. In 2011, DeepMind's Shane Legg was already describing the "current level of awareness of possible risks from AI" as "too low". He warned: "it could well be a double edged sword: by the time the mainstream research community starts to worry about this issue, we might be risking some kind of arms race if large companies and/or governments start to secretly panic. That would likely be bad."

It is good news that Google is taking steps to set up an Ethics Board to think about these questions, but society should also take the hint. Google's behavior, its leaders' declarations and recent acquisitions tell us: it is time for society, too, to ramp up its ethical and legal thinking on these questions.

C.

* Jan. 29th update – More in that direction: The Verge reported that, as Google was selling Motorola to Lenovo, its "Advanced Technology and Projects" division of about 100 people, led by Dugan, would remain at Google and join the Android teams.

Photo credit – http://www.flickr.com/photos/fallentomato/6178390745/sizes/z/in/photostream/


◐ Robots Conference @ Columbia University Saltzman Institute for War and Peace Studies


On Dec. 9th, I gave a talk at Columbia's School of International and Public Affairs (SIPA) on Freedom and Security in the Robotics Age.

In this intro to robotics policy issues, I presented four 'phases' of public policy concerns in robotics, each illustrated by an object that embodied that wave of concerns: first Furbys, then Self-Driving Cars, then Drones and finally… Cyborgs. My talk was moderated by Captain Shawn Lonergan (US Army); below are links to the event and an abstract, and I should be able to publish a write-up of the talk soon.

Abstract – As we surround ourselves with robots, autonomous or not, from the ground to the sky, we are facing policy questions we thought pertained to the realm of science-fiction. We build drones for both war and investigative journalism, plan to put self-driving cars on the road and design social robots to care for elders: what are the implications for our freedom and security? What are the social, ethical and policy questions we must address? For instance, how does the current debate about privacy, data collection and surveillance play out in an age in which we surrender more and more of our autonomy to machines?

Links – http://www.siwps.com/events/augmented-humanity-drones-self-driving-cars-furbys.html

http://www.columbia.edu/event/augmented-humanity-drones-self-driving-cars-furbys-robot-politics-freedom-and-security-robotic.html

◓ [Morning Reads] Questioning the NSA’s bulk collection programs’ efficiency for counter-terrorism


My NY-Cambridge commute this morning had me reading two reports questioning the efficiency of the NSA's PRISM and phone-metadata bulk-collection programs for counter-terrorism efforts, thanks to Lorenzo's great piece in Mashable.

(1) A New America study led by Peter Bergen (32-page PDF here, one-page Web overview there), the national security analyst who produced the first TV interview with Osama bin Laden in 1997 and the director of the National Security program at the New America Foundation. It concludes:

“Our review of the government’s claims about the role that NSA ‘bulk’ surveillance of phone and email communications records has had in keeping the United States safe from terrorism shows that these claims are overblown and even misleading,” (…)

“Traditional investigative methods, such as the use of informants, tips from local communities, and targeted intelligence operations, provided the initial impetus for investigations in the majority of cases, while the contribution of NSA’s bulk surveillance programs to these cases was minimal,”

(2) A report by Marshall Erwin, a counter-terrorism expert who has worked for the Intelligence Community (bio here) and is based at the Hoover Institution as a Fellow (the Hoover Institution is viewed as a conservative, national-security-oriented place and is packed with people who served in government and agencies working on these issues – I was looking forward to their take on this question). Erwin focuses on Section 215 of the Patriot Act and places his study in the context of US federal judges disputing the legality of these programs – a question that could ultimately be resolved by the US Supreme Court (or not, we'll see).

He looks deeper into the two examples often cited by the Intelligence Community to demonstrate the efficiency of the collection programs:

My conclusion is simple: neither of these cases demonstrates that bulk phone records collection is effective. Those records did not make a significant contribution to success against the 2009 plot because at the point at which the NSA searched the bulk records database, the FBI already had sufficient information to disrupt the plot. It is also unlikely that bulk collection would have helped disrupt the 9/11 attacks, given critical barriers to information sharing and as demonstrated by the wealth of information already available to the intelligence community about al-Mihdhar.

His report in PDF here, his post for Just Security presenting the study there.  

Serious national security analyses are fundamental to assessing the costs and benefits of the NSA's invasive programs. Simply put: many feel weirded out by the NSA violations but wonder if listening to everyone isn't necessary, or 'worth it', to protect us from terrorism. These two analyses say: no, not worth it, not efficient. In the long run, it can even prove counter-productive: "If we want to ensure the long-term viability of counterterrorism efforts and our continued success against al-Qaeda, we must increasingly prune away those programs and activities that have not helped keep us safe", Erwin writes.

This is not a US-focused message: it should be heard by all European policymakers trying to figure out what amount of surveillance to tolerate (from US agencies & others) in order to make us all safer.

◒ A Reality Check on Cyberspace: Punks, War and Ideologies



Commentary invited by editors of Scientific American

What Is War in the Digital Realm? A Reality Check on the Meaning of “Cyberspace”

By Camille François | November 26, 2013


Credit: Wikimedia Commons/NASA

Cyber is everywhere: in political speeches, in newspapers, at dinner conversations. There's cyberwar and cybersex and cybercafés (they still exist, I promise), and there's the U.S. Cyber Command. Once in a while, there is a new surge of articles arguing that the word "cyber" is vague and dated, and that we should just get rid of it in favor of more precise terminology.

That is wishful thinking: we might lack clear definitions of the cyber prefix, but for whatever reason cyber seems here to stay, which is why we should take a moment to explore what meanings and ideologies we have been infusing into this word, to better inform our debates about technology.

Cyber's most popular namechild is certainly cyberspace (always cited and never defined), and it has been with us for more than 30 years. It's time for a short review of its origins, its many variations and what's hiding behind the term.

Cyberspace was a term brought to us by literature, and its trajectory traveled through poetry, academic analysis, politics and ideologies. It is now pervasively used by anyone who wishes to discuss security and democracy in a networked society. The stakes are crucially important. Using vague, misunderstood and meaningless language tools to articulate these debates hinders our ability to think critically about technology, something we can’t afford when we should be having informed debates about our expectations on surveillance, privacy or freedom of speech.

Origins in Cyberpunk
Given the lack of clear definition, it’s not surprising that the Wikipedia entry on cyberspace offers a very abstract explanation: that cyberspace is “the idea of interconnectedness of human beings through computers and telecommunications, without regard to physical geography.”

“Cyberspace” was popularized by novelist William Gibson, father of the literary genre known as cyberpunk. He didn’t mean to forge a political concept, though, and he later noted that his word was “evocative and essentially meaningless.”

In his 1982 short story Burning Chrome, the word cyberspace makes its first appearance as the name of a machine: the “workaday Ono-Sendai VII, the ‘Cyberspace Seven.’”

In his 1984 novel Neuromancer, it becomes more than a computer’s pet name and is described in more conceptual terms:

“Cyberspace. A consensual hallucination experienced daily by billions of legitimate operators, in every nation, by children being taught mathematical concepts…. Unthinkable complexity. Lines of light ranged in the non space of the mind, clusters and constellations of data.”

Read it one more time. This sentence has all the seeds of topics still being discussed today; it holds all the complexity of how cyberspace could be interpreted: Is it a thing, handled by its operators? Is it a space, an abstraction uniting the minds across all the nations? Is it a place, organized in clusters, or is it a political ideology, those hallucinations that take strength in consensus? Let's explore all of the above.

Cyberspace is not a thing
You can’t “fix” cyberspace, and it doesn’t sound right to talk about The Cyberspace—cyberspace is not a thing. You can’t really use the term “cyberspace” to replace “Internet”: the first is more abstract than the technology described by the second. And since we’re here, “Internet” is not a thing either: it’s a set of protocols, a technology enabling computers to talk to each other.

Could cyberspace be a “space”?
What it could be, though, is a metaphorical space emerging from the technology. “Cyberspace” could describe the abstract space in which the conversations of people using the Internet are happening. It could be the name of the theoretical online salon, the public square that one can access in a couple of clicks, even though that opens questions like: whose space is it, who is ruling it, who is excluded from it, and are all the people thinking that they are in cyberspace truly in the same salon? This understanding is a fun conceptual alley to explore, and its road is paved with great academic research.

Could cyberspace be a “place”?
Now, what’s the difference between a space and a place? A space is much more of an abstract and moving concept than a place; a place is more structured, has rules, people, frontiers. A place is closer to the idea of a territory.

Europe can be analyzed as a space—its people share some sort of common history and principles, but when its frontiers and ideology are discussed, they evolve with various political projects. Tracing its borders, defining its rulers, declaring its principles, institutionalizing power in it, and making it a territory (the European Union) becomes a political act.

Calling a space a “place” is making a political statement; it imprints an ideology on it. This is why “cyberwar” is an ideological turn.

Can Cyberspace be at war?
The cyberwar rhetoric turns the abstraction of cyberspace into a new zone of combat—and aligns it with land, sea, air and space. Most of the definition problems around cybersecurity and cyberwar have to do with their first five letters: if you can’t define cyber, what are you going to secure? What are you declaring war on?

In 1996, a RAND report, The Advent of Netwar, explained that we must protect “The Net”, and that for such a task offense would be our best defense. These elements are deep at the core of the cyberwar rationale.

Cyberwar is the political ideology that proposes new principles for the space, new actors to rule it. Cyberwar is an ideology that hides behind the discourse of reality: there are, indeed, very real cyber-attacks, and there are security concerns for critical infrastructures connected to the network, but what does it mean to declare war on cyber? Cyberwar paints a metaphorical space as the subject of threats; it depicts cyberspace as a proper place in which power has to be deployed and conquered.

Cyberspace as an ideology
It may seem that the distinction between “space” and “place” is minor. But this small shift in perspective indicates a significant ideological turn. “Cyberspace”, by that standard, could also be seen as the first ideology to take the network society as a battleground.

By the mid-1990s, the word “cyberspace” had transformed from a vague poetical and literary concept into a concrete political utopia. John Perry Barlow’s Declaration of Independence of Cyberspace (1996) captures the significance of this shift. As Barlow writes: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

In this declaration, “placehood” is an aspiration made clear. Cyberspace is a “home” for the mind, where some will “gather.” Cyberspace is an alternative to state power, a place to build in order to escape the authorities and rules of the state. It is a political utopia, both etymologically (οὐ-τόπος, “no-place” in Greek) and philosophically (“an imagined place in which everything is perfect,” writes the Oxford English Dictionary). Six years before that text, in 1990, Barlow had co-founded the Electronic Frontier Foundation, the first advocacy group for digital rights, which outlined this framework by calling itself the “first line of defense” of cyberspace’s “frontier”: very geographical vocabulary.

Cyberspace vs. cyberwar
Today, making cyberspace a harbor free of states’ influence seems like a lost battle. The unfolding debate on state surveillance, Internet censorship and the many other manifestations of state power exercising sovereignty over the network make that very idea sound foolish, or outdated.

It wasn't always so – how did it look back then? What was the Internet in 1996, and what did the space for conversation it enabled look like?

In its early days, the Internet created a forum for like-minded intellectuals over privileged parts of the wired world, an infrastructure mainly used to share academic material. When developed in 1989, the World Wide Web (a specific application of Internet protocols enabling people to view and navigate pages in a browser) was the solution Tim Berners-Lee envisioned for sharing CERN's research papers to strengthen academic collaboration with other institutes. Publishing this research on paper had proved very expensive because it needed to be constantly updated. If we consider the development and adoption of TCP/IP a landmark for the birth of the Internet infrastructure, the 1996 Internet was a 14-year-old teenager. The Web was a six-year-old child.

At this time, there was not yet anything critical to steal or protect on the Internet. No real-world political battle was fought there yet—cyberspace, as a political project, still stood a chance. There was limited incentive for states to truly deploy power there, even if the intent had always been present.

In March 1995, Time Magazine’s cover story “Welcome To Cyberspace” described the new trends of the Net, at risk of “turning into a shopping mall”, but still concluded: “At this point, however, cyberspace is less about commerce than about community. The technology has unleashed a great rush of direct, person-to-person communications, organized not in the top-down, one-to-many structure of traditional media but in a many-to-many model that may – just may – be a vehicle for revolutionary change. In a world already too divided against itself – rich against poor, producer against consumer – cyberspace offers the nearest thing to a level playing field.”

This is not what the Internet looks like today. It changed a lot with its growth and democratization. There is plenty to steal and plenty to protect. People’s credit-card numbers, terrorists’ emails, nuclear plant and air-traffic control systems–they are all connected to the Internet. And if you look at the ways in which states use the Internet for political advantage—as a tool of espionage, as a way of winning hearts and minds, or as a tool of war against other states—it becomes clear that cyberspace has been unable to realize itself as a bastion against state encroachment.

That, of course, is truly disappointing for those who aspired for the technology to provide a safe harbor from the state’s power. Yet, as history has taught us many times, we must set our principles against the ideologies that rise up against what we believe in. If cyberspace is colonized by war, there is one essential question: what does cyberpeace look like?

*

◐ View original post on Scientific American’s website here: http://blogs.scientificamerican.com/guest-blog/2013/11/26/what-is-war-in-cyberspace-a-reality-check-on-the-meaning-of-cyber/
