Last Tuesday night’s class was a smorgasbord of cybersecurity and privacy cross-cutting themes, fittingly on the day that Google announced it would stop censoring its search results in China.
We met with Michael Fertik, CEO of Reputation Defender, and Ebele Okobi-Harris, Yahoo!’s Director of Business and Human Rights. We also talked with Mark Surman, Aza Raskin, and general counsel Julie Martin, all from Mozilla, and heard from Ryan Calo, an SLS fellow at CIS, and Lauren Gelman, also with CIS and teaching next quarter. We were joined as well by Carl Malamud, who was at Stanford earlier in the day for the law.gov workshop.
The first issue we tackled was the information users voluntarily give out to websites. Are these users aware of where their data is going? How long it stays there? Where else it might be shared?
Mozilla seeks to undo the current obfuscation of privacy messages directed at users. Its Privacy Icons project is in a very nascent stage and is actively seeking feedback from the public. The project aims to inform users about the information they’re giving out online via icons that distill the most significant pieces of privacy statements and terms of service (ToS). As a browser with hundreds of millions of users, Mozilla’s Firefox is uniquely positioned to attack the obscurity of privacy and ToS statements. Mozilla intends for these standards to become normative.
What types of icons would you implement? Would you use a feature like this? What is the best way to foster a relationship between companies and a project like this going forward?
These problems have been addressed before, in projects like the Platform for Privacy Preferences (P3P), but those solutions never achieved widespread adoption.
Class members pointed out the perceived disconnect between anonymity and privacy. On the first day of class, students had posted predictions for the day’s “difficult problems,” but many felt it was an invasive and surprising move when Professor Zittrain opened up the wiki displaying the day’s predictions, submitted by name, by members of the course. Though the information was “public,” many were surprised when the supposedly buried and obscure information became the central point of conversation.
How can a user interface be changed to make users more aware of their actions and their repercussions?
We discussed privacy through the lens of a schism in the Creative Commons (CC) community: is the point of CC to offer authors choice about how they want to license their work, ranging anywhere from all rights reserved (at which point you don’t need CC; it’s effectively copyright) to attribution-only, or is it more about representing a certain normative view of the world and encouraging the world to adopt that view?
Privacy rights can be viewed the same way: is it about choice for the user, or is it about an ideology represented by a certain standard of norms?
We also heard from Ebele Okobi-Harris of Yahoo!, who spoke about the Global Network Initiative (GNI) as an excellent vehicle for crisis situations and direct action. She described how it is appropriate that GNI offers a roadmap for companies making decisions, while at the same time it is not GNI’s place to make decisions for the companies. Each company in GNI takes a different approach. Yahoo!, unlike many other companies, has its own department that heads up human rights for the company (which Okobi-Harris leads). One specific issue she spoke about was an ongoing lawsuit in Belgium in which law enforcement officials requested that Yahoo! hand over user information (more details here), raising human rights concerns.
Do you agree with Yahoo!’s decision to withhold the information? Do you feel that other companies of similar stature and influence should maintain a human rights department? What are the advantages of a department dedicated to this issue, and what other approaches might be effective?
We finally landed on Reputation Defender, a fix for the other side of the “privacy and information ownership” spectrum. Michael Fertik, CEO of the company, explained that a vast amount of content about a person is not necessarily created or controlled by that person. The company will not erase records of a person (everything from news stories to sex offender data cannot be erased), but it can deal with writable areas of content such as discussion boards. Fertik noted that less than one-third of one percent of all revenue comes from this “destroy” feature of the site, which is fascinating given that so much media attention to the company relates directly to it.
Users of his site can sign up for an internet version of the “do not call” list that keeps telemarketers from calling particular phone numbers. Fertik sees online privacy evolving in three stages: first, virus protection; then e-commerce security (for example, credit card protection features); and third, as more and more aspects of life move online, the need to protect the privacy of web users themselves. In this third stage, users can become aware of, and mitigate, the ability of other actors to intercept and analyze information created both by users themselves and by others about them.
Is Reputation Defender a service you would use? Is your online reputation something you worry about? Do you feel that your privacy is, or could ever be, infringed by companies or people accessing information about you online?