You are viewing a read-only archive of the Blogs.Harvard network.

Archive for the 'Ubiquitous Human Computing' Category

Wrap-up

In the final days of class before presentations and papers were due, we dove into our cross-cutting themes and discussed takedown procedures and other processes, potential “rights,” and the interactions between individuals and entities. We talked about the platforms and sites that enable cooperation and trust, and the potential problems that arise in situations like these. In these final few sessions we didn’t solve the difficult problems, but we got to ask questions and learn from the answers.

The solutions proposed by the class for:

  • Global Network Initiative
  • Ubiquitous Human Computing
  • Future of Wikipedia
  • Cybersecurity

will be posted here in the coming days.

DisputeFinder: crowdsourcing controversy

DisputeFinder is a Firefox extension developed as a collaboration between Intel Research and UC Berkeley. Its basic premise is to let readers of web content understand the broader context of claims made on websites. If a claim about a controversial topic (think global warming, gun control or a “healthy” new diet) is made on a site, users of the plugin are immediately notified by colored text that there are conflicting viewpoints on that topic. Users can flag claims on specific sites as controversial and support the opposing viewpoint with evidence from another site. Users can also vote this content up or down according to how useful and accurate they find it.

Our class got to have a long talk with John Mark Agosta and Rob Ennals, the project lead. We discussed the benefits of citizens (or “webizens,” if you will) with good intentions who spread accurate information online, and the ways that DisputeFinder leverages those intentions to further the goal of information dissemination.

DisputeFinder sits at an interesting point: rather than between the web browser and the server, it sits between the user and the page (albeit connecting to lists of disputed claims), not altering the site content at all. Currently DisputeFinder gathers all of its content from avid activists and dedicated citizens, but the team hopes to build a large enough collection of disputed claims, and a precise and smart enough detector of those phrases, to implement the plugin’s functionality without the need for individual clicks on every site.
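To make the idea concrete, here is a minimal sketch of what such automatic claim detection might look like. The claim list, the counter-sources and the naive substring matching are all illustrative assumptions, not DisputeFinder’s actual data or algorithm:

```python
# Illustrative sketch: flag known disputed claims in page text.
# The claim list and matching strategy below are hypothetical,
# not DisputeFinder's real database or detection logic.

DISPUTED_CLAIMS = {
    "global warming is a hoax": ["NASA temperature records", "IPCC reports"],
    "this diet cures all disease": ["peer-reviewed nutrition studies"],
}

def find_disputed_claims(page_text: str) -> list[tuple[str, list[str]]]:
    """Return (claim, counter-sources) pairs whose phrase appears in the page."""
    hits = []
    lowered = page_text.lower()
    for claim, sources in DISPUTED_CLAIMS.items():
        # Naive substring matching; a real detector would need fuzzy,
        # paraphrase-aware matching to catch rewordings of a claim.
        if claim in lowered:
            hits.append((claim, sources))
    return hits

page = "Some commentators insist that global warming is a hoax."
for claim, sources in find_disputed_claims(page):
    print(f"Disputed: {claim!r}; see also: {', '.join(sources)}")
```

The hard part, as the project acknowledges, is not the lookup but building a detector precise enough that rephrased versions of a claim are still caught without flagging innocent text.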

DisputeFinder doesn’t aim to provide conclusions on disputed claims, but rather hopes to give citizens well-rounded information about controversial topics.

How could a browser plugin be leveraged even further to meet these goals? What problems or concerns could a browser-level source of information bring about? How are the struggles of a plugin like this one similar to or different from those of Wikipedia?

Another similar service is Turkopticon, which is both a community of Amazon Mechanical Turk workers and a Firefox plugin built with them. The plugin allows users to see reviews and ratings of those requesting work on the site. In this way, Turkers (as users of Amazon Mechanical Turk are known) can decide whom to work for based on community standards and can also view the status of their pay for previous jobs.
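The core of such a reputation overlay is simple aggregation: pooling individual worker ratings into a per-requester score that the plugin can display next to each task. A minimal sketch, with a made-up 1–5 rating scale rather than Turkopticon’s actual rating dimensions:

```python
# Illustrative sketch: aggregate worker-submitted ratings into a
# per-requester average a plugin could display. The rating scale
# and identifiers are made up, not Turkopticon's actual schema.
from collections import defaultdict

def average_ratings(reviews):
    """reviews: iterable of (requester_id, rating 1-5) pairs."""
    totals = defaultdict(lambda: [0, 0])  # requester -> [sum, count]
    for requester, rating in reviews:
        totals[requester][0] += rating
        totals[requester][1] += 1
    return {r: s / n for r, (s, n) in totals.items()}

reviews = [("acme_hits", 5), ("acme_hits", 4), ("spam_corp", 1)]
print(average_ratings(reviews))  # prints {'acme_hits': 4.5, 'spam_corp': 1.0}
```

Even this trivial averaging shifts power: workers choose requesters with information that the marketplace itself does not surface.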

One of the critical questions about browser plug-ins is how to achieve a large user base. Like DisputeFinder in its current iteration, many such services depend on crowdsourced information to make the plugin useful.

How do applications like this gain enough acclaim to build a following as vast as Wikipedia’s or Yelp’s? How does a project support activists in ways that form a community?

Understanding Human Ubiquitous Computing

Yesterday we had a conversation about Human Ubiquitous Computing (Human UbiComp) with Lukas Biewald, founder of CrowdFlower; Bjoern Hartmann, creator of the Mechanical Turk Cats Book; and Aaron Koblin, a conceptual artist who works with Amazon Mechanical Turk (AMT) to create art such as Ten Thousand Cents and The Sheep Market.

Human UbiComp has been described by Professor Zittrain as “fungible networked brainpower,” or the ability for strangers to pay other strangers small amounts of money to complete menial tasks on the internet. The concept and its potential problems are described in a video by Professor Zittrain and, more academically, in a paper.

One of the first things we discussed was CrowdFlower’s quality control metrics, which are absent on other Human UbiComp sites such as Amazon Mechanical Turk. These metrics offer higher reliability of work as an incentive that offsets the cost of using the platform.
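One common way such quality control works is with gold-standard test questions: known-answer tasks are mixed in with real ones, and a worker’s accuracy on the gold questions determines whether their other answers are trusted. A minimal sketch of this approach, where the data, threshold and function names are illustrative assumptions rather than CrowdFlower’s actual mechanism:

```python
# Illustrative sketch of gold-question quality control: hidden
# known-answer tasks estimate each worker's accuracy, and workers
# below a threshold are filtered out. Data and threshold are made up.

def worker_accuracy(answers, gold):
    """Fraction of gold questions (task_id -> true answer) a worker got right."""
    scored = [task for task in gold if task in answers]
    if not scored:
        return 0.0
    correct = sum(answers[t] == gold[t] for t in scored)
    return correct / len(scored)

def trusted_workers(all_answers, gold, threshold=0.8):
    """Keep only workers whose gold-question accuracy meets the threshold."""
    return {w: a for w, a in all_answers.items()
            if worker_accuracy(a, gold) >= threshold}

gold = {"t1": "cat", "t2": "dog"}
answers = {
    "worker_a": {"t1": "cat", "t2": "dog", "t3": "bird"},   # 100% on gold
    "worker_b": {"t1": "fish", "t2": "fish", "t3": "bird"}, # 0% on gold
}
print(sorted(trusted_workers(answers, gold)))  # prints ['worker_a']
```

The design tension Biewald described shows up even here: the gold questions only work if the task is objective enough to have a single right answer.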

CrowdFlower sends tasks specifically to African refugee camps, in areas where data plans are inexpensive and work is difficult to come by. A companion app called Give Work allows users to complete the same tasks sent to these refugee camps for quality matching, to identify subjective factors such as cultural idioms or understandings that make certain tasks difficult for international communities to complete.

In CrowdFlower, just as on all successful UbiComp platforms, tasks must be clear enough that they almost have a “pass/fail” nature to them, said Biewald. But at the same time, these tasks inherently contain some degree of ambiguity.

Bjoern Hartmann, after completing a book created entirely from Mechanical Turk content (amazing cat stories, to be exact), began wondering what the Turkers (as they’re called among those in the know) thought of his use of their paid content for a book. He did the obvious and asked them for their opinion of the book, on the site itself, for a small amount of pay, just like any other job on the site. He found more criticism in the comments of a Boing Boing entry about his work than from posting within the community.

We repeated this protocol for our course, but asking about Human UbiComp as a whole; the results can be found here.

Aaron Koblin used Mechanical Turk to solicit 10,000 individual sheep drawings. He received 662 non-sheep in total, and only one of the submissions asked him, the creator of the task, “why are you doing this?” He also received numerous emails after the project from people wanting to draw and submit sheep for free. He found that some people would spend up to 45 minutes on a particular sheep drawing, while others would complete their drawing in a minute or less.

Are these types of work legitimate labor? What types of concerns does it raise for the workers, the employers and the websites that facilitate these types of actions? We’ve divided up the three most tangible, pressing problems into three separate posts so that you can leave your comments on each accordingly:

Is government use of UbiComp ethical?

Should political and civic action be crowdsourced?

Is UbiComp denying workers’ rights?

Crowdsourcers respond to Crowdsourcing

Workers on Amazon Mechanical Turk were asked

What are the biggest challenges for crowdsourced work?

What are the biggest challenges or problems facing mechanical turk
workers like you today? What don’t you like about working on tasks?
What problems have you run into?

Write at least one sentence, at most one paragraph.

by Bjoern Hartmann, creator of the crowdsourced book Amazing But True Cat Stories, during our course discussion. He received many interesting responses quite quickly. The responses are below, in chronological order.

The biggest challenge is there is not enough credible work loaded into the system on a regular basis. I find the tasks that require me to sign-up for any form of service to be quite annoying. I wish there was more work for higher pay and that there was also more data entry tasks. The only problem I have is the lack of communication between the requestor and the worker.

It is difficult to find tasks that pay enough to be worth the effort.

What irritates me the most is getting work rejected for no good reason. I shouldn’t complain as, overall, I have a high acceptance rate. But when I can’t determine what I might have done wrong, it makes me wonder if I’m being cheated. I often stay away from HITs where the “right” answer is determined by the level of agreement among those who answer the question. The “crowd” isn’t always well informed. Finally, I wish the people who post HITs would allow a generous amount of time to complete them, unless there is a reason for keeping completion time short. There have been too many occasions when I’ve been working away and time ran out unexpectedly. I try to remember to check the countdown, but don’t always succeed.

It is sometimes difficult to become qualified for working the tasks after submitting the test.

I had a bad experience with the Mechanical Turk system. I had a HIT rate of 98 %, but once a client rejected 40 HITS of mine by mistake. He realized this mistake and tried to rollback the rejects, but the system would not allow him to do so. As a result my HIT acceptance rate came down to 80 % and I was not able to perform many HITS which I could have easily done. I took it up with Amazon support team, but they were very callous and just said that they are sorry but they can’t do anything about it.

Lack of ability to communicate with requestors is the biggest source of frustration.  Also, it is discouraging to find so few research/academic tasks, with most work related to SEO.  Being banned/blocked by requestors because of a misunderstanding of the task is a real danger, since we don’t have a means of getting clarification before we submit the work.

Sometimes a task application doesn’t work the way the requester intended and you’re unable to submit your work.  Sometimes a task depends on you running a certain operating system.  Tasks almost never pay enough, at least for US workers.  It would be nice to earn a part time income doing HITs but it’s really nothing more than a hobby.

THAT WAS GOOD AND CHALLENGES TO LIFE.

The biggest challenge for me is balancing the reward, the time it will take to complete a task and the trust I put into the requester. In other words, will the time I will put in this request worth the reward and is there a chance the requester is going to rip me off ?

The biggest challenge of crowd-sourced work is finding HIT’s that are easy to do in a reasonable amount of time that pay a reasonable sum of money. I rarely find anything that isn’t spam or unreasonably demanding to complete.

The mechanical turk workers are facing the problem or rather the biggest challenge is that they are not so quality conscious as is being expected of from them.  There is not much problem except that sometimes very little is being offered for a work which demands more in this age of inflationary pressures.

The biggest challenge is needing money in today’s economy and having to accept pennies at times, when the work deserves a lot more compensation. To be fair, some requesters are very generous, but they are the minority. But, one does what one needs to do to pay the bills.

Is government use of UbiComp ethical?

A government seeks to identify its dissidents. This government has a pre-existing cache of photos linked to names and addresses, built for the national identity card system. It also has reasonably high quality images and video footage from a recent protest. The government, theoretically, could ask participants on UbiComp sites to determine gender and age-range features of the identification photographs, and then ask users to match faces with high accuracy, high speed and low cost. This would, of course, help the government investigate and track down its dissenters.

Does this change your opinion on Human Ubiquitous Computing? How so? What might companies do to guard against this type of action while maintaining their full functionality? Should the participants in UbiComp platforms be required to understand the end goals of their jobs?

Should political and civic action be crowdsourced?

Here at Stanford, we could organize a protest. The protest would most likely occur online, and definitely through a web interface. Communication would appear authentic, but participants would be paid. UbiComp sites can facilitate these types of activities: emails are written to senators, restaurants are given positive reviews, comments are posted, letters are sent, and seemingly authentic activity appears online but is created by paid individuals on crowdsourcing sites.

Should this be possible? Is it ethical to participate in these activities? What if a user is participating and decides to only write for causes they support? Is it possible or realistic to enforce regulations in this space?

Is UbiComp denying workers’ rights?

Imagine a scenario wherein a middle school girl frequently plays a puzzle-solving game online. She is brilliant at the game, and her efforts are leveraged to solve real-life computational problems, but she is never given any compensation, social or otherwise. Her brother also finds satisfaction in tasks online, on a site that awards small payments for menial work. On average he is able to make around minimum wage for these efforts and finds them enjoyable, but he has no workers’ compensation, no union and certainly no guarantee of pay for hours worked, as payment is awarded based on his ability to complete a task according to metrics assigned by his benefactor.

They both have the choice to go elsewhere for entertainment and monetary gain, but is it ethical for the businesses perpetuating these systems to operate in this way? Is either scenario fair? Would you personally be willing, either for enjoyment or for monetary compensation, to participate in either activity? How could either situation be made more advantageous for those participating? Are workers’ rights required for menial tasks online? Why or why not?

The Global Network Initiative and Corporate Responsibility

Does information technology raise human rights concerns? Where are the legal borders of the Internet? Is positive law the only law, or should companies make supererogatory efforts? (That is, do we stop only at laws explicitly written down?)

Yesterday our course had a round-table conversation with Mark Chandler, Senior Vice President, General Counsel and Secretary of Cisco; Chuck Cosson, Senior Policy Counsel at Microsoft; and Dunstan Hope, Managing Director of the ICT Practice and Advisory Services at Business for Social Responsibility. The conversation proved to be filled with questions and difficult problems.

The Global Network Initiative (GNI) was formally launched in December 2008, and its primary goal is to further the ideals of freedom of expression and privacy online. The GNI has implementation guidelines for participating companies that outline the principles of membership: freedom of expression, privacy, multi-stakeholder collaboration, and governance, accountability and transparency.

With our course discussion centered on the GNI, human rights online, and the roles of businesses that facilitate online infrastructure, we invited guests to propose their questions and ideas about the broader global Internet and its implications for businesses, policies and human rights.

One approach to GNI issues is to let governments negotiate international questions of information and distribution. This can be advantageous for tech companies, especially when the issues at hand cut across the industry and through global treaty obligations. We discussed the concept of re-casting encryption issues as a government-to-government matter, thus affecting how an entire industry is treated rather than singling out a particular company.

Is it useful for governments to handle disputes rather than corporations? When are these debates between two enterprises rather than between two governments?

Another huge benefit of the GNI is the possibility of having uninvolved agents evaluate a situation. This sets up personal relationships that allow for collaboration between industry and academia that might not otherwise be possible. The diplomacy between nations and companies is an integral part of the ideals of the GNI.

If more private diplomacy works better than naming and shaming, what is the best way to publicize/inform citizens of these types of negotiations? Do citizens have a right to know about these negotiations as they are in progress?

Are human rights more of an externality for some companies than for others? (Externality here meaning: if a company does something that affects society, does it also affect its sales?) How can the Global Network Initiative incentivize smaller companies for which human rights are more of an externality?

We’d love your thoughts in the comments.

Welcome to Difficult Problems in Cyberlaw

In the coming three weeks, students from Harvard, MIT and Stanford will be tackling real-life problems of Internet commerce, governance, security and information dissemination. These problems are not only conceptual issues but also identifiable struggles within their spheres. Students will engage with practitioners and academics: people who potentially hold the power to shape the future of these issues, or who can at least provide the course with a sounding board for articulating better questions about the future.

An important aspect of this course is the students’ participation in the Internet phenomena they have chosen to investigate. Students will be required to understand the cycles perpetuated by Reputation Defender, participate in human computing sites like Amazon Mechanical Turk, and follow the debates around the successes and perils of Couchsurfing.com (through its forums, of course, as three weeks at Stanford is quite a lengthy amount of time to couchsurf!). Students are also offered field trips to interact firsthand with various components of the technical sphere they seek to understand, including Facebook, eBay and Google. The idea behind this immersion is to give students a participatory (albeit couchsurfing-free) understanding of the media they consume and now also advise.