A Pennsylvania detective had a suspect's nickname but no real name. He turned to Facebook, found a picture, and eventually apprehended the suspect. This anecdote, from a Washington Post investigation into law enforcement's use of facial recognition software, illustrates how social media can be a boon for catching criminals.
As people share more of their thoughts and actions on social media, and as algorithms grow more sophisticated, law enforcement's growing ability to mine that information for clues about preventing crime raises concerns about profiling and questions about oversight.
Recently, the ACLU’s “Mapping the FBI” project uncovered intelligence gathering that used racial and ethnic mapping. The project’s documentation includes a 2009 memo from the bureau’s Detroit office that called Michigan’s Middle East and Muslim community “prime territory for attempted radicalization and recruitment by” terrorist groups. The FBI’s reason: most State Department-labeled terrorist groups originate in the Middle East and South Asia.
In 2010 a 20-year-old Arab-American man in California found a tracking device on his car and learned that the FBI had been surveilling him, a US citizen, for months, if not longer. Since 2011 the Associated Press has investigated the NYPD's spying on Muslim communities, documenting what The Atlantic calls "horrifying effects" both on those surveilled, none of whom have been accused of any crime, and on counterterrorism efforts as a whole: in six years, the program did not generate a single lead.
The NSA and British intelligence agency GCHQ collect raw Internet traffic that includes email, social media, and chats. US law enforcement agencies at all levels can obtain information from Internet and communication companies with court orders. But police don’t need permission to monitor what already flows freely on the web. Ars Technica reported on the London Metropolitan police’s extensive efforts to monitor social media:
For the past two years, a secretive unit in the Metropolitan Police has been developing the tools for blanket surveillance of the public’s social media conversations. Operating 24 hours a day, seven days a week, a staff of 17 officers in the National Domestic Extremism Unit (NDEU) has been scanning the public’s tweets, YouTube videos, Facebook profiles, and anything else UK citizens post in the public online sphere.
Several commercial tools exist to monitor social media streams, and companies actively market them to law enforcement. Police departments at the University of Maryland, Hampton University, and the city of Boca Raton, Florida, use tools from the Virginia-based technology company ECM Universe to surveil social media users and analyze the text of their messages. A brochure touts that with the tool,
[A] city can monitor activist groups who are using social media to organize their efforts on the ground and receive alerts in a matter of minutes from the time of the postings when dangerous radical elements emerge from the crowd.
Such language underscores the need for oversight of how information gathered from social media is used. Participating in an activist group is not a crime, and "dangerous radical elements" do not emerge at every activist meeting.
US law enforcement generally needs a reason and court permission to investigate someone. Predictive analytics, by contrast, involves mining data for patterns no one has yet detected or even thought to look for. Agencies "don't necessarily know what they need to monitor on Twitter," software company SAS wrote in a paper detailing its tools, one of which maps people's friends and followers on Facebook and Twitter. Users maintain hundreds of connections on these sites, including people they haven't contacted in years or don't know at all. To what extent will a person's connections implicate them?
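The concern about connection mapping can be made concrete with a small sketch. Everything below is invented for illustration, not any vendor's actual algorithm: the account names, the graph, and the two-hop rule are all assumptions. It shows how a naive proximity rule sweeps in people who have never contacted a watched account.

```python
# Hedged sketch: treating social-media connections as an undirected graph
# and flagging every account within two hops of a watched account.
# All account names and edges here are hypothetical.
from collections import deque

def within_hops(graph, start, max_hops):
    """Breadth-first search: all accounts at most max_hops links from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] == max_hops:
            continue  # don't expand past the hop limit
        for nbr in graph.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return {n for n, d in dist.items() if 0 < d <= max_hops}

# Invented connection graph (mutual "friend" links).
connections = {
    "watched": {"a", "b"},
    "a": {"watched", "c"},
    "b": {"watched"},
    "c": {"a", "d"},   # "c" and "d" never interacted with "watched"
    "d": {"c"},
}

flagged = within_hops(connections, "watched", 2)
# "c" is flagged at two hops despite never contacting the watched account.
```

Under a rule like this, a dormant friendship from years ago is enough to put someone on an analyst's list, which is exactly the question the SAS-style tools raise.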
Predictive policing has helped police departments lower crime rates, but those efforts relied on previously reported, anonymized crime data; folding in social media data adds a new dimension of concern. People don't know what governments are doing with troves of social media data, and they can't see the algorithms police use to fight crime. As Evgeny Morozov wrote, "If no one can examine the algorithms…we won't know what biases and discriminatory practices are built into them."