Chris Soghoian, a CS student at Indiana, wrote a now-infamous program that allowed Internet users to print valid-looking, but fake, Northwest Airlines boarding passes. The passes could allow someone to pass through airport security (run by the Transportation Security Administration) without having purchased a ticket, or under a fake name. The problem has been kicked around for years – Senator Charles Schumer (D-NY) and security expert Bruce Schneier have railed about it previously – but Soghoian is the first (that we know of) to write an application exploiting this security flaw.
Now, FBI agents have raided his home and seized his computer, and the site is off the Web. Indiana U., in an inspirational display of support for academic freedom and backbone, has declined to defend Soghoian. Federal criminal charges may be in the works.
It strikes me that criminal liability is at least a possibility here, since Soghoian has created a tool tailored to violating federal law / regulations (in the airport security context). However, I’m not a criminal lawyer, and I’m more interested in what this example tells us about revealing security flaws. Here, TSA has known about the problem for years, yet has focused on having us remove our shoes and leave our Evian behind rather than mitigating this (more grave) risk. Soghoian’s revelation and Web tool, then, can be viewed as a tactic calculated to draw public attention and force TSA to address this problem. If intent were an issue in any charges filed against Soghoian, I would assume this would be quite relevant.
More importantly, it raises the question of when disclosure of sensitive information should be criminalized. We’ve seen this before – Hugh Hewitt accused the New York Times of treason for discussing U.S. monitoring of the SWIFT program, and the Court of Appeals for the D.C. Circuit is hearing arguments today about whether it’s unlawful to disclose information obtained from an illegally recorded cell phone conversation. The circumstances of each case are different, of course, but there is a common thread among them: disclosing sensitive information arguably creates one risk (for example, that terrorists will print fake boarding passes) but reduces another (for example, that TSA’s ineffective “security theater” will continue to go unexamined, leaving airports vulnerable). The media, and computer security researchers, are following these cases with obvious interest.
Without going into the specifics of each case, I think we might want to consider something akin to a necessity defense under criminal law. Necessity posits that the defendant, while technically guilty of the crime charged, should not be convicted because his or her actions prevented a greater harm, and no reasonable alternatives were available. Information disclosure analysis might work the same way: did the defendant act with intent to avoid a greater risk or injury? Were there previous attempts, by the defendant or others, to mitigate this harm? What alternatives were available, and what harm actually occurred? (While the last point isn’t strictly necessary in criminal law – attempted crimes are punishable in most cases – in practice it tends to count.)
In short, we should worry when liability is used to deal with the messenger, not the problem. If airline boarding passes can easily be faked, I am less worried that Soghoian put up a site allowing me to get through security at Detroit’s airport (dominated by Northwest) than I am that the barrier is sufficiently trivial that a CS student can bypass it in his spare time. (Anyone who thinks that America’s enemies are devoid of computer skills hasn’t seen the professional-quality jihadist videos that proliferate on pro-insurgent message boards.) Government can suffer from a principal-agent problem as easily as the rest of us: it may be in a security agency’s interest to minimize public perception of risks, or of its own lack of competence in mitigating them, rather than to devote attention to real but ugly problems. Sunlight, as the bromide goes, can be a powerful disinfectant.