Disclosure

Chris Soghoian, a CS student at Indiana, wrote a now-infamous program that allowed Internet users to print valid-looking, but fake, Northwest Airlines boarding passes. The passes could allow someone to pass through airport security (run by the Transportation Security Administration) without having purchased a ticket, or under a fake name. The problem has been known for years – Senator Charles Schumer (D-NY) and security expert Bruce Schneier have railed about it previously – but Soghoian is the first (that we know of) to write an application exploiting this security flaw.

Now, FBI agents have raided his home and seized his computer, and the site is off the Web. Indiana U., in an inspirational display of support for academic freedom and backbone, has declined to defend Soghoian. Federal criminal charges may be in the works.

It strikes me that criminal liability is at least a possibility here, since Soghoian has created a tool tailored to violating federal law and regulations (in the airport security context). However, I’m not a criminal lawyer, and I’m more interested in what this example tells us about revealing security flaws. Here, TSA has known about the problem for years, yet has focused on having us remove our shoes and leave our Evian behind rather than mitigating this graver risk. Soghoian’s revelation and Web tool, then, can be viewed as a tactic calculated to draw public attention and force TSA to address the problem. If intent were an issue in any charges filed against Soghoian, I would assume this history would be quite relevant.

More importantly, it raises the question of when disclosure of sensitive information should be criminalized. We’ve seen this before – Hugh Hewitt accused the New York Times of treason for discussing U.S. monitoring of the SWIFT program, and the Court of Appeals for the D.C. Circuit is hearing arguments today about whether it’s unlawful to disclose information obtained from an illegally-recorded cell phone conversation. The circumstances of each case are different, of course, but there is a common thread among them: disclosing sensitive information arguably creates one risk (for example, that terrorists will print fake boarding passes) but reduces another (for example, that TSA’s “security theater” will continue to be ineffective at keeping terrorists out of airports). The media, and computer security researchers, are following these cases with obvious interest.

Without going into the specifics of each case, I think we might want to consider something akin to a necessity defense under criminal law.  Necessity posits that the defendant, while technically guilty of the crime charged, should not be convicted because his/her actions prevented a greater harm, with no reasonable alternatives available.  Information disclosure analysis might work the same way: did the defendant act with intent to avoid greater risk or injury? Were there previous attempts, by the defendant or others, to mitigate this harm?  What alternatives were available, and what harm actually occurred?  (While the last point isn’t strictly necessary in criminal law – attempted crimes are punishable in most cases – in practice it tends to count.)

In short, we should worry when liability is used to deal with the messenger, not the problem.  If airline boarding passes are easy to fake, I am less worried that Soghoian put up a site allowing me to get through security at Detroit’s airport (dominated by Northwest) than I am that the barrier is so trivial that a CS student can bypass it in his spare time.  (Anyone who thinks that America’s enemies are devoid of computer skills hasn’t seen the professional-quality jihadist videos that proliferate on pro-insurgent message boards.)  Government can suffer from a principal-agent problem as easily as the rest of us: it may be in a security agency’s interest to minimize public perception of risks, or of its lack of competence in mitigating them, rather than to devote attention to real but ugly problems. Sunlight, as the bromide goes, can be a powerful disinfectant.

5 Responses to “Disclosure”

  1. Fascinating post, Derek!

    I’d suggest one possible expansion on the outlines of your suggested defense — one that might make Soghoian’s conduct out of bounds but still allow experts to raise the alarm about security breaches.

    The harm in announcing to the world “I can make fake boarding passes” must be less than the harm of then adding: “And I am posting my tools on the web — you can try it at home!” Did he (or should he) have a duty to limit himself to the first?

    There may be situations where you must show your audience what you are doing in order to make them believe you. And to be sure, there may still be some harm just in revealing the existence of a security loophole (perhaps outweighed, as you say, by disclosure increasing the likelihood that the loophole will be closed). But, as we already know from the world of “white hat” hackers, there are good and bad ways of disclosing security flaws, and I might suggest that mass distribution of your tools falls into the “bad” category, and perhaps it should disqualify you from this line of defense, however noble your intentions might have been.

  2. For those who are interested, I have recorded a twenty-minute “lecture” on the criminal law defense of necessity. It’s from the cases I studied and the notes I took in Criminal Law at Cincinnati. (This is one episode of a larger project recording what I learned in law school.)

    The specific episode on necessity is at http://www.lifeofalawstudent.com/article.php?story=crimlaw24. It’s licensed as CC-Attribution and GNU FDL.

    – Neil Wehneman

  3. Excellent and interesting commentary, Derek! As for Mr. McGeveran’s comments above, I have a question of practicality: would the government actually listen if Soghoian simply stated he could make the boarding passes? How many citizens have warned the government about potential terrorism methods and been ignored? Maybe his method of drawing attention to the issue was the only reason the government gave it any attention.

  4. Becky, I agree with you. As I said, “There may be situations where you must show your audience what you are doing in order to make them believe you.” But I can imagine intermediate steps — perhaps showing your tools to the Dept. of Homeland Security Inspector General — that are still short of posting the tool on the web for other people to duplicate your security breach. Insofar as those alternatives are available, I think it should undermine the sort of defense Derek is proposing.

  5. The difficult part here is determining when intermediate steps have been sufficiently played out. How long does one give DHS to fix the problem? (After all, don’t Schneier and Schumer’s warnings count here?) Must one alert every federal agency? Is this an “exhaust all steps” standard or a “reasonable efforts” standard?

    These problems hold for most security flaws. The difficulties are 1) absent public disclosure, there is often limited incentive to fix the problem (it’s like catching spies – they shouldn’t have been there in the first place, so one receives little credit for it), and 2) proof of concept is important, but revealing such things privately to government agencies may let them quash the report without fixing the problem. Intermediate steps are clearly important, and one should have to deal with this issue in mounting a necessity defense, but I think the risk here is that what we’ve seen in the whistleblower context will be replicated in the security context…