What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions – which stake out different visions but also reinforce each other. The post also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of brain imaging of pain – in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
The recent meeting at Harvard on neuroimaging, pain, and the law demonstrated powerfully that the offering of neuroimaging as evidence of pain, in court and in administrative hearings, is growing closer. The science for identifying a likely pattern of neuroimaging results strongly associated with the subjective sensation of pain keeps improving. Two companies (and here) were recently founded to provide electroencephalography (EEG) evidence of the existence of pain. And at least one neuroscientist has been providing expert testimony that a particular neuroimaging signal detected using functional magnetic resonance imaging (fMRI) is useful evidence of the existence of pain, as discussed recently in Nature.
If nothing more is done, neuroimaging evidence of pain will be offered, accepted, rejected, relied upon, and discounted in the normal, chaotic course of the law’s evolution. A “good” result, permitting appropriate use of some valid neuroimaging evidence and rejecting inappropriate use of other such evidence, might come about. Or it might not.
We can do better than this existing non-system. And the time to start planning a better approach is now. (Read on for more on how)
By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has, since then, focused mainly on algorithms, automation, and health IT. But there is an important common thread: the unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmized or systematized – and, if so, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.
Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically determinable cause might help deserving claimants win recognition for their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy assumptions of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than it is a step toward finding and helping those viewed as deserving.
Of Disability, Malingering, and Interpersonal Comparisons of Disutility (read on for more)
As someone who has been greatly concerned about, and devoted much of my scholarship to, legal obstacles to the treatment of pain, I applaud Professor Pustilnik for increasing attention to the role of neuroimaging in our efforts to understand the experience of pain and how the law does or does not adequately take that experience into account. Pustilnik has written eloquently about this issue in several published articles, but her efforts to bring together scientists, medical experts, legal academics, and judges (see also here) deserve high praise as a method for illuminating what we know and do not know about pain and the brain, and the extent to which brain imaging can serve as a diagnostic tool or an external validator of pain experience.
In this post, I discuss how DNA testing serves as a precedent for how to develop responsible uses of new technologies in law, including, potentially, brain imaging for pain detection. The ethical, legal, and social implications (ELSI) of DNA research and testing were integral to developing national protocols and rules about DNA. Brain imaging of pain needs its own ELSI initiative, before zealous adoption outpaces both the technology and the thinking about the right guiding principles and limitations.
The idea of brain images serving as a “pain-o-meter” to prove or disprove pain in legal cases is clearly a premature use of this information and likely an oversimplification of the mechanisms of pain expression. However, the potential for an objective diagnostic tool or indicator of the pain experience is something that lawyers representing clients in criminal, personal injury, workers’ compensation, or disability cases may find too attractive to resist, and they may attempt to have such images admitted in the courtroom. This state of affairs brings to mind the ways in which lawyers have attempted to use genetic test results, initially obtained for medical purposes, in litigation. (Read on for more about ELSI in DNA and several national pain initiatives that could adopt the Human Genome Project and DNA ELSI model).
A potential difficulty, but also an opportunity, relating to using neuroimaging evidence in legal cases arises from the difficulty brain researchers have in separating emotional and physical pain. We know that pain and emotion are tightly linked. In fact, “emotion” is in the very definition of pain. The IASP definition of pain is: “An unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.” Yet, the legal system deals with “physical” versus “psychiatric” versus “emotional” pain in different ways.
Chronic pain is associated with anxiety, depression, and stress. These factors can exacerbate the pain, and pain can exacerbate them. Pain’s sensory and emotional components connect in a “feed-forward” cycle. It may not be possible to entirely separate the sensory and emotional components of pain, biologically or experientially. But it might be necessary for the purposes of legal cases, as important areas of law create sharp distinctions between physical and emotional, or body and mind.
Neurolaw includes some fascinating issues that lack any practical legal significance – for example whether we should consider anyone responsible for anything they do, given that all behavior is physically caused by brain processes. It also includes some legally important issues that lack intellectual juiciness – like regulatory issues surrounding neurotechnology.
Pain, I learned at this meeting, is at the heart of many legal proceedings. A major problem to be solved in these proceedings is determining whether someone is truly in pain. Chronic pain in particular may not have physically obvious causes. There may be clinical and circumstantial evidence of pain – like adhering to a medication regime, seeking surgeries or other interventional procedures, and avoiding pleasurable activities – but often the major evidence of pain is simply what someone says it is. However, the motivation exists to lie about pain – to sue for more money, to obtain disability benefits – and so an objective measure of pain, a “pain-o-meter,” would be helpful.
The prevalence of chronic pain is staggering. The Institute of Medicine reported in 2011 that 100 million Americans suffer from chronic pain – more than those with heart disease, cancer, and diabetes combined. The report also highlights that the annual costs of medical care, lost wages, and lost productivity are more than $600 billion. These enormous personal and societal costs of chronic pain have driven an effort to “prove,” for health care providers, insurance companies, and legal actors, whether and how much pain an individual is suffering. This is challenging because pain is a personal and subjective experience. Ideally, self-report would be sufficient to establish the “ground truth” of the pain experience.
However, some people are not able to provide accurate self-reports, and the potential financial gain associated with claims of pain has tarnished the perceived authenticity of subjective reports. This has led some to develop brain imaging-based tests of pain – a so-called “painometer.” Yet current technologies are simply not able to determine whether or not someone has chronic pain. Here, I consider specifically how we could develop a brain imaging-based painometer – and whether we would want to do so. As we ask, “Can we do it?,” we should always ask, “Is this the right thing to do?”
I recently saw someone walk into a signpost (amazingly, one that signalled ‘caution pedestrians’); from the angle and magnitude of his body’s rebound, I estimated that this probably really hurt. What I had witnessed was a danger of walking under the influence of a smart phone. Because this man lacked the ability to tweet and simultaneously attend to and process the peripheral visual information that would have enabled him to avoid posts, the sidewalk was a dangerous place. If only there existed some way to enhance this cognitive ability, the sidewalks would be safer for multi-taskers (though less entertaining for bystanders).
In a public event on neurogaming held last Friday as part of the annual meeting of the International Society for Neuroethics, Adam Gazzaley from UCSF described a method that may lead to just the type of cognitive enhancement this man needed. In a recent paper published in Nature, his team showed that sustained training on a game called NeuroRacer can effectively enhance the ability of elderly individuals to attend to and process peripheral visual information. While this game has a way to go before it can improve pedestrian safety, it does raise interesting questions about the future of our regulations surrounding distracted driving, e.g., driving while texting. In many jurisdictions, we prohibit texting while driving, and a California court recently ruled to extend these regulations to prohibit certain instances of driving under the influence of smart phones (i.e., smart driving).
But if individuals were to train on a descendant of NeuroRacer and improve their ability to visually multitask, should we give them a permit to text while driving?
In general, the panel rightly pointed out practical limitations of these technologies. Panelist Nancy Kanwisher highlighted, for example, that research on lie-detection is done in a controlled, non-threatening environment from which we may be unable to generalize to criminal courts where the stakes are high.
While I was sympathetic to most of this discussion, I was puzzled by one point that the panel raised several times: the problematic nature of applying data based on a group of people to say something about an individual (e.g., this particular defendant). To present a simplified example: even if we could rigorously show a measurable difference in brain activity between a group of people who told a lie in the imager and a group of people who told the truth, we cannot conclude that an individual is lying just because he shows an activity pattern similar to the liars. Since the justice system makes decisions about individuals, the argument goes, the use of group data is problematic.
To me, this categorical objection to group data seems a bit odd, and this is why: I can’t see how group data is conceptually different from ordinary circumstantial evidence.
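The parallel with circumstantial evidence can be made concrete with a toy Bayesian calculation. This is only an illustrative sketch: the numbers below (an 80% “hit rate” among liars, a 10% false-positive rate among truth-tellers) are hypothetical, not drawn from any actual study.

```python
def posterior_probability(prior, sensitivity, false_positive_rate):
    """P(lying | signature present), by Bayes' rule.

    prior: prior probability that this individual is lying
    sensitivity: fraction of liars in the study group showing the signature
    false_positive_rate: fraction of truth-tellers showing it anyway
    """
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# With a 50/50 prior, the group data supports a fairly strong inference:
print(round(posterior_probability(0.5, 0.80, 0.10), 3))   # 0.889

# With a lower prior (say only 10% of witnesses lie), the very same
# group-level statistics support a much weaker individual inference:
print(round(posterior_probability(0.1, 0.80, 0.10), 3))   # 0.471
```

The point of the sketch is that group data functions exactly like any other probabilistic evidence: its force for a particular individual depends on the background rate, which is just how circumstantial evidence is weighed.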