[Ed. Note: On Friday, May 2 and Saturday, May 3, 2014, the Petrie-Flom Center hosted its 2014 annual conference: “Behavioral Economics, Law, and Health Policy.” This is an installment in our series of live blog posts from the event; video will be available later in the summer on our website.]
This session was kicked off by Jennifer Zamzow, Post-Doctoral Fellow, Center for Ethics and Policy, Carnegie Mellon University, with a talk called “Affective Forecasting in Medical Decision-Making: What Do Physicians Owe Their Patients?” Jennifer began with the example of a recently paralyzed patient requesting termination of life-sustaining care on the grounds that his injury feels like a fate worse than death. On the one hand, we feel compelled to respect the decisions of competent patients; on the other, given what we know about errors in affective forecasting, we might anticipate that the patient would eventually adapt to his new circumstances and lead a happy, full life. The question, then, is whether physicians have any obligation to help their patients make more accurate affective forecasts.
Jennifer identified two opposing positions on this question: (1) affective forecasting is the patient’s sole responsibility, since patients alone are experts in their own beliefs, desires, and values; or (2) clinicians should help patients with their affective forecasting, given what we know about patients’ likelihood of error. Advocating the latter position, Jennifer argued that clinicians’ duty of beneficence does indeed obligate them to help patients in this way. Among other reasons, clinicians have access to information that their patients lack, namely how similar patients have reacted to comparable circumstances and, more importantly, how those reactions have changed over time.
With this obligation in mind, Jennifer asked when physicians should intervene. Her answer: when the stakes for patients are high, and when patients are particularly likely to make poor forecasts. She concluded by pointing out that clinicians’ role is not to push particular choices, but rather to help their patients form more accurate beliefs about what life might be like for them under various circumstances.
Next, Alexander Capron, University Professor; Scott H. Bice Chair in Healthcare Law, Policy and Ethics; Professor of Law and Medicine, Keck School of Medicine; Co-Director, Pacific Center for Health Policy and Ethics, USC Gould School of Law, presented his paper, “Mobile Devices, Small Data, and Personal Healthcare Decisions: Behavioral Economics’ New Frontier and the Law.” Alex’s focus was nudging in the clinical context.
On the one hand, Alex noted that a nudging doctor could look like “Dr. Hippocrates,” an archetype who believes there is no sense in deferring to patients’ own reasons for choosing because patients don’t really know their own reasons. Instead, Dr. Hippocrates thinks it best for the doctor to figure out what is right and then steer the patient toward those choices. Alex noted a variety of problems with this model, including that doctors are subject to cognitive biases of their own (e.g., the availability heuristic), have difficulty getting to know patients well, and so on.
As an alternative, Alex proposed that a nudging clinician could look like “Dr. Diligent,” a doctor who engages each patient in rational persuasion using “small data” about that patient – e.g., devices and web services used for self-tracking, such as Fitbit; digital breadcrumbs from mobile devices, such as GPS traces; and mHealth tools, such as sensors and disease self-management apps. Alex pointed out that an enormous amount of information is available about people based on their use of the internet, mobile phones, and the like. Mobile technology offers all sorts of capabilities for the data-hungry: sensors that can detect behavior, GPS that can track location, sound recognition that can gauge conversation and mood, and wearable devices that transmit data wirelessly, to name a few. Clinicians could use this small data to help their patients make better health care decisions.
More specifically, Alex noted that small data can help with both ends of “personalized paternalism” – knowledge and action. On the knowledge side, small data helps clinicians understand what the patient does and likes, and with whom – broadly, what is happening with the patient. On the action side, it allows clinicians to offer timely and fine-tuned nudges, such as routine reminders, two-way interaction, and context-specific messages. As one specific example, Alex noted that a clinician could use such technology to help recovering addicts avoid certain neighborhoods and triggers.
Alex also raised concerns about clinicians’ uses of small data, including obvious privacy issues, as well as differences in how various groups engage with technology, which could create disparities between age groups, cultural groups, and others based on how much small data is available about them.
Finally, Ester Moher, Postdoctoral Fellow, Electronic Health Information Lab, University of Ottawa (with Khaled El Emam), presented on “The Perilous Promise of Privacy: Ironic Influences on Disclosures of Health Information.” Ester described research addressing whether disclosures of confidentiality protection actually convey safety or instead signal risk. The existing literature offers evidence on both sides: individuals report increased incidence of sensitive behaviors following confidentiality disclosures, but their reports also seem to be less accurate and less consistent.
Ester’s research asks the following question: do assurances of confidentiality encourage or discourage disclosure of health information? She described two studies in this space. In the first, participants read either a consent form that included (and highlighted) the anonymity and confidentiality sections or a consent form that excluded those sections. Participants were then asked various sensitive questions (e.g., “Have you ever had an STI?”). The results indicated that the presence of the assurance actually discouraged disclosure – despite the fact that such assurances are intended to assuage participants’ concerns. A second study asked whether attention to the confidentiality assurance increases or decreases disclosure rates; it confirmed that the more attention participants paid to the confidentiality assurance, the less they disclosed sensitive information.
The upshot: assuring people of confidentiality can actually suppress disclosure of precisely the information we might be trying to encourage. People seem to treat such assurances as warnings of increased risk rather than as signals of decreased risk. Thus, researchers and policymakers who impose confidentiality disclosure requirements with good intentions may actually be compromising the quality and quantity of the data collected, and more comprehensive consent forms are unlikely to simultaneously increase disclosure and improve understanding of anonymity and confidentiality protections.