The New York Times publishes an article on how people make medical decisions: apparently, we’re more willing to subject others (including our children) to a vaccine that carries a low but real risk in exchange for protection against a more dangerous type of flu. The author of the underlying study suggests a “sense of responsibility” forces people to overcome risk aversion and to recommend that others make the (proper) risk-adjusted choice. Of course, this could just be rational evolutionary calculus at work: it’s easier to let others, even relatives, take risks with their lives than to put our own on the line.
I’m fascinated by this problem – how information, and how it’s presented, affects human decisions and the way law tries to regulate them – and have a paper coming out soon that looks at the tension between cognitive biases and the theory of the “marketplace of ideas.” Even random information can alter our analysis. To give one famous example, Amos Tversky and Nobel laureate Daniel Kahneman spun a wheel of fortune with numbers from 0 to 100 in front of study participants. Next, Tversky and Kahneman asked the subjects to estimate the percentage of African countries in the United Nations. Lo and behold, the spin results significantly affected people’s estimates – even though they had no bearing whatsoever on the correct answer.
What does it mean for law and policy when how information is framed alters the choices we make based on it? For example, what is the “correct” way for doctors to present data on risks to patients, knowing that discussing it in terms of the risk of dying (mortality) versus the probability of living (survival) – flip sides of the same coin – will shift the resulting decision? What should the law require for “informed consent”? Should doctors reveal a tiny risk of a gruesome death if they know, empirically, patients will give undue weight to that possibility?
I don’t know the answers to these questions. But I submit that information law must grapple with them if it is to guide regulation effectively.