Here is HHS’s own summary of what has changed and what it thinks is most important:
The U.S. Department of Health and Human Services and fifteen other Federal Departments and Agencies have announced proposed revisions to modernize, strengthen, and make more effective the Federal Policy for the Protection of Human Subjects that was promulgated as a Common Rule in 1991. A Notice of Proposed Rulemaking (NPRM) was put on public display on September 2, 2015 by the Office of the Federal Register. The NPRM seeks comment on proposals to better protect human subjects involved in research, while facilitating valuable research and reducing burden, delay, and ambiguity for investigators. It is expected that the NPRM will be published in the Federal Register on September 8, 2015. There are plans to release several webinars that will explain the changes proposed in the NPRM, and a town hall meeting is planned to be held in Washington, D.C. in October.
UPDATE: Plaintiffs have filed an appeal in the U.S. Court of Appeals for the Eleventh Circuit. Their brief is due on October 19.
The district court has granted summary judgment (opinion pdf) for all remaining defendants as to all of plaintiffs’ remaining claims in Looney v. Moore, the lawsuit arising out of the controversial SUPPORT trial, which I last discussed here. This therefore ends the lawsuit, pending possible appeal by the plaintiffs.
Plaintiff infants include two who were randomized to the low oxygen group and survived, but suffer from “neurological issues,” and one who was randomized to the high oxygen group who developed ROP, but not permanent vision loss. In their Fifth Amended Complaint (pdf), plaintiffs alleged negligence, lack of informed consent, breach of fiduciary duty, and product liability claims against, variously, individual IRB members, the P.I., and the pulse oximeter manufacturer. What unites all of these claims is the burden on plaintiffs to show (among other things) that their injuries were caused by their participation in the trial.
What should the future look like for brain-based pain measurement in the law? This is the question tackled by our concluding three contributors: Diane Hoffmann, Henry (“Hank”) T. Greely, and Frank Pasquale. Professors Hoffmann and Greely are among the founders of the fields of health law and law & biosciences. Both discuss parallels to the development of DNA evidence in court and the need for similar standards, practices, and ethical frameworks in the brain imaging area. Professor Pasquale is an innovative younger scholar who brings great theoretical depth, as well as technological savvy, to these fields. Their perspectives on the use of brain imaging in legal settings, particularly for pain measurement, illuminate different facets of this issue.
This post describes their provocative contributions – which stake out different visions but also reinforce each other. The post also highlights the forthcoming conference-based book with Oxford University Press and introduces future directions for the use of the brain imaging of pain – in areas as diverse as the law of torture, the death penalty, drug policy, criminal law, and animal rights and suffering. Please read on!
The recent meeting at Harvard on neuroimaging, pain, and the law demonstrated powerfully that the offering of neuroimaging as evidence of pain, in court and in administrative hearings, is growing closer. The science for identifying a likely pattern of neuroimaging results strongly associated with the subjective sensation of pain keeps improving. Two companies recently were founded to provide electroencephalography (EEG) evidence of the existence of pain. And at least one neuroscientist has been providing expert testimony that a particular neuroimaging signal detected using functional magnetic resonance imaging (fMRI) is useful evidence of the existence of pain, as discussed recently in Nature.
If nothing more is done, neuroimaging evidence of pain will be offered, accepted, rejected, relied upon, and discounted in the normal, chaotic course of the law’s evolution. A “good” result, permitting appropriate use of some valid neuroimaging evidence and rejecting inappropriate use of other such evidence, might come about. Or it might not.
We can do better than this existing non-system. And the time to start planning a better approach is now. (Read on for more on how)
By Frank Pasquale, Professor of Law, University of Maryland Carey School of Law
Many thanks to Amanda for the opportunity to post as a guest in this symposium. I was thinking more about neuroethics half a decade ago, and my scholarly agenda has since focused mainly on algorithms, automation, and health IT. But there is an important common thread: the unintended consequences of technology. With that in mind, I want to discuss a context where the measurement of pain (algometry?) might be further algorithmized or systematized, and to ask, if so, who will be helped, who will be harmed, and what individual and social phenomena we may miss as we focus on new and compelling pictures.
Some hope that better pain measurement will make legal disability or damages determinations more scientific. Identifying a brain-based correlate for pain that otherwise lacks a clearly medically-determinable cause might help deserving claimants win recognition for their suffering as disabling. But the history of “rationalizing” disability and welfare determinations is not encouraging. Such steps have often been used to exclude individuals from entitlements, on flimsy grounds of widespread shirking. In other words, a push toward measurement is more often a cover for putting a suspect class through additional hurdles than it is a way of finding and helping those viewed as deserving.
Of Disability, Malingering, and Interpersonal Comparisons of Disutility
I have an op-ed with Christopher Chabris that appeared in this past Sunday’s New York Times. It focuses on one theme in my recent law review article on corporate experimentation: the A/B illusion. Despite the rather provocative headline that the Times gave it, our basic argument, made as clearly as we could in 800 words, is this: sometimes, it is more ethical to conduct a nonconsensual A/B experiment than to simply go with one’s intuition and impose A on everyone. Our contrary tendency to see experiments—but not untested innovations foisted on us by powerful people—as involving risk, uncertainty, and power asymmetries is what I call the A/B illusion in my law review article. Here is how the op-ed begins:
Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent? The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise. But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.
After the jump, some clarifications and further thoughts.
On May 21, along with my frequent co-author Eli Adashi, I published an op-ed in the New York Times raising some questions about FDA’s proposed guidance recommending a ban on taking blood from any man who has had sex with another man in the past year, or in other words imposing a one-year celibacy requirement on gay men if they want to donate blood. This built on our critique last July in JAMA, in which we argued that FDA’s then-lifetime ban on gay men and MSM donating blood was out of step with science and the practice of our peer countries, as well as potentially unconstitutional.
Thanks to our work, and a concerted effort by public health, medical, and gay rights groups, FDA has finally moved off of that prior policy and recognized that it was unjustified and discriminatory.
Just to put this in context: it took more than 30 years to convince FDA that it was problematic to ban, for a lifetime, blood donation by any man who had ever had sex with another man, even if both had repeatedly tested negative for HIV, while it imposed only a one-year ban on people who had sex with individuals known to be HIV positive or with sex workers. FDA is appropriately a conservative agency, but on the issue of the lifetime ban its unwillingness to listen and reconsider went beyond conservatism to the point of lunacy. [By the way, to be clear, I *love* FDA. I represented the agency while at DOJ and have a new book about FDA coming out in the fall. You can think highly of an agency but think it has a bad track record on an issue. This is critique, not hater-aide.]
Well, with that background, one should not be so quick to assume that a move to a one-year ban — a de facto lifetime ban for any gay man who is sexually active, even one who is monogamously married with children — is the best policy. To put it bluntly, FDA’s refusal to change the lifetime ban for such a long period makes me skeptical that we should accept a “just trust us” line on its new restrictive policy.
The question we raised in our op-ed was whether FDA had adequately justified retaining a one year ban in light of the evidence from places like South Africa (with a much shorter time period ban), Italy (which does individualized risk assessment instead of stigmatizing all gay men as high risk for disease), etc.
A remarkable new “sting” of the “diet research-media complex” was just revealed. It tells us little we didn’t already know and has potentially caused a fair amount of damage, spread across millions of people. It does, however, offer an opportunity to explore the importance of prospective group review of non-consensual human subjects research—and the limits of IRBs applying the Common Rule in serving that function in contexts like this.
Journalist John Bohannon, two German reporters, a doctor and a statistician recruited 16 German subjects through Facebook into a three-week randomized controlled trial of diet and weight loss. One-third were told to follow a low-carb diet, one-third were told to cut carbs but add 1.5 ounces of dark chocolate (about 230 calories) per day, and one-third served as control subjects and were told to make no changes to their current diet. They were all given questionnaires and blood tests in advance to ensure they didn’t have diabetes, eating disorders, or other conditions that would make the study dangerous for them, and these tests were repeated after the study. They were each paid 150 Euros (~$163) for their trouble.
But it turns out that Bohannon, the good doctor (who had written a book about dietary pseudoscience), and their colleagues were not at all interested in studying diet. Instead, they wanted to show how easy it is for bad science to be published and reported by the media. The design of the diet trial was deliberately poor: it involved only a handful of subjects, was poorly balanced by age and sex, and so on. But, through the magic of p-hacking, they managed several statistically significant results: eating chocolate accelerates weight loss and leads to healthier cholesterol levels and increased well-being.
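To see why p-hacking works so reliably, consider a minimal sketch — hypothetical numbers and a simple z-test, not the team’s actual data or analyses. If a tiny trial measures many independent outcomes (say, 18 of them), the chance that at least one comes out “significant” at p < 0.05 purely by luck is high, roughly 1 − 0.95¹⁸ ≈ 60%:

```python
# Illustrative simulation (hypothetical, not the study's data): chocolate
# truly does nothing here, yet a "significant" finding usually appears.
import math
import random

random.seed(1)

def z_test_p(a, b):
    # Two-sample z-test; valid here because we simulate with unit variance.
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def trial(n_per_arm=8, n_outcomes=18):
    # Every outcome is pure noise; return True if any test "succeeds".
    for _ in range(n_outcomes):
        chocolate = [random.gauss(0, 1) for _ in range(n_per_arm)]
        control = [random.gauss(0, 1) for _ in range(n_per_arm)]
        if z_test_p(chocolate, control) < 0.05:
            return True  # at least one spurious result to report
    return False

hits = sum(trial() for _ in range(2000))
print(f"Chance of a false 'discovery': {hits / 2000:.0%}")  # roughly 60%
```

The fix, of course, is well known: pre-register a single primary outcome, or correct the significance threshold for the number of comparisons made.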
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical.
I’ve mentioned on this blog before that I had a past life as a nurse. Therefore, I wanted to call attention to an important new study that has just come out in JAMA: Salary Differences Between Male and Female Registered Nurses in the United States. The study found that “[m]ale RNs outearned female RNs across settings, specialties, and positions.” On average, male nurses make $5,150 more per year than female colleagues in similar positions. This salary gap affects 2.5 million female RNs.
Ongoing identification of nursing as “women’s work” and the presence of gender bias in nursing can affect male nurses in different, seemingly contradictory ways. On the one hand, the 2000 National Sample Survey of Registered Nurses found that men leave nursing at a higher rate in their first four years of practice. Some have attributed that attrition to the harmful effects of gender bias. On the other hand, it has been observed that–unlike women who enter male-dominated professions–male nurses who enter this female-dominated profession typically encounter structural advantages that tend to enhance their careers.
There is a need for more nurses. According to the Bureau of Labor Statistics’s Employment Projections, the RN workforce is expected to grow to 3.24 million in 2022. That is a 19% increase. Nursing is a context that highlights how gender stereotyping hurts everyone–men who encounter discrimination, women who earn less than their male counterparts, and patients who benefit most when nursing recruits and retains excellent people.
I personally found nursing to be very rewarding. I hope this study motivates employers to scrutinize their pay structures but also to appreciate and address the broader effects of gender bias on the profession.
For centuries, researchers have studied multiple aspects of women’s reproduction. Research tells us when women are more likely to become pregnant and when infertility sets in, and it offers significant insights into the psychological dimensions of pregnancy and mothering, from the dopamine release associated with breastfeeding to the potential for postnatal depression after birth. Perhaps for this reason, lawmakers and courts tend to focus on women’s environment and conduct during pregnancy as the space to promote fetal health and well-being, with an eye toward healthy child development.
Has anything been missing? Until recently, very limited attention has focused on paternity. Decades-old studies linking older paternity to mental health conditions such as schizophrenia are valuable, but sadly overlooked. And recent research linking older paternity to autism is just beginning to gain attention. Adding to this discourse and carving out unique pathways for understanding paternity is Professor Wendy Goldberg at the University of California, Irvine.
In her book, Father Time: The Social Clock and the Timing of Fatherhood, she takes up overlooked phenomena involving fathering. For example, do men experience postnatal depression? It turns out that they do — and more. Some expectant fathers experience neuroticism and even jealousy. Goldberg studies different age groups to explain how the “social” clock for dads affects their relationships with offspring and partners, as well as fathers’ mental health. The book adds to an important, growing literature.
A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage about its emotional contagion experiment, published in PNAS. I had a brief window of time in which to respond with my comments, so these are rushed and a first reaction, but for what they’re worth, here’s what I told her (plus links and less a couple of typos):
There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of their research in peer-reviewed journals and to post reprints of that research on their website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and that it has expressed a commitment to better communicate its research goals and results.
As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing a lot more about both the people who comprise the panel and their review process (including but not limited to Facebook’s policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do make mistakes in both directions — by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any sort of appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These are just the tip of the iceberg of challenges that academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new, but unclear, guidelines and review process Facebook has struck the optimal balance among the competing values and interests that this work involves.
Another stop on my fall Facebook/OKCupid tour: on October 10, I’ll be participating on a panel (previewed in the NYT here) on “Experimentation and Ethical Practice,” along with Harvard Law’s Jonathan Zittrain, Google chief economist Hal Varian, my fellow PersonalGenomes.org board member and start-up investor Esther Dyson, and my friend and Maryland Law prof Leslie Meltzer Henry.
The panel will be moderated by Sinan Aral of the MIT Sloan School of Management, who is also one of the organizers of a two-day Conference on Digital Experimentation (CODE), of which the panel is a part. The conference, which brings together academic researchers and data scientists from Google, Microsoft, and, yes, Facebook, may be of interest to some of our social scientist readers. (I’m told registration space is very limited, so “act soon,” as they say.) From the conference website:
The ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science. As more and more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, we can quickly answer nuanced causal questions about the role of social behavior in population-level outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating and opinion aggregation. When appropriately theorized and rigorously applied, randomized experiments are the gold standard of causal inference and a cornerstone of effective policy. But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference. The purpose of the Conference on Digital Experimentation at MIT (CODE) is to bring together leading researchers conducting and analyzing large scale randomized experiments in digitally mediated social and economic environments, in various scientific disciplines including economics, computer science and sociology, in order to lay the foundation for ongoing relationships and to build a lasting multidisciplinary research community.
The Kaiser Family Foundation (KFF) recently conducted a survey of gay and bisexual men in the U.S. focusing on attitudes, knowledge, and experiences with HIV/AIDS. The survey results, released Thursday, can be found here. I was most interested in the finding that only a quarter of those surveyed know about PrEP (pre-exposure prophylaxis).
Although many lament that the ubiquity of smartphones has contributed to a recent decline in etiquette, a study published this week in Science suggests that smartphones’ ubiquity may make them a valuable–if surprising–tool for studying modern morality.
Most moral judgment experiments are lab-based and driven by hypotheticals. By contrast, this was a field experiment that focused on the moral judgments people make in their daily lives. The authors recruited 1,252 adults from the U.S. and Canada. Participants were contacted via text message five times each day over a three-day period. Each time, they were asked “whether they committed, were the target of, witnessed, or learned about a moral or immoral act within the past hour.” For each moral or immoral event, participants described via text what the event was about; provided situational context; and provided information about nine moral emotions (e.g., guilt and disgust). Political ideology and religiosity were assessed during an intake survey.
Participants reported a moral or immoral event on 28.9% of responses (n = 3,828). Moral and immoral events had similar overall frequencies. The authors found political ideology was reliably associated with the types of moral problems people identified. Liberals mentioned events related to Fairness/Unfairness, Liberty/Oppression, and Honesty/Dishonesty more frequently than did conservatives. By contrast, conservatives were more likely to mention events related to Loyalty/Disloyalty, Authority/Subversion, and Sanctity/Degradation.
In “Is it ethical to hire sherpas when climbing Mount Everest?,” a short piece out today in the British Medical Journal, I suggest that the question of whether it is ethical to pay sherpas to assume risks for the benefit of relatively affluent Western climbers is a variant of cases–common in medical ethics–where compensation and assumption of risk coincide. Consider offers of payment to research subjects, organ sales, and paid surrogacy. As a result, medical ethics can offer helpful frameworks for evaluating the acceptability of payment and, perhaps, suggest protections for sherpas as we look forward to the next climbing season on Everest.
I owe particular thanks to Nir Eyal, Harvard Medical School Center for Bioethics and Harvard School of Public Health Department of Global Health and Population; Richard Salisbury, University of Michigan (retired); and Paul Firth, Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital.
I have a long article in Slate (with Chris Chabris) on the importance of replicating science. We use a recent (and especially bitter) dispute over the failure to replicate a social psychology experiment as an occasion for discussing several things of much broader import, including:
The facts that replication, despite being a cornerstone of the scientific method, is rarely practiced (and even less frequently published) not only in psychology but across science, and that when such studies are conducted, they frequently fail to replicate the original findings (let this be a warning to those of you who, like me, cite empirical literature in your scholarship);
Why replications are so rarely conducted and published, relative to their importance (tl;dr: it’s the incentives, stupid);
Why it’s critical that this aspect of the academic research culture change (because academic science doesn’t only affect academic scientists; the rest of us have a stake in science, too, including those who fund it, those who help researchers produce it (i.e., human subjects), those who consume and build on it (other scholars and policy-makers), and all of us who are subject to myriad laws and policies informed by it); and
Some better and worse ways of facilitating that cultural change (among other things, we disagree with Daniel Kahneman’s most recent proposal for conducting replications).
Few people know that new prescription drugs have a 1 in 5 chance of causing serious reactions after they have been approved. That is why expert physicians recommend not taking new drugs for at least five years unless patients have first tried better-established options and need to. Faster reviews advocated by the industry-funded public regulators increase the risk of serious harm to 1 in 3. Yet most drugs they approve are found to have few offsetting clinical advantages over existing ones.
Systematic reviews of hospital charts by expert teams have found that even properly prescribed drugs (setting aside misprescribing, overdosing, and self-prescribing) cause about 1.9 million hospitalizations a year. Another 840,000 hospitalized patients given drugs have serious adverse reactions, for a total of 2.74 million. Further, the expert teams attributed as many deaths to the drugs as to stroke. A policy review done at the Edmond J. Safra Center for Ethics at Harvard University concluded that prescription drugs are tied with stroke as the 4th leading cause of death in the United States. The European Commission estimates that adverse reactions from prescription drugs cause 200,000 deaths a year in Europe; so together, about 328,000 patients in the US and Europe die from prescription drugs each year. The FDA does not acknowledge these figures, and its reporting system captures only a small fraction of the cases.
Perhaps this is “the price of progress”? About 170 million Americans take prescription drugs, and many benefit from them. For some, drugs keep them alive. If we suppose they all benefit, then the 2.7 million who suffer severe reactions come to only about 1.5 percent — the price of progress?
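The arithmetic behind that “price of progress” figure can be checked directly; here is a quick back-of-the-envelope sketch using only the estimates quoted above (so the output is only as good as those inputs):

```python
# Sanity-check the post's figures; every input is an estimate quoted
# in the text, not independent data.
properly_prescribed = 1.9e6   # annual hospitalizations from properly prescribed drugs
in_hospital = 0.84e6          # hospitalized patients with serious adverse reactions
total_serious = properly_prescribed + in_hospital
print(f"serious reactions per year: {total_serious / 1e6:.2f} million")  # 2.74 million

users = 170e6                 # Americans taking prescription drugs
print(f"rate among all users: {total_serious / users:.1%}")  # about 1.6%
```

The ratio comes out just above one and a half percent, consistent with the “about 1.5 percent” figure.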
However, independent reviews over the past 35 years have found that only 11-15 percent of newly approved drugs have significant clinical advantages over existing, better-known drugs. While these contribute to the large medicine chest of effective drugs developed over the decades, the 85-89 percent with little or no clinical advantage flood the market. Of the additional $70 billion spent on drugs since 2000 in the U.S. (and another $70 billion abroad), about four-fifths has been spent on purchasing these minor new variations rather than on the really innovative drugs.
In a recent decade, independent reviewers concluded that only 8 percent of 946 new products were clinically superior, down from 11-15 percent in previous decades. (See Figure) Only 2 were breakthroughs and another 13 represented a real therapeutic advance.
The Journal of Law and the Biosciences (JLB) is actively soliciting original manuscripts, responses, essays, and book reviews devoted to the examination of issues related to the intersection of law and biosciences, including bioethics, neuroethics, genetics, reproductive technologies, stem cells, enhancement, patent law, and food and drug regulation. JLB welcomes submissions of varying length, with a theoretical, empirical, practical, or policy oriented focus.
JLB is the first fully open access peer-reviewed legal journal focused on the advances at the intersection of law and the biosciences. A co-venture between Duke University, Harvard Law School, and Stanford University, and published by Oxford University Press, this open access, online, and interdisciplinary academic journal publishes cutting-edge scholarship in this important new field. JLB is published as one volume with three issues per year with new articles posted online on an ongoing basis.
By now, most of you have probably heard—perhaps via your Facebook feed itself—that for one week in January of 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly-selected users (about 1 of every 2500 Facebook users). The results of this study—conducted by Adam Kramer of Facebook, Jamie Guillory of the University of California, San Francisco, and Jeffrey Hancock of Cornell—were just published in the Proceedings of the National Academy of Sciences (PNAS).
Although some have defended the study, most have criticized it as unethical, primarily because the closest that these 689,003 users came to giving voluntary, informed consent to participate was when they—and the rest of us—created a Facebook account and thereby agreed to Facebook’s Data Use Policy, which in its current iteration warns users that Facebook “may use the information we receive about you . . . for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Some of the discussion has reflected quite a bit of misunderstanding about the applicability of federal research regulations and IRB review to various kinds of actors, about when informed consent is and isn’t required under those regulations, and about what the study itself entailed. In this post, after going over the details of the study, I explain (more or less in order):
How the federal regulations define “human subjects research” (HSR)
Why HSR conducted and funded solely by an entity like Facebook is not subject to the federal regulations
Why HSR conducted by academics at some institutions (like Cornell and UCSF) may be subject to IRB review, even when that research is not federally funded
Why involvement in the Facebook study by two academics nevertheless probably did not trigger Cornell’s and UCSF’s requirements of IRB review
Why an IRB—had one reviewed the study—might plausibly have approved the study with reduced (though not waived) informed consent requirements
And why we should think twice before holding academics to a higher standard than corporations