It will be published in the Federal Register on September 8 (and comments will be due 90 days thereafter), but it is available now here. It is 519 pages long, though there is an executive summary and a list of the most important changes (which seem to roughly track the ANPRM) at pp. 21-26. Time to put on a pot of coffee, tea, or the caffeinated beverage of your choice.
The Future of Privacy Forum is hosting an academic workshop supported by the National Science Foundation to discuss ethical, legal, and technical guidance for organizations conducting research on personal information. Authors are invited to submit papers for presentation at a full-day program to take place on December 10, 2015. Papers for presentation will be selected by an academic advisory board and published in the online edition of the Washington and Lee Law Review. Four papers will be selected to serve as “firestarters” for the December workshop, with each author receiving a $1,000 stipend. Submissions, which are due by October 25, 2015, at 11:59 PM ET, must be 2,500 to 3,500 words, with minimal footnotes and in a readable style accessible to a wide audience. Publication decisions and workshop invitations will be sent in November. Details here.
UPDATE: Plaintiffs have filed an appeal in the U.S. Court of Appeals for the Eleventh Circuit. Their brief is due on October 19.
The district court has granted summary judgment (opinion pdf) for all remaining defendants as to all of plaintiffs’ remaining claims in Looney v. Moore, the lawsuit arising out of the controversial SUPPORT trial, which I last discussed here. This therefore ends the lawsuit, pending possible appeal by the plaintiffs.
Plaintiff infants include two who were randomized to the low oxygen group and survived, but suffer from “neurological issues,” and one who was randomized to the high oxygen group who developed ROP, but not permanent vision loss. In their Fifth Amended Complaint (pdf), plaintiffs alleged negligence, lack of informed consent, breach of fiduciary duty, and product liability claims against, variously, individual IRB members, the P.I., and the pulse oximeter manufacturer. What unites all of these claims is the burden on plaintiffs to show (among other things) that their injuries were caused by their participation in the trial. Continue reading
I’ve started writing for Forbes as a regular contributor. My first piece, Carly Fiorina Says Her Views On Vaccines Are Unremarkable; For Better Or Worse, She’s Right, analyzes GOP presidential candidate Carly Fiorina’s recent ad hoc remarks on the relative rights of parents and schools with respect to vaccinations and some of the hyperbolic reactions to those remarks. Fiorina’s remarks are ambiguous, in ways that I discuss. But, as the title of the article suggests, and for better or worse, I think that the best interpretation of them places her stance squarely in the mainstream of current U.S. vaccination law. I end with a call for minimally charitable interpretations of others’ views, especially on contentious issues like vaccination.
I have an op-ed with Christopher Chabris that appeared in this past Sunday’s New York Times. It focuses on one theme in my recent law review article on corporate experimentation: the A/B illusion. Despite the rather provocative headline that the Times gave it, our basic argument, made as clearly as we could in 800 words, is this: sometimes, it is more ethical to conduct a nonconsensual A/B experiment than to simply go with one’s intuition and impose A on everyone. Our contrary tendency to see experiments—but not untested innovations foisted on us by powerful people—as involving risk, uncertainty, and power asymmetries is what I call the A/B illusion in my law review article. Here is how the op-ed begins:
Can it ever be ethical for companies or governments to experiment on their employees, customers or citizens without their consent? The conventional answer — of course not! — animated public outrage last year after Facebook published a study in which it manipulated how much emotional content more than half a million of its users saw. Similar indignation followed the revelation by the dating site OkCupid that, as an experiment, it briefly told some pairs of users that they were good matches when its algorithm had predicted otherwise. But this outrage is misguided. Indeed, we believe that it is based on a kind of moral illusion.
After the jump, some clarifications and further thoughts.
A remarkable new “sting” of the “diet research-media complex” was just revealed. It tells us little we didn’t already know and has potentially caused a fair amount of damage, spread across millions of people. It does, however, offer an opportunity to explore the importance of prospective group review of non-consensual human subjects research—and the limits of IRBs applying the Common Rule in serving that function in contexts like this.
Journalist John Bohannon, two German reporters, a doctor, and a statistician recruited 16 German subjects through Facebook into a three-week randomized controlled trial of diet and weight loss. One-third were told to follow a low-carb diet, one-third were told to cut carbs but add 1.5 ounces of dark chocolate (about 230 calories) per day, and one-third served as control subjects and were told to make no changes to their current diet. They were all given questionnaires and blood tests in advance to ensure they didn’t have diabetes, eating disorders, or other conditions that would make the study dangerous for them, and these tests were repeated after the study. They were each paid 150 Euros (~$163) for their trouble.
But it turns out that Bohannon, the good doctor (who had written a book about dietary pseudoscience), and their colleagues were not at all interested in studying diet. Instead, they wanted to show how easy it is for bad science to be published and reported by the media. The design of the diet trial was deliberately poor. It involved only a handful of subjects, was poorly balanced by age and sex, and so on. But, through the magic of p-hacking, they managed several statistically significant results: eating chocolate accelerates weight loss and leads to healthier cholesterol levels and increased well-being. Continue reading
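The p-hacking mechanic at the heart of the sting is easy to simulate. The sketch below (a minimal illustration, not the sting team’s actual analysis; the group size and the count of 18 outcome measures are assumptions for illustration) draws two tiny groups from the *same* distribution for each of many outcomes, so any “significant” difference is a false positive by construction:

```python
import random
import statistics

def perm_test(a, b, n_perm=2000, rng=None):
    """Two-sided permutation test: p-value for a difference in group means."""
    rng = rng or random.Random(0)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b  # new list; shuffling it leaves a and b untouched
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(statistics.mean(x) - statistics.mean(y)) >= observed:
            hits += 1
    return hits / n_perm

rng = random.Random(42)
N_OUTCOMES = 18   # weight, cholesterol, sleep quality, well-being, ... (illustrative count)
GROUP_SIZE = 5    # tiny groups, roughly as in the sting

# Both groups are drawn from the SAME distribution, so every
# "significant" outcome below is a false positive.
p_values = []
for _ in range(N_OUTCOMES):
    chocolate = [rng.gauss(0, 1) for _ in range(GROUP_SIZE)]
    control = [rng.gauss(0, 1) for _ in range(GROUP_SIZE)]
    p_values.append(perm_test(chocolate, control, rng=rng))

significant = [p for p in p_values if p < 0.05]
print(f"{len(significant)} of {N_OUTCOMES} outcomes 'significant' at p < 0.05")
```

With 18 independent outcomes each tested at α = 0.05, the chance of at least one false positive is about 1 − 0.95¹⁸ ≈ 60% — before the researchers have exercised any of the other flexible analytic choices that make p-hacking even easier in practice.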
I have a new law review article out, Two Cheers for Corporate Experimentation: The A/B Illusion and the Virtues of Data-Driven Innovation, arising out of last year’s terrific Silicon Flatirons annual tech/privacy conference at Colorado Law, the theme of which was “When Companies Study Their Customers.”
This article builds on, but goes well beyond, my prior work on the Facebook experiment in Wired (mostly a wonky regulatory explainer of the Common Rule and OHRP engagement guidance as applied to the Facebook-Cornell experiment, albeit with hints of things to come in later work) and Nature (a brief mostly-defense of the ethics of the experiment co-authored with 5 ethicists and signed by an additional 28, which was necessarily limited in breadth and depth by both space constraints and the need to achieve overlapping consensus).
Although I once again turn to the Facebook experiment as a case study (and also to new discussions of the OkCupid matching algorithm experiment and of 401(k) experiments), the new article aims at answering a much broader question than whether any particular experiment was legal or ethical. Continue reading
I am deeply saddened to report that bioethicist John D. Arras died on March 9, 2015. John was the Porterfield Professor of Bioethics and Professor of Philosophy at the University of Virginia, where he directed the undergraduate bioethics program, held an additional appointment at the School of Medicine’s Center for Biomedical Ethics and Humanities, and over the years co-taught multiple courses at the Law School. He was a leading figure in the field of bioethics, and held several prestigious appointments beyond UVa including, at the time of his death, as a Fellow of The Hastings Center and a commissioner of the Presidential Commission for the Study of Bioethical Issues (whose recent report on Ebola he spoke to a journalist about just days ago). He also consulted regularly at the National Institutes of Health and was a founding member of the ethics advisory board of the Centers for Disease Control and Prevention.
John’s scholarly focus in bioethics was two-fold. First, like most bioethicists, John tackled concrete practical ethical problems involving medicine, public health, and the biosciences. His interests in this regard were fairly broad, but he focused on physician-assisted suicide, public health, human subjects research, and what justice requires in the way of access to health care. Continue reading
By Michelle Meyer
The case I mentioned in my last post, Maine Department of Health and Human Services v. Kaci Hickox is no more. Hickox and public health officials agreed to stipulate to a final court order imposing on Hickox the terms that the court had imposed on her in an earlier, temporary order. Until Nov. 10, when the 21-day incubation period for Ebola ends, Hickox will submit to “direct active monitoring” and coordinate her travel with Maine public health authorities to ensure that such monitoring occurs uninterrupted. She has since said that she will not venture into town or other public places, although she is free to do so.
In a new post at The Faculty Lounge,* I offer a detailed account of the case, which suggests the following lessons:
- As Hickox herself described it, the result of her case is a “compromise,” reflecting neither what Hickox initially wanted nor what Maine did.
- That compromise was achieved by the parties availing themselves of the legal process, not through Hickox’s civil disobedience.
- The compromise is not easily described, as it has been, as a victory of science-based federal policy over fear-based state demagoguery. By the time the parties got to court, and perhaps even before then, what Maine requested was consistent with U.S. CDC Guidance, albeit a strict application of it. What Hickox had initially offered to do, by contrast, fell below even the most relaxed application of those guidelines, although by the time the parties reached court, she had agreed to comply with that minimum.
- The compromise applies only to Hickox, and was based on a stipulation by the parties to agree to the terms that the court had temporarily imposed after reviewing a limited evidentiary record. Additional evidence and legal arguments that the state might have raised in the now-cancelled two-day hearing could have resulted in a different outcome.
- A substantially different outcome, however, would have been unlikely under Maine’s public health statute. Indeed, it is not clear that Maine’s public health statute allows public health authorities to compel asymptomatic people at risk of developing Ebola to do anything, including complying with minimum CDC recommendations.
- “Quarantine” is a charged, but ambiguous, term. It allows us to talk past one another, to shorthand and needlessly politicize a much-needed debate about appropriate policy, and to miss the fact that the CDC Guidance in some cases recommends what could be fairly described as a “quarantine” for people like Hickox and requires it for asymptomatic people with stronger exposure to Ebola (but who are still probably less likely to get sick than not).
- It’s not clear who has bragging rights to Ebola policy “grounded in science,” or what that policy looks like.
* The piece is quite long, and I cannot bear the fight with the WordPress formatting demons that it would require to cross-post it here.
[Author’s Note: Addendum and updates (latest: 4 pm, 10/31) added below.]
A physician shall… be honest in all professional interactions, and strive to report physicians… engaging in fraud or deception, to appropriate entities.
—AMA Principles of Medical Ethics
This is a troubling series of news reports about deception and defiance on the part of some healthcare workers (HCWs) in response to what they believe to be unscientific, unfair, and/or unconstitutional public health measures (to be clear, the text is not mine (until after the jump); it’s cut and pasted, in relevant part, from the linked sources):
Gavin Macgregor-Skinner, an epidemiologist and Global Projects Manager for the Elizabeth R. Griffin Foundation, who has led teams of doctors to treat Ebola in West Africa, reported that he “can’t tell them [his doctors] to tell the truth [to U.S. officials]” on Monday’s “CNN Newsroom.”
“At the moment these people are so valuable . . . I have to ensure they come back here, they get the rest needed. I can’t tell them to tell the truth at the moment because we’re seeing so much irrational behavior,” he stated. “I’ve come back numerous times between the U.S. and West Africa. If I come back now and say ‘I’ve been in contact with Ebola patients,’ I’m going to be locked in my house for 21 days,” Macgregor-Skinner said as his reason for not being truthful with officials. He added, “When I’m back here in the US, I am visiting US hospitals everyday helping them get prepared for Ebola. You take me out for three weeks, who’s going to replace me and help now US hospitals get ready? Those gaps can’t be filled.”
He argued that teams of doctors and nurses could be trusted with the responsibility of monitoring themselves, stating, “When I bring my team back we are talking each day on video conferencing, FaceTime, Skype, text messaging, supporting each other. As soon as I feel sick I’m going to stay at home and call for help, but I’m not going to go to a Redskins game here in Washington D.C. That’s irresponsible, but I need to get back to these hospitals and help them be prepared.”
UPDATE: Here is the CNN video of his remarks.
The city’s first Ebola patient initially lied to authorities about his travels around the city following his return from treating disease victims in Africa, law-enforcement sources said. Dr. Craig Spencer at first told officials that he isolated himself in his Harlem apartment — and didn’t admit he rode the subways, dined out and went bowling until cops looked at his MetroCard, the sources said. “He told the authorities that he self-quarantined. Detectives then reviewed his credit-card statement and MetroCard and found that he went over here, over there, up and down and all around,” a source said. Spencer finally ’fessed up when a cop “got on the phone and had to relay questions to him through the Health Department,” a source said. Officials then retraced Spencer’s steps, which included dining at The Meatball Shop in Greenwich Village and bowling at The Gutter in Brooklyn.
Update 11PM, 10/30: A spokesperson for the NYC health department has now disputed the above story, which cites anonymous police officer sources, in a statement provided to CNBC. The spokesperson said: “Dr. Spencer cooperated fully with the Health Department to establish a timeline of his movements in the days following his return to New York from Guinea, providing his MetroCard, credit cards and cellphone.” . . . When CNBC asked again if Spencer had at first lied to authorities or otherwise misled them about his movements in the city, Lewin replied: “Please refer to the statement I just sent. As this states, Dr. Spencer cooperated fully with the Health Department.”
(3) Ebola nurse in Maine rejects home quarantine rules [the WaPo headline better captures the gist: After fight with Chris Christie, nurse Kaci Hickox will defy Ebola quarantine in Maine]
Kaci Hickox, the Ebola nurse who was forcibly held in an isolation tent in New Jersey for three days, says she will not obey instructions to remain at home in Maine for 21 days. “I don’t plan on sticking to the guidelines,” Hickox tells TODAY’s Matt Lauer. “I am not going to sit around and be bullied by politicians and forced to stay in my home when I am not a risk to the American public.”
Maine health officials have said they expect her to agree to be quarantined at her home for a 21-day period, The Bangor Daily News reports. But Hickox, who agreed to stay home for two days, tells TODAY she will pursue legal action if Maine forces her into continued isolation. “If the restrictions placed on me by the state of Maine are not lifted by Thursday morning, I will go to court to fight for my freedom,” she says.
Some thoughts on these reports, after the jump. Continue reading
Reuters broke the story on Friday, citing anonymous sources:
The company is exploring creating online “support communities” that would connect Facebook users suffering from various ailments. . . . Recently, Facebook executives have come to realize that healthcare might work as a tool to increase engagement with the site. One catalyst: the unexpected success of Facebook’s “organ-donor status initiative,” introduced in 2012. The day that Facebook altered profile pages to allow members to specify their organ donor status, 13,054 people registered to be organ donors online in the United States, a 21-fold increase over the daily average of 616 registrations . . . . Separately, Facebook product teams noticed that people with chronic ailments such as diabetes would search the social networking site for advice, said one former Facebook insider. In addition, the proliferation of patient networks such as PatientsLikeMe demonstrate that people are increasingly comfortable sharing symptoms and treatment experiences online. . . . Facebook may already have a few ideas to alleviate privacy concerns around its health initiatives. The company is considering rolling out its first health application quietly and under a different name, a source said.
A WSJ reporter just tipped me off to this news release by Facebook regarding the changes it has made in its research practices in response to public outrage about its emotional contagion experiment, published in PNAS. I had a brief window of time in which to respond with my comments, so these are rushed and a first reaction, but for what they’re worth, here’s what I told her (plus links and less a couple of typos):
There’s a lot to like in this announcement. I’m delighted that, despite the backlash it received, Facebook will continue to publish at least some of their research in peer-reviewed journals and to post reprints of that research on their website, where everyone can benefit from it. It’s also encouraging that the company acknowledges the importance of user trust and that it has expressed a commitment to better communicate its research goals and results.
As for Facebook’s promise to subject future research to more extensive review by a wider and more senior group of people within the company, with an enhanced review process for research that concerns, say, minors or sensitive topics, it’s impossible to assess whether this is ethically good or bad without knowing a lot more about both the people who comprise the panel and their review process (including but not limited to Facebook’s policy on when, if ever, the default requirements of informed consent may be modified or waived). It’s tempting to conclude that more review is always better. But research ethics committees (IRBs) can and do make mistakes in both directions – by approving research that should not have gone forward and by unreasonably thwarting important research. Do Facebook’s law, privacy, and policy people have any training in research ethics? Is there any sort of appeal process for Facebook’s data scientists if the panel arbitrarily rejects their proposal? These are the tip of the iceberg of challenges that academic IRBs continue to face, and I fear that we are unthinkingly exporting an unhealthy system into the corporate world. Discussion is just beginning among academic scientists, corporate data scientists, and ethicists about the ethics of mass-scale digital experimentation (see, ahem, here and here). It’s theoretically possible, but unlikely, that in its new, but unclear, guidelines and review process Facebook has struck the optimal balance among the competing values and interests that this work involves. Continue reading
Another stop on my fall Facebook/OkCupid tour: on October 10, I’ll be participating on a panel (previewed in the NYT here) on “Experimentation and Ethical Practice,” along with Harvard Law’s Jonathan Zittrain, Google chief economist Hal Varian, my fellow PersonalGenomes.org board member and start-up investor Esther Dyson, and my friend and Maryland Law prof Leslie Meltzer Henry.
The panel will be moderated by Sinan Aral of the MIT Sloan School of Management, who is also one of the organizers of a two-day Conference on Digital Experimentation (CODE), of which the panel is a part. The conference, which brings together academic researchers and data scientists from Google, Microsoft, and, yes, Facebook, may be of interest to some of our social scientist readers. (I’m told registration space is very limited, so “act soon,” as they say.) From the conference website:
The ability to rapidly deploy micro-level randomized experiments at population scale is, in our view, one of the most significant innovations in modern social science. As more and more social interactions, behaviors, decisions, opinions and transactions are digitized and mediated by online platforms, we can quickly answer nuanced causal questions about the role of social behavior in population-level outcomes such as health, voting, political mobilization, consumer demand, information sharing, product rating and opinion aggregation. When appropriately theorized and rigorously applied, randomized experiments are the gold standard of causal inference and a cornerstone of effective policy. But the scale and complexity of these experiments also create scientific and statistical challenges for design and inference. The purpose of the Conference on Digital Experimentation at MIT (CODE) is to bring together leading researchers conducting and analyzing large scale randomized experiments in digitally mediated social and economic environments, in various scientific disciplines including economics, computer science and sociology, in order to lay the foundation for ongoing relationships and to build a lasting multidisciplinary research community.
I’m participating in several public events this fall pertaining to research ethics and regulation, most of them arising out of my recent work (in Wired and in Nature and elsewhere) on how to think about corporations conducting behavioral testing (in collaboration with academic researchers or not) on users and their online environments (think the recent Facebook and OkCupid experiments). These issues raise legal and ethical questions at the intersection of research, business, informational privacy, and innovation policy, and the mix of speakers in most of these events reflects that. Continue reading
I have a long article in Slate (with Chris Chabris) on the importance of replicating science. We use a recent (and especially bitter) dispute over the failure to replicate a social psychology experiment as an occasion for discussing several things of much broader import, including:
- The facts that replication, despite being a cornerstone of the scientific method, is rarely practiced (and even less frequently published) not only in psychology but across science, and that when such studies are conducted, they frequently fail to replicate the original findings (let this be a warning to those of you who, like me, cite empirical literature in your scholarship);
- Why replications are so rarely conducted and published, relative to their importance (tl;dr: it’s the incentives, stupid);
- Why it’s critical that this aspect of the academic research culture change (because academic science doesn’t only affect academic scientists; the rest of us have a stake in science, too, including those who fund it, those who help researchers produce it (i.e., human subjects), those who consume and build on it (other scholars and policy-makers), and all of us who are subject to myriad laws and policies informed by it); and
- Some better and worse ways of facilitating that cultural change (among other things, we disagree with Daniel Kahneman’s most recent proposal for conducting replications).
Michelle Meyer has a new piece in Nature – an open letter on the Facebook study signed by a group of bioethicists (including PFC’s Executive Director Holly Fernandez Lynch) in which she argues that a Facebook study that manipulated news feeds was not definitively unethical and offered valuable insight into social behavior.
From the piece:
“Some bioethicists have said that Facebook’s recent study of user behavior is “scandalous”, “violates accepted research ethics” and “should never have been performed”. I write with 5 co-authors, on behalf of 27 other ethicists, to disagree with these sweeping condemnations (see go.nature.com/XI7szI).
We are making this stand because the vitriolic criticism of this study could have a chilling effect on valuable research. Worse, it perpetuates the presumption that research is dangerous.”
Read the full article.
By now, most of you have probably heard—perhaps via your Facebook feed itself—that for one week in January of 2012, Facebook altered the algorithms it uses to determine which status updates appeared in the News Feed of 689,003 randomly selected users (about 1 of every 2500 Facebook users). The results of this study—conducted by Adam Kramer of Facebook, Jamie Guillory of the University of California, San Francisco, and Jeffrey Hancock of Cornell—were just published in the Proceedings of the National Academy of Sciences (PNAS).
Although some have defended the study, most have criticized it as unethical, primarily because the closest that these 689,003 users came to giving voluntary, informed consent to participate was when they—and the rest of us—created a Facebook account and thereby agreed to Facebook’s Data Use Policy, which in its current iteration warns users that Facebook “may use the information we receive about you . . . for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”
Some of the discussion has reflected quite a bit of misunderstanding about the applicability of federal research regulations and IRB review to various kinds of actors, about when informed consent is and isn’t required under those regulations, and about what the study itself entailed. In this post, after going over the details of the study, I explain (more or less in order):
- How the federal regulations define “human subjects research” (HSR)
- Why HSR conducted and funded solely by an entity like Facebook is not subject to the federal regulations
- Why HSR conducted by academics at some institutions (like Cornell and UCSF) may be subject to IRB review, even when that research is not federally funded
- Why involvement in the Facebook study by two academics nevertheless probably did not trigger Cornell’s and UCSF’s requirements of IRB review
- Why an IRB—had one reviewed the study—might plausibly have approved the study with reduced (though not waived) informed consent requirements
- And why we should think twice before holding academics to a higher standard than corporations
You might think that the answer to this question is obvious. Clearly, it’s your business, and yours alone, right? I mean, sure, maybe it would be considerate to discuss the potential ramifications of this activity with your partner. And you might want to consider the welfare of the bee. But other than that, whose business could it possibly be?
Well, as academic empiricists know, what others can do freely, they often require permission to do. Journalists, for instance, can ask children potentially traumatizing questions without having to ask whether the risk to those children of being interviewed is justified by the expected knowledge to be gained; academics, by contrast, have to get permission from their institution’s IRB first (and often that permission never comes).
So, too, with potentially traumatizing yourself — at least if you’re an academic who’s trying to induce a bee to sting your penis in order to produce generalizable knowledge, rather than for some, um, other purpose.
Yesterday, science writer Ed Yong reported a fascinating self-experiment conducted by Michael Smith, a Cornell graduate student in the Department of Neurobiology and Behavior who studies the behavior and evolution of honeybees. As Ed explains, when, while doing his other research, a honeybee flew up Smith’s shorts and stung his testicles, Smith was surprised to find that it didn’t hurt as much as he expected. He began to wonder which body parts would really smart if they were stung by a bee and was again surprised to learn that there was a gap in the literature on this point. So he decided to conduct an experiment on himself. (In addition to writing about the science of bee stings to the human penis, Ed is also your go-to guy for bat fellatio and cunnilingus, the spiky penises of beetles and spiders, and coral orgies.)
As Ed notes, Smith explains in his recently published paper reporting the results of his experiment, Honey bee sting pain index by body location, that
Cornell University’s Human Research Protection Program does not have a policy regarding researcher self-experimentation, so this research was not subject to review from their offices. The methods do not conflict with the Helsinki Declaration of 1975, revised in 1983. The author was the only person stung, was aware of all associated risks therein, gave his consent, and is aware that these results will be made public.
As Ed says, Smith’s paper is “deadpan gold.” But on this point, it’s also wrong. Continue reading
For those closely following the litigation over this clinical trial, a few updates. On January 22, the district court ruled on defendants’ motions to dismiss plaintiffs’ third amended complaint. That complaint named as defendants the director of the IRB, the chair of the IRB, the other members of the IRB (“the IRB defendants”)—all in their individual capacities; the PI of the trial, in his individual capacity; Masimo Corporation, the manufacturer of the oximeter used in the trial; and fictitious defendants (ABC Health Care Providers #1-100; ABC Individuals #1-100; and XYZ Entities #1-100). The complaint stated seven counts: products liability and negligence against Masimo; negligence, negligence per se, lack of informed consent, and breach of fiduciary duty against the IRB defendants and the PI; and wrongful death against all defendants.
This post is part of The Bioethics Program’s ongoing Online Symposium on the Munoz and McMath cases, which I’ve organized, and is cross-posted from the symposium. To see all symposium contributions, in reverse chronological order, click here.
Had the hospital not relented and removed the ventilator from Marlise Munoz’s body, could the Munoz fetus have been brought to term, or at least to viability? And if so, would the resulting child have experienced any temporary or permanent adverse health outcomes? Despite some overly confident commentary on both “sides” of this case suggesting a clear answer one way or the other—i.e., that there was no point in retaining the ventilator because the fetus could never be viable or was doomed to be born with catastrophic abnormalities; or, on the other hand, that but for the removal of the ventilator, the “unborn baby” was clearly on track to being born healthy—the truth is that we simply don’t know.
Before getting into the limited available data about fetal outcomes in these relatively rare cases, a bit of brush clearing. The New York Times juxtaposed reports about possible abnormalities in the Munoz fetus with the hospital’s stipulation about the fetus’s non-viability in ways that are likely to confuse, rather than clarify:
Lawyers for Ms. Muñoz’s husband, Erick Muñoz, said they were provided with medical records that showed the fetus was “distinctly abnormal” and suffered from hydrocephalus — an accumulation of fluid in the cavities of the brain — as well as a possible heart problem.
The hospital acknowledged in court documents that the fetus was not viable.
Whether intentionally or not, the nation’s newspaper of record implies — wrongly, I think — that the hospital conceded that the fetus would never be viable because of these reported abnormalities. In court, the hospital and Erick Munoz stipulated to a series of facts, including that Marlise was then 22 weeks pregnant and that “[a]t the time of this hearing, the fetus gestating inside Mrs. Munoz is not viable” (emphasis added). The hospital conceded nothing at all about any fetal abnormalities. In short, the Times, and many other commentators, have conflated “non-viability” as a function of gestational age with “non-viability” as a way of characterizing disabilities that are incompatible with life. As I read this stipulation, the hospital was not at all conceding that the fetus would never have been viable, had the ventilator remained in place. Rather, given the constitutional relevance of fetal viability, the hospital was merely conceding the banal scientific fact that the Munoz fetus was, at 22 weeks, not currently viable. There is nothing surprising in the least about the hospital’s “concession” about “viability” in the first sense, above: 22-week fetuses are generally not considered viable. Continue reading