I’ve been thinking a lot lately about how our society regulates the integrity of scientific research in an era of fierce competition for diminishing grants and ultracompetitive academic appointments. When I shared a draft paper on this topic a few weeks ago, several colleagues urged me to think more about the role played by academic journals, so I was interested to see this article in Nature last week about a recently uncovered criminal scam defrauding two European science journals and countless would-be authors. It caught my attention because it seems to belie the notion that the journals and the honest scientific community are sophisticated enough actors to be trusted to root out the fabrication, falsification, and plagiarism that constitute “research misconduct” under Federal law. Needless to say, it takes a different kind of expertise to discern scientific misconduct than to uncover a more mundane phishing scam like the one these cons were running, but the anecdote stands as a nice reminder of the fallibility even of great minds.
The cover story of the March 4, 2013 issue of Time Magazine is a piece by Steven Brill titled Bitter Pill: Why Medical Bills Are Killing Us. The article has apparently made a pretty big splash: in an interview (Part 2, Part 3) with Brill last week, Jon Stewart of Comedy Central’s The Daily Show told his audience that the article was so good that it “should be required reading for . . . not only every individual in this country, but every lawmaker in this country.”
What most seems to fascinate Stewart, and what Brill emphasizes, is an insight that is old hat to health law types: the market for health care is just plain screwy. Brill explains that health care consumers “have no choice in what you’re buying, you have no idea what you’re buying, you have no idea what the price is, even when you get the bill you have no idea what it says.” The starting point for the article was Brill’s observation that in all the debate over the last few years about health care, “we seem to jump right to the issue of who should pay the bills, blowing right past what should be the first question: Why exactly are the bills so high?”
When I read Susannah Meadows’s article in last week’s New York Times Magazine, The Boy with a Thorn in His Joints, I was at a bit of a loss how to respond. The article is Meadows’s account of dealing with her son’s juvenile idiopathic arthritis, and describes how, wary of the side effects of the treatment recommended by two well-regarded pediatric rheumatologists, she put her son on an alternative-medicine regimen instead. Meadows relates how, on a regimen of probiotics, sour Montmorency cherry juice, fish oil, and something called four-marvels powder, her son underwent a near-total recovery.
It should be noted, to her credit, that Meadows goes out of her way to acknowledge the anecdotal character of her experience. And, likewise to her credit, Meadows continued to work with her son’s doctors and take their concerns seriously throughout her son’s experiment with alternative medicine. But in spite of Meadows’s best journalistic instincts and her thorough reporting, her article perpetuates a dangerous misunderstanding. Throughout her article, Meadows makes an implicit distinction between pharmaceuticals and the substances (cherry juice, fish oil, four-marvels powder) she was putting in her son’s body. But here’s the thing: the single most important distinction between the methotrexate her doctors recommended and the four-marvels powder she chose to administer to her son is that the former has been proven safe and effective in “adequate and well-controlled investigations,” while the latter is essentially unregulated. The active ingredients in both substances are chemicals with hard-to-pronounce Latin names; the difference is just how much we know about these chemicals.
And that’s the point that Michelle M. Francl, a professor of chemistry at Bryn Mawr College, articulates far more eloquently and forcefully than I possibly could in her recent Slate article: Don’t Take Medical Advice From the New York Times Magazine: The dangerous chemophobia behind its popular story about childhood arthritis. Francl’s article is a must-read, and makes several extremely valuable points, but I particularly want to highlight just one of these. Susannah Meadows is an intelligent and experienced journalist, a wonderful commentator on politics and publishing, and clearly a mother whose love for her children is boundless. But she is not a doctor or a scientist, nor is she even a health or science reporter. Yet her anecdotal account of her own child’s illness is now probably the most widely disseminated article about treating juvenile arthritis ever, and it is one that perpetuates a basic and dangerous misunderstanding about the nature of medicine.
Last spring I had the chance to work as a research assistant for Marc Rodwin, a Lab Fellow at Harvard’s Edmond J. Safra Center for Ethics, reading through hundreds (perhaps thousands) of pages of Congressional hearing transcripts from the 1960s and 1970s relating to the federal regulation of drugs. In reading the final version of the article that I helped to research, Independent Clinical Trials to Test Drugs: The Neglected Reform, forthcoming in Volume 6 of the Saint Louis University Journal of Health Law & Policy (also available online), two things occurred to me about the way academics and to some extent policymakers approach regulatory law, especially in the life sciences.
First, it struck me that in the general discourse about health policy, we are often surprisingly oblivious to history. Although the first U.S. law requiring safety or efficacy testing for new drugs was passed a mere 75 years ago, the intervening decades have been prolific when it comes to proposals and debate. What Rodwin’s article does, and I think is valuable both to support his own proposition and to advance the scholarship of others, is to frame his contemporary proposal with a close historical look at similar proposals and the debate surrounding them. What becomes clear, whether the topic is independent clinical testing, the “drug lag,” or the scope of patent protection and marketing exclusivity, is that our “new” ideas about how to improve the system are often anything but. The idea that we ought to know the history of our subject is not particularly groundbreaking, of course, but it’s plain we could be doing more. I’ve found that even the little history I’ve read–all those hearings, of course, but also wonderful books like Daniel Carpenter’s Reputation and Power and Philip J. Hilts’s Protecting America’s Health, and under-utilized resources like FDA’s Oral History Transcripts–has proved endlessly valuable even in thinking about cutting edge ideas like those I’m privileged to be exposed to as a participant in the Petrie-Flom Center’s Health Law Policy and Bioethics Workshop.
Second, it occurred to me that when we think and write about health policy, we might benefit from distinguishing–at least for ourselves–between two distinct projects: the practical endeavor of proposing “realistic” policy changes that take account of and purport to improve on our imperfect and historically contingent regulatory regime, on the one hand, and the more theoretical work of contemplating an ideal system, a “castle in the sky,” on the other. Folks who think deeply and write about these issues obviously engage in both: Rodwin’s article, for example, plainly takes the present framework as its starting point, but does so with the larger theoretical question in mind. I just wonder if we might be more explicit in articulating how proposed incremental changes fit into a broader project of making the regulatory scheme look more like our hypothetical ideal. The history of regulatory law is so littered with incremental changes that have had unanticipated consequences elsewhere in the system that I would love to see more authors lay out explicitly what they hope the system will look like, and justify their incremental proposals as part of that broader vision.
While reading some of the great articles from the health section of the New York Times over the holidays it struck me that such articles, in their need to be concise and accessible, often give only passing treatment to regulatory concepts that can be fundamental to the story. Accordingly, I thought it might be useful to write a series of posts digging down a bit deeper into some of the regulatory foundations of health stories that percolate up to public attention through the news. In this post I’ll begin by looking at an interesting point relating to drug efficacy standards raised by an article about a newly expensive (but decades-old) drug.
In Andrew Pollack’s “Questcor Finds Profits, at $28,000 a Vial” we read that a drug called Acthar, first approved by the FDA in 1952 and used primarily to treat rare infantile spasms, has in recent years become a very expensive and (for its maker) lucrative treatment for conditions ranging from multiple sclerosis to rheumatologic conditions. The article is worth a read for its thoughtful discussion of drug pricing, but it also makes passing reference to some important regulatory concepts that bear further examination. One issue that particularly stood out to me was Pollack’s statement that Questcor, Acthar’s manufacturer, has been able to market the drug for a variety of uses “without being required to prove that the drug actually works” because it was “essentially grandfathered” into an anachronistic efficacy standard by being “approved for use in 1952, before the [FDA] required clinical trials . . . .” On first read, that sounds fairly alarming, so I thought it might be worthwhile to unpack the law around such “grandfathered” drugs a little. While it is true that FDA did not require proof of effectiveness for new drugs until lawmakers included this requirement in the Drug Amendments of 1962, it isn’t the case that pre-1962 drugs simply get a free pass on proving effectiveness. The truth, as one might expect, is somewhat more complicated.
As I’ve written about previously on this blog, the consequences for the FDA of budget sequestration under the Budget Control Act of 2011 could be fairly severe (as well as raise some interesting legal questions). In a recent Online First piece for the Journal of the American Medical Association (JAMA), Hamilton Moses and E. Ray Dorsey note that sequestration would also have a serious impact–to the tune of $2.5 billion–on the National Institutes of Health (NIH), the primary source of public funding for biomedical research in the United States.
While Doctors Moses and Dorsey acknowledge that the immediate consequences of such a cut would primarily affect young researchers and new applicants for funding, “exacerbat[ing] tensions between large infrastructure projects . . . and small investigator-initiated grants, which historically have been the primary source of new clinical insights,” they also argue that sequestration presents an opportunity to reevaluate our emphasis on publicly funded biomedical research. In their telling, sequestration would be just the most recent step in a nearly decade-long trend of reducing federal funding, a trend that “presents an opportunity to reshape biomedical research.” Moses and Dorsey call for new private sources of research support, ranging from specialized financial instruments like Biomedical Research Bonds to an increased role for public charities and private foundations. The future of biomedical research, they argue, will be built on the private sector, not the federal government.
The challenges of shifting the burden of funding research to the private sector are many, of course. One particularly challenging question is whether private funds could effectively replace NIH’s significant role in funding “basic” research. Bhaven N. Sampat’s new article “Mission-Oriented Biomedical Research at the NIH” in Research Policy provides some context for the scale of the problem. Citing a 2010 study by Dr. Dorsey himself, Sampat notes that although NIH funding accounts for only about a third of U.S. biomedical research funding, “there is a sharp division of labor, with NIH funding concentrated further upstream, on ‘basic’ research than private sector funding” from private sector pharmaceutical, biotechnology, and medical device firms. Although the role of private foundations has grown in recent years, Sampat notes that NIH funding continues to exceed all such funding “by a factor of six . . . .” Assuming we continue to value basic research, the capacity and willingness of private actors to fund such research thus remains a major question mark.
In the October 22 edition of The New Yorker, Michael Specter wrote a fascinating article about the growing and exciting science of the human microbiome, the ecosystem of ten thousand or so bacterial species that call each of our bodies home. The hype around this particular field of scientific and medical inquiry is intense: Specter quotes David Relman of Stanford Medical School as saying that right now we are in the “beautiful, euphoric, heady early period” of the field, and notes that each week seems to bring additional symposia, publications, and grants for new research. All of this is for good reason. Promising studies have indicated that microbial therapy (the intentional introduction of certain bacteria into the body) can be an effective treatment for some diseases, while other researchers have suggested that a variety of modern diseases (like asthma, inflammatory-bowel disease, and some allergies) may be tied to changes in the human bacterial ecosystem. In some ways, this isn’t news: as Dr. Douglas Archer noted in an FDA advisory committee meeting on probiotics over a decade ago, using food with live cultures to treat disease is a longstanding practice dating at least as far back as 76 BC, when the Roman historian Pliny advocated using fermented milk to treat GI infections.
A friend and I were having a conversation about health policy the other day when he observed that drug regulators like FDA face an impossible task in terms of public expectations: as consumers, we expect the drugs we take to be 100% safe, 100% of the time. Of course, no regulator, no matter how powerful or well funded, could deliver on that expectation, and the reality is that FDA operates under a variety of limitations, both fiscal and legal.
The current deadly meningitis outbreak linked to contaminated injections made by a Massachusetts compounding pharmacy shocks us and upsets our expectation that the drugs we take to get better will not, at the very least, cause us harm. Responding reflexively to this crisis, many in the media and in Washington have already started to call for greater federal oversight. This is a natural impulse, but one that merits cool-headed consideration. FDA is an agency that already has a broad statutory mandate and limited resources. Enforcement resources are slim enough that the agency’s response to an HHS report this month finding rampant violation of dietary-supplement labeling laws was simply to say that the agency would “address the recommendations as its resources and priorities allow.” Before we add still further to FDA’s crowded plate at a time when it is already facing a potential budget crisis (and it is worth noting that according to at least one former FDA chief counsel and congressional testimony by agency officials, FDA already possesses the authority to regulate pharmacies like the one involved in the outbreak and historically has done so), it is worth asking whether FDA enforcement is the only or best solution to the problem.
Looking back over last month’s health-related news, two articles published on The Atlantic’s website stand out to illustrate a tension that has received a great deal of focus in Medicare reform circles, and that seems to be a political sticking point for many otherwise promising cost-reduction strategies. In his September 10th article The Fallacy of Treating Health Care as an Industry, Professor Gunderman of Indiana University criticizes a recent Institute of Medicine (“IOM”) report suggesting that our medical system could be providing better care at lower cost if it could only learn a few lessons from other industries. Professor Gunderman’s critique invokes the specter of mechanical medicine: an “industrial assembly line approach to medicine” where the pursuit of efficient care utterly eclipses the human element, the “communication and relationships” that make the practice of medicine more than just an industry. Similar arguments can be and have been deployed against any resource-sensitive reform of medical practice, as the “death panels” debate from several years ago well illustrates.
While these kinds of human-relationship-based critiques of efforts to make medical care more efficient may be relevant in the context of more extreme proposals of medical rationing, they are misguided as applied to recommendations like those made in the IOM report.
Back in February, President Obama’s FY 2013 budget authorized $4.5 billion for the Food and Drug Administration (FDA), about $2 billion of which was to come from user fees, the fees paid by regulated industry under a variety of schemes including the Prescription Drug User Fee Act (PDUFA), the Medical Device User Fee Act (MDUFA) and newly-created programs for generic drugs and biosimilars. As of today, FDA’s ability to collect and use these fees is in question, endangering vital agency activities including drug and device premarket review.
The threat to FDA user fees crystallized on September 14, when the Office of Management and Budget released its Report Pursuant to the Sequestration Transparency Act of 2012, explaining what may happen if Congress fails to reach an accord on the federal budget as required by the Budget Control Act of 2011 (BCA). Such a failure would trigger sequestration, resulting in an 8.2% reduction in non-exempt, non-defense discretionary funding. On pages 79-80, the report indicates that $3.873 billion of FDA’s budget for 2013 is considered eligible for sequestration. According to analysis by the Alliance for a Stronger FDA, this indicates that major user fee programs have been included as sequestration-eligible funds. In total, the OMB report projects that sequestration would reduce the FDA budget by around $318 million.
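For readers who want to see where the $318 million figure comes from, the arithmetic is simple: it is roughly the 8.2% sequestration rate applied to the $3.873 billion sequestration-eligible base. A quick back-of-the-envelope sketch (the figures are those cited above; the assumption that the cut applies uniformly across the eligible base is mine, not OMB’s):

```python
# Rough check of the sequestration figures cited above.
# Assumption (mine): the 8.2% cut applies uniformly to the eligible base.
eligible_budget = 3.873e9  # FDA funds considered sequestration-eligible (OMB report, pp. 79-80)
cut_rate = 0.082           # 8.2% reduction to non-exempt, non-defense discretionary funding

reduction = eligible_budget * cut_rate
print(f"Estimated FDA reduction: ${reduction / 1e6:.0f} million")  # prints "Estimated FDA reduction: $318 million"
```

The product comes out to about $317.6 million, which matches the roughly $318 million reduction reported.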