Over the course of the last fifteen or so years, the belief that “de-identification” of personally identifiable information preserves the anonymity of those individuals has been repeatedly called into question by scholars and journalists. It would be difficult to overstate the importance, for privacy law and policy, of the early work of “re-identification scholars,” as I’ll call them. In the mid-1990s, the Massachusetts Group Insurance Commission (GIC) released data on individual hospital visits by state employees in order to aid important research. As Massachusetts Governor Bill Weld assured employees, their data had been “anonymized,” with all obvious identifiers, such as name, address, and Social Security number, removed. But Latanya Sweeney, then an MIT graduate student, wasn’t buying it. When Weld collapsed at a local event in 1996 and was admitted to the hospital, she set out to show that she could re-identify his GIC entry. For twenty dollars, she purchased the full roll of Cambridge voter-registration records, and by linking the two data sets, each innocuous enough on its own, she did just that. As privacy law scholar Paul Ohm put it, “In a theatrical flourish, Dr. Sweeney sent the Governor’s health records (which included diagnoses and prescriptions) to his office.”
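For readers curious about the mechanics, Sweeney’s attack was, at bottom, nothing more exotic than a database join on shared quasi-identifiers (she famously relied on ZIP code, birth date, and sex). The sketch below is a purely illustrative toy: the records, names, and column labels are all fabricated, not the actual GIC or voter data. But it shows why stripping names alone does not anonymize a dataset:

```python
# Toy illustration of a quasi-identifier linkage attack.
# All records below are fabricated; real attacks of this kind have
# linked datasets on fields such as ZIP code, birth date, and sex.
import pandas as pd

# "De-identified" hospital data: names and SSNs removed,
# but quasi-identifiers retained.
hospital = pd.DataFrame([
    {"zip": "02138", "birth_date": "1950-01-01", "sex": "M", "diagnosis": "syncope"},
    {"zip": "02139", "birth_date": "1985-06-15", "sex": "F", "diagnosis": "fracture"},
])

# Public voter-registration roll: names included, same quasi-identifiers.
voters = pd.DataFrame([
    {"name": "Pat Example", "zip": "02138", "birth_date": "1950-01-01", "sex": "M"},
    {"name": "Jo Placeholder", "zip": "02139", "birth_date": "1985-06-15", "sex": "F"},
])

# The "attack" is an ordinary inner join on the shared fields.
reidentified = hospital.merge(voters, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

When only one person in a ZIP code shares a given birth date and sex, the join is unique and the “anonymized” record is re-identified; by Sweeney’s later estimates, a large majority of the U.S. population is unique on just that combination of fields.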
Sweeney’s demonstration led to important changes in privacy law, especially under HIPAA. But that demonstration was just the beginning. In 2006, the New York Times was able to re-identify one individual (and only one individual) in a publicly available research dataset of the three-month AOL search history of over 600,000 users. The Times demonstration led to a class-action lawsuit (which settled out of court), an FTC complaint, and soul-searching in Congress. That same year, Netflix began a three-year contest, offering a $1 million prize to whoever could most improve the algorithm by which the company predicts how much a particular user will enjoy a particular movie. To enable the contest, Netflix made publicly available a dataset of the movie ratings of 500,000 of its customers, whose names it replaced with numerical identifiers. In a 2008 paper, Arvind Narayanan, then a graduate student at UT-Austin, along with his advisor, showed that by linking the “anonymized” Netflix prize dataset to the Internet Movie Database (IMDb), in which viewers review movies, often under their own names, many Netflix users could be re-identified, revealing information suggestive of their political preferences and other potentially sensitive matters. (Remarkably, notwithstanding the re-identification demonstration, after awarding the prize in 2009 to a team from AT&T, Netflix announced plans in 2010 for a second contest, which it cancelled only after tussling with a class-action lawsuit (again, settled out of court) and the FTC.) Earlier this year, Yaniv Erlich and colleagues, using a novel technique involving surnames and the Y chromosome, re-identified five men who had participated both in the 1000 Genomes Project — an international consortium to place, in an open online database, the sequenced genomes of (as it turns out) 2,500 “unidentified” people — and in a study of Mormon families in Utah.
Most recently, Sweeney and colleagues used the same technique she used to re-identify Weld in 1997 to re-identify participants in Harvard’s Personal Genome Project (PGP), who are warned of this risk. As a scholar of research ethics and regulation — and also a PGP participant — I found this latest demonstration particularly interesting. Although much has been said about the appropriate legal and policy responses to these demonstrations (my own thoughts are here), there has been very little discussion of the legal and ethical aspects of the demonstrations themselves. As a modest step toward filling that gap, I’m pleased to announce an online symposium, to take place here at Bill of Health the week of May 20th, that will address both the scientific and policy value of these demonstrations and the legal and ethical issues they raise. Participants fill diverse stakeholder roles (data holder, data provider — i.e., research participant — re-identification researcher, privacy scholar, research ethicist) and will, I expect, bring a range of perspectives to these questions.
I hope readers will join us on May 20.
[Posted on behalf of Elizabeth Pike and Kayte Spector-Bagdady from the Presidential Commission for the Study of Bioethical Issues - and cross-posted here.]
In the most recent issue of the Hastings Center Report, Drs. Amy Gutmann and James Wagner of the Presidential Commission for the Study of Bioethical Issues (the Bioethics Commission) contribute to the lively debate surrounding the identifiability of genetic data. In Found Your DNA on the Web: Reconciling Privacy and Progress, Gutmann and Wagner, Chair and Vice-chair respectively, argue that the paradigm of identifiability has become less relevant to individual privacy protections than restrictions on access and use.
In their commentary, Gutmann and Wagner continue the public deliberation of the Bioethics Commission’s report, Privacy and Progress in Whole Genome Sequencing, in which the Bioethics Commission took a forward-looking approach to the privacy concerns raised by whole genome sequencing—issues that have come to the forefront of this important science.
Under current law, health information that is deidentified—information for which there is “no reasonable basis” to believe it can identify an individual or that has been stripped of traditional identifiers—is afforded different legal protections than identifiable health information. However, whole genome sequence data are unique to only one person, making them more vulnerable to reidentification.
Recent articles have cast doubt on the extent to which whole genome sequence data can be deidentified. For example, in Identifying Personal Genomes by Surname Inference, published in Science in January, Melissa Gymrek et al. successfully uncovered the full identities of 50 individuals.
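The intuition behind the surname-inference technique is worth spelling out: Y-chromosome markers (short tandem repeats, or Y-STRs) are passed from father to son, much as surnames are, so an “anonymous” male genome can be queried against public genetic-genealogy databases that pair Y-STR profiles with surnames, and a surname guess, combined with demographic metadata of the sort that sometimes accompanies research genomes (such as age and state of residence), can shrink the candidate pool to a handful of people. The sketch below is a toy with entirely fabricated profiles, names, and databases, and it is not the pipeline Gymrek and colleagues actually built, but it captures the logic:

```python
# Toy illustration of surname inference from Y-chromosome markers.
# Every profile, surname, and record below is fabricated.

# A stand-in for a public genetic-genealogy database mapping
# Y-STR profiles (here, just three repeat counts) to surnames.
ystr_to_surnames = {
    (13, 24, 11): ["Example"],
    (14, 23, 10): ["Placeholder"],
}

# A stand-in for a public directory searched once a surname is inferred.
directory = [
    {"name": "Alex Example", "surname": "Example", "age": 57, "state": "UT"},
    {"name": "Sam Placeholder", "surname": "Placeholder", "age": 31, "state": "CA"},
]

def infer_candidates(ystr_profile, age, state):
    """Infer surnames from the Y-STR profile, then filter the directory
    by the age and state published alongside the 'anonymous' genome."""
    surnames = ystr_to_surnames.get(ystr_profile, [])
    return [p for p in directory
            if p["surname"] in surnames and p["age"] == age and p["state"] == state]

print(infer_candidates((13, 24, 11), age=57, state="UT"))
# A single match means the "unidentified" genome is, in effect, re-identified.
```

The actual analysis was far more sophisticated, involving many markers and error-tolerant matching, but the privacy lesson is the same: data that identify no one in isolation can identify someone once combined with other public sources.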
Day two of PFC’s FDA in the 21st Century conference begins with a morning plenary by the very fabulous Alta Charo, of the University of Wisconsin Law School, who is speaking on “Integrating Speed and Safety.”
Today Alta is presenting what she calls “more of an initial idea than an actual proposal,” and she notes that she’s very interested to hear responses to it, so comment away or contact her offline. She wants to integrate into the usual and longstanding “FDA speed versus safety” debate some concerns that should be of interest to industry. “In other words,” she said, “I’d like to be nice to the drug people.”
Alta begins with a brief history of the speed versus safety debate, which turns out to be quite cyclical. Before 1906, she asks us to recall, we had true snake oil: products with high toxicity and little or no efficacy. These products were often perceived as effective anyway because they contained alcohol or other drugs, so they at least made you feel better, but of course that is part of what made them so dangerous, especially for children.
And so with the Federal Food and Drugs Act of 1906, we get post-market remedies for misbranding, although they require proof of intent. Then, in 1937, over 100 children die from elixir of sulfanilamide, and the following year we get the Food, Drug, and Cosmetic Act. But the FDCA targets only safety. (Although, as Alta rightly notes, it’s hard to see how regulators were truly looking only at safety and not also at some form of efficacy, since there is no such thing as safety in the abstract, only safety relative to the purpose for which someone is taking the drug.)
[This is off-the-cuff live blogging, so apologies for any errors, typos, etc.]
Lewis Grossman, FDA in the Age of the Empowered Consumer
Begins his analysis by comparing a hypothetical consumer in 1960 with one today.
The 1960 consumer was passive. Today’s consumer is active, with more unmediated choice and more direct citizen involvement.
Why the change? The 1970s were the decade of advocacy, culminating in the 1972 Patient’s Bill of Rights from the AMA. The central theme was informed consent and thus complete information from the physician.
1998 saw the disruption brought by WebMD, and now things are even more disrupted by web search technology, which is how most patients get their info.
Food: 1966, recipe standards. Relatively little variety and consumer choice. Very little info on nutrition (“batman white bread”). The turning point was the 1969 White House conference, which led to more choice and more info.
Health claims as the portal through which 1st Amendment law entered FDA law. The image of the intelligent consumer who need not be shielded from information.
Changes in the standard by which FDA decided whether something was misleading. Until 2002 it was unclear whether the reasonable or the gullible consumer standard applied. In 2002, for food, FDA chose the reasonable consumer standard.
Liberals and conservatives got scrambled on these matters in interesting ways.
Also a revolution in advertising, leading to a revolution in the patient’s relationship to his or her drugs.
The New England Journal of Medicine has two new commentaries out on the SUPPORT study, arguing that OHRP has things all wrong – in a dangerous way.
From the editors:
“[OHRP's] response is disappointing, because it does not take into account either the extent of clinical equipoise at the time the study was initiated and conducted or that the consent form, when viewed in its entirety, addressed the prevalent knowledge fairly and reasonably. At the time, as explained in the principal investigator’s response to the allegations and in a related letter to the editor in the Journal, there was no evidence to suggest an increased risk of death with oxygen levels in the lower end of a range viewed by experts as acceptable, and thus there was not a failure on the part of investigators to obtain appropriately informed consent from parents of participating infants. Through hindsight (and essentially faulting investigators for not informing parents up front of a risk later uncovered by the trial itself), the OHRP investigation has had the effect of damaging the reputation of the investigators and, even worse, casting a pall over the conduct of clinical research to answer important questions in daily practice. . . . The OHRP has a duty to investigate questions of research impropriety, but we strongly disagree with their determination of inadequate informed consent in this case.”
And from Art Caplan and David Magnus:
“With regard to SUPPORT, the OHRP is asking that research be described as riskier than it really is and is suggesting that the parents were duped into enrolling their frail infants in dangerous research. Not only is that not true, but it also poses substantial risk to the conduct of valuable comparative effectiveness research both for premature infants and for the general public who continue to face too many treatments where uncertainty prevails about what is best.”
There is also a letter to the editor from the SUPPORT study group themselves.
What do you think – is this OHRP’s latest version of its checklist debacle, or are they right here?
On Monday, the World Medical Association opened a 2-month public consultation on proposed revisions to the Declaration of Helsinki. The Declaration was most recently revised in 2008, and according to the WMA, the current round of proposed changes is intended “to provide for more protection for vulnerable groups and all participants by including the issue of compensation, more precise and specific requirements for post-study arrangements and a more systematic approach to the use of placebos.”
You can see a side-by-side analysis of the proposed revisions and explanatory comments here. We’ll see what happens down the road, but a few things are worth noting:
- The working group responsible for the revisions explicitly acknowledges that it cannot literally be true that the well-being of the individual research subjects must take precedence over all other interests (new paragraph 8/old para. 6). Nonetheless, it retains that language for aspirational purposes – a strange choice, IMHO, which could conceivably lead to less respect for the Declaration as a whole.
- The revised Declaration would (in line with almost every other ethical body to consider the issue) add a new paragraph indicating that “Adequate compensation and treatment for subjects who are harmed as a result of participating in the research must be ensured” (new para. 15). The current version (old para. 14) states only that the protocol should include “provisions for treating and/or compensating subjects who are harmed as a consequence of participation in the research study.” The nature of the Declaration is such that it doesn’t provide much detail, but even this revised statement is a step in the right direction. Will US regulators ever take the hint?
- The revisions would take a harder stance toward research involving disadvantaged or vulnerable populations, permitting research with these groups only when it could not be carried out in a non-vulnerable population (new para. 20/old para. 17). I understand the sentiment…really I do. But this approach seems to unduly discount the real benefits that accrue to research participants and communities just by virtue of having the research done there. And if research is responsive to population/community needs and priorities AND the population/community stands to benefit from the research – two standards that remain in the revised Declaration – why do we need the third criterion that research couldn’t be conducted in an alternative population?
By now, most of you have heard about the controversial study that sought to evaluate how much oxygen to give premature newborns to preserve both their lives and their sight. Below, Laura Stark lays out some of the key details about the study and OHRP’s response, and concludes that part of the problem may have been a result of the difficulties associated with approving multi-site research.
Maybe so, but let me offer a more fundamental challenge: perhaps IRBs are just ineffective – or not as effective as we hope they would be – at protecting human subjects. In retrospect, it looks like all 23 IRBs that reviewed the study, all of which were applying the same regulatory standards, failed to do what OHRP, many news outlets, and (as awareness grows) much of the public think they ought to have done to protect the babies and families involved in this study. How could they all have gotten it wrong? Are the regulations insufficient? Are the procedures insufficient? Is it all just a matter of interpretation?
These questions lead to another fundamental issue: the lack of empirical evidence on IRB effectiveness. We have data on whether IRBs follow the regulations, data on adverse events, data on OHRP warning letters, data on IRB-imposed research burdens and delays – but these all nibble around the edges of the real questions: what are IRBs supposed to be doing, are they doing it well, and how would we know? The counterfactual – a world without IRB review – is pretty tough to study, but I’m working with a group of colleagues at the Petrie-Flom Center and elsewhere to think through some empirical methods to get at precisely these issues. And we’d love to hear your thoughts!
Finally, as a side note, one point that seems to be getting lost in coverage of this preemie story is that although there seem to have been some major problems with the consent process, the study question itself was a very important one to ask.
Institutional Review Boards are making top news at outlets such as the New York Times as a research debacle unfolds. I looked through the documents that are publicly available to figure out what happened and what to expect.
Researchers at 22 universities or hospitals in the US enrolled premature babies in a randomized controlled trial between 2004 and 2009. This was the second part of a broader study, but the first part of the study “raised no concerns” according to the US Office of Human Research Protections on page 2 of its determination letter to the lead institution, University of Alabama-Birmingham. OHRP is the federal agency in charge of enforcing human-subjects regulation.
For the second part of the study, though, OHRP found that all 23 IRBs that approved the study (at 22 research sites) violated federal regulations: IRBs should have made researchers tell the parents that they knew their babies would be at higher risk of death, neurological damage, or blindness if they enrolled in the study (pages 2 and 10 of the UAB letter). OHRP has only posted a determination letter for UAB at this point, but it explains that, at all of the sites, the agency found consent-document violations “similar to those described” in the letter to UAB. The UAB IRB is in especially hot water because it appears to have been the first to approve the 2.5-page template consent form, which the other institutions used (page 5). If you read the last page of UAB’s letter, you can make a good guess at who may officially be getting bad news from OHRP soon.
Working in private, the National Academy of Sciences’ panel on human-subjects regulations in social-behavioral sciences met this weekend to draft a final report. On Friday, the panel had wrapped up its public “Workshop on Proposed Revisions to the Common Rule in Relation to Behavioral and Social Sciences.” The workshop aimed to critique OHRP’s proposed revisions to federal human-subjects regulations (known as the Common Rule), rather than to critique the regulations directly.
Here are a few take-away points that the National Academy panel members said they took from the public workshop, which I attended:
- LOW-RISK: It’s essential to change regulations for lower-risk research, but the ANPRM does not currently set out a good way to do this. Few participants seemed keen on the new category of “excused,” nor did they like the current use of “exempt.” The key question to my mind is, How much autonomy do the panelists think should be handed over to scholar-investigators and taken away from IRBs? Speaker Lois Brako advocated requiring everyone to register their studies with their institutions. Other speakers (Brian Mustanski, Rena Lederman) suggested researchers should be given leeway to interpret abstract terms like “risk” and key moments such as when a study begins. Do panelists agree that scholar-investigators are trustworthy and knowledgeable enough to interpret regulations?
- INTERNATIONAL: The Common Rule gives little attention to research outside the USA, and OHRP’s proposed revisions do not address this dangerous and retrograde gap. Pearl O’Rourke of Partners HealthCare and Thomas Coates of UCLA usefully emphasized this important point and showed what is at stake. To my mind, the question for many researchers will be, How should cross-national differences—in institutions’ resources, in study populations—be taken into account in the regulations? Medical anthropologists, for example, are in the midst of a raging debate over this issue. The traditional view has been that we should respect local differences, and this was the original point of requiring IRBs to account for “community attitudes,” which has morphed into a big problem for multisite studies in the present day. The avant-garde in medical anthropology suggests that such “ethical variability” is not just inhumane but also indulges a western insistence on treating some people as “others” rather than as us—whether in the USA or abroad—which happens to be very convenient for drug developers. In my own research, IRB members also faced the more routine question of whether “community” meant a study population, local residents of a region, or something else altogether. The panel may not have time to consider whether it makes sense to clarify what “community” means and, more broadly, who gets to speak on behalf of a “community” regarding its attitudes.
- PRIVACY: We have to come up with a system for reviewing social-behavioral research that is either more flexible or more refined. There is a wide range of appropriate protections, but they can quickly seem inappropriate if applied to some studies. Comparing a few of the presentations makes this point. George Alter explained the rigorous and necessary privacy protection plan for the big data sets and collaborative networks involved in the University of Michigan’s ICPSR. On the flip side, Brian Mustanski and Rena Lederman described the overweening attention to the so-called risks in their studies, which involve first-hand interviews and observations.
- EVIDENCE: We need more data on IRB outcomes. It is apparent that the data exist—Lois Brako’s talk, for example, documented her team’s impressive overhaul of the IRB at the University of Michigan, which was dysfunctional only a few years ago. The data need to be expanded, analyzed and shared—and supported for the long term. Who will have the money or time for that? That remains to be seen, but either way I will be curious to see the effects of the workshop buzz word: “evidence-based” decision-making. Although panelists saw value in case studies, it would be easiest for them and for policymakers to prioritize problems that can be documented with statistics rather than stories. I wonder, How might this skew the problems that are identified and the people included in discussions?
Earlier this month, the American Association of University Professors (AAUP) recommended that researchers should be trusted with the ability to decide whether individual studies involving human subjects should be exempt from regulation. The AAUP’s report, which was prepared by a subcommittee of the Association’s Committee on Academic Freedom and Tenure, proposes that minimal risk research should be exempt from the human research protection regulations and that faculty ought to be given the ability to determine when such an exemption may apply to their own projects.
Specifically, the report states, “Research on autonomous adults should be exempt from IRB approval (straightforwardly exempt, with no provisos and no requirement of IRB approval of the exemption) if its methodology either (a) imposes no more than minimal risk of harm on its subjects, or (b) consists entirely in speech or writing, freely engaged in, between subject and researcher.”
These recommendations, designed to address long-standing concerns by social scientists about bureaucratic intrusions into their work, are misguided and could result in real harm to research subjects.
Today and tomorrow, the National Academy of Sciences is hosting a workshop on revisions to the human-subjects regulations (the “Common Rule”), especially for rules on social and behavioral research. The workshop is being simulcast, and viewers can send in questions. Join us!
The most provocative presentation this morning, from my perch in the front row, was from Brian Mustanski, who studies adolescent health and risk behaviors–especially same-sex experiences. It’s an important topic to study because of the risk of HIV/AIDS transmission, among other things. But it’s tough for investigators to conduct studies on sex because the topic worries Institutional Review Boards (or researchers believe the topic will worry their IRBs). Sociologist Janice Irvine makes a similar argument in her survey of sex researchers.
Do IRBs need to be so worried? Mustanski and his colleagues asked the adolescents that they studied how comfortable the kids felt answering their sex survey. Around 70 percent felt either “comfortable” or “very comfortable” answering the sex questions–the implication being that it was silly for IRBs to think the questions posed more than a minimal risk. But his data also showed that 3 percent of the respondents felt “very uncomfortable.” He did not point out this finding, and so I asked Dr. Richard Campbell, another presenter, to weigh in on whether he would consider 3 percent to constitute a “large” or “likely” risk. Earlier, Dr. Campbell had given a conceptual talk arguing that IRBs conflate the magnitude of risk with the likelihood of risk to participants. In answer to my question, Campbell said that making 2-4 percent of adolescents “very uncomfortable” would not constitute a large or likely risk, and so the research should go forward.
I imagine that IRB members of a more conservative bent would disagree–and this is the crux of the problem. In considering how to revise the human-subjects regulations, would it be more helpful to make the regulations more specific, for example by setting quantitative thresholds and standards that everyone would have to follow? Or would it be best to make the regulations more flexible? The regulations already give IRBs more discretion than they use; IRBs don’t use that flexibility because they are always concerned about institutional liability. For IRBs, a conversation about protecting human subjects from harm is simultaneously a conversation about protecting the institution from legal harm. IRBs would read surveys like Mustanski’s by focusing on the few people who were uncomfortable rather than the majority who were entirely comfortable. Why? Because it only takes one lawsuit.
Is this regulatory contradiction too big for NAS? The debate in Washington continues.
A top-level commission has just released a new report on the morality of studying the safety of an anthrax vaccine in children, with an eye toward treating kids in the event of a terror attack.
The report, issued Tuesday by the Presidential Commission for the Study of Bioethical Issues, is quite thoughtful. It concludes that no testing should be considered unless the risk to kids is minimal. But it also represents a study of an experiment that has no chance of happening — ever. The commission has wasted its time. There is not a chance that a sufficient number of American parents are going to sign up their kids for the safety testing of an anthrax antidote.
Issues and Case Studies in Clinical Trial Data Sharing: Lessons and Solutions
May 17, 2013, 8:00AM-5:00PM
Harvard Law School, Wasserstein Hall, Milstein West A (2nd Floor)
1585 Massachusetts Ave., Cambridge, MA
Our current agenda/objectives are below the fold, and will be updated with additional detail shortly. Please make sure to register as space is limited!
The principle of justice articulated in The Belmont Report requires equitable selection of human research subjects. Equitable in this context means that the risks and benefits of the study are distributed fairly. Fairness has two components: 1) avoiding exploitation of the vulnerable (e.g., preying upon a poor, uneducated population) and 2) avoiding the unjustified exclusion of any population (whether out of bigotry, laziness, or convenience).
Recruitment strategies invariably shape the selection of research subjects and the extent to which a pool of participants really represents a cross-section of society. Institutional Review Boards (IRBs) are charged with evaluating whether study recruitment plans and materials used to obtain informed consent are easily understood and free of misleading information. This is relatively straightforward when researchers, IRB members, and study subjects all speak the same language. But when studies are done in geographical areas that include numerous cultural and language communities, it can be quite tricky.
One of the barriers that prevents people from enrolling in (or even knowing about) studies is a lack of awareness and planning by researchers to address language differences. The human research protection regulations at 45 CFR Part 46.116 require that informed consent information must be provided to research participants (or their representatives) in language understandable to them. IRBs are supposed to be vigilant about this and require investigators to obtain translated Informed Consent Documents (ICDs) for use with non-English speaking research subjects. But researchers commonly balk at this expectation, saying it’s unreasonable. (A disproportionate number of objections have been raised to me thusly, “And what am I supposed to do if someone shows up speaking only Swahili?!”)
Last week, the Indian government issued revised rules governing “compensation in case of injury or death during clinical trial.” You’ve really got to read the whole thing, but some of the provisions are pretty remarkable:
- “In the case of an injury occurring to the clinical trial subject, he or she shall be given free medical management as long as required.” Note that this doesn’t say anything about the injury being study-related.
- If an injury is related, then the subject is also entitled to financial compensation above any medical expenses.
- If the subject dies as a result of clinical trial participation, his or her “nominees” would be entitled to financial compensation.
- Injury or death will be considered related to trial participation in a variety of usual circumstances, including adverse effects of the investigational product and protocol violation or negligence. But here’s the kicker: injury or death will be deemed trial-related, and therefore eligible for care/compensation, if it results from “failure of investigational product to provide intended therapeutic effect” or “use of placebo in a placebo-controlled trial”. Read that again – if an investigational product doesn’t work, the sponsor will be liable for free medical care and further financial compensation.
As most readers are probably aware, the past few years have seen considerable media and clinical interest in chronic traumatic encephalopathy (CTE), a progressive, neurodegenerative condition linked to, and thought to result from, concussions, blasts, and other forms of brain injury (including, importantly, repeated but milder sub-concussion-level injuries) that can lead to a variety of mood and cognitive disorders, including depression, suicidality, memory loss, dementia, confusion, and aggression. Once thought mostly to afflict only boxers, CTE has more recently been acknowledged to affect a potentially much larger population, including professional and amateur contact sports players and military personnel.
CTE is diagnosed by the deterioration of brain tissue and tell-tale patterns of accumulation of the protein tau inside the brain. Currently, CTE can be diagnosed only posthumously, by staining the brain tissue to reveal its concentrations and distributions of tau. According to Wikipedia, as of December 2012, some thirty-three former NFL players have been found, posthumously, to have suffered from CTE. Non-professional football players are also at risk; in 2010, 17-year-old high school football player Nathan Styles became the youngest person to be posthumously diagnosed with CTE, followed closely by 21-year-old University of Pennsylvania junior lineman Owen Thomas. Hundreds of active and retired professional athletes have directed that their brains be donated to CTE research upon their deaths. More than one of these players died by their own hands, including Thomas, Atlanta Falcons safety Ray Easterling, Chicago Bears defensive back Dave Duerson, and, most recently, retired NFL linebacker Junior Seau. In February 2011, Duerson shot himself in the chest, shortly after he texted loved ones that he wanted his brain donated to CTE research. In May 2012, Seau, too, shot himself in the chest, but left no note. His family decided to donate his brain to CTE research in order “to help other individuals down the road.” Earlier this month, the pathology report revealed that Seau had indeed suffered from CTE. Some 4,000 former NFL players have reportedly joined numerous lawsuits against the NFL for failure to protect players from concussions. Seau’s family, following similar action by Duerson’s estate, recently filed a wrongful death suit against both the NFL and the maker of Seau’s helmet.
The fact that CTE cannot currently be diagnosed until after death makes predicting and managing symptoms and, hence, studying treatments for and preventions of CTE, extremely difficult. Earlier this month, retired NFL quarterback Bernie Kosar, who sustained numerous concussions during his twelve-year professional career — and was friends with both Duerson and Seau — revealed both that he, too, has suffered from various debilitating symptoms consistent with CTE (but also, importantly, with any number of other conditions) and also that he believes that many of these symptoms have been alleviated by experimental (and proprietary) treatment provided by a Florida physician involving IV therapies and supplements designed to improve blood flow to the brain. If we could diagnose CTE in living individuals, then they could use that information to make decisions about how to live their lives going forward (e.g., early retirement from contact sports to prevent further damage), and researchers could learn more about who is most at risk for CTE and whether there are treatments, such as the one Kosar attests to, that might (or might not) prevent or ameliorate it.
Last week, UCLA researchers reported that they may have discovered just such a method of in vivo diagnosis of CTE. In their very small study, five research participants — all retired NFL players — were recruited “through organizational contacts” “because of a history of cognitive or mood symptoms” consistent with mild cognitive impairment (MCI). Participants were injected with a novel positron emission tomography (PET) imaging agent that, the investigators believe, uniquely binds to tau. All five participants revealed “significantly higher” concentrations of the agent compared to controls in several brain regions. If the agent really does bind to tau, and if the distributions of tau observed in these participants’ PET scans really are consistent with the distributions of tau seen in the brains of those who have been posthumously-diagnosed CTE, then these participants may also have CTE.
That is, of course, a lot of “ifs.” The well-known pseudonymous neuroscience blogger Neurocritic recently asked me about the ethics of this study. He then followed up with his own posts laying out his concerns about both the ethics and the science of the study. Neurocritic has two primary concerns about the ethics. First, what are the ethics of telling research participants that they may be showing signs of CTE based on preliminary findings that have not been replicated by other researchers, much less endorsed by any regulatory or professional bodies? Second, what are the ethics of publishing research results that very likely make participants identifiable? I’ll take these questions in order.
Questionable baldness remedies have been peddled since the beginning of medicine. According to Pliny (23-79 A.D.), ashes of seahorse could cure baldness. Almost 2000 years later, the British Medical Association warned the public of the increasing “number of preparations put forward for the cure of baldness,” particularly those which “are not applied locally but taken internally.” The purported active ingredient? “[H]aemoglobin.” (see Secret Remedies (1909), page 114).
While the medicinal use of a seahorse or dried blood matter may sound fanciful to modern ears, one has to wonder whether today’s public is any less credulous: worldwide, consumers have spent over $400 million per year on a modern baldness remedy known by the trade name Propecia (finasteride). Has science finally triumphed over a medical condition that has persisted through millennia? Today’s consumers might rationally believe that it has, given that Propecia is FDA-approved for the treatment of alopecia (baldness). FDA-approved remedies must, according to federal law (21 U.S.C. § 355(d)), prove their efficacy in well-controlled clinical investigations.
Yet one need only walk through a crowded street to see that, if a baldness cure has truly arrived, a surprising number of people have not availed themselves of it. Is Propecia, then, not effective? Let us take a look at the official data.