Government Shutdown: Why the Pipeline Matters

by Suzanne M. Rivera, Ph.D.

Much attention has been paid to the government shutdown that started last week.  Many of us heard heart-tugging stories on public radio about the NIH closing down new subject enrollment at its “House of Hope,” the clinical trial hospital on the NIH main campus.  These stories gave many people the impression that clinical research halted around the country when the federal government failed to approve a Continuing Resolution.

The reality is both less dramatic in the short term and more concerning for the long term.  For the most part, federally funded projects at university campuses and hospitals are continuing as usual (or, the new “usual,” as reduced by sequestration), because the grants already awarded are like I.O.U.s from the government.  By and large, university researchers will keep spending on their funded grants, knowing that reimbursement will come once the government re-opens for business. The universities and hospitals are, in a sense, acting like banks, lending the government money until those expenses are reimbursed.

Also, many clinical trials are funded by the pharmaceutical industry.  So it is not the case that hospitals are closing their doors to research en masse.  But a shutdown will have lasting and compounding effects on our science pipeline.  The U.S. federal government is the single largest funder of scientific research at American universities.  Each month, thousands of grant proposals are sent to the various federal funding agencies for consideration.  These in turn are filtered and assigned to peer review committees.  The whole process of review, scoring, and funding approval typically takes months, sometimes more than a year.

Under the terms of the shutdown, the staff who normally receive and triage these grant proposals are considered non-essential.  All but one of the federal grant on-line submission portals have been taken off-line.  So thousands of researchers who had been working for months to write grant proposals for funds needed to conduct the next generation of studies are now left wondering when it will be possible to submit for agency review.  Those studies hold the keys to future discoveries that could bring needed cures to the bedside, important products to the marketplace, and new jobs into the economy. Continue reading

OHRP Revises Guidance on Remuneration for Human Research Subjects

by Suzanne M. Rivera, Ph.D.

The Office of Human Research Protections (OHRP) has issued revised guidance about research subject compensation.  And, although it has not attracted a great deal of fanfare, it deserves attention because the new guidance offers greater flexibility to investigators and to the Institutional Review Boards (IRBs) charged with reviewing proposed human research studies.  Under its list of Frequently Asked Questions (FAQ) related to informed consent, there is a question (#7) that reads, “When does compensating subjects undermine informed consent or parental permission?”  (http://www.hhs.gov/ohrp/policy/consentfaqsmar2011.pdf).

Aside from the fact that it’s still a very leading question (asking “when does it?” implies that, in fact, it does…), the new answer provided by OHRP clarifies that compensation in and of itself is not necessarily coercive or a source of undue influence.  It says that remuneration to subjects may include compensation for risks associated with their participation in research and that compensation may be an acceptable motive for some individuals agreeing to participate in research.

That is a real paradigm shift. Continue reading

Academic Freedom and Responsibility

by Suzanne M. Rivera, Ph.D.

Earlier this month, the American Association of University Professors (AAUP) recommended that researchers be trusted to decide whether individual studies involving human subjects should be exempt from regulation.  The AAUP’s report, which was prepared by a subcommittee of the Association’s Committee on Academic Freedom and Tenure, proposes that minimal risk research should be exempt from the human research protection regulations and that faculty ought to be able to determine when such an exemption may apply to their own projects.

Specifically, the report states, “Research on autonomous adults should be exempt from IRB approval (straightforwardly exempt, with no provisos and no requirement of IRB approval of the exemption) if its methodology either (a) imposes no more than minimal risk of harm on its subjects, or (b) consists entirely in speech or writing, freely engaged in, between subject and researcher.”

These recommendations, designed to address long-standing concerns by social scientists about bureaucratic intrusions into their work, are misguided and could result in real harm to research subjects. Continue reading

You Talkin’ to Me?

by Suzanne M. Rivera, Ph.D.

The principle of justice articulated in The Belmont Report requires equitable selection of human research subjects.  Equitable in this context means that the risks and benefits of the study are distributed fairly.  Fairness has two components: 1) avoiding exploitation of the vulnerable (e.g., preying upon a poor, uneducated population) and 2) avoiding the unjustified exclusion of any population (whether out of bigotry, laziness, or convenience).

Recruitment strategies invariably shape the selection of research subjects and the extent to which a pool of participants really represents a cross-section of society.  Institutional Review Boards (IRBs) are charged with evaluating whether study recruitment plans and materials used to obtain informed consent are easily understood and free of misleading information.  This is relatively straightforward when researchers, IRB members, and study subjects all speak the same language.  But when studies are done in geographical areas that include numerous cultural and language communities, it can be quite tricky.

One of the barriers that prevents people from enrolling in (or even knowing about) studies is a lack of awareness and planning by researchers to address language differences.  The human research protection regulations at 45 CFR Part 46.116 require that informed consent information be provided to research participants (or their representatives) in language understandable to them.  IRBs are supposed to be vigilant about this and require investigators to obtain translated Informed Consent Documents (ICDs) for use with non-English speaking research subjects.  But researchers commonly balk at this expectation, saying it’s unreasonable.  (The objection is often put to me thusly: “And what am I supposed to do if someone shows up speaking only Swahili?!”) Continue reading

Humane Transport of Research Animals

by Suzanne M. Rivera, Ph.D.

For some time, animal rights activists in the US and abroad have been trying to pressure commercial airlines out of their long-standing practice of transporting research animals.  Last week, a coalition of more than 150 leading research organizations and institutions sent a letter to the CEOs of the targeted airlines, urging them to continue transporting animals needed for research purposes.  A copy of the coalition joint letter is available here.

The letter initiative was organized by the National Association for Biomedical Research (NABR), and had the strong support of the Association of American Medical Colleges (AAMC), the Council on Governmental Relations (COGR), the American Association for Laboratory Animal Science (AALAS), and Research!America, to name a few.

The basic thrust of the letter can be summarized by this passage: “Your company’s commitment to transporting laboratory animals is crucial to finding treatments and cures for diseases afflicting millions of people worldwide. We ask that you continue transporting research animals, allowing lifesaving research around the world to progress.” Continue reading

Accentuate the Negative

by Suzanne M. Rivera, Ph.D.

While attending the annual Advancing Ethical Research Conference of Public Responsibility in Medicine and Research (PRIM&R) last month in San Diego, I had the opportunity to hear a talk by Dr. John Ioannidis, in which he debunked commonly accepted scientific “truths.”  Calling upon his own work, which is focused on looking critically at published studies to examine the strength of their claims (see his heavily downloaded 2005 paper “Why Most Published Research Findings Are False”), Ioannidis raised important questions for those of us who think about research ethics, and who oversee and manage the research conducted at universities and scientific institutes across the country.

Ioannidis persuasively argued that our system for publishing only studies with statistically significant positive findings has resulted in a bizarre kind of reality where virtually no studies are ever reported that found “negative” results.  Negative results are suppressed because nobody is interested in publishing them.  Editors and reviewers have a major role in this problem; they choose not to publish studies that are not “sexy.”  This artificially inflates the proportion of observed “positive” results, and it discourages a scientist from even writing up negative findings, because she knows what it takes to get published.
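Ioannidis’s point can be checked with a back-of-the-envelope simulation.  The sketch below is my own illustration, not anything from his paper or talk; the assumed base rate of true hypotheses (10%), power (80%), and significance threshold (5%) are hypothetical but conventional choices.

```python
# Illustrative simulation of publication bias; all parameters are assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

n_studies = 100_000   # hypothetical studies conducted
prior_true = 0.10     # assume 10% of tested hypotheses are actually true
alpha = 0.05          # significance threshold
power = 0.80          # chance a real effect comes out statistically significant

is_true = rng.random(n_studies) < prior_true
# A study reaches "significance" with probability `power` when its effect
# is real, and with probability `alpha` (a false positive) when it is not.
significant = np.where(is_true,
                       rng.random(n_studies) < power,
                       rng.random(n_studies) < alpha)

# Publication filter: only the significant ("positive") results see print.
share_real = is_true[significant].mean()
print(f"Published positives reflecting real effects: {share_real:.0%}")
```

Under these assumptions, only about 64% of the published positives reflect real effects (0.80 × 0.10 true positives against 0.05 × 0.90 false ones), even though every individual study honestly used a 5% threshold.  The filter, not fraud, does the distorting.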

But isn’t there an ethical obligation to publish so-called negative results?  In human research, people give their time and undergo risks for the conduct of a study.  Their sacrifices are not meaningful if the results are never shared.  Furthermore, negative results tell us something important.  And if they are not published, some other research team somewhere else may unknowingly repeat a study, putting a new batch of subjects at risk, to investigate a question for which the answer is already known.  Finally, to the extent a study is conducted using taxpayer dollars, the data derived should be considered community property, and there are opportunity costs associated with unnecessarily repetitive work.  Continue reading

Film Review: How to Survive a Plague

by Suzanne M. Rivera, Ph.D.

How to Survive a Plague is a moving chronicle of the onset of the AIDS epidemic as seen through the lens of the activists who mobilized to identify and make available the effective treatments we have today.  Beginning at the start of the epidemic, when little was known about HIV and even hospitals were refusing to treat AIDS patients out of fear of contagion, the film follows leaders of the groups ACT UP and TAG.  Using archival footage interspersed with present-day interviews, it tells the story of how patients and concerned allies pushed the research community to find a way to treat what was then a lethal disease.

The film’s portrayal of the U.S. Government, specifically then-President George H. W. Bush and high-ranking officials in the Food and Drug Administration, is damning.  As hundreds of thousands of people became infected with HIV and the death toll rose, prejudice against marginalized groups (especially gay men and IV drug users) contributed to a lack of urgency about the need to learn how to stop the spread of the virus and how to treat the opportunistic infections that killed people with full-blown AIDS.  In contrast, footage of demonstrations, meetings, and conferences highlights the courage of the activists who risked and endured discrimination, beatings, and arrests to bring attention to the need for more research.

But How to Survive a Plague is more than a documentary about the power people have to make change when they join together to demand action.  It also is a provocative commentary about unintended consequences.  I saw the film while attending the annual Advancing Ethical Research Conference of Public Responsibility in Medicine and Research (PRIM&R).  In that context, I was especially interested in the way How to Survive a Plague highlights an interesting ethical issue in clinical research. Namely, the problem of protecting people so much from research risks that the protection itself causes harm. Continue reading

What’s In a Name?

by Suzanne M. Rivera, Ph.D.

In regulatory and research ethics circles, it is fairly common to hear people say they prefer the term “research participant” to “research subject” because they feel it’s more respectful.  They think the word “subject” is demeaning.  I respectfully disagree.  I think it’s honest.

The federal agencies that oversee human research use both terms as though they were interchangeable.  The National Institutes of Health (NIH), for example, has a policy called “Required Education in the Protection of Research Participants,” which compels training for “individuals involved in the design and/or conduct of NIH funded human subjects research.”

Of course, some research subjects are willing and active participants, but many are not.  The truth is that many people are studied without their consent or even knowledge.  In compliance with federal regulations, and under the watchful eye of ethics committees called Institutional Review Boards (IRBs), millions of medical records, biological specimens, and other sources of data (like court records, purchasing patterns, and web-browsing cookies) are mined by researchers every day.  You and I don’t participate in those studies.  They are done to us.  To the extent these studies are done with integrity, I don’t object.  But let’s not pretend we are participants. Continue reading

Are Human Research Participants Deserving of Research Animals’ Rights?

by Suzanne M. Rivera, Ph.D.

For years, mainstream and extremist organizations have waged campaigns against the use of animals.  While PETA successfully deploys propaganda featuring provocative models in sexually explicit positions to denounce the use of animals for food, clothing and experimentation, other groups, such as the Animal Liberation Brigade, engage in violent (some would say terroristic) actions to disrupt animal research and scare off scientists from lines of inquiry for which the use of animal models is the state of the art.

Part of the philosophy of the anti-animal research groups is a belief in moral equivalency among species.  PETA’s Ingrid Newkirk once famously said, “A rat is a pig is a dog is a boy.”  Does she propose we allow people to suffer with treatable diseases because non-animal models for testing have not yet been developed?  Apparently so.  Newkirk also has gone on the record to say, “Even if animal tests produced a cure for AIDS, we’d be against it.”  This view is out of step with the majority of Americans, who, according to the latest Gallup poll, support animal research.

Among those who regulate and support animal research, there is a very strong commitment to animal welfare.  The “animal welfare” perspective contrasts with the “animal rights” view.  The animal rightists want to end animal use, including research (and also eating meat, hunting, zoos, police dogs, and entertainment), because they see it as inherently indefensible.  Animal welfarists, on the other hand, believe animals can be used humanely, under strict rules that seek to prevent unnecessary pain and distress in research animals.  They acknowledge that the animals’ lives are worthy of respect, but do not ascribe the moral status of personhood to them.  The US government requires scientists to assume that anything that could cause pain or distress in a human also would be painful for an animal, and they are compelled to provide analgesia and anesthesia accordingly.

Continue reading

Should Researchers Have a Professional Code of Ethics?

by Suzanne M. Rivera, Ph.D.

I was giving a workshop presentation at the annual meeting of the National Council of University Research Administrators when my co-presenter raised an interesting idea. Tommy Coggins of the University of South Carolina was talking about the importance of integrity for preserving the public’s trust in the research enterprise, and he pointed out that, unlike physicians, attorneys, and accountants, researchers do not have a unifying professional code of ethics.  Instead, they are subject to a patchwork of regulations, policies, and laws, most of which were promulgated by grant funding agencies and therefore are enforceable only in cases where taxpayer dollars are involved.  Although discipline-specific societies, such as the American Psychological Association, have their own ethics codes, researchers as a profession are not asked to adhere to a shared set of standards for their conduct.

And it’s true— there is no unifying code that all researchers (spanning the range of disciplines from anthropology to zoology) must swear to uphold.  And maybe it’s not realistic to expect that people whose jobs entail such a variety of different activities (working with data sets, lasers, yeast, mice, human participants, super colliders, etc.) could find sufficient common ground on which to cobble together a code.  But I wonder if it’s worth a try.  Given the current atmosphere of distrust that has resulted in new rules for increased transparency and oversight of researchers’ financial interests, perhaps the time is right to think explicitly about ethical standards for research.  Not merely avoiding “FFP” misconduct (fabrication, falsification, and plagiarism), but an affirmative duty to behave with integrity.

Continue reading

Conflicting Interests in Research: Don’t Assume a Few Bad Apples Are Spoiling the Bunch

by Suzanne M. Rivera, Ph.D.

In August of 2011, the Public Health Service updated its rules to address the kind of financial conflicts of interest that can undermine (or appear to undermine) integrity in research.  The new rules, bearing the ungainly title “Responsibility of Applicants for Promoting Objectivity in Research for which Public Health Service Funding is Sought and Responsible Prospective Contractors,” were issued with a one-year implementation period to give universities and academic medical centers sufficient time to update their local policies and procedures for disclosure, review, and management (to the extent possible) of any conflicts their researchers might have between their significant personal financial interests and their academic and scholarly activities.

The rules were made significantly stricter because a few scoundrels (for examples, click here and here) have behaved in ways that undermined the public’s trust in scientists and physicians. By accepting hundreds of thousands, even millions, of dollars from private pharmaceutical companies and other for-profit entities while performing studies on drugs and devices manufactured by those same companies, a few bad apples have called into question the integrity of the whole research enterprise.  This is a tremendous shame.

Having more than one interest is not bad or wrong; it’s normal.  Everyone has an attachment to the things they value, and most people value more than one thing.  Professors value their research, but they also want accolades, promotion, academic freedom, good parking spots, and food on their tables.  Having multiple interests only becomes a problem when the potential for personal enrichment or glory causes someone (consciously or unconsciously) to behave without integrity and compromise the design, conduct, or reporting of their research. Continue reading

Fear of a Digital Planet

by Suzanne M. Rivera, Ph.D.

Federal regulations and ethical principles require that Institutional Review Boards (IRBs) consider the anticipated risks of a proposed human research study in light of any potential benefits (for subjects or others) before granting authorization for its performance.  This is required because, prior to the oversight required by regulation, unethical researchers exposed subjects to high degrees of risk without sufficient scientific and ethical justification.

Although the physical risks posed by clinical research are fairly well understood, so-called “informational risks”—risks of privacy breaches or violations of confidentiality— are the source of great confusion and controversy.  How do you quantify the harm that comes from a stolen, but encrypted, laptop full of study data?  Or the potential for embarrassment caused by observations of texted conversations held in a virtual chat room?

IRBs have for years considered the potential magnitude and likelihood of research risks in comparison to those activities and behaviors normally undertaken in regular, everyday life.  But everyday life in today’s digital world is very different from everyday life in 1981, when the regulations were implemented.  People share sonogram images on Facebook, replete with the kinds of information that would, in a research context, constitute a reportable breach under the Office for Civil Rights’ HIPAA Privacy Rule.  They also routinely allow their identities, locations, and other private information to be tracked, stored, and shared in exchange for “free” computer applications downloaded to smart phones, GPS devices, and tablet computers. Continue reading

Reality Check, Please!

by Suzanne M. Rivera, Ph.D.

By now, many people have seen a still photo or video footage of Rep. Paul Broun (R-Georgia), standing in front of a wall full of deer heads, proclaiming that evolution, embryology, and the Big Bang theory are “lies straight from the pit of hell.”  According to the Congressman, “it’s lies to try to keep me and all the folks who were taught that from understanding that they need a savior.”

Readers might shrug these statements off as merely absurd.  But Rep. Broun is a member of the U.S. House Committee on Science, Space, and Technology, and he’s running unopposed for re-election.

Broun is just one of the members of the House Committee on Science, Space, and Technology whose unusual views should give voters pause.  His colleagues on the Committee include Todd “Legitimate Rape” Akin (R-Missouri) and Randy Neugebauer (R-Texas), whose congressional record is most notable for introducing a resolution that “people in the United States should join together in prayer to humbly seek fair weather conditions.” Continue reading

What Is a (Big) Bird in the Hand Worth?

by Suzanne M. Rivera, Ph.D.

The Presidential debate on Wednesday was fascinating theater.  Much of the post-debate commentary in social media has focused on Mitt Romney’s threat to cut federal funding to PBS as a way of reducing the deficit (save Big Bird!) and President Obama’s unexpected restraint (including this piece of NSFW satire from The Onion).

I love Big Bird as much as the next child of the Sesame Street generation, but I’m even more concerned about something else.

Very little attention has been paid to the fact that neither candidate said much about science.  Because I was listening closely for any reference to research or innovation in medicine and healthcare, I can tell you this: the closest either candidate came to making a claim in this area was President Obama, who (I think) intended to say something like, ‘Cuts to basic science and research would be a mistake.’ Continue reading

Social Inequality in Clinical Research

by Suzanne M. Rivera, Ph.D.

For a variety of reasons, racial and ethnic minorities in the US do not participate in clinical research in numbers proportionate to their representation in the population.  Although legitimate mistrust by minorities of the healthcare system is one reason, institutional barriers and discrimination also contribute to the problem.  The equitable inclusion of minorities in research is important, both to ensure that they receive an equal share of the benefits of research and to ensure that they do not bear a disproportionate share of its burdens.

Under-representation is not just a question of fairness in the distribution of research risks.  It also creates burdens for minorities because it leads to poorer healthcare.  Since clinical trials provide extra consultation, more frequent monitoring, and access to state-of-the-art care, study participation can represent a significant advantage over standard medicine.  To the extent that participation in research may offer direct therapeutic value to study subjects, under-representation of minorities denies them, in a systematic way, the opportunity to benefit medically.

For many years, our system for drug development has operated under the assumption that we can test materials in one kind of prototypical human body and then extrapolate the data about safety and efficacy to all people.  That’s a mistake; the more we learn about how drugs metabolize differently based on genetics and environmental factors, the more important it becomes to account for sub-group safety and efficacy outcomes, as the sketch below illustrates.  More recently, greater emphasis has been placed on community-based participatory research.  This movement toward sharing decision-making power between the observer and the observed is a critical step for addressing both the subject and researcher sides of the inequality equation.
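By way of a toy illustration (mine, not the author’s; the data and group labels are invented), accounting for sub-groups is computationally trivial once diverse participants are actually enrolled:

```python
# Hypothetical example: overall vs. sub-group response rates in a toy trial.
import pandas as pd

trial = pd.DataFrame({
    "arm":       ["drug"] * 4 + ["placebo"] * 4,
    "subgroup":  ["A", "A", "B", "B", "A", "A", "B", "B"],
    "responded": [1, 1, 0, 1, 0, 1, 0, 0],
})

# A pooled comparison can mask differences that stratification reveals.
print(trial.groupby("arm")["responded"].mean())
print(trial.groupby(["arm", "subgroup"])["responded"].mean())
```

The hard part is not the arithmetic but the enrollment: a sub-group that never joins a study simply never appears in the table.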

Research Exceptionalism Diminishes Individual Autonomy

by Suzanne M. Rivera, Ph.D.

One of the peculiar legacies of unethical human experimentation is an impulse to protect people from perceived research risks, even when that means interfering with the ability of potential participants to exercise their own wills.  Fears about the possibility of exploitation and other harms have resulted in a system of research oversight that in some cases prevents people from even having the option to enroll in certain studies because the research appears inherently risky.

Despite the fact that one of the central (some would say, the most important) principles of ethical human research is “respect for persons” (shorthand: autonomy), our current regulations, and the institutions that enforce them, paradoxically promote an approach to research gate-keeping which emphasizes the prevention of potential harm at the expense of individual freedom.  As a result, research activities often are treated as perils from which unsuspecting recruits should be shielded, either because the recruits themselves are perceived as too vulnerable to make reasoned choices about participation, or based on the premise that no person of sound mind would want to do whatever is proposed.

One example of such liberty-diminishing overprotection is the notion that study participants should not be paid very much for their time or discomfort, because providing ample compensation might constitute undue inducement. Although there is no explicit regulatory prohibition against compensating research participants for their service, the Common Rule requires researchers to “seek such consent only under circumstances that provide the prospective subject or the representative sufficient opportunity to consider whether or not to participate and that minimize the possibility of coercion or undue influence.”  This has been interpreted by many to mean that payment for study participation cannot be offered in amounts greater than a symbolic thank-you gesture and bus fare. Continue reading

Treatment of Subject Injury: Fair is Fair

by Suzanne M. Rivera, Ph.D.

Of all the protections provided in the Common Rule to safeguard the rights and welfare of research participants, there’s one glaring omission: treatment of study-related injuries.

Our current regulatory apparatus is silent on whether treatment of injuries incurred while participating in a study ought to be the responsibility of the sponsor, the researcher, or the test subjects.  The closest thing to guidance we are given on this topic in the Common Rule is a requirement that, if the study involves more than minimal risk, the informed consent document must provide, “an explanation as to whether any compensation and an explanation as to whether any medical treatments are available if injury occurs and, if so, what they consist of, or where further information may be obtained.”

Note, the regulations do not state that plans must be made to provide treatment at no cost to the participants.  In fact, the regulations don’t say treatment needs to be made available at all.  Thus, it is possible to comply with the letter and spirit of the regulations by stating the following in an informed consent document, “There are no plans to provide treatment if you should be injured or become ill as a result of your participation in this study.”  Or even, “The costs of any treatment of an injury or illness resulting from your participation in this study will be your responsibility.” Continue reading

Research Participation as a Responsibility of Citizenship

by Suzanne M. Rivera, Ph.D.

For legitimate reasons, the human research enterprise frequently is regarded with suspicion.  Despite numerous rules in place to protect research participants’ rights and welfare, there is a perception that research is inherently exploitative and dangerous.

Consequently, most people don’t participate in research.  This is not only a fairness problem (few people undergo risk and inconvenience so many can benefit from the knowledge derived), but also a scientific problem, in that the results of studies based on a relatively homogeneous few may not be representative of, or applicable to, the whole population. Larger numbers of participants would improve statistical power, allowing us to answer important questions faster and more definitively.  And more heterogeneous subject populations would give us information about variations within and between groups (by age, gender, socio-economic status, ethnicity, etc.).
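To make the numbers concrete, here is a small sample-size sketch (my illustration, using the standard normal-approximation formula; the effect sizes and thresholds are conventional assumptions, not figures from the post):

```python
# Approximate subjects needed per arm in a two-arm comparison.
from scipy.stats import norm

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a standardized effect (Cohen's d)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z / effect_size) ** 2

for d in (0.2, 0.5, 0.8):   # small, medium, and large effects
    print(f"d = {d}: ~{n_per_arm(d):.0f} subjects per arm")
```

A small effect (d = 0.2) takes roughly 390 subjects per arm at 80% power, while a large one (d = 0.8) takes about 25; detecting variation within sub-groups multiplies those requirements, which is why broad participation matters scientifically as well as ethically.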

Put simply, it would be better for everyone if we had a culture that promoted research participation, whether active (like enrolling in a clinical trial) or passive (like allowing one’s data or specimens to be used for future studies), as an honorable duty.  (Of course, this presumes the research is done responsibly and in a manner consistent with ethical and scientific standards, and the law.) Continue reading