The $4 billion Medical Data Breach Case That Lost Its Way

By Nicolas Terry

Sutter Health v. Superior Court, 2014 WL 3589699 (Cal. App. 2014), is a medical data breach class action case that raises questions beyond the specifics of the California Confidentiality of Medical Information Act.

The stakes were high in Sutter: under the California statute, medical data breach claims trigger (or should trigger!) nominal damages of $1,000 per patient, and here four million records were stolen.

Plaintiffs first argued that the defendant breached a section prohibiting unconsented-to disclosure. The not unreasonable response from the court was that this provision required an affirmative act of disclosure by the defendant, which was not satisfied by a theft.

A second statutory provision argued by the plaintiffs looked like a winner. This section provided, “Every provider of health care … who creates, maintains, preserves, stores, abandons, destroys, or disposes of medical information shall do so in a manner that preserves the confidentiality of the information contained therein.” Continue reading

Art Caplan Says Vasectomy Has No Place in Plea Deal

Art Caplan has a new opinion piece at NBCNews.com on the controversy over the case of Jessie Herald, who was offered a plea bargain involving sterilization in exchange for a reduced sentence. From the piece:

Jessie Lee Herald was facing five years or more in prison after a crash in which police and prosecutors said his 3-year-old son was bloodied but not seriously hurt. But Herald cut a deal. Or more accurately, the state agreed to reduce his sentence if he would agree to be cut. Shenandoah County assistant prosecutor Ilona White said she offered Herald, 27, of Edinburg, Virginia, the opportunity to get a drastically reduced sentence if he would agree to a vasectomy. It may not be immediately clear what a vasectomy has to do with driving dangerously and recklessly. It shouldn’t be. There is no connection.

Read the full article.

Chip and Fish: Inadvertent Spies

Art Caplan has authored a new opinion piece on Bioethics.net on the issue of “chipping” human beings. From the piece:

There has been a great deal of fingerpointing, second-guessing and recrimination over the decision by the President to exchange five former Taliban leaders for the American soldier, Bowe Bergdahl.  “You’ve just released five extremely dangerous people, who in my opinion … will rejoin the battlefield,” Senator Marco Rubio, R-Fla., and likely Presidential candidate told Fox News.  Senator John McCain, R-AZ, told ABC news and many other outlets that he would never have supported the swap if he’d known exactly which prisoners would be exchanged given their former high roles in battling the U.S. in Afghanistan.

Put aside for a second whether the five Taliban leaders that were flown to Qatar for Bergdahl are now too old and too long removed from Taliban affairs to resume anything close to their old roles.  Presume, instead, they will eagerly resume where they left off prior to their capture, attacking Americans and others they see as hindering Taliban goals for Afghanistan.  Is it possible that the U.S. did something to these men before letting them go in the swap—surreptitiously implanting them with microchips so that they could be tracked or traced?

Read the full article.

PCAST, Big Data, and Privacy

By Leslie Francis

Cross-post from HealthLawProf Blog

The President’s Council of Advisors on Science and Technology (PCAST) has issued a report intended to be a technological complement to the recent White House report on big data. This PCAST report, however, is far more than a technological analysis—although as a description of technological developments it is wonderfully accessible, clear and informative.  It also contains policy recommendations of sweeping significance about how technology should be used and developed.  PCAST’s recommendations carry the imprimatur of scientific expertise—and lawyers interested in health policy should be alert to the normative approach of PCAST to big data.

Here, in PCAST’s own words, is the basic approach: “In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the “what” rather than the “how,” to avoid becoming obsolete as technology advances. The policy framework should accelerate the development and commercialization of technologies that can help to contain adverse impacts on privacy, including research into new technological options. By using technology more effectively, the Nation can lead internationally in making the most of big data’s benefits while limiting the concerns it poses for privacy. Finally, PCAST calls for efforts to assure that there is enough talent available with the expertise needed to develop and use big data in a privacy-sensitive way.”  In other words:  assume the importance of continuing to collect and analyze big data, identify potential harms and fixes on a case-by-case basis possibly after the fact, and enlist the help of the commercial sector to develop profitable privacy technologies.  Continue reading

Bumps on the Road Towards Clinical Trials Data Transparency – A Recent U-Turn by the EMA?

By Timo Minssen

In a recent blog post I discussed the benefits and potential drawbacks of a new “EU Regulation on clinical trials on medicinal products for human use,” which had been adopted by the European Parliament and Council in April 2014. Parallel to these legislative developments, the drug industry has responded with its own initiatives providing for varying degrees of transparency. Medical authorities have also been very active in developing their transparency policies.

In the US, the FDA proposed new rules that would require disclosure of masked and de-identified patient-level data. In the EU, the EMA organized a series of meetings during 2013 with its five advisory committees to devise a draft policy for proactive publication of, and access to, clinical-trial data. In June 2013 this process resulted in the publication of a draft policy document titled “Publication and access to clinical-trial data” (EMA/240810/2013).

Following an invitation for public comments on this document, the EMA received more than 1,000 submissions from stakeholders. Based on these comments the EMA recently proposed “Terms of Use” (TOU) and “Redaction Principles” for clinical trial data disclosure.

In a letter to the EMA’s executive director Dr. Guido Rasi, dated 13 May 2014, the European Ombudsman, Emily O’Reilly, has now expressed concern about what seems to be a substantial shift of policy regarding clinical trial data transparency. Continue reading

Not Just Any Week in Privacy

By Nicolas Terry

Privacy is never easy to think about. This week it became harder. Two pieces framed my week. First, Eben Moglen’s essay in The Guardian (based on his Columbia talks from late last year) took my breath away; glorious writing and stunning breadth combined to deliver a desperately sad (but not entirely hopeless) message about government and corporate overreaching in data collection and processing.

A wry speech posted by software developer Maciej Ceglowski also helped frame my thoughts. He wrote, “The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.” There’s the problem in a nut. Ceglowski alludes to the divide between how human (offline) memory operates (it’s “fuzzy” and “memories tend to fade with time, and we remember only the more salient events”) and the online default of remembering everything. Government and Google and, for that matter, Big Data Brokers tell us that online rules now apply across the board and ‘that’s just peachy’ because we’ll have better national security, better searches, or more relevant advertising. But, that’s backwards. Continue reading

DUE 6/3: Call for Abstracts: Emerging Issues and New Frontiers for FDA Regulation


The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Food and Drug Law Institute are pleased to announce an upcoming collaborative academic symposium:

Emerging Issues and New Frontiers for FDA Regulation

Monday, October 20, 2014 

Washington, DC

We are currently seeking abstracts for academic presentations/papers on the following topics:  Continue reading

A Bad Debt That Will Shake Big Data

By Nicolas Terry

A resident of Spain allegedly owed back taxes, triggering attachment proceedings. The local newspaper published the details of an upcoming auction of his property in early 1998. At some point the issue was settled. However, the matter was not forgotten: the newspaper was online, and a Google search of the gentleman’s name returned this history. He complained to the Spanish data protection agency (AEPD) that he had a right to have older, irrelevant information erased and that Google should remove the links. The AEPD agreed and Google sued for relief. The Spanish High Court referred the interpretation of the Data Protection Directive (95/46) to the European Court of Justice in 2010, and in 2013 the Advocate-General issued an advisory opinion supportive of Google’s position. Somewhat surprisingly, the European Court of Justice has now taken the opposite view (Case C‑131/12, Google Spain SL v. AEPD, May 13, 2014). Continue reading

Call for Abstracts: Emerging Issues and New Frontiers for FDA Regulation

The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Food and Drug Law Institute are pleased to announce an upcoming collaborative academic symposium:

Emerging Issues and New Frontiers for FDA Regulation

Monday, October 20, 2014 

Washington, DC

We are currently seeking abstracts for academic presentations/papers on the following topics:

  • Stem cell therapies
  • Nanotechnologies
  • Genetic (and biomarker) tests
  • Gene therapies
  • Personalized medicine
  • Comparative efficacy research
  • Drug resistant pathogens
  • Globalized markets
  • Tobacco
  • GMO
  • Bioterrorism countermeasures
  • Mobile health technologies
  • Health IT
  • Drug shortages
  • Other related topics

Abstracts should be no longer than 1 page, and should be emailed to Davina Rosen Marano at dsr@fdli.org by Tuesday, June 3, 2014. Questions should also be directed to Davina Rosen Marano.

We will notify selected participants by the end of June.  Selected participants will present at the symposium, and will be expected to submit a completed article by December 15, 2014 (after the event) to be considered for publication in a 2015 issue of FDLI’s Food and Drug Law Journal (FDLJ).  Publication decisions will be made based on usual FDLJ standards.

A More Transparent System for Clinical Trials Data in Europe – Mind the Gaps!

Following the approval of the European Parliament (EP) earlier in April, the Council of the European Union (the Council) adopted on 14 April 2014 a “Regulation on clinical trials on medicinal products for human use” repealing Directive 2001/20/EC. As described in a press release, the new law:

“aims to remedy the shortcomings of the existing Clinical Trials Directive by setting up a uniform framework for the authorization of clinical trials by all the member states concerned with a given single assessment outcome. Simplified reporting procedures, and the possibility for the Commission to do checks, are among the law’s key innovations.”

Moreover, and very importantly, the Regulation seeks to improve transparency by requiring pharmaceutical companies and academic researchers to publish the results of all their European clinical trials in a publicly accessible EU database. In contrast to earlier stipulations, which only obliged sponsors to publish the end-results of their clinical trials, the new law requires full clinical study reports to be published after a decision on – or withdrawal of – marketing authorization applications. Sponsors who do not comply with these requirements will face fines.

These groundbreaking changes will enter into force 20 days after publication in the Official Journal of the EU. However, the Regulation will apply only six months after a new EU portal for the submission of data on clinical trials and the above-mentioned EU database have become fully functional. Since this is expected to take at least two years, the Regulation will apply in 2016 at the earliest (with an opt-out choice available until 2018).

Continue reading

Diagnosing Mental Disorders from Internet Use

By Nathaniel Counts

We live in a time when our personal information is increasingly publicly available on the internet. This personal information includes our names and phone numbers, things we’ve written and things we’ve done, along with a good deal of information that only exists because we interact with others on the internet – thoughts that we might not have otherwise externalized, or that we certainly would not have saved so that others could read them.

If all of this information is publicly available, all of this information can be gathered.  Already advertisers analyze our behaviors to better target products to us.  It is not hard to imagine a not so distant future where the government analyzes this data to determine whether we have a DSM mental disorder.  By looking at the online behaviors of those already diagnosed – the way the syndrome affects their usage patterns, the sites they visit, and how they interact with others online – it is likely that one can find statistically significant usage patterns that can distinguish individuals with a diagnosis from those without.  The available data could then be mined to identify other individuals that exhibit the usage pattern and allow for presumptive diagnosis.
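To make the kind of pattern-mining described above a little more concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is an assumption invented for the example: the three "usage pattern" features, the synthetic labels standing in for users with a prior diagnosis, and the scoring of a new user. Nothing here reflects any real diagnostic model or dataset.

```python
# Illustrative sketch only: synthetic "usage pattern" features and labels.
# The features, weights, and threshold are invented; this is not a clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical per-user features: share of late-night activity, posting
# frequency, and fraction of negative-sentiment posts.
n_users = 1000
X = rng.random((n_users, 3))

# Synthetic stand-in for "users with a known diagnosis": the label depends
# weakly on two of the features, plus noise.
y = (0.8 * X[:, 0] + 0.6 * X[:, 2] + rng.normal(0, 0.3, n_users) > 0.9).astype(int)

# Fit a simple classifier on the labeled users...
model = LogisticRegression().fit(X, y)

# ...then score a new, unlabeled user from their usage pattern alone.
new_user = np.array([[0.9, 0.4, 0.8]])
print(model.predict_proba(new_user)[0, 1])  # "presumptive" probability of the label
```

The point of the sketch is only that, once labeled examples exist, the "presumptive diagnosis" step is a routine classification exercise, which is precisely why the privacy question matters.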

Continue reading

The AOL Babies: Our Healthcare Crisis in a Nut

By Nicolas Terry

Cross-posted from HealthLawProf Blog.

Where does one start with AOL CEO Armstrong’s ridiculous and unfeeling justifications for changes in his company’s 401(k) plan? Cable TV and Twitter came out of the blocks fast with the obvious critiques. And the outrage only increased after novelist Deanna Fei took to Slate to identify her daughter as one of the subjects of Armstrong’s implied criticism. Armstrong has now apologized and reversed his earlier decision.

As the corporate spin doctors contain the damage, Armstrong’s statements likely will recede from memory, although I am still hoping The Onion will memorialize Armstrong’s entry into the healthcare debate (suggested headline, “CEO Discovers Nation’s Healthcare Crisis Caused by 25 Ounce Baby”). But suppose (just suppose) your health law students ask about the story in class this week. What sort of journey can you take them on?

Big Week for Big Data

By Nicolas Terry

For privacy advocates, the last week contained something of a gut-check when the UK’s splendidly descriptive Health and Social Care Information Centre announced a bonanza for big data companies: the NHS’s care.data program, here, will make anonymized clinical data broadly available to researchers and commercial interests with few limitations, here.

For once, however, the US attitude to the growing big data phenomenon has appeared more robust. Writing on the White House Blog, here, Presidential counselor John Podesta announced he will be leading “a comprehensive review of the way that ‘big data’ will affect the way we live and work; the relationship between government and citizens; and how public and private sectors can spur innovation and maximize the opportunities and free flow of this information while minimizing the risks to privacy.” Results are promised in 90 days.

For health lawyers, however, the most interesting recent development has been the FTC’s denial of LabMD’s motion to dismiss, here. The LabMD complaint involves the data security practices of a clinical testing laboratory. The FTC alleged “unfair . . . acts or practices” under Section 5(a)(1) of the FTC Act. One of LabMD’s arguments for dismissal was that the specific HIPAA and HITECH statutes dealing with the health privacy and security obligations of covered entities blocked the FTC from enforcing its more general authority. According to the FTC:

Nothing in HIPAA, HITECH… reflects a “clear and manifest” intent of Congress to restrict the Commission’s authority over allegedly “unfair” data security practices such as those at issue in this case. LabMD identifies no provision that creates a “clear repugnancy” with the FTC Act, nor any requirement in HIPAA or HITECH that is “clearly incompatible” with LabMD’s obligations under Section 5.

LabMD is an important development. I have argued at length, here, that big data activities outside of HIPAA-protected space have illustrated the gaps in data protection because of the manner in which the US has regulated discrete vertical industries. LabMD suggests that the FTC is prepared to fill in the gaps.

Pit Crews with Computers: Can Health Information Technology Fix Fragmented Care?

I recently posted this draft on SSRN. Feedback much appreciated. Here is the abstract:

Fragmentation and lack of coordination remain among the most intractable problems facing health care. Attention has often alighted on the promise of Health Care Information Technology (HIT), not least because IT has had such positive impact on many other personal, professional, and industrial domains. For at least two decades the HIT-panacea narrative has been persistent even though the context has shifted. At various times we have been promised that patient safety technologies would solve our medical error problems, that electronic transactions would simplify healthcare administration and insurance, and that clinical data would become interoperable courtesy of electronic medical records. Today the IOM is positioning HIT at the center of its new “continuously learning” health care model that is in large part aimed at solving our fragmentation and lack of coordination problems. While the consensus judgment that HIT can reduce fragmentation and increase coordination has intuitive force, the specifics are more complicated. First, the relationship between health care and IT has been both culturally and financially complex. Second, HIT has been overhyped as a solution for all of health care’s woes; it has its own problems. Third, the HIT-fragmentation solution presents a chicken-and-egg problem: can HIT solve health care’s fragmentation and lack of coordination problems, or must health care problems such as episodic care be solved prior to successful deployment of HIT? The article takes a critical look at both health care and HIT with those questions in mind before concluding with some admittedly difficult recommendations designed to break the chicken-and-egg deadlock.

NPT

PRIM&R 2013 Advancing Ethical Research conference report: Who is responsible for the body?

By Lauren A. Taylor, Coordinator for Health and Religion in the Global Context, Harvard Divinity School and co-author of The American Health Care Paradox: Why Spending More Is Getting Us Less (PublicAffairs, 2013).

Who is responsible for the stewardship of the body? What about the body’s tissue? Or its blood? While many people suggest that each individual owns their own body, these questions can become considerably more complicated when viewed through the lens of a bio-banking effort. When blood and tissues are “inside” or “part of” our bodies, the answer may appear clear, but once removed from the larger whole, the question of ownership can become substantially more challenging. One of Wednesday’s pre-conference programs explored these issues in considerable depth, featuring perspectives from a range of relevant actors.

Continue reading

Disruptive Innovation and the Rise of the Retail Clinic

By Michael Young

The Association of American Medical Colleges (AAMC) projects that by 2025 the United States will face a shortage of 130,600 physicians, representing a nearly 18-fold increase from the deficit of 7,400 physicians in 2008. The widening gap between physician supply and demand has grown out of a complex interplay of legal, political, and social factors, including a progressively aging population, Congressionally mandated caps on the number of Medicare-funded residency slots and on funding for graduate medical education, and waning interest among medical school graduates in pursuing careers in primary care.

These issues generate unprecedented opportunities for healthcare innovators and entrepreneurs to design solutions that can effectively address widening disparities between healthcare supply and demand, particularly within vulnerable and underserved areas.

Continue reading

Art Caplan on social media and medical privacy

Art Caplan has a new piece at NBCNews.com. In “Is your doctor spying on your tweets? Social media raises medical privacy questions,” Caplan argues:

Presuming doctors, their helpers or your neighbors are going to look, ethical standards or not, shouldn’t patients be told if someone does? I think so. I think the transplant candidate had the right to know that he tweeted himself right out of a shot at a liver transplant. And you need to realize that information you put up on social media sites may wind up being used by your doctor, hospital, psychologist, school nurse or drug counselor.

Right now there are no rules or even suggestions to guide doctor-patient relationships over the Internet. Both now have new ways to look at one another outside the office or exam room. If they are going to continue to trust one another then we need to recalculate existing notions of medical privacy and confidentiality to fit an Internet world where there is not much of either.

For more, check out the full piece.

Is Turkey’s Health Systems Reform as Successful as It Sounds?

In July, the Lancet covered Turkey’s development and implementation of universal health coverage extensively in an article and in supplementary comments. The main article, written by those who are directly involved in the development of the health systems reform (including the former Health Minister), presents a success story. Within Turkey, however, the success of the reform has been disputed. Two points in particular are being repeated by the Turkish Medical Association (TTB), doctors, and journalists: the negative effects of the reform on (1) the quality of health care personnel and (2) privacy.

Continue reading

Ethical Concerns, Conduct and Public Policy for Re-Identification and De-identification Practice: Part 3 (Re-Identification Symposium)

This post is part of Bill of Health’s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. Background on the symposium is here. You can call up all of the symposium contributions by clicking here. —MM

By Daniel C. Barth-Jones

In Part 1 and Part 2 of this symposium contribution, I wrote about a number of re-identification demonstrations and their reporting, both by the popular press and in scientific communications. However, even beyond the ethical considerations that I’ve raised about the accuracy of some of these communications, there are additional ethical, “scientific ethos,” and pragmatic public policy considerations involved in the conduct of re-identification research and de-identification practice that warrant more thorough discussion and debate.

First Do No Harm

Unless we believe that the ends always justify the means, even obtaining useful results for guiding public policy (as was the case with the PGP demonstration attack’s validation of “perfect population register” issues) doesn’t necessarily mean that the conduct of re-identification research is on solid ethical footing. Yaniv Erlich’s admonition in his “A Short Ethical Manifesto for the Privacy Researcher” blog post, contributed as part of this symposium, provides this wise advice: “Do no harm to the individuals in your study. If you can prove your point by a simulation on artificial data – do it.” This is very sound ethical advice in my opinion. I would argue that the re-identification risks for those individuals in the PGP study who had supplied 5-digit Zip Code and full date of birth were already understood to be unacceptably high (if these persons were concerned about being identified) and that no additional research whatsoever was needed to demonstrate this point. However, if additional arguments needed to be made about the precise levels of the risks, this could have been adequately addressed through the use of probability models. I’d also argue that the “data intrusion scenario” uncertainty analyses which I discussed in Part 1 of this symposium contribution already accurately predicted the very small re-identification risks found for the sort of journalist and “nosy neighbor” attacks directed at the Washington hospital data. When strong probabilistic arguments can be made regarding potential re-identification risks, there is little purpose in undertaking actual re-identifications that can impact specific persons.
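For readers outside the disclosure-control field, here is a minimal sketch of the kind of probability modeling being referred to, estimating how likely a person is to be unique on {5-digit Zip Code, full date of birth, sex} without re-identifying anyone. The population figure and the uniform-mixing (Poisson) assumption are illustrative choices of mine, not numbers drawn from the PGP or Washington State analyses.

```python
# Back-of-envelope uniqueness model; all inputs are illustrative assumptions.
import math

zip_population = 25_000      # assumed residents sharing one 5-digit Zip Code
birth_dates = 365 * 79       # rough count of plausible dates of birth
cells = birth_dates * 2      # date-of-birth x sex combinations within the Zip

# Under a crude uniform-mixing model, the number of *other* residents sharing
# your exact {Zip, DOB, sex} combination is roughly Poisson with mean lam.
lam = (zip_population - 1) / cells
p_unique = math.exp(-lam)    # probability that no one else shares the combination

print(f"expected co-matches = {lam:.2f}, P(unique) ~ {p_unique:.0%}")
```

Varying the assumed Zip Code population or age range moves the estimate around considerably, which is exactly why this kind of modeling, rather than an actual attack on named individuals, can often be enough to settle arguments about risk levels.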

Looking more broadly, it seems reasonably debatable whether the earlier January re-identification attacks by the Erlich lab on the CEPH – Utah Residents with Northern and Western European Ancestry (CEU) participants could have been warranted by virtue of the attack having exposed a previously underappreciated risk. However, I think an argument could likely be made that, given the prior work by Gitschier which had already revealed the re-identification vulnerabilities of CEU participants, the CEU portion of the Science paper also might not have served any additional purpose in directly advancing the science needed for the development of good public policy. Without the CEU re-identifications, though, it is unclear whether the surname inference paper would have been published (at least by a prominent journal like Science), and it also seems quite unlikely that it would have attracted nearly the same level of media attention.

Continue reading

Press and Reporting Considerations for Recent Re-Identification Demonstration Attacks: Part 2 (Re-Identification Symposium)

This post is part of Bill of Health’s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. Background on the symposium is here. You can call up all of the symposium contributions by clicking here. —MM

Daniel C. Barth-Jones, M.P.H., Ph.D., is an HIV and infectious disease epidemiologist. His work in the area of statistical disclosure control and implementation under the HIPAA Privacy Rule provisions for de-identification is focused on the importance of properly balancing the competing goals of protecting patient privacy and preserving the accuracy of scientific research and statistical analyses conducted with de-identified data. You can follow him on Twitter at @dbarthjones.

Forecast for Re-identification: Media Storms Continue…

In Part 1 of this symposium contribution, I wrote about the re-identification “media storm” started in January by the Erlich lab’s “Y-STR” re-identifications, which made use of the relationship between Short Tandem Repeats (STRs) on the Y chromosome and paternally inherited surnames. Within months of that attack, April and June brought additional re-identification media storms, this time surrounding the re-identification of Personal Genome Project (PGP) participants and a separate attack matching 40 persons within the Washington State hospital discharge database to news reports. However, as has sometimes been the case with past reporting on other re-identification risks (as I have written before), accurate and legitimate characterization of re-identification risks has, unfortunately, once again been overshadowed by distortive and exaggerated reporting on some aspects of these re-identification attacks. A careful review of both the popular press coverage and the scientific communications for these recent re-identification demonstrations reveals some highly misleading communications, the most egregious of which incorrectly informs more than 112 million persons (more than one third of the U.S. population) that they are at potential risk of re-identification when they would not actually be unique and, therefore, re-identifiable. While each separate reporting concern that I’ve addressed here is important in and of itself, the broader pattern that can be observed in these communications about re-identification demonstrations raises serious concerns about the impact that such distortive reporting could have on the development of sound and prudent public policy for the use of de-identified data.

Reporting Fail (and after-Fails)

University of Arizona law professor Jane Yakowitz Bambauer was the first to call out the distortive “reporting fail” for the PGP “re-identifications” in her blog post on the Harvard Law School Info/Law website. Bambauer pointed out that a Forbes article (written by Adam Tanner, a fellow at Harvard University’s Department of Government and a colleague of the re-identification scientist) covering the PGP re-identification demonstration was misleading with regard to a number of aspects of the actual research report released by Harvard’s Data Privacy Lab. The PGP re-identification study attempted to re-identify 579 persons in the PGP study by linking their “quasi-identifiers” {5-digit Zip Code, date of birth, and gender} to both voter registration lists and an online public records database. The Forbes article led with the statement that “more than 40% of a sample of anonymous participants” had been re-identified. (This dubious claim was also repeated in subsequent reporting by the same author in spite of Bambauer’s “call out” of the inaccuracy explained below.) However, the mischaracterization of this data as “anonymous” really should not have fooled anyone beyond the most casual readers. In fact, approximately 80 individuals among the 579 were “re-identified” only because they had their actual names included within the file names of the publicly available PGP data. Some two dozen additional persons had their names embedded within the PGP file names, but were also “re-identifiable” by matching to voter and online public records data. Bambauer points out that the inclusion of the named individuals was “not relevant to an assessment of re-identification risk because the participants were not de-identified,” and quite correctly adds that “Including these participants in the re-identification number inflates both the re-identification risk and the accuracy rate.”

As one observer humorously tweeted after reading Bambauer’s blog piece,

It’s like claiming you “reidentified” people from their high school yearbook.
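For readers who want to see the mechanics rather than the rhetoric, the linkage step at the heart of these demonstrations amounts to an exact join on the quasi-identifiers. The sketch below, in Python with pandas, uses entirely invented records; the column names and values are assumptions for illustration only.

```python
# Purely illustrative: synthetic records only, no real data or names.
import pandas as pd

# "De-identified" research records carrying only quasi-identifiers.
research = pd.DataFrame({
    "zip": ["02138", "02139", "98101"],
    "dob": ["1972-03-04", "1985-11-30", "1961-07-15"],
    "sex": ["F", "M", "F"],
    "record_id": ["R001", "R002", "R003"],
})

# Public records (e.g., a voter list) with the same quasi-identifiers plus names.
public = pd.DataFrame({
    "zip": ["02138", "98101"],
    "dob": ["1972-03-04", "1961-07-15"],
    "sex": ["F", "F"],
    "name": ["Alice Example", "Carol Example"],
})

# The whole "attack" is an exact join on the shared quasi-identifiers.
linked = research.merge(public, on=["zip", "dob", "sex"], how="inner")
print(linked[["record_id", "name"]])
```

Which is also why the critiques above matter: where names were already sitting in the file names, no join of this kind was needed, and counting those participants inflates the apparent success of the technique.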

Continue reading