The $4 Billion Medical Data Breach Case That Lost Its Way

By Nicolas Terry

Sutter Health v. Superior Court, 2014 WL 3589699 (Cal. App. 2014), is a medical data breach class action that raises questions beyond the specifics of California’s Confidentiality of Medical Information Act.

The stakes were high in Sutter: under the California statute, medical data breach claims trigger (or should trigger!) nominal damages of $1,000 per patient. Here, four million records were stolen.
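The arithmetic behind the title is straightforward: 4,000,000 compromised records × $1,000 in nominal statutory damages per patient works out to $4 billion in potential exposure.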

Plaintiffs first argued that the defendant breached a section prohibiting unconsented-to disclosure. The not unreasonable response from the court was that this provision required an affirmative act of disclosure by the defendant, which was not satisfied by a theft.

A second statutory provision argued by the plaintiffs looked like a winner. This section provided, “Every provider of health care … who creates, maintains, preserves, stores, abandons, destroys, or disposes of medical information shall do so in a manner that preserves the confidentiality of the information contained therein.” Continue reading

Big Data, Predictive Analytics, Health Care, Law, and Ethics

Update: The Moore Foundation has generously paid to make my article available as open access on their website here. Today I am speaking at Health Affairs’ “Using Big Data to Transform Health Care” event in DC, which will also launch the journal’s new issue devoted to the topic. I have a co-authored paper in the volume, “The Legal And Ethical Concerns That Arise From Using Complex Predictive Analytics In Health Care,” which has just been released. Ironically, the article is behind a paywall (while data wants to be free, I guess big data is different!). Here is the abstract.

Predictive analytics, or the use of electronic algorithms to forecast future events in real time, makes it possible to harness the power of big data to improve the health of patients and lower the cost of health care. However, this opportunity raises policy, ethical, and legal challenges. In this article we analyze the major challenges to implementing predictive analytics in health care settings and make broad recommendations for overcoming challenges raised in the four phases of the life cycle of a predictive analytics model: acquiring data to build the model, building and validating it, testing it in real-world settings, and disseminating and using it more broadly. For instance, we recommend that model developers implement governance structures that include patients and other stakeholders starting in the earliest phases of development. In addition, developers should be allowed to use already collected patient data without explicit consent, provided that they comply with federal regulations regarding research on human subjects and the privacy of health information.

I will also have a related paper on mobile health coming out later this summer; I will blog about it when it appears…

Using Big Data To Transform Care: A Briefing on the July 2014 Special Issue of Health Affairs

Register online now!

The application of big data to transform health care delivery, health research, and health policy is underway, and its potential is limitless.  The July 2014 issue of Health Affairs, “Using Big Data To Transform Care,” examines this new era for research and patient care from every angle.

You are invited to join Health Affairs Editor-in-Chief Alan Weil on Wednesday, July 9, for an event at the National Press Club, when the issue will be unveiled and authors will present their work.  Panels will cover:

  • Using Big Data At The Point Of Care
  • Research Issues
  • The Role Of The Federal Government
  • Obstacles/Challenges Of Using Big Data

Among the confirmed speakers are:  Continue reading

Journal of Law & Biosciences publishes HLS student work

The Journal of Law and the Biosciences, the new open-access journal launched this year by the Petrie-Flom Center and Harvard Law School in partnership with Duke University and Stanford University, has published several articles in recent weeks by Harvard Law School students:

Check out these articles, and learn more about the Journal of Law and the Biosciences!

PCAST, Big Data, and Privacy

By Leslie Francis

Cross-post from HealthLawProf Blog

The President’s Council of Advisors on Science and Technology (PCAST) has issued a report intended to be a technological complement to the recent White House report on big data. This PCAST report, however, is far more than a technological analysis—although as a description of technological developments it is wonderfully accessible, clear and informative.  It also contains policy recommendations of sweeping significance about how technology should be used and developed.  PCAST’s recommendations carry the imprimatur of scientific expertise—and lawyers interested in health policy should be alert to the normative approach of PCAST to big data.

Here, in PCAST’s own words, is the basic approach: “In light of the continuing proliferation of ways to collect and use information about people, PCAST recommends that policy focus primarily on whether specific uses of information about people affect privacy adversely. It also recommends that policy focus on outcomes, on the ‘what’ rather than the ‘how,’ to avoid becoming obsolete as technology advances. The policy framework should accelerate the development and commercialization of technologies that can help to contain adverse impacts on privacy, including research into new technological options. By using technology more effectively, the Nation can lead internationally in making the most of big data’s benefits while limiting the concerns it poses for privacy. Finally, PCAST calls for efforts to assure that there is enough talent available with the expertise needed to develop and use big data in a privacy-sensitive way.” In other words: assume the importance of continuing to collect and analyze big data, identify potential harms and fixes on a case-by-case basis, possibly after the fact, and enlist the help of the commercial sector to develop profitable privacy technologies.  Continue reading

DUE 6/3: Call for Abstracts: Emerging Issues and New Frontiers for FDA Regulation

The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Food and Drug Law Institute are pleased to announce an upcoming collaborative academic symposium:

Emerging Issues and New Frontiers for FDA Regulation

Monday, October 20, 2014 

Washington, DC

We are currently seeking abstracts for academic presentations/papers on the following topics:  Continue reading

Call for Abstracts: Emerging Issues and New Frontiers for FDA Regulation

The Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School and the Food and Drug Law Institute are pleased to announce an upcoming collaborative academic symposium:

Emerging Issues and New Frontiers for FDA Regulation

Monday, October 20, 2014 

Washington, DC

We are currently seeking abstracts for academic presentations/papers on the following topics:

  • Stem cell therapies
  • Nanotechnologies
  • Genetic (and biomarker) tests
  • Gene therapies
  • Personalized medicine
  • Comparative efficacy research
  • Drug resistant pathogens
  • Globalized markets
  • Tobacco
  • GMO
  • Bioterrorism countermeasures
  • Mobile health technologies
  • Health IT
  • Drug shortages
  • Other related topics

Abstracts should be no longer than 1 page, and should be emailed to Davina Rosen Marano at dsr@fdli.org by Tuesday, June 3, 2014. Questions should also be directed to Davina Rosen Marano.

We will notify selected participants by the end of June. Participants will present at the symposium and will be expected to submit a completed article by December 15, 2014 (after the event), to be considered for publication in a 2015 issue of FDLI’s Food and Drug Law Journal (FDLJ). Publication decisions will be made according to the FDLJ’s usual standards.

A More Transparent System for Clinical Trials Data in Europe – Mind the Gaps!

Following the approval of the European Parliament (EP) early last month, the Council of the European Union (the Council) adopted, on 14 April 2014, a “Regulation on clinical trials on medicinal products for human use” repealing Directive 2001/20/EC. As described in a press release, the new law:

“aims to remedy the shortcomings of the existing Clinical Trials Directive by setting up a uniform framework for the authorization of clinical trials by all the member states concerned with a given single assessment outcome. Simplified reporting procedures, and the possibility for the Commission to do checks, are among the law’s key innovations.”

Moreover, and very importantly, the Regulation seeks to improve transparency by requiring pharmaceutical companies and academic researchers to publish the results of all their European clinical trials in a publicly accessible EU database. In contrast to earlier stipulations, which only obliged sponsors to publish the end results of their clinical trials, the new law requires full clinical study reports to be published after a decision on, or withdrawal of, marketing authorization applications. Sponsors who do not comply with these requirements will face fines.

These groundbreaking changes will enter into force 20 days after publication in the Official Journal of the EU. However, the Regulation will apply only six months after a new EU portal for the submission of data on clinical trials and the above-mentioned EU database have become fully functional. Since this is expected to take at least two years, the Regulation will apply in 2016 at the earliest (with an opt-out choice available until 2018).

Continue reading

Book Review published on SSRN

Three weeks ago I blogged about my recent review of “Pharmaceutical Innovation, Competition and Patent Law – a Trilateral Perspective” (Edward Elgar 2013). The full review, which is forthcoming in a spring issue of European Competition Law Review (Sweet & Maxwell), is now available at SSRN: http://ssrn.com/abstract=2396804.

New paper on “Standardization, IPRs and Open Innovation in Synthetic Biology”

I am pleased to announce that we have today published the following paper:

Minssen, Timo and Wested, Jakob Blak, Standardization, IPRs and Open Innovation in Synthetic Biology (February 14, 2014). Available at SSRN.

This brief book contribution stems from a presentation given at the 2013 conference “Innovation, Competition, Collaboration” at Bucerius Law School, Hamburg, Germany. It is currently under review by Edward Elgar. A longer journal version will follow.

Abstract: 

An effective and just sharing of resources for innovation needs a supportive infrastructure. One such infrastructure of both historic and contemporary significance is the development of standards. Considering recent developments within the software and ICT industries, it seems fair to assume that the process of standardization may also have a significant impact on the development and adoption of Synthetic Biology (SB). Within SB, different standardization efforts have been made, but few have assumed dominance or authority. Standardization efforts within SB may differ across technical areas, and the basic processes of standard creation can likewise be divided into various categories. The different technical areas and processes for standardization differ in their speed, their handling of interests, and their ability to dodge possible IPR concerns.

From this notion arise, inter alia, the following questions: How comparable is engineering in SB to more traditional fields of engineering? What types of standards have emerged, and what bearing do IPRs have on them? And how applicable to biological standards are the approaches adopted by standards-setting organizations in the information and communication technology (ICT) sector? These and further legal issues related to IP, regulation, standardization, competition law, and open innovation require careful consideration of new user-generated models and solutions.

Against this background, our paper seeks to describe IP and standardization aspects of SB in order to discuss them in the context of the “open innovation” discourse. We concentrate on describing the technology and identifying areas of particular relevance. Ultimately, we also sketch out open questions and potential solutions requiring further research. However, due to the limitations of this paper, we do not aim to build elaborate theories or to propose solutions in detail. Rather, this paper, which will be complemented by more extensive follow-up studies, provides a first overview of the complex questions that we are currently dealing with.

To achieve this modest goal, section 1 commences with a brief introduction to the fascinating science of SB and a description of recent technological advances and applications. This leads us to section 2, in which we address standard-setting efforts in SB, as well as the relevance and governance of various IPRs for specific SB standards. This provides the basis for section 3, in which we debate problematic issues and summarize our conclusions.

Pit Crews with Computers: Can Health Information Technology Fix Fragmented Care?

I recently posted this draft on SSRN. Feedback much appreciated. Here is the abstract:

Fragmentation and lack of coordination remain among the most intractable problems facing health care. Attention has often alighted on the promise of health information technology (HIT), not least because IT has had such a positive impact on many other personal, professional, and industrial domains. For at least two decades the HIT-panacea narrative has been persistent even as the context has shifted. At various times we have been promised that patient safety technologies would solve our medical error problems, that electronic transactions would simplify health care administration and insurance, and that clinical data would become interoperable courtesy of electronic medical records. Today the IOM is positioning HIT at the center of its new “continuously learning” health care model, which is in large part aimed at solving our fragmentation and lack-of-coordination problems. While the consensus judgment that HIT can reduce fragmentation and increase coordination has intuitive force, the specifics are more complicated. First, the relationship between health care and IT has been both culturally and financially complex. Second, HIT has been overhyped as a solution for all of health care’s woes; it has problems of its own. Third, the HIT-fragmentation solution presents a chicken-and-egg problem: can HIT solve health care’s fragmentation and coordination problems, or must health care problems such as episodic care be solved prior to successful deployment of HIT? The article takes a critical look at both health care and HIT with those questions in mind before concluding with some admittedly difficult recommendations designed to break the chicken-and-egg deadlock.

NPT

Capturing Value in Advanced Medical Imaging

On December 12, a bipartisan bill entitled the Excellence in Diagnostic Imaging Utilization Act of 2013 (HR 3705) was introduced in the House of Representatives that would require clinicians to use electronic clinical decision support (CDS) tools before ordering advanced diagnostic imaging tests for Medicare patients. Structured around appropriate use criteria developed by professional medical societies, the tools would aim to increase the value of advanced imaging studies by informing and guiding practitioners’ decisions across a variety of clinical settings.

Such tools would provide active feedback on the appropriateness and evidence base of various imaging modalities, and would require physicians to furnish rationales for ordering tests that are inconsistent with appropriate use criteria.  The bill also envisions the creation of registries that document how diagnostic tests are used in order to facilitate research and to enable feedback to clinicians on metrics related to appropriate use criteria.  In a press release, the American College of Radiology lauded the proposed legislation, stating that it would “revolutionize the specialty of radiology.”
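To make that workflow concrete, here is a minimal sketch of the kind of appropriateness check such a tool might run at order entry. It is an illustration under assumptions, not anything drawn from HR 3705: the criteria table, the 1-9 scoring, and the field names are all hypothetical.

```python
# A purely hypothetical sketch of a CDS appropriateness check at order
# entry. The criteria, scores, and field names are invented for
# illustration; HR 3705 points to appropriate use criteria developed by
# professional medical societies, not to any particular encoding.

# Appropriateness scores on the 1-9 scale commonly used in appropriate
# use criteria (9 = usually appropriate, 1 = rarely appropriate).
APPROPRIATENESS = {
    ("uncomplicated headache", "MRI brain"): 2,
    ("suspected stroke", "MRI brain"): 9,
    ("acute low back pain", "MRI lumbar spine"): 3,
}

REGISTRY = []  # stand-in for the registry of orders the bill envisions


def order_imaging(indication, study, rationale=None):
    """Score an order against the criteria; demand a rationale if low."""
    score = APPROPRIATENESS.get((indication, study))
    if score is None:
        decision = "no criteria on file; order proceeds"
    elif score >= 7:
        decision = f"appropriate (score {score}); order proceeds"
    elif rationale:
        decision = f"low score ({score}); proceeding, rationale documented"
    else:
        decision = f"low score ({score}); rationale required before ordering"
    REGISTRY.append((indication, study, score, rationale, decision))
    return decision


print(order_imaging("suspected stroke", "MRI brain"))
print(order_imaging("uncomplicated headache", "MRI brain"))
print(order_imaging("uncomplicated headache", "MRI brain",
                    rationale="progressive neurological deficit"))
```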

Mandating the use of electronic clinical decision support tools promises at least three key improvements in clinical workflows and in health care quality more broadly.

Continue reading

Broader Lessons from the Insurance Exchange Fiasco

By Nicolas Terry

The political ripples from the poorly managed exchange roll-out will likely endure through at least one election cycle. Maybe late-night comedians will run out of material sooner. While criticism and inquiry are appropriate given the foreseeable nature of the problem (some months ago at SEALS even I was moved to highlight the OIG’s prediction that there would be little time for testing the data hub), mostly we will witness technical flaws being fashioned into a cudgel with which to beat the Affordable Care Act and its champion-in-chief.

As Ezra Klein has noted, “the politics here will be driven by the reality. If the policy continues to fail, then there’s nothing the White House can do to keep from being dragged down. Conversely, if the Web site is fixed come mid-December, and the policy begins working pretty well, then there’s no amount of Republican messaging that can make it a failure.”

Sitting here in mid-to-late November, it may be appropriate (or at least refreshing) to seek out some broader lessons to take away from this mess. In an illuminating post at the Commonwealth Fund blog, David Blumenthal contrasted his experiences inside and outside of government and concluded that the federal government needs to reform its IT procurement system. Extrapolating even further from the current disaster, Clay Shirky uses healthcare.gov to pose some fundamental questions about how managers communicate with technologists and how politicians approach Internet interaction with citizens. His “litmus test” for “whether our political class grasps the internet”? “Can anyone with authority over a new project articulate the tradeoff between features, quality, and time?” Those managing healthcare.gov failed that test.

Medical Advice and the Limits of Therapeutic Influence

By Michael Young

It is estimated that 500,000 patients are discharged from U.S. hospitals against the recommendations of medical staff each year.  This category of discharges, dubbed discharges against medical advice (DAMA), encompasses cases in which patients request to be discharged in spite of countervailing medical counsel to remain hospitalized.  Despite safeguards that exist to ensure that patients are adequately informed and competent to make such decisions, these cases can be ethically challenging for practitioners who may struggle to balance their commitments to patient-centered care with their impulse to accomplish what is in their view best for a patient’s health.

Writing in the most recent issue of JAMA, Alfandre et al. contend that “the term ['discharge against medical advice'] is an anachronism that has outlived its usefulness in an era of patient-centered care.”  They argue that the concept and category of DAMA “sends the undesirable message that physicians discount patients’ values in clinical decision making.  Accepting an informed patient’s values and preferences, even when they do not appear to coincide with commonly accepted notions of good decisions about health, is always part of patient-centered care.”  The driving assumption here seems to be that if physicians genuinely include patients’ interests and values in their assessments, then the possibility of “discharge against medical advice” is ruled out ab initio, since any medical advice issued would necessarily encapsulate and reflect patients’ preferences.  They therefore propose that “[f]or a profession accountable to the public and committed to patient-centered care, continued use of the discharged against medical advice designation is clinically and ethically problematic.”

While abandoning DAMA procedures may well augment patients’ sense of acceptance among medical providers and reduce deleterious effects on therapeutic relationships that may stem from having to sign DAMA forms, it leaves relatively unaddressed the broader question of how to mitigate health risks patients may experience following medically premature or unplanned discharge.  Alfandre and Schumann’s robust interpretation of patient-centeredness also raises the question of how to handle situations in which patients refuse medically appropriate discharge.  On this interpretation, can the ideal of patient-centered care be squared with concerns for optimizing the equity and efficiency of resource allocations more broadly?

Continue reading

Gregg Fields on the Failure of Public-Private Partnerships in Obamacare

Over at our sister blog for the Edmond J. Safra Center for Ethics, Gregg Fields has an insightful discussion of the way Obamacare has relied on private sector contractors to get its enrollment website up and running.  Gregg quotes Safra-affiliate Bill English, who explains the allure of public-private partnerships:   they “enable the public sector to harness the expertise and efficiencies that the private sector can bring to the delivery of certain facilities and services traditionally procured and delivered by the public sector.” HHS Secretary Kathleen Sebelius gets the understatement of the year award: “Unfortunately, a subset of those contracts for HealthCare.gov have not met expectations.”

Disruptive Innovation and the Rise of the Retail Clinic

By Michael Young

The Association of American Medical Colleges (AAMC) projects that by 2025 the United States will face a shortage of 130,600 physicians, a nearly 18-fold increase over the deficit of 7,400 physicians in 2008. The widening gap between physician supply and demand has grown out of a complex interplay of legal, political, and social factors, including a progressively aging population, Congressionally mandated caps on the number of Medicare-funded residency slots and on funding for graduate medical education, and waning interest among medical school graduates in pursuing careers in primary care.

These issues generate unprecedented opportunities for healthcare innovators and entrepreneurs to design solutions that can effectively address widening disparities between healthcare supply and demand, particularly within vulnerable and underserved areas.

Continue reading

Ethical Concerns, Conduct and Public Policy for Re-Identification and De-identification Practice: Part 3 (Re-Identification Symposium)

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. Background on the symposium is here. You can call up all of the symposium contributions by clicking here. —MM

By Daniel C. Barth-Jones

In Part 1 and Part 2 of this symposium contribution, I wrote about a number of re-identification demonstrations and their reporting, both in the popular press and in scientific communications. However, even beyond the ethical considerations that I’ve raised about the accuracy of some of these communications, there are additional ethical, “scientific ethos,” and pragmatic public policy considerations involved in the conduct of re-identification research and de-identification practice that warrant more thorough discussion and debate.

First Do No Harm

Unless we believe that the ends always justify the means, even obtaining useful results for guiding public policy (as was the case with the PGP demonstration attack’s validation of “perfect population register” issues) doesn’t necessarily mean that the conduct of re-identification research is on solid ethical footing. Yaniv Erlich’s admonition in his “A Short Ethical Manifesto for the Privacy Researcher” blog post, contributed as part of this symposium, provides this wise advice: “Do no harm to the individuals in your study. If you can prove your point by a simulation on artificial data – do it.” This is very sound ethical advice in my opinion. I would argue that the re-identification risks for those individuals in the PGP study who had supplied a 5-digit ZIP code and full date of birth were already understood to be unacceptably high (if these persons were concerned about being identified) and that no additional research whatsoever was needed to demonstrate this point. However, if additional arguments needed to be made about the precise levels of the risks, they could have been adequately addressed through the use of probability models. I’d also argue that the “data intrusion scenario” uncertainty analyses which I discussed in Part 1 of this symposium contribution already accurately predicted the very small re-identification risks found for the sort of journalist and “nosy neighbor” attacks directed at the Washington hospital data. When strong probabilistic arguments can be made regarding potential re-identification risks, there is little possible purpose for undertaking actual re-identifications that can impact specific persons.
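Erlich’s suggestion that a simulation on artificial data can often make the point is easy to act on. The sketch below is purely illustrative and is mine rather than any of the probability models discussed in this symposium; the population size, ZIP count, and uniform draws are simplifying assumptions:

```python
# Simulation on artificial data, in the spirit of Erlich's advice: how
# often is the triple {5-digit ZIP, date of birth, sex} unique in a
# synthetic population? Population size, ZIP count, and uniform draws
# are simplifying assumptions, not any model discussed here.
import random
from collections import Counter

random.seed(42)

POPULATION = 100_000     # people in the synthetic community
ZIP_CODES = 25           # 5-digit ZIP codes covering that community
BIRTH_DATES = 365 * 80   # roughly 80 years of possible birth dates


def synthetic_person():
    """Draw one person's quasi-identifiers uniformly at random."""
    return (
        random.randrange(ZIP_CODES),    # ZIP code
        random.randrange(BIRTH_DATES),  # full date of birth
        random.randrange(2),            # sex
    )


people = [synthetic_person() for _ in range(POPULATION)]
counts = Counter(people)
unique = sum(1 for person in people if counts[person] == 1)

print(f"{unique / POPULATION:.1%} of the synthetic population is unique "
      "on ZIP + date of birth + sex")
```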

Looking more broadly, it seems more reasonably debatable whether the earlier January re-identification attacks by the Erlich lab on the CEPH – Utah Residents with Northern and Western European Ancestry (CEU) participants could have been warranted by virtue of the attack having exposed a previously underappreciated risk. However, I think an argument could likely be made that, given the prior work by Gitschier, which had already revealed the re-identification vulnerabilities of CEU participants, the CEU portion of the Science paper might not have served any additional purpose in directly advancing the science needed for the development of good public policy. Without the CEU re-identifications, though, it is unclear whether the surname inference paper would have been published (at least by a prominent journal like Science), and it also seems quite unlikely that it would have sustained nearly the level of media attention that it did.

Continue reading

Press and Reporting Considerations for Recent Re-Identification Demonstration Attacks: Part 2 (Re-Identification Symposium)

This post is part of Bill of Health‘s symposium on the Law, Ethics, and Science of Re-Identification Demonstrations. Background on the symposium is here. You can call up all of the symposium contributions by clicking here. —MM

Daniel C. Barth-Jones, M.P.H., Ph.D., is an HIV and infectious disease epidemiologist. His work in the area of statistical disclosure control and implementation under the HIPAA Privacy Rule provisions for de-identification focuses on the importance of properly balancing the competing goals of protecting patient privacy and preserving the accuracy of scientific research and statistical analyses conducted with de-identified data. You can follow him on Twitter at @dbarthjones.

Forecast for Re-identification: Media Storms Continue…

In Part 1 of this symposium contribution, I wrote about the re-identification “media storm” started in January by the Erlich lab’s “Y-STR” re-identifications, which made use of the relationship between Short Tandem Repeats (STRs) on the Y chromosome and paternally inherited surnames. Within months of that attack, April and June brought additional re-identification media storms, this time surrounding the re-identification of Personal Genome Project (PGP) participants and a separate attack matching 40 persons within the Washington State hospital discharge database to news reports. However, as I have written before about past reporting on other re-identification risks, accurate and legitimate characterization of re-identification risks has unfortunately once again been overshadowed by distortive and exaggerated reporting on some aspects of these re-identification attacks. A careful review of both the popular press coverage and the scientific communications for these recent re-identification demonstrations reveals some highly misleading communications, the most egregious of which incorrectly informs more than 112 million persons (more than one third of the U.S. population) that they are at potential risk of re-identification when they would not actually be unique and, therefore, re-identifiable. While each separate reporting concern that I address here is important in and of itself, the broader pattern that can be observed in these communications about re-identification demonstrations raises serious concerns about the impact that such distortive reporting could have on the development of sound and prudent public policy for the use of de-identified data.

Reporting Fail (and after-Fails)

University of Arizona law professor Jane Yakowitz Bambauer was the first to call out the distortive “reporting fail” for the PGP “re-identifications” in her blog post on the Harvard Law School Info/Law website. Bambauer pointed out that a Forbes article (written by Adam Tanner, a fellow at Harvard University’s Department of Government and a colleague of the re-identification scientist) covering the PGP re-identification demonstration was misleading with regard to a number of aspects of the actual research report released by Harvard’s Data Privacy Lab. The PGP re-identification study attempted to re-identify 579 persons in the PGP study by linking their “quasi-identifiers” {5-digit ZIP code, date of birth, and gender} to both voter registration lists and an online public records database. The Forbes article led with the statement that “more than 40% of a sample of anonymous participants” had been re-identified. (This dubious claim was also repeated in subsequent reporting by the same author, in spite of Bambauer’s “call out” of the inaccuracy explained below.) However, the mischaracterization of this data as “anonymous” really should not have fooled anyone beyond the most casual readers. In fact, approximately 80 individuals among the 579 were “re-identified” only because they had their actual names included within the file names of the publicly available PGP data. Some two dozen additional persons had their names embedded within the PGP file names, but were also “re-identifiable” by matching to voter and online public records data. Bambauer points out that the inclusion of the named individuals was “not relevant to an assessment of re-identification risk because the participants were not de-identified,” and quite correctly adds that “[i]ncluding these participants in the re-identification number inflates both the re-identification risk and the accuracy rate.”

As one observer humorously tweeted after reading Bambauer’s blog piece,

“It’s like claiming you ‘reidentified’ people from their high school yearbook.”
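For readers unfamiliar with the mechanics, the attack Bambauer describes is essentially a join on the quasi-identifier triple. The toy sketch below uses invented records and is not the Data Privacy Lab’s method or code; it simply shows why a unique match on {ZIP, date of birth, sex} suffices to attach a name:

```python
# Toy illustration of the linkage mechanics at issue: join a table of
# "de-identified" records to a public roster on the quasi-identifier
# triple {ZIP, date of birth, sex}. Every record below is invented.

deidentified = [
    {"zip": "02138", "dob": "1975-03-02", "sex": "F", "payload": "record A"},
    {"zip": "02139", "dob": "1980-07-14", "sex": "M", "payload": "record B"},
]

voter_roll = [
    {"name": "Jane Roe", "zip": "02138", "dob": "1975-03-02", "sex": "F"},
    {"name": "John Doe", "zip": "02139", "dob": "1980-07-14", "sex": "M"},
]


def quasi_id(record):
    """The linkage key: the quasi-identifier triple."""
    return (record["zip"], record["dob"], record["sex"])


roster = {}
for voter in voter_roll:
    roster.setdefault(quasi_id(voter), []).append(voter["name"])

for row in deidentified:
    matches = roster.get(quasi_id(row), [])
    if len(matches) == 1:  # a unique match is a putative re-identification
        print(f"{matches[0]} -> {row['payload']}")
```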

Continue reading

HBR/NEJM online forum on health care innovation

For those of you who haven’t seen it yet, there’s a great ongoing online forum over at the joint Harvard Business Review and New England Journal of Medicine Insight Center on Leading Health Care Innovation.  It’s online at HBR here, and will feature an ongoing series of posts about innovation in high-value health care through November 15.  Short articles from scholars in various fields will focus on three main areas: Big Ideas (foundational principles of high-value health care); Managing Innovations (organization and delivery); and From the Front Lines (stories of specific case solutions from practitioners).

They’re looking to host a lively forum; comments seem quite welcome and have been unusually thoughtful so far.

Of Data Challenges

Cross-posted from the HealthLawProfs blog.

Challenges designed to spur innovative uses of data are springing up frequently. These are contests, sponsored by a mix of government agencies, industry, foundations, not-for-profit groups, and even individuals, that offer prize money or other incentives for people or teams to come up with solutions to a wide range of problems. In addition to grand prizes, they often offer many smaller prizes or networking opportunities. The latest such challenge to come to my attention was announced August 19 by the Knight Foundation: $2 million for answers to the question “how can we harness data and information for the health of communities?” Companion prizes of up to $200,000 are also being offered by the Robert Wood Johnson Foundation and the California HealthCare Foundation.

Such challenges are also a favorite of the Obama administration. From promoting Obamacare among younger Americans (over 100 prizes of up to $30,000, now entered by Karl Rove’s Crossroads group) to arms control and identification of sewer overflows, the federal government has gone in for challenges big time. Check out challenge.gov to see the impressive list. Use of information and technological innovation feature prominently in the challenges, but there is also a challenge for “innovative communications strategies to target individuals who experience high levels of involuntary breaks (‘churn’) in health insurance coverage” (from SAMHSA), a challenge to design posters to educate kids about concussions (from CDC), a challenge to develop a robot that can retrieve samples (from NASA), and a challenge to use technology for atrocity prevention (from USAID and Humanity United). All in all, some 285 challenges sponsored by the federal government are currently active, although for some the submission period has closed. Continue reading