[Figure: Medical loss ratio versus year for Medicare and private insurers (“…in the upper 90’s…”), apparently from Health Care for America Now! via logarchism.com.]

The Patient Protection and Affordable Care Act (ACA) regulates the “medical loss ratio” (MLR) that an insurer can have — the percentage of collected premiums that goes to medical services for the insured. The minimum MLR mandated by the law is 80-85%, depending on the particular market. (For simplicity, let’s call it 80%.)

On its face, this seems like a good idea. If an insurer’s MLR is really low, say 50%, they’re keeping an awful lot of money for administration and profit, and it looks like the premium-payers are getting a raw deal. By limiting MLR to at least 80%, premium-payers are guaranteed that at most 20% of their money will go to those costs that benefit them not at all. But there may be unintended consequences of the MLR limit, and alternatives to achieving its goal.

Because of the MLR limit, an insurance company that spends $1,000,000 on medical services can generate at most $250,000 in profit. They’d reach this limit by charging premiums totaling $1,250,000, yielding an MLR of 1,000,000/1,250,000 = .80. (Of course, they’d generate even less profit than this, since they have other costs than medical services, but $250,000 is an upper bound on their profit.) They can’t increase their profit by charging higher premiums alone, since this would just blow the MLR limit. The only way to increase profits is to increase premiums (the denominator in the MLR calculation), which requires increasing medical services (the numerator) as well — paying for more doctor visits, longer stays, more tests, just the kinds of things we’re already spending too much on with our moral-hazard–infested medical care system. The MLR limit embeds an incentive for insurance companies to push for more medical services, whether needed or not.
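To make the arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the 80% floor and the dollar figure come from the example above, and the function names are my own):

```python
MLR_FLOOR = 0.80  # simplified minimum medical loss ratio

def max_premiums(medical_spend, mlr_floor=MLR_FLOOR):
    # Largest total premium collectible without dropping below the MLR floor:
    # MLR = medical_spend / premiums >= mlr_floor  =>  premiums <= medical_spend / mlr_floor
    return medical_spend / mlr_floor

def max_overhead_and_profit(medical_spend, mlr_floor=MLR_FLOOR):
    # Upper bound on everything that is not medical services (administration plus profit).
    return max_premiums(medical_spend, mlr_floor) - medical_spend

print(max_premiums(1_000_000))             # 1250000.0
print(max_overhead_and_profit(1_000_000))  # 250000.0 -- grows only if medical spending grows
```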

And why 80%? Medicare has had an MLR in the upper 90s for a couple of decades, and private insurers used to make a go of it in that range as well in the early 1990s. (See graph.) Other countries have MLRs in the mid-90s as well. An MLR limit of 80% means that once an insurer reaches 80% MLR, the regulation provides no further incentive to improve.

Wasn’t this moral hazard and inefficiency just the sort of thing the ACA was supposed to resolve by using market forces? When people buy insurance on a transparently priced exchange, an insurer that is less efficient or more egregious in its profit-taking (and therefore has a lower MLR) should end up outcompeted by more efficient and leaner insurers. No need to mandate a limit; the market will solve the problem.

If you think that the market forces in the health care exchanges won’t compete down administrative overheads and profits (that is, raise MLRs) on their own and that regulation is necessary to prevent abuse, then you’re pretty much conceding that the market doesn’t work under the ACA, and that we should move to a single-payer system. MLR limits are not a way of achieving a more efficient insurance system but rather an admission that our insurance system is inherently broken. The MLR limit looks to me like a crisis of faith in the free market. What am I missing?

Update March 25, 2019: Wesley Pegden, Ariel D. Procaccia, and Dingli Yu have an elegant working out of the proposal below that they call “I cut, you freeze.” Pegden and Procaccia describe it in a Washington Post opinion piece.

[“Halves” image by flickr user Julie Remizova: …how to split a cupcake…]

Why is gerrymandering even possible in a country with a constitutional right to equal protection? Recall the Fourteenth Amendment:

No State shall make or enforce any law which shall…deny to any person within its jurisdiction the equal protection of the laws.

When districts are reshaped to eliminate the voting power of particular individuals, as modern district-mapping software allows, some persons are being denied equal protection, I’d have thought. And so have certain Supreme Court justices.

It’s hard to know what to do about the problem. Appeals to fairness aren’t particularly helpful, since who decides what’s fair? It would be nice to think that requirements of “compact districts of contiguous territory” (as Chief Justice Harlan put it) would be sufficient. Such a requirement reduces districting to a mathematical optimization problem; James Case proposes something like minimum isoperimetric quotient tessellation of a polygon. But purely mathematical approaches may yield results that violate our intuitions about what is fair. They ignore other criteria, such as “natural or historical boundary lines”, determined for instance by geographical features like rivers and mountains or by shared community interests. These boundaries may not coincide with the mathematical optima, so any mathematical formulation would need to be defeasible to take such features into account. And that leads us right back to the question of when the mathematical formulation should be adjusted: who should decide what is fair?

A comment from “damien” at a ProPublica article about gerrymandering caught my attention as a nice way out of this quandary. In essence, he proposes that the parties themselves choose what’s fair.

The first solution to gerrymandering is to have a fitness measure for a proposed districting (e.g. the sum of the perimeters), and then to allow any individual or organisation to propose a districting, with the winner having the best fitness value.

What “damien” is proposing, I take it, is the application of an algorithm somewhat like one familiar from computer science (especially cryptography) and grade school cafeterias known as “cut and choose”. How do you decide how to split a cupcake between two kids? One cuts; the other chooses. The elegance of cut-and-choose is that it harmonizes the incentives of the two parties. The cutter is incentivized to split equally, since the chooser can punish inequity.

Cut-and-choose is asymmetrical; the two participants have different roles. A symmetrical variant has each participant propose a cut, with an objective third party selecting whichever proposal is better according to the pertinent objective measure. This variant shares the benefit that each participant has an incentive to propose a split more nearly equal than the other’s. If Alice proposes a cut that gives her 60% of the cupcake and Bob 40%, she risks Bob proposing a better split that gives her only 45%, with him taking the remaining 55%. To avoid being taken advantage of, her best bet is to propose a split as nearly equal as possible.
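As a toy illustration, here is a minimal sketch in Python of the third party’s rule in the symmetric variant (the shares follow the Alice-and-Bob example above; everything else is hypothetical):

```python
def more_equal(split_a, split_b):
    # Adopt whichever proposed split deviates less from 50/50.
    imbalance = lambda split: abs(split[0] - split[1])
    return split_a if imbalance(split_a) <= imbalance(split_b) else split_b

alice = (0.60, 0.40)  # Alice proposes 60% for herself, 40% for Bob
bob = (0.45, 0.55)    # Bob proposes 45% for Alice, 55% for himself
print(more_equal(alice, bob))  # (0.45, 0.55): Alice's greedy cut loses out
```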

In the anti-gerrymandering application of the idea, the two parties propose districtings, which they could gerrymander however they wanted. Whichever of the two proposals has the lower objective function (lower isoperimetric quotient, say) is chosen. Thus, if one party gerrymanders too much, their districting will be dropped in favor of the other party’s proposal. Each party has an incentive to hew relatively close to a compact partition, while being allowed to deviate in appropriate cases.

A nice property of this approach is that the optimization problem doesn’t ever need to be solved. All that is required is the evaluation of the objective function for the two proposed districtings, which is computationally far simpler. (In fact, I’d guess the minimum isoperimetric quotient optimization problem might well be NP-hard.)
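Here is a rough sketch of that evaluation step in Python. Following the lower-is-better convention above, each district is scored as perimeter² / (4π · area), which equals 1 for a circle and grows as the district gets less compact; the polygon representation and all names are hypothetical, and a real implementation would of course work over census geography rather than toy coordinates.

```python
import math

def area(polygon):
    # Shoelace formula for the area of a simple polygon given as a list of (x, y) vertices.
    s = 0.0
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def perimeter(polygon):
    return sum(math.dist(p, q) for p, q in zip(polygon, polygon[1:] + polygon[:1]))

def score(districting):
    # Sum of per-district non-compactness scores; lower is better.
    return sum(perimeter(d) ** 2 / (4 * math.pi * area(d)) for d in districting)

def referee(proposal_a, proposal_b):
    # Adopt whichever party's proposed districting scores better (lower).
    return proposal_a if score(proposal_a) <= score(proposal_b) else proposal_b

# Toy example: a 2x1 region split into two unit squares versus two long slivers.
squares = [[(0, 0), (1, 0), (1, 1), (0, 1)], [(1, 0), (2, 0), (2, 1), (1, 1)]]
slivers = [[(0, 0), (2, 0), (2, 0.5), (0, 0.5)], [(0, 0.5), (2, 0.5), (2, 1), (0, 1)]]
print(referee(squares, slivers) is squares)  # True: the more compact plan is adopted
```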

There are problems, of course. The procedure is subject to gaming when the proposal-generating process is not private to the parties. It is unclear how to extend the method to more than two parties: the obvious generalization works once the eligible parties are determined, but the hard part is deciding which parties are eligible to propose a districting. Most critically, the method is subject to collusion, especially in cases where both parties benefit from gerrymandering. In particular, both parties benefit from a districting that protects incumbencies on both sides; they could agree, for instance, not to disturb each other’s safe districts, and each would benefit from observing the agreement.

Nonetheless, once districting is thought of in terms of mechanism design, the full range of existing algorithms can be explored. Somewhere in that literature there might be a useful solution. (Indeed, the proposal here is essentially the first step in Brams, Jones, and Klamler’s surplus procedure for cake-cutting.)

Of course, as with many current political problems (campaign financing being the clearest example), the big question is how such new mechanisms would be instituted, given that it is not in the incumbent majority party’s interest to do so. Until that’s sorted out, I’m not holding out much hope.

[“scams upon scammers” image by flickr user Daniel Mogford, used by permission: …what 419 scams are to banking…]

Investigative science journalist John Bohannon[1] published a news piece in Science earlier this month about the scourge of faux open-access journals. I call them faux journals (rather than “predatory journals”), since they are not real journals at all. They display the trappings of a journal, promising peer review and other services, but do not deliver; they perform no peer review and provide no services beyond posting papers and cashing checks for the publication fees. They are to scholarly journal publishing what 419 scams are to banking.

We’ve known about this practice for a long time, and Jeffrey Beall has done yeoman’s work codifying it informally. He has noted a recent dramatic increase in the number of publishers that appear to be engaged in the practice, growing by an order of magnitude in 2012 alone.

In the past, I’ve argued that the faux journal problem, while unfortunate, is oversold. My argument was that the existence of these faux journals costs clued-in researchers, research institutions, and the general public nothing. The journals don’t charge subscription fees, and we don’t submit articles to them so don’t pay their publication fees. Caveat emptor ought to handle the problem, I would have thought.

But I’ve come to understand over the past few years that the faux journal problem is important to address. The number of faux journals has exploded, and despite the fact that the faux journals tend to publish few articles, their existence crowds out attention to the many high-quality open-access journals. Their proliferation provides a convenient excuse to dismiss open-access journals as a viable model for scholarly publishing. It is therefore important to get a deeper and more articulated view of the problem.

My views on Bohannon’s piece, which has seen a lot of interest, may therefore be a bit contrarian among OA aficionados, who have been quick to dismiss the effort as a stunt or to attribute hidden agendas. Despite some flaws (which have been widely noted and are discussed in part below), the study characterizes the faux OA journal problem well, providing far more texture to our understanding of it than the anecdotal and unsystematic approaches that have been taken in the past.

His study shows that even in these early days of open-access publishing, many OA journals are doing an at least plausible job of peer review. In total, 98 of the 255 journals that came to a decision on the bogus paper (about 38%) rejected it. It makes clear that the problem of faux journal identification may not be as simple as looking at superficial properties of journal web sites. About 18% of the journals from Beall’s list of predatory publishers actually performed sufficient peer review to reject the bogus articles outright.

Just as clearly, the large and growing problem of faux journals — so easy to set up and so inexpensive to run — requires all scholars to pay careful attention to the services that journals provide. This holds especially for open-access journals, which are generally newer, with shorter track records, and for which the faux journal fraud has proliferated much faster than appropriate countermeasures can be deployed. The experiment provides copious data on where the faux journals tend to operate from, where they bank, and where their editors are.

Bohannon should also be commended for providing his underlying data open access, which will allow others to do even more detailed analysis.

As with all studies, there are some aspects that require careful interpretation.

First, the experiment did not test subscription journals. All experimenters, Bohannon included, must decide how to deploy scarce resources; his concentrating on OA journals, where the faux journal problem is well known to be severe, is reasonable for certain purposes. However, as many commentators have noted, it does prevent drawing specific conclusions comparing OA with subscription journals. Common sense might indicate that OA journals, whose revenues rely more directly on the number of articles published, have more incentive to fraudulently accept articles without review, but the study unfortunately can’t directly corroborate this, and as in so many areas, common sense may be wrong. We know, for instance, that many OA journals seem to operate without the rapacity to accept every article that comes over the transom, and that there are countervailing economic incentives for OA journals to maintain high quality. Journals from 98 publishers — including the “big three” OA publishers Public Library of Science, Hindawi, and BioMed Central — all rejected the bogus paper, and more importantly, a slew of high-quality journals throughout many fields of scholarship are conducting exemplary peer review on every paper they receive. (Good examples are the several OA journals in my own research area of artificial intelligence — JMLR, JAIR, CL — which are all at the top of the prestige ladder in their fields.) Conversely, subscription publishers may also have perverse incentives to accept papers: management typically establishes goals for the number of articles to be published per year; article-count statistics are used in marketing efforts; the regular founding of new journals engenders a need for a supply of articles so as to establish their contribution to the publisher’s stable of bundled journals; and many subscription journals, especially in the life sciences, charge author-side fees as well. Nonetheless, given what we know about the state of faux journals, it would be unsurprising if the acceptance rate for the bogus articles had been lower among subscription journal publishers. (Since there are many times more subscription journals than OA journals, it’s unclear how the problem would have compared in terms of absolute numbers of articles.) Hopefully, future work with appropriate controls can clear this up.

Second, the experiment did not test journals charging no author-side fees, which is currently the norm among OA journals. That eliminates about 70% of the OA journals, none of which have any incentive whatsoever to accept articles for acceptance’s sake. Ditto for journals that gain their revenue through submission fees instead of publication fees, a practice that I have long been fond of.

Third, his result holds only for journal publishing in the life sciences. (Some people in the life sciences need occasional reminding that science research is not coextensive with life sciences research, and that scholarly research is not coextensive with science research.) I suspect the faux journal problem is considerably smaller outside the life sciences. It is really only in the life sciences that there is a long precedent for author-side charges, along with deep pockets in much of the world to pay those charges, so that legitimate OA publishers can rely on being paid for their services. This characteristic of legitimate life-sciences OA journals provides the cover for the faux journals to pretend to operate in the same way. In many other areas of scholarship, OA journals tend not to charge publication fees, since the researcher community does not have the same precedent.

Finally, and most importantly, since the study reports percentages by publisher, rather than by journal or by published article, the results may overrepresent the problem from the reader’s point of view. Just because 62% of the tested publishers[2] accepted the bogus paper doesn’t mean the problem covers that percentage of OA publishing, or even of life-sciences APC-charging OA publishing. The faux publishers may publish a smaller percentage of the journals (though the faux publishers’ tactic of listing large numbers of unstaffed journals may lead to the opposite conclusion). More importantly, those publishers may cover a much smaller fraction of OA-journal-published papers. (Anyone who has spent any time surfing the web sites of faux journal publishers knows their tendency to list many journals with very few articles, and even fewer once you eliminate the plagiarized articles that faux publishers like to use to pad their journals.) So the vast majority of OA-published articles are likely to be from the 38% “good” journals. This should be determinable from Bohannon’s data — again thanks to his openness — and it would be useful to carry out the calculation, to show that the total number of OA-journal articles published by the faux publishers accounts for a small fraction of the OA articles published in all of the OA journals of all of the publishers in the study. I expect that’s highly likely.[3]
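For anyone inclined to try, the calculation might look something like the following sketch in Python. The file name and column names here are hypothetical placeholders, not the actual schema of Bohannon’s released data, which would have to be inspected and mapped first:

```python
import csv

accepted_articles = 0
total_articles = 0
with open("bohannon_data.csv", newline="") as f:          # hypothetical file name
    for row in csv.DictReader(f):
        n = int(row["articles_published"])                # hypothetical column: journal's article count
        total_articles += n
        if row["decision"].strip().lower() == "accept":   # hypothetical column: outcome for the bogus paper
            accepted_articles += n

# Article-weighted acceptance rate, as opposed to the per-publisher rate reported in the piece.
print(f"{accepted_articles / total_articles:.1%} of listed articles come from accepting journals")
```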

Bohannon has provided a valuable service, and his article is an important reminder, like the previous case of the faux Australasian Journals, that journal publishers do not always operate out of selfless motivations. It behooves authors to take this into account, and it behooves the larger scientific community to establish infrastructure that systematically and fairly tracks and publicizes information about journals, so as to help its members with their due diligence.


  1. In the interest of full disclosure, I mention that I am John Bohannon’s sponsor in his role as an Associate (visiting researcher) of the School of Engineering and Applied Sciences at Harvard. He conceived, controlled, and carried out his study independently, and was in no sense under my direction. Though I did have discussions with him about his project, including on some of the topics discussed below, the study and its presentation were his alone.  ↩
  2. It is also worth noting that by actively searching out lists of faux journals (Beall’s list) to add to a more comprehensive list (DOAJ), Bohannon may have introduced skew into the data collection. The attempt to be even more comprehensive than DOAJ is laudable, but the method chosen means that even more care must be taken in interpreting the results. If we look only at the DOAJ-listed journals that were tested, the acceptance rate drops from 62% to 45%. If we look only at OASPA members subject to the test, who commit to a code of conduct, then by my count the acceptance rate drops to 17%. That’s still too high of course, but it does show that the cohort counts, and adding in Beall’s list but not OASPA membership (for instance) could have an effect.  ↩
  3. In a videotaped live chat, Michael Eisen has claimed that this is exactly the case.  ↩