


Fallacies of probability: "Panel Seeks Better Disciplining of Doctors"

In Massachusetts in the last 10 years, Ms. Audesse said, "one-fourth of 1 percent of all the doctors - 98 of the 37,369 doctors - accounted for more than 13 percent of all the malpractice payments, $134 million of the $1 billion in total payments." (Robert Pear, "Panel Seeks Better Disciplining of Doctors", New York Times, January 5.)
Does this prove that "more effective disciplining of incompetent doctors could significantly alleviate the problem of medical malpractice litigation"? Not by itself.

If one were to ask the 37,369 doctors (itself an overstated denominator, because many Massachusetts-licensed doctors don't practice in Massachusetts) each to flip a coin repeatedly until it came up tails, one would expect about 73 of them, less than a quarter of one percent, to flip nine consecutive heads: the chance of nine straight heads before the first tails is 1 in 2^9 = 512, and 37,369/512 is about 73. But one cannot point to that population and say that its members are more likely than not to flip heads the tenth time. They just happen to be, after the fact, the observed group that landed at the tail of the distribution out of the thousands of doctors tossing coins. The previous flips have no predictive value.
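The coin-flip arithmetic is easy to check by simulation. A minimal sketch (the 37,369 figure comes from the Times quote above; everything else is just a fair coin):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

N_DOCTORS = 37_369  # licensed Massachusetts doctors, per the Times figures

# Each "doctor" flips a fair coin until it comes up tails;
# record how many heads came up before that first tails.
streaks = []
for _ in range(N_DOCTORS):
    heads = 0
    while random.random() < 0.5:  # heads
        heads += 1
    streaks.append(heads)

# Expected count of nine-or-more-head streaks: 37,369 / 2**9, about 73.
nine_or_more = sum(1 for h in streaks if h >= 9)
print(nine_or_more)
```

The streak group really exists, but a fresh flip by any of its members is still 50/50; the streak identifies nobody with a loaded coin.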

The same may or may not be true in the case of Massachusetts. If large malpractice payouts are distributed randomly, with probability p_o for obstetricians, p_n for neurologists, p_u for urologists, and so on, then by pure chance one would expect the top percentile of physicians in any given period to have a disproportionate share of malpractice payouts. For example, even among the 98 doctors identified in the Times article, there's a single $9.5 million judgment against Barbara Bassil, who had only one other small malpractice payout in the last ten years. But (if that judgment was paid, rather than settled for less) one percent of the 98 doctors was responsible for seven percent of the $134 million paid out on behalf of those doctors. In 2002, three doctors were held jointly liable in a $22 million verdict--over 2% of total payouts in Massachusetts over the last ten years, and 15% of the $134 million singled out in the New York Times quote. Given a denominator padded with thousands of Massachusetts-licensed doctors who do not practice in Massachusetts, and given the impact that the largest verdicts have on total costs, a small number of doctors will of course be responsible for a disproportionate share of payouts.
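That concentration falls out of a purely random model. In the toy simulation below, every doctor faces the same claim probability, and payout sizes are drawn from a heavy-tailed distribution; the claim rate and payout parameters are invented for illustration and do not come from the Massachusetts data:

```python
import random

random.seed(1)

N_DOCTORS = 37_369  # licensed Massachusetts doctors, per the Times figures
YEARS = 10
P_CLAIM = 0.005     # assumed chance per doctor per year of a paid claim (hypothetical)

totals = []
for _ in range(N_DOCTORS):
    paid = 0.0
    for _ in range(YEARS):
        if random.random() < P_CLAIM:
            # Heavy-tailed payout sizes: a few large verdicts dominate the total.
            paid += random.lognormvariate(12.0, 1.5)  # median ~$160k (assumed)
    totals.append(paid)

# Even though every doctor is statistically identical, the top 98 by payout
# hold a share of the total far exceeding their 0.26% share of the population.
totals.sort(reverse=True)
top_98_share = sum(totals[:98]) / sum(totals)
print(f"top 98 of {N_DOCTORS} doctors: {top_98_share:.1%} of all payouts")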

The question becomes whether a previous malpractice verdict accurately predicts whether someone will commit malpractice in the future. (For the mathematically inclined, the technical question is whether previous malpractice verdicts have predictive Bayesian value.) If so, then the problem is one of failure to discipline doctors who have shown prior incompetence. If not, then the problem is a medical liability system that functions more like a lottery than an accurate adjudicator, and, while improving disciplinary bodies may improve care, it will do nothing to affect the malpractice crisis. (For example, in Bassil's case above, it's far from inconceivable that the jury's verdict reflects sympathy for a World War II veteran who suffered greatly, rather than malpractice.)
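The Bayesian question can be made concrete with a toy update. All four numbers below are invented for illustration; the point is only the shape of the calculation:

```python
# H1: the doctor is genuinely error-prone; H0: the doctor is typical.
prior_bad = 0.01    # assumed base rate of error-prone doctors
p_claim_bad = 0.50  # assumed chance of a paid claim per decade if error-prone
p_claim_ok = 0.05   # assumed chance of a paid claim per decade if typical

# Bayes' rule: posterior probability that a doctor with one paid claim
# is error-prone.
numerator = prior_bad * p_claim_bad
posterior_bad = numerator / (numerator + (1 - prior_bad) * p_claim_ok)
print(round(posterior_bad, 3))  # 0.092
```

Under these assumptions, a single payout raises the probability of incompetence from 1% to about 9% -- evidence, but nowhere near "more likely than not." And if claim rates don't actually vary across doctors (set p_claim_bad equal to p_claim_ok), the posterior collapses back to the prior: a verdict then carries no information at all, which is the lottery scenario.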

Given that much of the cost of the medical malpractice system comes in cases where no liability is adjudicated (note that the Massachusetts number was one of payouts rather than all costs, which would include defense costs), and given the studies that show the essentially random nature of malpractice litigation, it's questionable whether the Massachusetts data shows anything other than random distribution.

So: if one divides the ten-year data set in half, controls for practice specialty, and omits doctors who don't practice at all, are the doctors responsible for a disproportionate share of the malpractice damages in the first five years also responsible for a wildly disproportionate share of malpractice damages in the second five years? To my knowledge, that study hasn't been done. (Even controlling for practice specialty, further controls might be needed for accurate results: for example, not all neurologists are neurosurgeons.) Judging by their press releases, previous work, and comments to the Times, I'm not optimistic that Josephine Gittler and Randall Bovbjerg, the authors of the government-commissioned study mentioned in the Times, will ask the right questions. And from yesterday's article, one might make the Bayesian prediction that if a flawed study is released, it won't be reported accurately by the New York Times.
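The split-half test described above could be sketched as follows. The data layout is hypothetical: per-doctor payout totals for each five-year half, already restricted to a single specialty and to actively practicing doctors:

```python
def leaders_second_half_share(payouts, top_fraction=0.0025):
    """Share of second-period payouts held by the doctors who led the first period.

    payouts maps doctor -> (first_five_years_total, second_five_years_total).
    A share near top_fraction suggests a lottery; a share far above it
    suggests verdicts do identify persistently risky doctors.
    """
    ranked = sorted(payouts, key=lambda d: payouts[d][0], reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    leaders = ranked[:k]
    second_total = sum(second for _, second in payouts.values())
    if second_total == 0:
        return 0.0
    return sum(payouts[d][1] for d in leaders) / second_total

# Invented toy data: the biggest first-period payout never repeats.
toy = {
    "dr_a": (9_500_000, 0),  # one huge verdict, then nothing
    "dr_b": (50_000, 40_000),
    "dr_c": (0, 60_000),
}
print(leaders_second_half_share(toy))  # 0.0 in this lottery-like toy case
```

Run against the real Massachusetts payout data, a function like this (plus the specialty controls noted above) would answer the question directly.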




Published by the Manhattan Institute

The Manhattan Institute's Center for Legal Policy.