Following in Ted's footsteps of dissecting misleading statistics in the medical malpractice debate (see May 2, Apr. 3, Apr. 1, Mar. 25, and Mar. 10; see also our editor's dismantling of The New York Times's med-mal article Feb. 23 and Feb. 24, and the analysis we each gave to Bob Herbert's claims in the Times last Jun. 22 (Copland), Jun. 22 (Olson), and Jun. 23 (Frank)), I spent some time this week going through Public Citizen's new report, "Medical Malpractice Payout Trends 1991-2004: Evidence Shows Lawsuits Haven't Caused Doctors' Insurance Woes" (PDF).
The "report" is, all too typically for Public Citizen, an exercise in obfuscation. Figures 3 and 4 of the report (pp. 3-4) suggest that medical malpractice payouts have not increased much in the last thirteen years, "adjusted for inflation." But what Public Citizen doesn't tell you (apart from a small note at the end of the report, referenced nowhere in the body text or the graphs) is that they aren't really adjusting for what we'd commonly call inflation -- i.e., the consumer price index -- but rather for a measure of health care inflation, which ran much higher than overall inflation throughout the period. Because I know something about inflation rates and compounding, the deception was obvious to me, but it probably wouldn't be to the average journalist or casual reader.
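To see why the choice of deflator matters so much, here's a minimal Python sketch. The annual inflation rates below are assumed for illustration (not taken from the report); only the thirteen-year window is. Deflating the same nominal doubling by a faster-growing medical-care index makes the growth look far flatter than deflating by a CPI-like rate:

```python
# Illustrative sketch with hypothetical rates: how the deflator choice
# changes "real" growth over a 13-year window (1991-2004).

def real_growth(nominal_growth, annual_inflation, years):
    """Growth factor left after removing `annual_inflation`, compounded over `years`."""
    return (1 + nominal_growth) / (1 + annual_inflation) ** years - 1

# Suppose nominal payouts doubled (+100%). Assume overall inflation ran
# ~2.5%/yr and medical-care inflation ~4.5%/yr (both assumed numbers).
real_vs_cpi = real_growth(1.00, 0.025, 13)      # still substantial real growth
real_vs_medical = real_growth(1.00, 0.045, 13)  # looks nearly flat by comparison
```

Compounding does the work here: a two-percentage-point gap in the deflator, run out over thirteen years, swallows most of the apparent increase.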
Apart from being overtly misleading, Public Citizen's use of medical care inflation to discount malpractice damage awards makes little sense. Damage awards are estimates of the economic (e.g., actual treatments, lost wages) and noneconomic costs (e.g., pain and suffering) of injury. Except for the actual costs of medical treatment -- which aren't affected by caps on noneconomic damages, such as those in California's MICRA and the president's medical malpractice proposal -- it makes no sense whatsoever to "adjust" the growth in damage awards by a health care inflation number rather than a measure reflecting overall inflation.
Moreover, Public Citizen's inflation adjustment makes their analysis circular. Critics of our current medical malpractice system, such as myself, contend not only that high malpractice payouts reduce access to care but also that such payouts increase health costs, mostly by encouraging "defensive medicine." Extrapolating from Kessler and McClellan's study, the Department of Health and Human Services estimates such defensive medicine costs at between $60 billion and $108 billion per year (p. 7). It makes no sense to discount the growth in jury awards by the very cost inflation those jury awards have at least in part encouraged.
Figure 5 of the report purports to show a decline in the number of payouts at or above $1 million, but it employs the same sleight of hand, using medical care inflation numbers. Jury Verdict Research's most recent figures on medical malpractice jury awards show that the median verdict in med-mal cases rose sharply from $473,000 in 1996 to $1,000,000 in 2000, and has since plateaued at or above $1 million. It is of course true that most cases settle rather than go to trial; but it's also axiomatic that verdict levels, together with the probability of an unfavorable verdict, determine the expected value of going to trial -- and thus settlement values. (And I note that JVR's analysis also shows that the odds of a plaintiff winning a med-mal case rose from 29% to 42% -- a 45% jump -- between 1996 and 2002.) JVR's figures probably understate the actual escalation in the legal costs of medical malpractice over the period, because they report median rather than mean verdicts: the cost of writing an insurance policy is the discounted expected value of future claims, so huge outlier verdicts absolutely drive up malpractice insurance costs.
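The interaction of those two JVR numbers is worth making explicit. A back-of-the-envelope calculation (using only the figures cited above, and treating the median verdict as a rough stand-in for verdict size, which -- as noted -- likely understates the mean) shows how a rising win rate and rising verdicts compound:

```python
# Rough expected value of going to trial ~= P(plaintiff win) * verdict size.
# Figures from the JVR numbers cited in the text; median used as a
# (conservative) proxy for verdict size.

ev_1996 = 0.29 * 473_000    # expected trial value, 1996
ev_2002 = 0.42 * 1_000_000  # expected trial value, 2002 (median at/above $1M)
growth = ev_2002 / ev_1996 - 1  # roughly a tripling
```

Neither number alone looks like a tripling, but multiplied together they are -- which is what settlement values track.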
Some of Public Citizen's other statistics may well be accurate but don't tell us much. E.g., that the number of malpractice payouts (Figures 1 and 2) hasn't grown isn't really important; what matters is the mean payout per doctor. Public Citizen's own numbers show that overall payouts have doubled in nominal terms (and risen ~45% in real terms) over the period studied, which is striking if Public Citizen is correct that the number of payouts is roughly flat. Similarly, that the percentage of OB or surgical payouts may not have changed much (Figure 8), in terms of the number of payments, doesn't matter; again, the relevant measure is the mean payout per doctor, which Public Citizen doesn't mention. It's also not very interesting, or surprising, that injuries categorized as "more severe" receive the lion's share of awards (Figures 6 and 7). Nobody claims that the medical malpractice crisis is driven primarily by the claims of individuals without an actual injury (unlike, say, asbestos); the question is whether those awards are rationally related to doctor error and whether on average they're reasonable, and as Ted points out extensively in his recent post, there's lots of evidence to suggest that they're not.
How else does Public Citizen's report mislead? Well, Figure 10 lumps together what look to be genuinely obvious, avoidable lapses (the text refers to "such things as leaving a surgical instrument behind or operating on the wrong body part") with others that are much more debatable (such as "wrong treatment" and "failure to protect against infection"). Without knowing the breakdown, we can't tell how many of these errors are of the extreme type alluded to in the text -- it's possible that only a tiny handful of these cases actually involve leaving behind a surgical instrument or operating on the wrong body part. Furthermore, these "easy cases," even by Public Citizen's definition, amount to only ~600 per year, less than 5% of overall malpractice payments (over 14,000 in 2004).
Finally, the report repeats as novel the commonly known fact that a small percentage of doctors are responsible for the lion's share of tort awards. Such a reality doesn't tell us much. As Ted ably pointed out in his first response to Michael Saks, simple random chance ensures that some fraction of doctors would be subject to many repeat suits, even assuming their performance was identical to their peers. Moreover, because not all doctors are in equally risky fields -- as Ted notes in his more recent post, "a brain surgeon is more likely to cause an injury than a dermatologist" -- we'd expect that some doctors, in high-risk fields, would be much more subject to being sued. I note that EVERY doctor singled out in Public Citizen's report as an example of high dollar-award, "repeat offender" status is a surgeon or obstetrician (see pp. 11-12).
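Ted's random-chance point is easy to check by simulation. In this sketch (the 2%-per-year suit probability, the doctor count, and the ten-year window are all assumed purely for illustration, not drawn from the report), every doctor faces an identical risk, yet chance alone still produces a meaningful cohort of "repeat offenders":

```python
# Simulate 10,000 identical doctors over 10 years, each with the SAME
# assumed 2% annual probability of being sued. No doctor is "worse" than
# any other, yet some rack up multiple suits by luck alone.
import random

random.seed(0)  # fixed seed so the run is reproducible
doctors, years, annual_suit_prob = 10_000, 10, 0.02

suits = [sum(random.random() < annual_suit_prob for _ in range(years))
         for _ in range(doctors)]
repeaters = sum(count >= 2 for count in suits)  # doctors sued 2+ times
```

Under these assumed numbers, binomial math predicts roughly 1.6% of the identical doctors -- on the order of 150+ out of 10,000 -- end up sued two or more times, with no incompetence anywhere in the model.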
Since many studies show that medical malpractice payouts are not accurate predictors of doctor error (see points 2-4 of Ted's post earlier this week), there's little reason to believe that repeat suits and payouts by a doctor are, in themselves, evidence of doctor incompetence. Rather, they're pretty good evidence that the doctor is in a high-risk field and/or is simply unlucky, which is not at all the same thing. That's not to say that there aren't bad doctors; of course there are. And it's certainly possible that doctors and hospitals could do a better job policing themselves. But Public Citizen's evidence doesn't come close to making that case; and it's very likely that the current state of malpractice litigation frustrates rather than facilitates better doctor and hospital self-policing.
It's simply the case that losses paid out per doctor rose much faster than premiums paid per doctor (1288% vs. 312%), or medical care inflation (480%), from 1975 to 2001 (see this graph). After 2001, when med-mal insurers were exiting the market in response to this reality, insurance regulators permitted substantial premium price increases that began to correct for these imbalances (though far from fully -- merely returning medical malpractice paid-loss ratios to 1995 levels, which were still 150% higher than in 1975). It is an unavoidable fact that the exceptional growth in losses paid per doctor over time explains medical malpractice premium growth, no matter how much Public Citizen and its ilk try to obscure the issue.
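The arithmetic behind that closing point can be made explicit, using only the growth figures in the paragraph above:

```python
# If losses per doctor grew 1288% while premiums per doctor grew 312%
# over 1975-2001, the paid-loss ratio (losses / premiums) necessarily
# rose by a large margin -- whatever one thinks of insurer behavior.

loss_growth, premium_growth = 12.88, 3.12  # +1288% and +312%, as fractions
loss_ratio_change = (1 + loss_growth) / (1 + premium_growth) - 1  # ~+237%
```

In other words, premiums would have had to more than triple again, relative to what they actually did, just to keep pace with paid losses over that period.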