PointofLaw.com
From Linda Gorman, Independence Institute: more on those IOM medical-error numbers



Following up on yesterday's post (which brought in a flood of visitors thanks to a Michelle Malkin link), I got a note from Linda Gorman of the Independence Institute, who wrote as follows:

The IOM [Institute of Medicine] numbers on medical errors are not only soft, they were repudiated by the author of one of the studies that they based their estimate on. You can get a rough idea of why from my comment on the Aug 21 post on the John Goodman Health Policy blog here. It is an excerpt of a more complete discussion from a paper (PDF) I wrote in October 2006, precisely because these sorts of false claims were getting out of hand (see below). A longer, more academic explanation by someone else in 2004, including quotes, is here.

Another question that could usefully be asked is this: since no system in fact reduces the rate of adverse patient events to zero, how is the U.S. health system doing when compared with those of similar advanced industrial democracies? On p. 17 of her October 2006 paper, Gorman presents a Table 7, entitled "International Comparisons of Adverse Events in Hospital Patients," which sheds some light on the question. The Harvard study of New York hospitals, from which the IOM estimate is extrapolated, found a 3.7 percent rate of adverse events, while a separate study of health care in Utah and Colorado found a 2.9 percent rate. Both of these numbers are better than those found in any of the four other nations listed in the table: a study of care in Canada found a 7.5 percent adverse event rate, a small London study found a 10.8 percent rate, and studies in Australia and New Zealand returned numbers higher still, though those figures in part reflect looser measures of adverse event causation. When adjusted for that fact, the Australian number, at 10.6 percent, came in comparable to the British.

Finally, here is an excerpt from Gorman's discussion in her October 2006 paper:

Like the push for utilization controls, the push to create measures that tied financial reimbursement to actual medical practice was aided by policy activists. A strategically released Institute of Medicine (IOM) book, To Err Is Human, was published in 2000 to make the case for a vast expansion in statistical quality measures. The IOM report claimed that "medical injuries account for between 44,000 and 98,000 deaths per year in the United States...ahead of breast cancer, AIDS, or motor vehicle accidents" and repeatedly claimed that nationally verifiable quality of care measurements were needed to protect patients.

That the IOM study was activism at its best was confirmed when expert review determined that the results from the two studies of medical care errors that were the basis of the IOM report were misrepresented. Sox and Woloshin reviewed the IOM documentation backing its estimates and concluded that they "could not confirm the Institute of Medicine's reported number of deaths due to medical errors."[1] Troyen A. Brennan of Boston's Brigham and Women's Hospital, the author of one of the studies used to make the estimates, wrote that "neither study cited by the IOM as the source of data on the incidence of injuries due to medical care involved judgments by the physicians reviewing medical records about whether the injuries were caused by errors. Indeed, there is no evidence that such judgments can be made reliably."[2] He also characterized the IOM recommendations as "giv[ing] the impression that doctors and hospitals are doing very little about the problem of injuries caused by medical care...yet the evidence suggests that safety has improved, not deteriorated."

In an article in JAMA, McDonald et al. noted that the IOM figure of 98,000 deaths was extrapolated from the Harvard Medical Practice study. That study looked at 173 actual deaths in a 1984 hospital admissions database of 31,429 acutely ill patients. Though the study's authors said only that adverse events may have contributed to the 173 deaths they identified, the IOM simply assumed that each individual died as a result of the errors and extrapolated the results to the entire population. McDonald also notes that the IOM claimed support from another study that found medication errors caused 7,000 deaths in the United States in 1993. Subsequent correspondence in the literature showed that this number was vastly overstated because it included deaths from drug abuse as medication errors.[3] A 2001 article by Hayward and Hofer revisited the topic and again found that the IOM had wildly overstated the deaths due to medical errors.[4]

Unfortunately, the original publicity accompanying the release of the IOM study permanently implanted the idea of enormous error rates in the public mind. The furor was such that few questioned the new fad for the quality measures that were rapidly rolled out by foundations, government, and think tanks committed to the regulatory project. The result has been the rapid institutionalization of a number of poorly tested quality measures, many of which are as reliable as the IOM report, promise to make their nonprofit sponsors a great deal of money, and have at best a dubious relationship to the outcomes of primary concern to patients.
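A back-of-the-envelope calculation shows just how much leverage the extrapolation McDonald describes carries. Using only the figures quoted above (this is illustrative; the IOM's actual calculation involved further adjustments):

173 deaths / 31,429 admissions ≈ 0.0055, or about 0.55 percent of admissions

Applied to the more than 30 million hospital admissions in the United States in a typical year, even a rate of roughly half a percent implies a six-figure annual death toll. The national estimate thus turns almost entirely on the attribution question: whether each of those 173 deaths was in fact caused by error, which is precisely the judgment Brennan says the underlying studies never made.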

