Wednesday, October 28, 2009

Failing to Report Adverse Effects of Treatments

We have frequently advocated the evidence-based medicine (EBM) approach to improve the care of individual patients, and to improve health care quality at a reasonable cost for populations. Evidence-based medicine is not just medicine based on some sort of evidence. As Dr David Sackett and colleagues wrote [Sackett DL, Rosenberg WM, Muir Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ 1996; 312: 71-72. Link here.]


Evidence based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research.

One can find other definitions of EBM, but nearly all emphasize that the approach is designed to appropriately apply results from the best clinical research, critically reviewed, to the individual patient, taking into account that patient's clinical characteristics and personal values.

When making decisions about treatments for individual patients, the EBM approach calls for using the best available evidence about the possible benefits and harms of treatment, so that the treatment chosen is most likely to maximize benefits and minimize harms for the individual patient. The better the evidence about the specific benefits and harms applicable to a particular patient, the more likely it is that a decision based on that evidence will produce the best possible outcome for the patient.

A new study in the Archives of Internal Medicine focused on how articles report adverse effects found by clinical trials. [Pitrou I, Boutron I, Ahmad N et al. Reporting of safety results in published reports of randomized controlled trials. Arch Intern Med 2009; 169: 1756-1761. Link here.] The results were not encouraging.

The investigators assessed 133 articles reporting the results of randomized controlled trials published in 2006 in six English language journals with high impact factors, that is, the most prestigious journals, including the New England Journal of Medicine, Lancet, JAMA, British Medical Journal, and Annals of Internal Medicine. They excluded trials with less common designs, such as randomized cross-over trials. The majority of trials (54.9%) had private funding, or a mix of private and public funding.

The major results were:
15/133 (11.3%) did not report anything about adverse events
36/133 (27.1%) did not report information about the severity of adverse events
63/133 (47.4%) did not report how many patients had to withdraw from the trial due to adverse events
43/133 (32.3%) had major limitations in how they reported adverse events, e.g., reporting only the most common events (even though most trials do not enroll enough patients to detect important but uncommon events).

The authors concluded, "the reporting of harm remains inadequate."

An accompanying editorial [Ioannidis JP. Adverse events in randomized controlled trials: neglected, distorted, and silenced. Arch Intern Med 2009; 169: 1737-1739. Link here] raised concerns about why the reporting of adverse events is so shoddy:
Perhaps conflicts of interest and marketing rather than science have shaped even the often accepted standard that randomized trials study primarily effectiveness, whereas information on harms from medical interventions can wait for case reports and nonrandomized studies. Nonrandomized data are very helpful, but they have limitations, and many harms will remain long undetected if we just wait for spontaneous reporting and other nonrandomized research to reveal them. In an environment where effectiveness benefits are small and shrinking, the randomized trials agenda may need to reprogram its whole mission, including its reporting, toward better understanding of harms.

Pitrou and colleagues have added to our knowledge about the drawbacks of the evidence about treatments that is publicly available to physicians and patients when making decisions about treatment. Even reports of studies with the best designs (randomized controlled trials) in the best journals seem to omit important information about the harms of the treatments they test.

It appears that the majority of the reports that Pitrou and colleagues studied received "private" funding, presumably meaning most were funded by drug, biotechnology, or device companies and were likely meant to evaluate the sponsoring companies' products. However, note that this article did not analyze the relationship of funding source to the completeness of information about adverse effects.

Nonetheless, on Health Care Renewal we have discussed many cases in which research has been manipulated in favor of the vested interests of research sponsors (funders), or in which research unfavorable to their interests has been suppressed. Therefore, it seems plausible that sponsors' influence over how clinical trials are designed, implemented, analyzed, and reported may reduce the information about the adverse effects of their products reported in journal articles. Trials may be designed not to gather information about adverse events. Analyses of some adverse events, or of some aspects of these events, may not be performed, or, if performed, not reported. The evidence from clinical research available to guide treatment decisions consequently may exaggerate the ratios of certain drugs' and devices' benefits to their harms.

Patients may thus receive treatments which are more likely to hurt than to help them, and populations of patients may be overtreated. Impressions that treatments are safer than they actually are may allow their manufacturers to overprice them, so health care costs may rise.

The article by Pitrou and colleagues adds to concerns that we physicians may too often really be practicing pseudo-evidence based medicine when we think we are practicing evidence-based medicine. We cannot judiciously balance benefits and harms of treatments to make the best decisions for patients when evidence about harms is hidden. Clearly, as Ioannidis wrote, we need to "reprogram." However, what we need to reprogram is our current dependence on drug and device manufacturers to pay for (and hence de facto run) evaluations of their own products. If health care reformers really want to improve quality while controlling costs, this is the sort of reform they need to start considering.

NB - See also the comments by Merrill Goozner in the GoozNews blog.
