
Reader Beware: When Research Goes Wrong


Over the years, my trust in medical research has slowly degraded as I’ve encountered the influence of both subtle bias and outright fraud. 


The first issue of Emergency Medical Abstracts was published in September of 1977. In starting EMA I firmly believed that the literature of emergency medicine was to be found in virtually all journals and that the more physicians knew about the literature, the better their care of patients would be. Much of that has changed in recent years. 

At the time, family practice and emergency medicine were the only “horizontally oriented” specialties. These clinicians needed to know about pediatric fever, asthma, CHF, ankle sprains, COPD, abdominal disorders – in essence, parts of all of the other specialties. Orthopedists, being in a “vertical” specialty, could subscribe to a handful of orthopedic journals and be generally up-to-date in most aspects of orthopedics. The same is true, more or less, for ophthalmologists, endocrinologists, allergists, cardiologists and the other specialties. So the idea behind EMA was to search nearly all of the English-language journals, pull out the papers relevant to EM, and provide abstracts and commentary.


In retrospect, I guess I was an inadvertent early adopter of the concept that is now called “evidence-based” medicine. I bought the idea that the scientific literature, particularly the randomized controlled study, was the height of scientific evidence. I was of the belief that the authors of the literature were clinicians whose exclusive goal was to use the scientific method to find the “truth.” As such, I believed, naively, that the integrity of the literature was, pretty much, without question. And as a safeguard, there was always the peer review process to assure that everything was on the “up and up.” And the most prestigious journals certainly would not contain papers that were scientifically or otherwise compromised. Certainly, if it were in The New England Journal of Medicine, it had to be true.

But, over the years, I have become progressively more skeptical. And I would have to acknowledge that one of the most skeptical persons I know, Jerry Hoffman, has helped to open my eyes (and those of countless other physicians). Certainly, a very good methodologist can find sources of bias in papers that others may not see – and that the authors themselves, in their zeal to prove their hypothesis, may overlook. Some of these sources of bias may not substantially change the clinical applicability of a study’s conclusion, but it is the rare paper that is pristine. I’ve also learned over time that spin can be applied to the results of a paper – by looking selectively at the data – to emphasize the conclusions that favor the authors’ hypothesis. This is particularly true of studies funded by pharmaceutical companies, where a positive outcome may result in more sales for a particular drug.

But out-and-out fraudulent publication was something that I didn’t really consider to be a problem. Fraudulent publication would be among the most unethical of behaviors, if not potentially criminal. Would not patients who participate in fraudulent studies be put needlessly at risk? Would not clinicians who adopted the conclusions of fraudulent studies put their own patients at risk? Would not meta-analyses using these papers likely come to incorrect conclusions? Would not guidelines that depend on the integrity of these studies be corrupted?


Well, apparently there can be the wholesale production of fraudulent studies. In the December 6, 2016 issue of Neurology there is an absolutely frightening study (Systematic Review and Statistical Analysis of the Integrity of 33 Randomized Controlled Trials, Bolland, M.J., et al) in which the integrity of studies published by one research group in Japan was assessed in detail. The papers focused primarily on bone research related to osteoporosis, stroke and Parkinsonism. So as not to misquote the authors, here are some comments from their abstract:

“The researchers (Yoshihiro Sato, et al – my clarification) were remarkably productive, publishing 33 RCTs over 15 years involving large numbers of older patients with substantial comorbidity, recruited over very short periods. Treatment groups were improbably similar.… Outcomes were remarkably positive, with very low mortality and study withdrawals despite substantial comorbidity. There were very large reductions in hip fracture incidence, regardless of intervention (relative risk 0.22, 95% confidence interval 0.15–0.31, p < 0.0001, range of relative risk 0.10–0.33), that greatly exceed those reported in meta-analyses of other trials. There were multiple examples of inconsistencies between and within trials, errors in reported data, misleading text, duplicated data and text, and uncertainties about ethical oversight.”

Three of the papers were published in Neurology, and all three were subsequently retracted. Retraction Watch (retractionwatch.com) noted that 12 of the papers, published in a variety of journals including JAMA, have been retracted; the author, in addition to the problems found by Bolland, acknowledged listing co-authors without their consent and self-plagiarism.


Some suspect that all 33 studies may eventually be retracted. But 33 would not be a record. No way. John Carlisle, an anesthesiologist in the U.K., analyzing the work of Yoshitaka Fujii, triggered a record-breaking 183 retractions.
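
How does one catch fabricated trials from the published numbers alone? One signal, flagged both by Carlisle and by Bolland’s group (“treatment groups were improbably similar”), is that baseline comparisons in genuinely randomized trials should produce p-values spread roughly uniformly between 0 and 1 – fabricated data tend to pile up near 1. Here is a minimal sketch of that idea in Python; it is not Carlisle’s actual published method, and the summary numbers are invented for illustration.

```python
from scipy import stats

# Hypothetical baseline summary data from a single trial report:
# (mean_a, sd_a, n_a, mean_b, sd_b, n_b) for each baseline variable.
# These numbers are invented for illustration only.
baseline_rows = [
    (72.1, 8.0, 40, 72.2, 7.9, 40),     # e.g., age
    (135.0, 15.2, 40, 135.1, 15.0, 40), # e.g., systolic BP
    (5.4, 1.1, 40, 5.4, 1.2, 40),       # e.g., a lab value
]

def baseline_p(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Two-sample t-test p-value reconstructed from summary statistics."""
    _, p = stats.ttest_ind_from_stats(mean_a, sd_a, n_a, mean_b, sd_b, n_b)
    return p

p_values = [baseline_p(*row) for row in baseline_rows]

# Under genuine randomization these p-values should look uniform on [0, 1].
# A pile-up near 1 (groups "too similar") is the red flag; a
# Kolmogorov-Smirnov test against the uniform distribution gives a crude
# overall check.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print("baseline p-values:", [round(p, 3) for p in p_values])
print(f"KS test vs. uniform: statistic={ks_stat:.3f}, p={ks_p:.4f}")
```

A handful of p-values proves nothing, of course; real analyses of this kind pool hundreds of baseline variables across an author’s entire output before drawing any conclusion.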

Perhaps the most prolific and scholarly author on the topic of the role of the literature in modern medicine is John Ioannidis. His biography and list of accomplishments are nothing short of astounding (he is at Stanford and holds positions in multiple departments). He has authored over 800 papers – not bad for someone who is 52 years old! The paper that put him on the map is abstracted below. It has been cited over 4,200 times and has had over 2 million views at PLoS Medicine (where it was originally published). This paper is typical of his writings, in which he eviscerates commonly believed “truths” related to the advancement of medical knowledge.

WHY MOST PUBLISHED RESEARCH FINDINGS ARE FALSE
Ioannidis, J.P.A., PLoS Med 2(8):e124, August 2005

In this essay the author, from Greece and Tufts University in Boston, demonstrates mathematically why most of the “positive” results of studies are actually likely to be false, which explains why so often subsequent work or clinical experience refutes heralded research claims. He shows that this is inevitably true, even without considering biased study methods and ingenious spin in the interpretation of results. 


Given that the “proven” hypothesis is almost always only one of many possible hypotheses, even “statistically significant” results are often more likely to be artifactual than real. This is particularly true when there was no strong reason to suspect the hypothesis was true before the study was done – reasoning directly analogous to the Bayesian interpretation of diagnostic tests, where the post-test probability depends heavily on the pre-test probability. The risk of false findings rises further when many endpoints are studied (especially disease-oriented or surrogate endpoints), when there is a “flexible design” (similar in concept to “data-snooping”), and when only a single, relatively small study claims the given results. In such a study, a “significant” result increases the probability that the hypothesis is true, but only by a small amount. And when the effect size is small, results can be statistically significant without being clinically significant. He notes, as well, that proprietary studies with financial conflicts of interest have been empirically shown to be even more subject to reversal over time. Surprising as it may seem, therefore, research claims based on the concept that a hypothesis is “unlikely to be due to chance alone” are, in fact, mathematically more likely to be false than to be true.
37 references (jioannid@cc.uoi.gr) Copyright 2005 by Emergency Medical Abstracts – All Rights Reserved 12/05 – #24
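
The arithmetic behind that conclusion is compact enough to sketch. The 2005 paper models the positive predictive value (PPV) of a “significant” finding using the pre-study odds R that the tested relationship is real, the type I error rate α, the type II error rate β, and a bias term u. A minimal sketch in Python (the parameter values are chosen for illustration):

```python
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Probability that a 'statistically significant' finding is true.

    R     -- pre-study odds that the probed relationship is real
    alpha -- type I error rate (the significance threshold)
    beta  -- type II error rate (1 - power)
    u     -- bias: fraction of otherwise-negative analyses that get
             reported as positive anyway (Ioannidis's u)
    """
    true_positives = (1 - beta) * R + u * beta * R
    all_positives = R + alpha - beta * R + u - u * alpha + u * beta * R
    return true_positives / all_positives

# Well-motivated hypothesis (1:1 pre-study odds), no bias: PPV ~ 0.94
print(ppv(R=1.0))
# Long-shot hypothesis (1:10 odds): PPV ~ 0.62 -- already shaky
print(ppv(R=0.1))
# Exploratory finding (1:100 odds): PPV ~ 0.14 -- more likely false than true
print(ppv(R=0.01))
# Add modest bias (u = 0.2) to the long shot: PPV ~ 0.26
print(ppv(R=0.1, u=0.2))
```

The pattern is the point: with well-motivated hypotheses a significant result is probably true, but for long-shot or exploratory hypotheses – especially with a little bias – the “significant” finding is more likely false than true, exactly as the abstract states.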

Here’s a recent paper by Ioannidis, written, it seems, on a bad day. It’s a downer in which he describes how hard it has been to get the scientific community to address the many challenges in creating valid evidence that actually improves care.

EVIDENCE-BASED MEDICINE HAS BEEN HIJACKED: A REPORT TO DAVID SACKETT
Ioannidis, J.P.A., J Clin Epidemiol, Epub ahead of print, March 2, 2016

David Sackett, the father of evidence-based medicine (EBM) and professor emeritus at McMaster University in Canada, died in 2015. The author, from Stanford University, presents a first-person narrative of his experiences with EBM beginning in 2004, when he met with David Sackett and the International Campaign to Revitalise Academic Medicine, up to the present. The author had most of his early EBM-related grant applications rejected and was vilified in Europe in the late 1990s for exposing the conflict of interest in pharmaceutical perks given to physicians. He was further spurned when he refused to pander to “eminence-based medicine” proponents who sought his assistance for publishing in top-tier journals. Resistance to EBM in the 1990s and 2000s occurred in part because it threatened lucrative medical procedures. As evidence of such, EBM was accepted as long as it did not consider cost-effectiveness of interventions. 

Although EBM is now widely accepted, with rigorous randomized trials, meta-analyses and practice guidelines, the author laments the many shortcomings of these publications (e.g., wrong objectives, wrong surrogate outcomes, large margins for noninferiority, sponsorship by industry and gift authorship). Research further suffers from spurious risk factors, data dredging, and little relevance to health outcomes, while celebrities and science denialists sway public opinion. Physician practice is increasingly under financial (market) pressures, and authors are unlikely to publish EBM articles if the conclusion is that fewer interventions are needed. He concludes that despite increases in health care spending, medicine may now be doing more harm than good for patients and society at large. The author confesses that both he and EBM have “failed magnificently, in due proportion to our utopian ambition.”
41 references (jioannid@stanford.edu – no reprints) Copyright 2016 by Emergency Medical Abstracts – All Rights Reserved 6/16 – #13

On a somewhat better day, Ioannidis came up with 12 suggestions for improving research practices in a paper called How to Make More Published Research True. Implementation of these practices could raise the quality of research while limiting redundancy and waste. See the inset below for his recommendations, along with my occasional comments (in italics).

How is a front-line clinician ever to be able to grapple with all of the challenges presented by the literature? Most of us know little about study design, biostatistics, biases and all of the traps that await the unknowing but sincere clinician who tries, on their own, to effectively incorporate medical evidence into their practice. Only when biostatistics and study design methodology are an integral part of medical education can there be hope for improvement.


12 Ways to Improve Research Practices

Excerpted from How to Make More Published Research True, by J.P.A. Ioannidis
Comments by Dr. Richard Bukata in italics

  1. Large-scale collaborative research
    Less likelihood for fraudulent studies when multiple entities are involved, input from more individuals regarding design and analysis.
  2. Adoption of replication culture
    Understanding of the essential features that need replication and how much heterogeneity is acceptable.
  3. Registration of studies, protocols, and the like, which can demonstrate redundancies in research
    If a study is done by a pharmaceutical company and drug A is found to not work in the treatment of disorder B, this fact should be known. Such information may impact future studies by alerting researchers and participants that the study may be in vain.
  4. Minimizing the often conflicting analyses for sponsors of research
    Sponsored studies, by default, come with a premise of a positive outcome. Peer review must scrupulously and competently review such papers.
  5. Sharing data and protocols which can lead to more reproducible research
  6. Using more appropriate statistical methods
    The first person to review a paper should be a highly skilled statistician. Only when a paper passes this first round of inquiry should clinicians become involved in the peer review of a study.
  7. Creating and upholding more stringent thresholds for declaring new discoveries or “success”
    Everyone wants to hear about major medical breakthroughs. And the news media and the involved researchers have a low threshold for declaring breakthroughs. Further studies often debunk what was first declared a breakthrough.
  8. Improved study design standards
    Study design requires a series of comprehensive skills to create a design that will effectively show outcomes—hopefully clinical outcomes and not just surrogate markers of outcomes.
  9. Greater dissemination of research
    Ideally, all research should be available to everyone. Sources like PubMed could help make this possible. Any study funded in whole or part by any government agency should be free to all.
  10. Improved peer review processes
    There are many sources of problems in peer review, including conflicts of interest.
  11. More reliable reporting of results
  12. Better training for researchers in both methodology and statistical literacy
    This, as noted, is an extraordinary challenge.

ABOUT THE AUTHOR

Dr. Bukata is the Editor of Emergency Medical Abstracts.
