Subjective endpoints, potential bias, and conflicts of interest should raise questions about these trials' conclusions.
Introduction: The production and release of new antibiotics is rare and should be celebrated by clinicians. As antibiotic resistance continues to mount, our options narrow and, in turn, our patients suffer. Recently, the New England Journal of Medicine (NEJM) published two articles on a newly FDA-approved antibiotic: omadacycline.
The articles compared omadacycline to moxifloxacin in the treatment of community-acquired pneumonia (CAP) and to linezolid in the treatment of skin and soft tissue infections (SSTIs). Both studies yielded promising results for the new drug, which should be cause for excitement. However, significant biases, methodological flaws, and poor selection of comparator treatments should temper that excitement.
Both studies tested the new antibiotic in a non-inferiority design. Non-inferiority studies are increasingly prevalent in the literature and serve an important purpose, so it's worth understanding how they work, why this approach is used, and when it is not appropriate.
Clinical Question #1: Is omadacycline non-inferior to moxifloxacin in terms of early clinical response for the treatment of community-acquired bacterial pneumonia?
This study was a randomized, multicenter, double-blind trial comparing omadacycline to moxifloxacin. The study excluded those with the highest severity of illness (Pneumonia Severity Index, or PSI, risk class V) as well as any patient with renal or liver insufficiency or an immunocompromised state.
The investigators report that omadacycline was non-inferior to moxifloxacin for the clinical improvement of pneumonia symptoms at 72 to 120 hours. The rate of response in the intention-to-treat analysis was 81.1% vs 82.7% (difference -1.6%; 95% CI -7.1% to 3.8%). The per-protocol group also demonstrated non-inferiority.
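To see what that confidence interval is actually doing, here is a minimal sketch of the Wald risk-difference calculation and the check against the trial's pre-specified -10% non-inferiority margin. The response counts below are approximations back-calculated from the reported percentages, assuming a roughly even split of the 774 intention-to-treat patients; they are not the paper's exact numbers.

```python
import math

def risk_diff_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference in response rates (new - standard)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Approximate ITT counts for the CAP trial (assumed ~even 774-patient split):
# omadacycline 313/386 responders vs moxifloxacin 321/388
diff, lo, hi = risk_diff_ci(313, 386, 321, 388)
margin = -0.10  # pre-specified non-inferiority margin

print(f"difference {diff:.1%}, 95% CI ({lo:.1%} to {hi:.1%})")
# Non-inferiority is declared if the lower CI bound stays above the margin
print("non-inferior" if lo > margin else "non-inferiority not shown")
```

Note how permissive this is: the new drug could truly be almost 10% worse and still clear the bar, which is exactly why the size of the margin matters.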
Clinical Question #2: Is omadacycline non-inferior to linezolid in terms of early clinical response in the treatment of skin and soft tissue infections? 
Once again, this study was a randomized, multicenter, double-blind trial, this time comparing omadacycline to linezolid. As in the prior study, patients with renal or liver insufficiency or an immunocompromised state were excluded.
The investigators found that omadacycline was non-inferior to linezolid for early clinical response at 48 to 72 hours, defined as survival with a reduction in lesion size of at least 20%. The rate of response in the intention-to-treat analysis was 84.8% vs 85.5% (difference -0.7%; 95% CI -6.3% to 4.9%). The per-protocol group also demonstrated non-inferiority.
On the face of it, these look like two positive studies that should pave the way for the use of omadacycline in both CAP and SSTI. However, there are numerous issues with both. To start, Paratek Pharmaceuticals, the company that makes the drug, “designed and conducted the trial and prepared the statistical analysis plan. Analyses were performed and data interpreted by Paratek Pharmaceuticals in conjunction with the authors” (direct quote from the article). The pharmaceutical company also employed a medical writer to draft the manuscript. It’s unclear to me what the authors actually did for the study, as it appears Paratek did all the work themselves.
The fact that the pharmaceutical company ran these studies doesn’t negate the findings, but it should make all of us pause and be skeptical. There are a number of techniques that can be used to make a drug look better than it really is. One is to run numerous studies with different endpoints but publish only the positive results, leaving the other studies unpublished. This technique is frequently employed by pharmaceutical companies and leads to publication bias.
A recent systematic review and meta-analysis demonstrated that drug and device studies sponsored by manufacturing companies are more likely to be favorable than studies sponsored by other sources (Lundh 2018). We don’t know, and may never know, how many investigations were done before these published outcomes were found.
The endpoint in question for both of these studies is highly subjective and, while both studies were double-blinded, blinding of the outcome assessor is not explicitly stated. There may have been selection bias as well: patients were not enrolled consecutively in either study, and it’s unclear how many patients met criteria but were not approached. The non-consecutive enrollment is obvious when you see that the CAP study enrolled only 774 patients over 14 months across 86 sites (< 1 patient/site/month) and the SSTI study enrolled only 627 patients over 12 months across 55 sites (~ 1 patient/site/month). The average ED sees far more of these presentations per week, so many patients were never approached; i.e. the study group was cherry-picked. Additionally, in the CAP study, there was a 1% absolute difference in mortality. I would be reluctant to prescribe a drug that was no worse than a standard treatment if it came with any increase in mortality.
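The enrollment arithmetic above is easy to reproduce. A quick back-of-envelope check, using the trial totals, site counts, and enrollment durations cited in the text:

```python
# Back-of-envelope enrollment rates: patients per site per month,
# using the totals and durations cited in the text
cap_rate = 774 / (86 * 14)    # CAP trial: 86 sites over 14 months
ssti_rate = 627 / (55 * 12)   # SSTI trial: 55 sites over 12 months

print(f"CAP:  {cap_rate:.2f} patients/site/month")
print(f"SSTI: {ssti_rate:.2f} patients/site/month")
```

Both rates come out under one patient per site per month, which is strikingly low for conditions an average ED sees many times a week.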
While there are many other issues in both of these studies, I want to focus on a major one shared by both: the use of a non-inferiority design. A non-inferiority study, simply stated, takes one intervention or medication and compares it to another. Typically, a new medication (or an old medication with a new indication) is compared to a standard treatment. The goal isn’t to show superiority of the new treatment, but rather to say it’s “about as good” as your standard approach.
Establishing non-inferiority can be useful when the new treatment offers a distinct advantage over the standard treatment. For instance, the new treatment may be easier to take (e.g. a NOAC in comparison to injectable LMWH), cheaper, or associated with fewer side effects. In these situations, non-inferiority is fine and trying to show superiority isn’t necessary.
By this logic, omadacycline must be cheaper, easier to take, or have fewer side effects. It is none of these things. A course of moxifloxacin runs ~ $100 US while omadacycline runs ~ $1,000 US: a 10-fold difference in price without a benefit. A 7- to 14-day course of omadacycline costs slightly more than a 10- to 14-day course of linezolid, which is likely why the omadacycline price was set where it was. But linezolid shouldn’t be our first-line choice for SSTIs.
I’ve only prescribed outpatient linezolid a handful of times in my career, and typically with guidance from infectious diseases. Our standard approach is a first-generation cephalosporin like cephalexin ($100 for a 7-day course), with the addition of SMX-TMP ($15 for a 10-day course) if there is concern for MRSA. Again, we see a 10-fold difference in price without a benefit. Adverse events were not statistically different in either study, and omadacycline is no more convenient to take than the standard treatment for either CAP or SSTI.
Given all of this, why do a non-inferiority study? The study authors (a.k.a. Paratek Pharmaceuticals) could have performed a study to demonstrate superiority of their drug, giving clinicians a real reason to reach for it. The answer is simple: it’s easier to show non-inferiority than it is to show superiority. That’s it. These studies were done to give Paratek an argument to push the drug and a selling point to clinicians who are unaware of the nuances of non-inferiority studies. Some of the safeguards present in superiority studies, like blinding, are less effective at avoiding bias in non-inferiority studies.
In a blinded superiority study, it’s hard to bias the results by favoring the group receiving the treatment of interest because you don’t know which group is which. In a non-inferiority study, you only need to show that the new treatment is about as good as the standard treatment, so you can simply assess all patients as the same, thus showing non-inferiority. Additionally, Paratek set the non-inferiority margin at 10% in both studies, which is huge, making it an easy mark to hit. Finally, Paratek chose an extremely subjective primary outcome, making it easier to assess patients as having the desired outcome.
The bigger question, then, is why the NEJM would publish studies that so blatantly serve to line the pockets of a pharmaceutical company. Again, the answer is quite simple: because it lines the pockets of the NEJM as well. Journals make far more money selling reprints of published articles to pharmaceutical companies, who arm their drug reps with them, than they make selling subscriptions. This is neither the first nor the last time the NEJM has participated in this scientific farce. Over the years, the NEJM has become bolder and more dismissive of clinicians’ ability to critically appraise the literature.
Studies like these are no longer even thinly veiled advertisements. The role of big pharma is front and center, and conflict of interest (COI) lists are a joke. Most COI disclosures are so long (sometimes longer than the article itself) that they are no longer published in the journal but only available online.
This type of “scientific publication” belittles the field of medicine and shows the NEJM for what it is: a shill for big pharma. The once-reputable journal should be embarrassed. We, the medical community, must demand better.
*Thank you to Rory Spiegel and Justin Morgenstern for their help in crafting this article.
- Stets R et al. Omadacycline for Community-Acquired Bacterial Pneumonia. NEJM 2019; 380(6): 511-27. PMID: 30726692
- O’Riordan W et al. Omadacycline for Acute Bacterial Skin and Skin-Structure Infections. NEJM 2019; 380(6): 528-38. PMID: 30726689
- Lundh A et al. Industry sponsorship and research outcome: systematic review with meta-analysis. Intensive Care Med 2018; 44(10): 1603-12. PMID: 30132025