[News release] Publication bias and ‘spin’ raise questions about drugs for anxiety disorders
From the 30 March 2015 Oregon State University news release
A new analysis reported in JAMA Psychiatry raises serious questions about the increasingly common use of second-generation antidepressant drugs to treat anxiety disorders.
It concludes that studies supporting the value of these medications for that purpose have been distorted by publication bias, outcome reporting bias and “spin.” Even though they may still play a role in treating these disorders, the effectiveness of the drugs has been overestimated.
In some cases the medications, which are among the most widely prescribed drugs in the world, are not significantly more useful than a placebo.
The findings were made by researchers from Oregon State University, Oregon Health & Science University, and the University of Groningen in The Netherlands. The work was supported by a grant from the Dutch Brain Foundation.
Publication bias was one of the most serious problems, the researchers concluded, as it related to double-blind, placebo-controlled clinical trials that had been reviewed by the U.S. Food and Drug Administration. If the FDA determined the study was positive, it was five times more likely to be published than if it was not determined to be positive.
Bias in “outcome reporting” was also observed: positive outcomes of drug use were emphasized over negative ones. And simple spin was reported as well. Some investigators concluded that treatments were beneficial even though their own published results for the primary outcomes were not statistically significant.
“These findings mirror what we found previously with the same drugs when used to treat major depression, and with antipsychotics,” said Erick Turner, M.D., associate professor of psychiatry in the OHSU School of Medicine, and the study’s senior author. “When their studies don’t turn out well, you usually won’t know it from the peer-reviewed literature.”
This points to a flaw in the way doctors learn about the drugs they prescribe, the researchers said.
“The peer review process of publication allows, perhaps even encourages, this kind of thing to happen,” Turner said. “And this isn’t restricted to psychiatry – reporting bias has been found throughout the medical and scientific literature.”
…
[News release] Scientists unknowingly tweak experiments
From the 18 March 2015 Australian National University news release
A new study has found some scientists are unknowingly tweaking experiments and analysis methods to increase their chances of getting results that are easily published.
The study conducted by ANU scientists is the most comprehensive investigation into a type of publication bias called p-hacking.
P-hacking happens when researchers, consciously or unconsciously, analyse their data multiple times or in multiple ways until they get a desired result. If p-hacking is common, the exaggerated results could lead to misleading conclusions, even when evidence comes from multiple studies.
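One common form of p-hacking is "peeking": testing the data repeatedly as it accumulates and stopping when the test looks significant. The sketch below (an illustration of the general mechanism, not the ANU study's method; all names and parameters are hypothetical) simulates experiments where the true effect is zero and compares the false-positive rate of a single final test against a strategy that checks after every batch of observations.

```python
import math
import random

def z_test_p(data):
    # Two-sided z-test of "mean = 0" with known sd = 1.
    z = (sum(data) / len(data)) * math.sqrt(len(data))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate(n_sims=1000, batch=10, max_n=100, alpha=0.05, seed=1):
    """Return (false-positive rate with interim peeking, rate with one final test).

    The true effect is always zero, so any 'significant' result is a false positive.
    """
    rng = random.Random(seed)
    peek_hits = final_hits = 0
    for _ in range(n_sims):
        data, peeked = [], False
        while len(data) < max_n:
            data.extend(rng.gauss(0, 1) for _ in range(batch))
            if z_test_p(data) < alpha:
                peeked = True  # some interim look crossed the threshold
        if peeked:
            peek_hits += 1
        if z_test_p(data) < alpha:  # the single, pre-planned final test
            final_hits += 1
    return peek_hits / n_sims, final_hits / n_sims

peek_rate, final_rate = simulate()
print(peek_rate, final_rate)
```

With ten interim looks, the peeking strategy "finds" an effect in roughly three to four times as many null experiments as the honest single test, which stays near the nominal 5%.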
“We found evidence that p-hacking is happening throughout the life sciences,” said lead author Dr Megan Head from the ANU Research School of Biology.
The study used text mining to extract p-values – the probability of obtaining a result at least as extreme as the one observed if chance alone were at work – from more than 100,000 research papers published around the world, spanning many scientific disciplines, including medicine, biology and psychology.
“Many researchers are not aware that certain methods could make some results seem more important than they are. They are just genuinely excited about finding something new and interesting,” Dr Head said.
“I think that pressure to publish is one factor driving this bias. As scientists we are judged by how many publications we have and the quality of the scientific journals they go in.
“Journals, especially the top journals, are more likely to publish experiments with new, interesting results, creating incentive to produce results on demand.”
Dr Head said the study found a high number of p-values that were only just over the traditional threshold that most scientists call statistically significant.
“This suggests that some scientists adjust their experimental design, datasets or statistical methods until they get a result that crosses the significance threshold,” she said.
“They might look at their results before an experiment is finished, or explore their data with lots of different statistical methods, without realising that this can lead to bias.”
The concern with p-hacking is that it could get in the way of forming accurate scientific conclusions, even when scientists review the evidence by combining results from multiple studies.
For example, if some studies show a particular drug is effective in treating hypertension, but other studies find it is not effective, scientists would analyse all the data to reach an overall conclusion. But if enough results have been p-hacked, the drug would look more effective than it is.
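The hypertension example can be made concrete with a small simulation (again an illustration of the general point, not the study's own analysis; the setup and numbers are hypothetical): a drug with zero true effect, studied by an honest literature and by a literature in which some investigators quietly rerun their experiment until the result crosses p < 0.05. Pooling the reported effects then makes the useless drug look effective.

```python
import math
import random

def run_study(rng, n=30):
    """One study: mean effect and one-sided p-value. True effect is zero."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    z = mean * math.sqrt(n)  # known sd = 1
    p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z > z): "drug works" direction
    return mean, p

def pooled_effect(rng, n_studies=200, hacked=False, max_tries=20):
    """Naive pooled (average) effect across a simulated literature."""
    effects = []
    for _ in range(n_studies):
        mean, p = run_study(rng)
        tries = 1
        # A "p-hacked" study reruns the experiment until p < 0.05 (up to max_tries)
        # and reports only the final, significant-looking attempt.
        while hacked and p >= 0.05 and tries < max_tries:
            mean, p = run_study(rng)
            tries += 1
        effects.append(mean)
    return sum(effects) / n_studies

honest = pooled_effect(random.Random(7))
biased = pooled_effect(random.Random(7), hacked=True)
print(honest, biased)
```

The honest pooled estimate hovers near zero, while the p-hacked literature yields a clearly positive pooled "effect" for a drug that does nothing.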
“We looked at the likelihood of this bias occurring in our own specialty, evolutionary biology, and although p-hacking was happening it wasn’t common enough to drastically alter general conclusions that could be made from the research,” she said.
“But greater awareness of p-hacking and its dangers is important because the implications of p-hacking may be different depending on the question you are asking.”
The research is published in PLOS Biology.
[News article] Corruption of health care delivery system? — ScienceDaily
From the 14 October 2014 article
The foundation of evidence-based research has eroded and the trend must be reversed so patients and clinicians can make wise shared decisions about their health, say Dartmouth researchers in the journal Circulation: Cardiovascular Quality and Outcomes.
Drs. Glyn Elwyn and Elliott Fisher of The Dartmouth Institute for Health Policy & Clinical Practice are authors of the report in which they highlight five major problems set against a backdrop of “obvious corruption.” There is a dearth of transparent research and a low quality of evidence synthesis. The difficulty of obtaining research funding for comparative effectiveness studies is directly related to the prominence of industry-supported trials: “finance dictates the activity.”
The pharmaceutical industry has influenced medical research in its favor through selective reporting, targeted educational efforts, and incentives that shape prescriber behavior and, with it, how medicine is practiced, the researchers say. The industry has also spent billions of dollars on direct-to-consumer advertising, created new disease labels – so-called disease-mongering – and promoted the use of drugs to address spurious predictions.
Another problem with such studies is publication bias: trials that fail to demonstrate an effect remain unpublished, while trials that do demonstrate one are quickly published and promoted.
…
The authors offer possible solutions:
Unpublished trial data ‘violates an ethical obligation’ to study participants, say researchers

From the 29 October 2013 British Medical Journal press release
Study finds almost 1 in 3 large clinical trials still not published 5 years after completion
Almost one in three (29%) large clinical trials remain unpublished five years after completion. And of these, 78% have no results publicly available, finds a study published on bmj.com today.
This means that an estimated 250,000 people have been exposed to the risks of trial participation without the societal benefits that accompany the dissemination of their results, say the authors.
They argue that this “violates an ethical obligation that investigators have towards study participants” and call for additional safeguards “to ensure timely public dissemination of trial data.”
Randomized clinical trials are a critical means of advancing medical knowledge. They depend on the willingness of people to expose themselves to risks, but the ethical justification for these risks is that society will eventually benefit from the knowledge gained from the trial.
But when trial data remain unpublished, the societal benefit that may have motivated someone to enrol in a study remains unrealized.
US law requires that many trials involving human participants be registered – and their results posted – on ClinicalTrials.gov, the largest clinical trial registry. But evidence suggests that this legislation has been largely ignored.
So a team of US-based researchers set out to estimate the frequency of non-publication of trial results and, among unpublished studies, the frequency with which results are unavailable in the ClinicalTrials.gov database.
They searched scientific literature databases and identified 585 trials with at least 500 participants that were registered with ClinicalTrials.gov and completed prior to January 2009. The average time between study completion and the final literature search (November 2012) was 60 months for unpublished trials.
Registry entries for unpublished trials were then reviewed to determine whether results for these studies were available in the ClinicalTrials.gov results database.
Of 585 registered trials, 171 (29%) remained unpublished. Of these, 133 (78%) had no results available in ClinicalTrials.gov. Non-publication was more common among trials that received industry funding (32%) than those that did not (18%).
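The reported percentages are easy to verify from the stated counts (taken from the press release; the denominators behind the industry-funding comparison are not given, so only the headline rates are reproduced here):

```python
# Counts reported in the BMJ press release.
total_trials = 585          # large registered trials completed before January 2009
unpublished = 171           # still unpublished at the November 2012 search
no_registry_results = 133   # unpublished trials with no results on ClinicalTrials.gov

pct_unpublished = round(100 * unpublished / total_trials)        # 171/585
pct_no_results = round(100 * no_registry_results / unpublished)  # 133/171
print(pct_unpublished, pct_no_results)  # -> 29 78
```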
“Our results add to existing work by showing that non-publication is an important problem even among large randomized trials,” say the authors. Furthermore, the sponsors and investigators of these unpublished trials infrequently utilize the ClinicalTrials.gov results database.
The lack of availability of results from these trials “contributes to publication bias and also constitutes a failure to honor the ethical contract that is the basis for exposing study participants to the risks inherent in trial participation,” they add. “Additional safeguards are needed to ensure timely public dissemination of trial data,” they conclude.
Related articles
- Non-publication of large randomized clinical trials: cross sectional analysis (medicalnewstoday.com)
- ‘Ethical failure’ leaves one-quarter of all clinical trials unpublished (blogs.nature.com)
- A third of clinical trials haven’t published results (alltrials.net)
- Scientists voice fears over ethics of drug trials remaining unpublished (theguardian.com)
- The State of Infectious Diseases Clinical Trials: A Systematic Review of ClinicalTrials.gov (plosone.org)
- Scientists alarmed over ethics of drug trials remaining unpublished up to five years after they’re finished (rawstory.com)