Health and Medical News and Resources

General interest items edited by Janice Flahiff

[News release] Credibility of Evidence: A Reconsideration of the Logic and Strength of Our Healthcare Decisions

From the 22 May 2015 HealthCare Blog post

A few days ago, we wrote an editorial for US News & World Report on the scant or dubious evidence used to support some healthcare policies (the editorial is reproduced in full below). In that case, we focused on studies and CMS statements about a select group of Accountable Care Organizations and their cost savings. Our larger point, however, is about the need to reconsider the evidence we use for all healthcare-related decisions and policies. We argue that an understanding of research design and the realities of measurement in complex settings should make us both skeptical and humble. Let’s focus on two consistent distortions.


Evidence-based Medicine (EBM). Few are opposed to evidence-based medicine. What’s the alternative? Ignorance-based medicine? Hunches? However, the real-world applicability of evidence-based medicine (EBM) is frequently overstated. Our ideal research model is the randomized controlled trial, where studies are conducted with carefully selected samples of patients to observe the effects of the medicine or treatment without additional interference from other conditions. Unfortunately, this model differs from actual medical practice because hospitals and doctors’ waiting rooms are full of elderly patients suffering from several co-morbidities and taking roughly 12 to 14 medications (some unknown to us). It is often a great leap to apply findings from a study conducted under “ideal conditions” to the fragile patient. So wise physicians balance the “scientific findings” against the several vulnerabilities and other factors of real patients. Clinicians are obliged to constantly deal with these messy tradeoffs, and the utility of evidence-based findings is diminished by the complex challenges of sick patients, multiple medications, and massive unknowns. This mix of research with the messy reality of medical and hospital practice means that evidence, even if available, is often not fully applicable.

Relative vs. Absolute Drug Efficacy:

Let’s talk a tiny bit about arithmetic. Say we have a medication (called X) that works satisfactorily for 16 out of a hundred cases, i.e., 16% of the time. Not great, but not atypical of many medications. Say then that another drug company has another medication (called “Newbe”) that works satisfactorily 19% of the time. Not a dramatic improvement, but a tad more helpful (ignoring how well it works, how much it costs, and whether it has worse side effects). But what does the advertisement for drug “Newbe” say? That “Newbe” is almost 20% better than drug “X.” Honest. And it’s not a total lie. The three-percentage-point difference between 16% and 19%, expressed relative to the original 16%, works out to 18.75%, close enough to 20% to make the claim legit. Now, if “Newbe” were advertised as 3 percentage points better (but a lot more expensive), sales would probably not skyrocket. But at close to 20% better, who could resist?
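To see the arithmetic concretely, here is a minimal Python sketch of the two ways of expressing the same difference. The 16% and 19% rates are the hypothetical ones from the example above; the function names are purely illustrative.

# A minimal sketch of the "absolute vs. relative" arithmetic described above.
def absolute_improvement(rate_old, rate_new):
    """Difference in percentage points (rates given as fractions)."""
    return rate_new - rate_old

def relative_improvement(rate_old, rate_new):
    """Difference expressed as a fraction of the old rate."""
    return (rate_new - rate_old) / rate_old

drug_x = 0.16      # drug "X": works in 16 of 100 cases
drug_newbe = 0.19  # drug "Newbe": works in 19 of 100 cases

print(f"Absolute improvement: {absolute_improvement(drug_x, drug_newbe):.1%}")   # 3.0% (3 percentage points)
print(f"Relative improvement: {relative_improvement(drug_x, drug_newbe):.2%}")   # 18.75%

Both numbers describe the same gap; the advertisement simply picks the larger-sounding one.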

Policy: So what does this have to do with healthcare policy? We also want evidence of efficacy with healthcare policies, but it turns out that evaluation of these interventions and policies is often harder to do well than are studies of drugs. Interventions and policies are introduced into messy, pluralistic systems, with imprecise measures of quality and costs, with sick and not-so-sick patients, with differing resources and populations, with a range of payment systems, and so on. Sometimes randomized controlled trials are impossible; sometimes they are possible but difficult to carry out. Nevertheless, we argue they are usually worth the effort. Considering the billions or trillions of dollars involved in some policies (e.g., Medicare changes, insurance rules), the cost is comparatively trivial.

But there’s another question: What if a decent research design is used to measure the effects of a large policy in a select population, but all you get is a tiny “effect”? What do we know? What should policymakers do? Here’s what we wrote in our recent editorial in US News & World Report…

 

May 23, 2015 | health care

Unpublished trial data ‘violates an ethical obligation’ to study participants, say researchers

Flowchart of the four phases (enrollment, intervention allocation, follow-up, and data analysis) of a parallel randomized trial of two groups, modified from the CONSORT 2010 Statement: Schulz KF, Altman DG, Moher D; for the CONSORT Group (2010). “CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials”. BMJ 340: c332. doi:10.1136/bmj.c332. PMC 2844940. PMID 20332509. (Photo credit: Wikipedia)

 

From the 29 October 2013 British Medical Journal press release

 

Study finds almost 1 in 3 large clinical trials still not published 5 years after completion

Almost one in three (29%) large clinical trials remain unpublished five years after completion, and of these, 78% have no results publicly available, finds a study published on bmj.com today.

This means that an estimated 250,000 people have been exposed to the risks of trial participation without the societal benefits that accompany the dissemination of their results, say the authors.

They argue that this “violates an ethical obligation that investigators have towards study participants” and call for additional safeguards “to ensure timely public dissemination of trial data.”

Randomized clinical trials are a critical means of advancing medical knowledge. They depend on the willingness of people to expose themselves to risks, but the ethical justification for these risks is that society will eventually benefit from the knowledge gained from the trial.

But when trial data remain unpublished, the societal benefit that may have motivated someone to enrol in a study remains unrealized.

US law requires that many trials involving human participants be registered – and their results posted – on the largest clinical trial website, ClinicalTrials.gov. But evidence suggests that this legislation has been largely ignored.

So a team of US-based researchers set out to estimate the frequency of non-publication of trial results and, among unpublished studies, the frequency with which results are unavailable in the ClinicalTrials.gov database.

They searched scientific literature databases and identified 585 trials with at least 500 participants that were registered with ClinicalTrials.gov and completed prior to January 2009. The average time between study completion and the final literature search (November 2012) was 60 months for unpublished trials.

Registry entries for unpublished trials were then reviewed to determine whether results for these studies were available in the ClinicalTrials.gov results database.

Of 585 registered trials, 171 (29%) remained unpublished. Of these, 133 (78%) had no results available in ClinicalTrials.gov. Non-publication was more common among trials that received industry funding (32%) than those that did not (18%).
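As a quick sanity check, the headline percentages follow directly from the counts reported above; this small Python snippet simply reproduces that arithmetic, using nothing beyond the counts quoted in the press release.

# Reproduce the quoted percentages from the reported counts:
# 585 registered trials, 171 unpublished, 133 of those with no results posted.
registered = 585
unpublished = 171
no_results_posted = 133

print(f"Unpublished after 5 years: {unpublished / registered:.0%}")                # 29%
print(f"No results among the unpublished: {no_results_posted / unpublished:.0%}")  # 78%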

“Our results add to existing work by showing that non-publication is an important problem even among large randomized trials,” say the authors. Furthermore, the sponsors and investigators of these unpublished trials infrequently utilize the ClinicalTrials.gov results database.

The lack of availability of results from these trials “contributes to publication bias and also constitutes a failure to honor the ethical contract that is the basis for exposing study participants to the risks inherent in trial participation,” they add. “Additional safeguards are needed to ensure timely public dissemination of trial data,” they conclude.


October 30, 2013 | Medical and Health Research News

Evidence-Based Medicine: Not the Holy Grail?

And don’t miss the lively discussion at the end of the article.

When self-evident truth in medicine is systematically ignored (KevinMD.com article of June 3, 2012)

Some things in medicine are obvious. Despite the endless worship of ‘evidence-based’ medicine, and the constant barrage of studies on every conceivable topic, we do certain things because we know they just seem right. I take as evidence the fact that we daily try to save lives, devoting research time, untold gazillions of dollars, and heroic clinical effort to our continued goal of staving off death. Why is this? Do we know that death is inherently worse than life? Well, since we can’t see beyond the grave, and can’t exactly engage in double-blind, placebo-controlled studies about the after-life, the answer is “no.” But we assume that life is preferable to death, based on our feelings, our sense of the thing.

 

The same is true in our personal lives.  No one can show me a scientific study that details why he or she married a particular person.  No one can offer up a mole of affection for empiric analysis.  And yet, we don’t doubt the existence of romance, or the reality of love.

And yet, medicine is filled with situations in which “self-evident truth” is systematically ignored, and those who believe in it intentionally and often viciously marginalized.

For example, after years of being told that physicians weren’t giving enough treatment for pain, and after years of clinicians saying, “yes we are, and too many people are addicted and abusing the system,” the data from the CDC say that far too many people are dying from prescription narcotics, far too many infants are being born addicted, and far too many people, young and old, are using analgesics and other drugs not prescribed for them. To which many of us say, “duh!”

And then there’s the customer service model, the thing which causes clinicians to lose their jobs as satisfaction scores fall due to disgruntled patients (often upset over not receiving the drug they desired … see above paragraph).  This is a darling of administrators.  And it clearly has flaws…

June 4, 2012 | health care
