RCTs: All That’s Gold Standard Doesn’t Glitter

 

 



 

 

Over the next couple of days I’ll be highlighting some of the interesting abstracts of slide presentations to the ICEID 2010 conference going on in Atlanta this week.

 

Academic conference presentations are not in the same league as peer-reviewed journal articles, but they do give us an important and early look at research being conducted around the world.

 

Many of these presentations will eventually end up in peer-reviewed journals, but that can take a year or longer.

 

Meanwhile, important information and avenues of research may languish. These presentations are therefore of keen interest, even if they haven’t been subjected to peer-review.

 

So as you read these abstracts, and follow news reports from this conference, I’d recommend a bit of caution. 

 

But even peer-reviewed RCTs (Randomized Controlled Trials), long considered the `gold standard’ for scientific research, deserve a dash of skepticism on the part of the reader, even when published in prestigious journals.

 

Today a cautionary note from Johns Hopkins Medicine on RCTs. From the press release below, here is the `money quote’, but follow the link to read the whole thing (emphasis mine).

 

Overall, 41 percent of the 146 trials in the review had improper or poorly described randomization techniques. Industry-funded trials were six times more likely to have high risk for biased randomization than government-funded trials or those funded by nonprofit organizations.

 

 

First, this press release from Johns Hopkins (hat tip @Lizsherer) on potentially flawed RCT pediatric studies, followed by a few words on my part.

 

 

Pediatric Clinical Studies Appear Prone to Bias

Released: 7/9/2010 8:00 AM EDT
Embargo expired: 7/12/2010 12:05 AM EDT
Source:
Johns Hopkins Medicine

-Better design, reporting urged to ensure accurate results

Newswise — A Johns Hopkins review of nearly 150 randomized controlled trials on children — all published in well-regarded medical journals — reveals that 40 to 60 percent of the studies either failed to take steps to minimize risk for bias or to at least properly describe those measures.

 

A report of the team’s findings in the August issue of Pediatrics shows that experimental trials sponsored by pharmaceutical or medical-device makers, along with studies that are not registered in a public-access database, had higher risk for bias. So did trials that evaluated the effects of behavioral therapies rather than medication, the report states.

 

“There are thousands of pediatric trials going on in the world right now and given the risk that comes from distorted findings, we must ensure vigilance in how these studies are designed, conducted and judged,” says lead investigator Michael Crocetti, M.D., M.P.H., a pediatrician at Johns Hopkins Children’s Center. “Our review is intended as a step in that direction.”

 

Considered the gold standard of medical research, the hallmark of double-blind randomized controlled trials (RCT) is a design that rules out or accounts for actual or potential bias. Results of such studies, when peer-reviewed and published in reputable medical journals, can influence the practice of medicine and patient care. A poorly designed or executed trial can therefore lead researchers to erroneous conclusions about the effectiveness of a drug or a procedure.

 

Citing the degree of bias risk in the studies they reviewed, the researchers caution pediatricians to be critical readers of studies, even in highly respected journals.

(Continue . . . )
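(As an aside, it may help to see what `proper randomization’ looks like in practice. What follows is my own minimal sketch of permuted-block randomization, one common technique, and is not taken from the Hopkins review: assignments are shuffled within small blocks, so group sizes stay balanced while the next assignment remains unpredictable.)

```python
import random

def permuted_block_randomization(n_subjects, block_size=4, seed=None):
    """Assign subjects to 'treatment'/'control' in shuffled blocks so that
    group sizes stay balanced and upcoming assignments are unpredictable."""
    # An even block size gives 1:1 allocation within each block.
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_subjects:
        # Each block holds an equal number of treatment and control slots...
        block = (["treatment"] * (block_size // 2) +
                 ["control"] * (block_size // 2))
        # ...in a fresh random order.
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_subjects]

schedule = permuted_block_randomization(10, block_size=4, seed=42)
print(schedule)
```

A trial that instead assigned patients by, say, day of admission or chart number would be exactly the kind of `improper randomization’ the reviewers flagged, because clinicians can anticipate (and consciously or not, influence) which arm the next patient lands in.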

 

 

First and foremost, science is messy, and scientists are far from infallible.

 

Which is why I am always a little bit skeptical when I read the conclusions of the latest whiz-bang scientific study or a press release announcing an exciting new advance in medicine. 

 

Not because I harbor conspiratorial beliefs, or a deep suspicion of the motives of scientists . . . but because I view scientific discovery as a journey. . . a learning process . . . not a destination.

 

Advances in science are anything but linear, and very often we find ourselves sidetracked or detoured down some flawed alley of investigation along the way.   

 

What we know, or what we think we know, is constantly changing.  This is particularly true in medicine.

 

When I was a young paramedic, 35 years ago (back when dinosaurs roamed the earth), every doctor knew that the very first thing you did for someone in cardiac arrest (after initiating CPR) was to give them a bolus of 1 or 2 amps of Sodium Bicarb to reverse the inevitable acidosis brought on by respiratory arrest.

 

You did this even before attempting to defibrillate, since conventional wisdom said that you couldn't cardiovert an acidotic heart.

 

And so 2 amps of bicarb went in as a matter of course.  Because everyone knew that was the right thing to do.

 

Trouble is, even with our cardiac meds and defibrillators and advanced training, we were losing a lot of patients.  By the mid-1980s it became apparent that the bolus of bicarb wasn't helping, and in fact was probably hurting patients.

 

By 1986 several scientific studies had demonstrated that rapid provision of effective ventilation and artificial circulation were entirely adequate means of managing the small amount of respiratory (or metabolic) acidosis that accompanied common cardiac arrests.

 

Administration of even 1 amp of Bicarb was linked to poorer outcomes, and so the automatic administration of it was removed from the ACLS protocols in 1986.

 

How could we have gotten it so wrong?

 

 

(Note: Use of Bicarb (NaHCO3), while controversial, may still be considered in some cases of prolonged cardiac arrest, particularly in cases of asystole). 

 

 

 

What seemed like a perfectly good idea in 1975 had become obsolete (indeed, even regarded as dangerous) by 1986.  Studies were conducted, and while initial survival rates increased with bicarb administration, long-term survival rates were lower.

 

 

A result not unlike what was found a few years ago with the use of high-dose steroid treatment for SARS: it increased short-term survival, but over the long term it turned out to be detrimental.

      

 

No doubt, some of what we believe to be true or prudent today may be disproved or abandoned five or ten years from now.

 

Absolutes in science are hard to find.  And the process of determining scientific `fact’ can be both arduous and prolonged.

 

While I try to highlight only reputable studies, I offer the admonition of Caveat Lector for anything you read here (or anyplace else for that matter).
