That advice might also apply to Tamar Oostrom, assistant professor of economics at The Ohio State University, when she suspected something rotten about clinical trials for psychiatric drugs.
In short, the drugs are mysteriously reported as being 50 percent more effective when their manufacturers finance the trials than when other groups sponsor trials of the same drug comparisons.
In what she calls the “sponsorship effect,” Oostrom said, “I compared different clinical trials in which the exact same pairs of drugs are compared for their effectiveness—the only substantial difference being who funded the study.”
Oostrom focused on 509 published clinical trials. One subject of her study was the antidepressant Effexor, manufactured by Wyeth Pharmaceuticals and available since 1993. Oostrom found that, in 12 of the 14 trials funded solely by Wyeth, Effexor beat out Prozac in effectiveness. (Note: Side effects of Effexor include dizziness, breathing problems, confusion, buzzing in the ears, hypertension, hallucinations, seizures and suicidal thoughts or actions. Side effects of Prozac include nosebleeds, reckless behavior, seeing or hearing unusual things, dizziness, confusion, seizures, muscle tremors and suicidal thoughts or actions. Choosing between the two is something like deciding what’s best to put in your morning smoothie: cyanide or rat poison.)
But though Effexor beat out Prozac in Wyeth trials, trials conducted by other funding agencies were a different story: Prozac beat out Effexor in two out of three of those.
How could this be? All tests were conducted using the "double-blind, randomized controlled" method, a type of clinical trial in which neither the participants nor the researchers know which treatment is given to whom until the trial is over. In other words, both the participants and the researchers are kept "blind" to the assignments (hence the name "double-blind"), removing any hint of bias that could affect the results. This design is considered the gold standard for studying how effective a drug is precisely because it is structured to eliminate bias.
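The mechanics of that blinding can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the participant labels, codes, and function names are invented, not from any real trial system): each participant is randomly assigned a coded treatment, and the key linking codes to drugs is held back until the trial ends.

```python
import random

def randomize_trial(participants, seed=0):
    """Randomly assign participants to drug A or B behind opaque codes.

    Neither participants nor researchers see the key dict until the
    trial is unblinded; they work only with the anonymous codes.
    """
    rng = random.Random(seed)
    assignments = {}  # participant -> code (what researchers see)
    key = {}          # code -> drug (sealed until unblinding)
    for i, person in enumerate(participants):
        code = f"SUBJ-{i:03d}"
        assignments[person] = code
        key[code] = rng.choice(["A", "B"])
    return assignments, key

assignments, key = randomize_trial(["p1", "p2", "p3", "p4"])
```

During the trial, outcomes are recorded against codes like `SUBJ-002`; only after data collection closes is `key` opened to reveal which code received which drug.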
So how on Earth are the same clinical trials, done on the same drug, using the same procedures, giving different results depending on who funds them?
Was someone bribed? Were the results falsified? Was there something about the nature of the clinical trials—a variance of severity of symptoms between the manufacturer-funded research and those tests funded by others?
Oostrom dug deeper.
She looked at who published the studies and what studies were published. She found this startling fact: “Trials funded by manufacturers in which their drug appears more effective are more likely to be published. That connection between outcomes and publication doesn’t appear to happen as much when there are other funders,” she said.
Moreover, she found 77 drug trials whose findings were never published in scientific journals. Adding these unpublished trials to her analysis greatly reduced that 50 percent sponsorship effect.
“Most of the sponsorship effect can be explained by publication bias,” she concluded.
Publication bias. Another can of worms to open. What will it reveal? Collusion between drug manufacturers and the scientific journals that favor them? Kickbacks? Quid pro quo? Just as the full Watergate scandal finally yielded its secrets, so too will this story reach its denouement in due time.
The old saw goes, “There is no honor among thieves.”
Possibly, that could be amended to: “There is no honor among competing drug companies.”
As always, the public is the only real loser in this $1.5 trillion-a-year game.