Name:
Why Study Results Mislead: Gordon Guyatt, MD, discusses bias, random error, and why study results sometimes mislead.
Description:
Why Study Results Mislead: Gordon Guyatt, MD, discusses bias, random error, and why study results sometimes mislead.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/f3576fd6-dd39-404e-815e-0b95e3a2c60c/thumbnails/f3576fd6-dd39-404e-815e-0b95e3a2c60c.jpg?sv=2019-02-02&sr=c&sig=6HGAJRitLrZcjfTnupmxXQxYGzosELnknROl8QskMxw%3D&st=2025-01-15T06%3A47%3A52Z&se=2025-01-15T10%3A52%3A52Z&sp=r
Duration:
T00H07M34S
Embed URL:
https://stream.cadmore.media/player/f3576fd6-dd39-404e-815e-0b95e3a2c60c
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/f3576fd6-dd39-404e-815e-0b95e3a2c60c/6830507.mp3?sv=2019-02-02&sr=c&sig=G22yZZuA%2FpsLKHiyEVzjyzw1p5A2LRML4RGl13e2sCo%3D&st=2025-01-15T06%3A47%3A52Z&se=2025-01-15T08%3A52%3A52Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
>> I'm Joan Stephenson, editor of JAMA's Medical News and Perspectives section. Today I have the pleasure of once again speaking with Dr. Gordon Guyatt, this time about why study results might be misleading because of bias or random error. This topic is covered in chapter five of Users' Guides to the Medical Literature, which today's guest coauthored. Dr. Guyatt, why don't you introduce yourself to our listeners? >> I'm Gordon Guyatt. I'm a professor of medicine at McMaster University. >> Dr. Guyatt, in what ways might a study be biased?
>> Well, by biased we mean a systematic difference from the truth, so that whatever the true results are, a study shows something different. And there are many examples of biased results that have been widely disseminated to patients' detriment. So I'll just talk about two of them to start off with. Hormone replacement therapy: the results from observational studies suggested there would be a reduction in cardiovascular disease and cardiovascular mortality as a result of hormone replacement therapy, which was widely pushed as a result.
And it turns out it didn't produce that reduction at all. Drugs for treating rhythm disturbances in the heart: it was suggested that they might lower mortality, and they ended up increasing mortality, with an estimate that more people died from these drugs than died in Vietnam. So these are examples of biased results from observational studies showing treatment effects when, in fact, no treatment effects existed, or showing benefit when there was harm. Bias, which is systematic deviation from the truth, happens less in randomized trials.
But in randomized trials, we still need to have additional safeguards against bias, which include blinding and completeness of follow-up. >> How can investigators reduce bias in studies of therapy and harm? >> So, ideally, we do randomized trials, because in the hormone replacement therapy example, for instance, the women taking hormone replacement therapy really did do better, but it had nothing to do with the hormone replacement therapy.
Instead, it had to do with the fact that these were people who were destined to do better anyway, either because they took care of themselves better or because their living situation was better. So, to avoid that problem, we do randomized trials, in which the decision about whether you receive treatment or control is made by a process analogous to flipping a coin. Randomized trials can further protect against bias by ensuring that the patients and healthcare providers who participate in the trial are unaware of whether patients are receiving active drug or placebo, which is a medication, for instance, that is indistinguishable from the active drug but does not contain the active ingredient.
In addition, it's possible to lose patients as the trial goes along. You lose track of them. And investigators can avoid bias by making sure they follow as many people as possible and, ideally, everyone to the end of the study. >> Can you please explain the concept of random error to our listeners? >> So no matter how well the study is done, no matter how well we protect against bias, we will never be certain of the truth. And the reason for that is random error.
So, for instance, let us take an unbiased coin. If we flip an unbiased coin ten times, we are not always going to get five heads and five tails. Sometimes, we even might have an extreme, even though it's an unbiased coin, of getting ten heads and no tails at all. And the reason for that is chance, which in the context of medical research, we refer to as random error. And on one occasion, I had the best answer from a medical student about why it is that we don't get five heads and five tails when we flip a coin ten times, why we might get very different results.
And the answer was: that's not the way the world is. One could imagine a world in which you always would get five heads and five tails from an unbiased coin, but it's not the one we live in. And that play of chance occurs not only in coin flips, but in medical research, where in the same way as an unbiased coin can give an extreme result simply by the play of chance, we can get differences in medical studies: results that are quite different from the truth, even though we've eliminated bias, or systematic error, simply because of this random error.
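[Editor's note: the coin-flip point above can be made concrete with a small simulation. This sketch is not from the interview or the chapter; the function name and trial counts are illustrative choices. It estimates how often ten flips of a fair coin give exactly five heads, and how often they give the extreme result of ten heads.]

```python
import random

def flip_trials(n_flips=10, n_trials=100_000, seed=1):
    """Repeatedly flip a fair coin n_flips times and tally the head counts.

    Returns the estimated probability of exactly half heads and of all heads.
    """
    rng = random.Random(seed)
    counts = [
        sum(rng.random() < 0.5 for _ in range(n_flips))
        for _ in range(n_trials)
    ]
    exactly_five = sum(c == n_flips // 2 for c in counts) / n_trials
    all_heads = sum(c == n_flips for c in counts) / n_trials
    return exactly_five, all_heads

p_five, p_ten = flip_trials()
# Exact binomial values: P(5 of 10) = 252/1024 ≈ 0.246, P(10 of 10) = 1/1024 ≈ 0.001.
print(f"exactly 5 heads: {p_five:.3f}, all 10 heads: {p_ten:.4f}")
```

Even with a perfectly unbiased coin, "five and five" occurs only about a quarter of the time, and roughly one run in a thousand gives all heads, which is the play of chance Dr. Guyatt describes.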
>> Dr. Guyatt, why is it important to be aware of random error when evaluating the results of a randomized clinical trial? >> The reason is that if you restrict yourself solely to assessing risk of bias, you can find a trial where the randomization was beautifully done, where physicians and healthcare providers and patients and those judging whether an outcome has occurred are all blinded, and where nobody was lost to follow-up, and its results may still be incorrect.
And indeed, we have a lot of examples where, with relatively small sample sizes and small numbers of events, results were far from the truth simply by chance. And fortunately, in the situations we know about anyway, we found out about that, because other studies with larger sample sizes, in fact, ended up showing us what the truth was. So the smaller the sample size and the smaller the number of events, the more susceptible studies are to the play of chance, and the more the play of chance may mislead us.
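[Editor's note: the sample-size point can also be illustrated with a simulation. This is an editorial sketch, not material from the interview; the event rate, arm sizes, and function names are assumptions chosen for illustration. It simulates trials in which the treatment truly has no effect and shows that small trials can, by chance alone, produce large apparent risk differences, while large trials cannot.]

```python
import random

def estimated_risk_difference(n_per_arm, true_risk=0.2, rng=None):
    """One simulated two-arm trial with IDENTICAL true event risk in both arms.

    Any nonzero risk difference it returns is pure random error.
    """
    rng = rng or random.Random()
    treat_events = sum(rng.random() < true_risk for _ in range(n_per_arm))
    control_events = sum(rng.random() < true_risk for _ in range(n_per_arm))
    return (treat_events - control_events) / n_per_arm

def worst_chance_finding(n_per_arm, n_trials=1000, seed=7):
    """Largest spurious |risk difference| seen across many null trials."""
    rng = random.Random(seed)
    return max(
        abs(estimated_risk_difference(n_per_arm, rng=rng))
        for _ in range(n_trials)
    )

small = worst_chance_finding(20)    # 20 patients per arm
large = worst_chance_finding(2000)  # 2000 patients per arm
print(f"worst spurious effect, small trials: {small:.3f}; large trials: {large:.3f}")
```

With 20 patients per arm, chance alone can manufacture apparent risk differences of 20 percentage points or more; with 2,000 per arm, the spurious effects shrink dramatically, which is why large definitive trials are the safeguard against random error.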
And that's why we try to do large, definitive trials to minimize the play of chance. >> Sounds like a good lesson for life as well as for clinical trials. >> You're absolutely right. And it's interesting: being in the sort of medical research we do, our sense of skepticism and taking care definitely extends beyond the research arena. >> Is there anything else you would like to tell our listeners about bias, random error, and why study results may mislead? >> Simply that caution is always warranted.
So, because bias and random error will always be at play, sometimes more difficult to detect than at other times, jumping onto new treatments that appear very beneficial may turn out not to be in patients' best interest. So when something new comes along, it's exciting, but people should be careful, and in many instances demand replication before administering treatments, particularly if they have substantial adverse effects, cost, or burden to the patient.
>> Many thanks, Dr. Guyatt, for this overview of why results from studies might mislead. For additional information about this topic, JAMAevidence subscribers can consult chapter five in Users' Guides to the Medical Literature. This has been Joan Stephenson of JAMA talking with Dr. Gordon Guyatt for JAMAevidence.