Name:
Therapy (Randomized Trials): Gordon Guyatt, MD, MSc, discusses "Chapter 6: Therapy (Randomized Trials)" from the Users' Guides to the Medical Literature.
Description:
Therapy (Randomized Trials): Gordon Guyatt, MD, MSc, discusses "Chapter 6: Therapy (Randomized Trials)" from the Users' Guides to the Medical Literature.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/fd1e95b4-5d2b-41fd-a90d-2b1c9ae976e4/thumbnails/fd1e95b4-5d2b-41fd-a90d-2b1c9ae976e4.jpg?sv=2019-02-02&sr=c&sig=qjoeo9PZ58m8emwd4onxkjh3P6HcNA%2BBJTOLC2WJSpk%3D&st=2024-12-22T06%3A15%3A57Z&se=2024-12-22T10%3A20%3A57Z&sp=r
Duration:
T00H15M06S
Embed URL:
https://stream.cadmore.media/player/fd1e95b4-5d2b-41fd-a90d-2b1c9ae976e4
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/fd1e95b4-5d2b-41fd-a90d-2b1c9ae976e4/6830457.mp3?sv=2019-02-02&sr=c&sig=Bo2oqOjO2nFACR%2BFjkQkgW3HwPDNY5ayXFPhDd2pibU%3D&st=2024-12-22T06%3A15%3A57Z&se=2024-12-22T08%3A20%3A57Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
>> I'm Joan Stephenson, Editor of JAMA's Medical News and Perspectives section. Today, this JAMAevidence podcast will focus on interpreting published articles that report the results of randomized controlled trials and using the information to guide clinical practice. Our guest expert is Dr. Gordon Guyatt. Dr. Guyatt, why don't you introduce yourself to our listeners? >> I am a Professor of Medicine and of Clinical Epidemiology at McMaster University. I work clinically as a Hospitalist and spend quite a bit of my time on medical education and research.
>> Dr. Guyatt, would you please describe the three-step approach to using an article from the medical literature to guide your practice? >> Having decided that the article is relevant to the clinical question of interest, what one is trying to find out from the literature, one then looks at the validity. That's the first step. And by validity, we mean risk of bias or credibility of the results. A study that is valid is one that is likely to have a low risk of bias.
The second step is to look at the results. So, for instance, if it is a treatment study, we're looking at what is the relative risk or relative risk reduction associated with the intervention, if it indeed reduces risk at all, and how precise are the estimates. And the precision of the estimates the clinician can find by looking at the confidence intervals around the point estimate of the effect. And the final step is applicability. Do the results apply to my particular patient? Will my patient be interested?
Are the effects of treatment big enough, given the side effects and cost, that my patient will be interested? And have all the important outcomes been measured? Sometimes studies report some but not all of the patient-important outcomes, and that limits the applicability. >> Why are published articles reporting the results of randomized controlled trials, or RCTs, considered a higher level of evidence than other types of studies? >> The issue is sometimes labeled in technical language as selection bias, but it really has to do with prognostic differences.
So, if one were simply saying we take patients in the community and we look at individuals who received a drug and those who did not receive a drug and then we see how they do and we will make inferences that if the people receiving the drug do better than those who did not receive the drug then the drug must work. The problem with that is that it may not be the drug at all but it may be the destiny of the individuals who did and did not receive the drug.
In other words, prognostic imbalance. So, the people who received the drug for one reason or another may have had a better prognosis than those who did not receive the drug. The wonderful thing about randomization is it creates, if sample sizes are sufficient, prognostic balance. So, people start off with the same fate or destiny and in the end, one can then, if there are differences, attribute it to the treatment rather than baseline prognostic imbalance.
>> Why do we need to go through the time, the trouble, and the expense of a randomized controlled trial when observational studies are so much easier, cheaper, and quicker? >> Well, I have just given you the theoretical reason and the theoretical reason is that randomized trials create prognostic balance where at the end of an observational study we may be unsure whether apparent differences between treatment and control are really due to the treatment or to the fact that the people receiving the intervention were destined to do better than those in the control group irrespective of the treatment.
And why we continue to insist on randomized trials, or one reason we continue, is the very unfortunate consequences that have, at times, occurred when we have relied on observational studies. Perhaps the most dramatic of these is the case of hormone-replacement therapy, where observational studies suggested a substantial decrease in cardiovascular risk in women receiving hormone-replacement therapy. As a result of these studies, the clinical world believed the results, and women across the world for pretty close to a decade were encouraged to use hormone-replacement therapy in large part to reduce their cardiovascular risk.
As it turns out, when the randomized trials were done, hormone-replacement therapy did not reduce cardiovascular risk at all and may even have increased cardiovascular risk. So, women were given the wrong advice because people put excessive faith in the observational studies. Another example of the same phenomenon did less damage but is still very much an issue: antioxidant vitamins. Antioxidant vitamins in the observational studies were suggested to reduce cancer and cardiovascular risk.
Lots of people took antioxidant vitamins to achieve that. As it turns out, it was healthier people who tended to take antioxidant vitamins, perhaps not surprisingly, and it was the fact that they were healthier at the start, rather than the antioxidant vitamins, that created the apparent effect. When randomized trials were done, there were no beneficial effects on either cancer or cardiovascular risk with antioxidant vitamins and, once again, possibly some deleterious consequences.
>> We understand that randomization is one of the keys to balancing the study groups, but randomization alone is not sufficient. Blinding is an additional strategy that is needed. Can you describe how blinding in an RCT works and the five groups that should be unaware of whether patients are receiving the experimental therapy or control therapy in an RCT? >> Well, randomization creates groups that are prognostically balanced at baseline, but there is no guarantee that they will stay prognostically balanced. Randomization deals with the bias of prognostic imbalance, but there's no guarantee that other biases won't intrude.
It's not always possible to blind (in surgical trials it is much more difficult), but, as much as possible, there are five groups who should be blinded. The first are the patients. And the major reason for blinding patients is to avoid placebo effects. People who believe they're getting effective therapy tend to do better than those who do not, irrespective of any biologic effect. Clinicians looking after the patients should be blinded, and the major reason for that is to avoid co-intervention; that is, the differential administration of interventions that may benefit patients and affect the outcomes of interest in the intervention and control groups.
Those collecting outcome data would ideally be blinded. So, for instance, if the outcome is a stroke, those who document the patient's presentation and the results of the CT scan or other imaging procedures should be unaware of whether patients received treatment or control. And those adjudicating the outcome, in other words, if the outcome is stroke, once the data has been collected, somebody has to make a decision whether the patient is to be classified as having a stroke or not having a stroke, and that adjudicator should be blind to whether patients are receiving intervention or control.
The final group or role that should be blinded is the data analyst. And you might say, well, if the data analyst can't see the data that's not going to be much good. What we mean by blinding the data analyst, though, is that when they analyze the data, they have the groups. What they know about the groups is not intervention or control but rather A and B, and they're not sure whether A is the intervention group or A is the control group. >> After the whole trial is over, how does a physician know if the results apply to his or her patient?
>> The best way is to look at the eligibility criteria and compare the characteristics of your patient to those of the patients who were enrolled, which are typically described in the trial's Table 1. So, if your patient meets the eligibility criteria and, when you look at the characteristics of the patients, the age distribution, the sex distribution, and so on, it all fits well with your patient, then you know you are in the clear in terms of applying the results to the patient; however, be careful.
If the patient looks somewhat different, doesn't quite meet eligibility criteria, it doesn't necessarily mean the results are inapplicable to your patient. So, the question you want to ask yourself is, is the biology of the condition and of the treatment such that you would expect very different results, very different impact of the intervention in your patient than in the ones enrolled in the trial? So, for instance, if a trial enrolled patients from 40 to 80 and your patient is 81, there is probably no reason to think that the biology of your 81-year-old patient is different from the biology of the 40- to 80-year-olds who participated in the trial.
And, in general, when we have looked to see if there are differences in effects between men and women, between people of different racial backgrounds, between people receiving co-interventions and not receiving co-interventions, and so on, they have tended not to show differences. So, although one is in an ideal position if your patient meets the eligibility criteria for the study, even if your patient doesn't, the question to ask is, is the biology of your patient sufficiently different that you would expect a substantially different treatment effect?
And if the answer is no, the results may still well be applicable to your patient. >> Dr. Guyatt, could you please describe the concept of number needed to treat, otherwise known as NNT? >> Traditionally, people have said if a treatment is effective then perhaps all my patients should receive that treatment. But nowadays, we are doing very large trials which are capable of detecting very small treatment effects.
And every intervention we have has side effects and costs and inconvenience associated with it, and it may well be that, although a treatment is effective, the benefits do not outweigh the downsides of side effects, potential rare long-term toxicity, costs, and inconvenience. So, we want to not only say does this work but how much does it work? And then the issue is quantifying how much it works. And the number needed to treat, or NNT, is one way of doing that.
So, for example, let's assume a treatment reduces the risk by half. That sounds very impressive. And it's at least fairly impressive if the risk of the adverse event, such as death or stroke or heart attack, was 10% to start with and treatment reduces it by half to 5%. If one divides 100% by the 5% absolute risk reduction, one ends up with an NNT, number needed to treat, of 20.
That means that you are required to treat 20 people to have one individual who would otherwise have had a stroke or heart attack or whatever you're trying to avoid, one individual who otherwise would have had that event, not have that event. However, with that same 50% relative risk reduction, if the rate of adverse events, the likelihood of having the event if untreated, is only 1% and treatment reduces it to 0.5%, then the NNT will be 100 over 0.5, or 200.
That means we have to treat 200 individuals to prevent a single individual from experiencing an event. And in the latter situation, it may well be that the downsides of the treatment make it not worthwhile for many, if not most, patients. >> Is there anything else JAMAevidence users should know about interpreting published articles that report results of RCTs? >> Well, unfortunately, what they should know is that there is a lot of spin going on.
And, in particular, in trials that are supported by the pharmaceutical industry, and in which the investigators are receiving large amounts of support from the pharmaceutical industry, there are many, many examples where the results are presented in an excessively favorable way in terms of the intervention. Nowadays, in the large trials, or sometimes the small trials, that are industry-sponsored and conducted by industry, there is often no problem with validity, thinking back to the first step in the three-step approach to using an article, the risk of bias.
They do it very well. It's randomized, randomization is concealed, people are blinded, they follow their patients very well, but then they spin their results in a way that makes treatment seem better than it really is: more beneficial and with fewer downsides than is really the case. We suggest a number of strategies, and there's a chapter in the Users' Guides book devoted to that, but if I were going to give one strategy, don't read the discussions.
That is where the spin comes in. In fact, if clinicians get good enough at reading the literature that they can get the message and understand what happens in the methods and the results, that is what they should use, with their own interpretation, rather than the interpretation given by the authors, which may be influenced by the authors' intellectual and financial conflicts of interest. >> Thank you, Dr. Guyatt, for your insights on interpreting published articles that report the results of RCTs.
For more information on this topic, JAMAevidence subscribers can consult Chapter 6 of the Users' Guides to the Medical Literature, coauthored by Dr. Guyatt. This has been Joan Stephenson of JAMA interviewing Dr. Gordon Guyatt about interpreting the published results of clinical trials for JAMAevidence.