Name:
Summarizing the Evidence: Gordon Guyatt, MD, discusses "Chapter 19: Summarizing the Evidence" from the Users' Guides to the Medical Literature.
Description:
Summarizing the Evidence: Gordon Guyatt, MD, discusses "Chapter 19: Summarizing the Evidence" from the Users' Guides to the Medical Literature.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/c4a7401f-c945-4bdd-9162-43eda1936a68/thumbnails/c4a7401f-c945-4bdd-9162-43eda1936a68.jpg?sv=2019-02-02&sr=c&sig=TTMOtz5CZuXEEayMY1KgfI8Qj%2BPbkX8acQUnYuN%2Fr8k%3D&st=2024-12-21T13%3A55%3A59Z&se=2024-12-21T18%3A00%3A59Z&sp=r
Duration:
T00H10M39S
Embed URL:
https://stream.cadmore.media/player/c4a7401f-c945-4bdd-9162-43eda1936a68
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/c4a7401f-c945-4bdd-9162-43eda1936a68/6830465.mp3?sv=2019-02-02&sr=c&sig=9U4IxdkpBFsP6IHv4y78pY1aflL2m5bvbsYnobltq2U%3D&st=2024-12-21T13%3A55%3A59Z&se=2024-12-21T16%3A00%3A59Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
>> I'm Joan Stephenson, Editor of JAMA's Medical News and Perspectives section. Today, I have the pleasure of speaking with Dr. Gordon Guyatt about Chapter 19 of Users' Guides to the Medical Literature. This chapter, co-authored by Dr. Guyatt, discusses evaluating evidence presented in review articles and applying the results of this evaluation to patient care. Dr. Guyatt, why don't you introduce yourself to our listeners? >> I'm Gordon Guyatt. I'm a Professor of Medicine at McMaster University and one of the editors championing the Users' Guides.
>> Dr. Guyatt, what are the differences between narrative and systematic reviews? >> Traditional narrative reviews have typically covered a wide variety of topics. So, they might cover the epidemiology of a condition, the diagnosis, lay out various therapeutic options, talk about a number of them, and discuss long-term outcomes. Systematic reviews address targeted or focused questions. So, a therapeutic systematic review would typically ask: in a defined group of patients, what is the effect of intervention A versus intervention B on a particular outcome or outcomes of interest?
And the other major thing, aside from the scope of a systematic review, is that it is systematic. And by that, we mean that it introduces and carries out a number of strategies to reduce bias and to ensure an accurate conclusion to the systematic review. >> What is the distinction between a systematic review and a meta-analysis? >> A systematic review is a broader category of which a meta-analysis is a subcategory.
So, I mentioned just a minute ago that a systematic review is systematic because it has various strategies for reducing bias. Those strategies include explicit eligibility criteria for the review. Those eligibility criteria include methodologic criteria, which means that only studies that meet certain quality criteria are included. So, for instance, if it's a therapy question, we might only include randomized trials.
It undertakes a comprehensive search of the literature to find eligible studies. When eligible studies have been identified, it looks at the quality of those studies. So, for instance, if we only included randomized trials, it would also assess whether those randomized trials were blinded and the extent to which they had complete follow-up. And then, finally, you extract the data. So you have all these processes of explicit eligibility criteria, comprehensive search, and extraction of the data, a number of them conducted in duplicate; at that point, you conduct the meta-analysis if it is appropriate to do so.
And a meta-analysis looks at the data from a number of studies and generates a single best estimate of the magnitude of effect and our confidence in that effect represented by the confidence interval. So, the systematic review is that systematic process of finding and generating the evidence to get an unbiased estimate, and a meta-analysis is the mathematical or statistical process where one uses the data one has collected to generate a single-best estimate of effect.
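The pooling Dr. Guyatt describes can be sketched as a fixed-effect (inverse-variance) meta-analysis, one common way of generating a single best estimate with a confidence interval. This is a minimal illustration with invented trial data, not code or data from the chapter:

```python
import math

def fixed_effect_meta(effects, ses):
    """Pool per-study effect estimates (e.g. log odds ratios) using
    inverse-variance weights; return the pooled estimate and 95% CI."""
    weights = [1.0 / se**2 for se in ses]          # precise studies weigh more
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))      # SE of the pooled estimate
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Three hypothetical trials: log odds ratios and their standard errors.
effects = [-0.3, -0.5, -0.1]
ses = [0.15, 0.25, 0.2]
pooled, (lo, hi) = fixed_effect_meta(effects, ses)
```

Each study is weighted by the inverse of its variance, so larger, more precise trials dominate the pooled estimate, and the pooled confidence interval is narrower than that of any single contributing study.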
>> What else can you tell us about the process of conducting a systematic review? >> Well, I've been through that, to an extent, in the answer to the last question. One first of all starts by defining eligibility criteria, which typically have to do with who the patients are, what the interventions are, what the comparators are, and what outcomes one is looking at. Having defined the eligibility criteria, one does what one hopes is a comprehensive search.
So, that typically will be looking through a large number of databases, looking through perhaps unpublished information such as abstracts from meetings, contacting experts, perhaps looking through unpublished theses. Bottom line, doing everything one can to get a comprehensive set of potentially eligible articles. One then looks at those potentially eligible articles and applies the eligibility criteria. So, here one asks whether these are the right patients, interventions, comparators, and outcomes.
And often, that process is done at least in duplicate, if not in triplicate, to ensure people can agree on which articles are eligible. Initially, it is typically done on the abstracts: you identify potentially eligible abstracts, pull the articles that are potentially eligible from them, and then make the final decisions. After that, once one has one's eligible articles, one looks at their quality. So, as I mentioned, for randomized trials: was randomization concealed?
Who, if anybody, was blinded? Did they follow patients completely? And then, abstract the data. And those processes of judging quality and abstracting the data are also typically done in duplicate to ensure that they are as error-free as possible. And then, at the end of the process, one hands the output of all this over to the statisticians for the meta-analysis. >> From the standpoint of the reader, what questions should we ask when examining review articles?
>> You should ask questions about whether the review did, in fact, take those steps to reduce bias and to ensure accurate results. So, were there explicit eligibility criteria, and did those eligibility criteria include methodologic criteria such as restriction to randomized trials? When you looked at what search was undertaken, does it convince you that it was a comprehensive search that got at all the relevant articles that might be potentially eligible, including potentially unpublished articles?
Did they then look at the quality of the articles? So, did they simply leave it that they were randomized trials, or did they address whether those were high- or low-quality randomized trials? And, were these processes of judging eligibility and looking at the quality of the trials done in duplicate to ensure that they were replicable? >> Dr. Guyatt, the chapter discusses a system for rating quality of evidence and strength of recommendations, and this is called the Grading of Recommendations Assessment, Development, and Evaluation, or the GRADE approach.
How does the GRADE rating help us determine the quality of the evidence? >> Quality of evidence has traditionally been thought of in terms of issues of risk of bias. So, is a study randomized? And if randomized, is it concealed, blinded, and complete follow-up? The GRADE approach takes a broader definition of quality of evidence and thinks of it as our confidence in the estimate of effect. And risk of bias clearly decreases our confidence in the estimates of effect.
But there are other things that can decrease our confidence. So, for instance, one can have studies with a low risk of bias, but a small sample size and they will yield imprecise results subject to random error, which decreases our confidence. Studies may have a low risk of bias and the sample size may be large, but the studies may have had disparate results. Some studies showing big effects, some no effects, or even on the harm side. That inconsistency lowers our confidence in the estimates of the effect.
Studies may have a low risk of bias, be precise with large sample sizes, and show consistent results, but they may not directly address the question relevant to our patients. So, for instance, a typical issue within internal medicine: if you are a geriatrician and the patient before you is 90 years old, and 90-year-olds have not been included in the trials, which have typically enrolled much younger patients, you may have low-risk-of-bias, precise, consistent results, but do they apply to your 90-year-old patient?
And the final issue is that everything may look fine, but one may be worried about publication bias. So, the bottom line is that one of the things the GRADE process tells us, in terms of the quality of evidence, is our overall confidence, addressing these issues of risk of bias, precision, consistency, directness, and publication bias. >> Dr. Guyatt, what kind of publication biases might you find? >> Well, people have been worried for years about studies that have shown minimal or no effect being less likely to be published than studies of similar interventions showing large effects.
There is a lot of good evidence, sadly enough, of publication bias. What people have done is go back to registries of research protocols that have been addressed and vetted by research ethics boards, and track, looking forward, what happened to the studies that passed that scrutiny. Well, as it turns out, studies that show large effects tend to be more likely to be published than studies that show small or no effect. Where that is particularly worrisome, historically, is when all the studies have been funded by the pharmaceutical industry and there have been a large number of relatively small studies.
That is the situation when the positive studies tend to be published and the negative studies do not. >> Thank you, Dr. Guyatt, for this overview of summarizing the evidence from review articles. For additional information about this topic, JAMAevidence subscribers can consult Chapter 19 in Users' Guides to the Medical Literature. This has been Joan Stephenson of JAMA talking with Dr. Gordon Guyatt for JAMAevidence.
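The publication-bias mechanism Dr. Guyatt describes can be illustrated with a small simulation: many small trials of an intervention with no true effect, of which only the "positive" (statistically significant) trials reach publication. The scenario and all numbers here are invented for illustration:

```python
import random

random.seed(0)
TRUE_EFFECT = 0.0   # the intervention truly does nothing
SE = 0.2            # standard error of each small trial's estimate

all_trials, published = [], []
for _ in range(1000):
    est = random.gauss(TRUE_EFFECT, SE)   # each trial's observed effect
    all_trials.append(est)
    if est / SE > 1.96:                   # only significant positives published
        published.append(est)

mean_all = sum(all_trials) / len(all_trials)
mean_pub = sum(published) / len(published)
# mean_all stays near zero (unbiased), while mean_pub is well above zero:
# a review restricted to published trials would overstate the effect.
```

A reviewer pooling only the published trials would conclude the null intervention works, which is why comprehensive searches for unpublished studies and trial-registry checks matter.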