Name:
The Process of a Systematic Review and Meta-analysis: M. Hassan Murad, MD, MPH, discusses the process of a systematic review and meta-analysis.
Description:
The Process of a Systematic Review and Meta-analysis: M. Hassan Murad, MD, MPH, discusses the process of a systematic review and meta-analysis.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/e829edad-388d-4060-ae34-6ee4b5ea3860/thumbnails/e829edad-388d-4060-ae34-6ee4b5ea3860.jpg?sv=2019-02-02&sr=c&sig=LjV%2FbmrECxubxtSyz6%2BJ9qGj9VN0amKgfjwZn5dSHbI%3D&st=2025-01-15T11%3A44%3A31Z&se=2025-01-15T15%3A49%3A31Z&sp=r
Duration:
T00H14M17S
Embed URL:
https://stream.cadmore.media/player/e829edad-388d-4060-ae34-6ee4b5ea3860
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/e829edad-388d-4060-ae34-6ee4b5ea3860/10625622.mp3?sv=2019-02-02&sr=c&sig=%2Bm9h4MuJpbE54x%2FGoEx7KnD4xfrO9FTyqWwq291qzio%3D&st=2025-01-15T11%3A44%3A31Z&se=2025-01-15T13%3A49%3A31Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
>> This is Ed Livingston with JAMAevidence interviewing Dr. Hassan Murad from the Mayo Clinic about his chapter, "The Process of a Systematic Review and Meta-Analysis," in the most recent edition of the Users' Guides to the Medical Literature. Could you tell us about yourself? >> I'm a clinical epidemiologist and internist. I work at the Mayo Clinic, and my main area of research has been focused on evidence synthesis, so conducting systematic reviews and meta-analyses and developing clinical practice guidelines. >> So we all learned in medical school that the highest level of evidence is the randomized controlled trial, and we were all taught about the limitations of observational data, and how we should be looking for randomized controlled trials because they are performed to minimize the bias of patient selection and whatnot.
But then those trials are somewhat limited because they have very select populations that they look at, very select outcomes, and if you look at the enrollment information for a trial, you may have 1,000 patients with hypertension and 100 of them meet the study's criteria, and you have to treat all 1,000 in your clinical practice, so you may not have a lot of guidance from one particular trial. So the next level of evidence is a systematic review, or meta-analysis, and that's the aggregation of these various trials.
So could you tell us what the criteria are for a systematic review and a meta-analysis? How do you set them up? How do you decide that a systematic review is not a narrative review? >> So the definition of a systematic review is the process of collecting all empirical evidence that fits prespecified eligibility criteria to answer a specific question. So when we say prespecified eligibility criteria, this means that there's a protocol that preceded the systematic review.
And when we say to answer a specific question, we differentiate the systematic review from a narrative review in another way, which is that we're trying to look at a particular comparison, A versus B, or a certain diagnostic test versus another. So it is not a wide overview, like your typical review article that can have sections on epidemiology, prognosis, diagnosis, and treatment. So it's, again, a specific question that follows a protocol or prespecified criteria.
>> So I often view a systematic review almost like a study in itself; it has a hypothesis, you ask a very specific question, and you have a very specific procedure for how you go about looking at the literature, deciding what to include, what not to include, deciding who looks at the papers. Is that a correct way to look at a systematic review? >> Correct. So the systematic review follows a protocol and tries to test a hypothesis. So we go through multiple steps that are established a priori, or before we conduct the review.
So we formulate the question. We define the eligibility criteria for studies to be included. We develop a priori hypotheses to explain heterogeneity and also for how we will analyze the data. We conduct a search, screen titles and abstracts, and subsequently look at the full text of articles, assess the risk of bias and extract data, and then perform the analysis. So the statement that a systematic review is a study on its own is correct, because it often generates new evidence by demonstrating or establishing associations that were not demonstrated in the original studies.
>> In doing a systematic review, are there criteria for what is a systematic review? Do you have to have a certain number of people review the papers, a certain amount of independence? What are the minimal standards for calling something a systematic review? >> In the Users' Guides, we propose several criteria that we think will differentiate a credible systematic review from one that is not, or potentially from a narrative review.
These criteria start with whether the question was a sensible one, which means, does it make sense to pool these studies together? The second criterion looks at the search. Was the search exhaustive, looking at multiple databases and using multiple synonyms? Engaging medical librarians is also a very good thing to do for the search of a systematic review. And then we look at the selection and the assessment of the studies. Was this reproducible? And reproducible often means that it needs to be done in duplicate, by two independent reviewers.
So our teams are always composed of pairs of people who independently select the studies, appraise their risk of bias, and extract data from them. After that there are some other criteria, looking at the results of the review, whether they are ready for clinical application and whether the review addresses the confidence in the estimates of effect. So the Users' Guides criteria help differentiate a credible systematic review from one that, perhaps, is not as systematic.
>> There are all kinds of subtleties in understanding studies, which is what the Users' Guides are all about explaining, and there are issues of adequate randomization, completeness of follow-up, and how an investigator addressed missing data. What recommendations do you have for our readers looking within a research paper or a systematic review for those kinds of criteria? Is there any particular checklist, or what resource would you recommend for readers to have a listing of all these things they should be looking for?
>> It depends on the type of studies included in the review. So for example, if they were randomized trials, there are certain instruments that help assess the risk of bias in randomized trials. If the studies in the review were observational studies, there are different types of instruments and tools for that. However, when a reader is going through a systematic review, it should have a section that talks about the risk of bias. And if it doesn't, that means the systematic review is really not very credible.
And in this section, they should look for an evaluation of the risk of bias of the individual studies, which is typically done in a table or sometimes in the text, looking at every individual study, and then there should be an overall judgment about the risk of bias across all the studies. So for the whole body of evidence, how much risk of bias do we think there is? And of course, this is subjective -- to some extent it's a subjective judgment. And therefore, we do it in duplicate as well. And there's no numeric scale, so numbers here, on the different scales, don't really mean much.
It's more of an overall judgment that, perhaps, could be judged as moderate risk of bias, or high risk of bias, or low risk of bias. So it's a judgment, and we cannot deny that. Making it in duplicate makes it a little more acceptable. >> Can you give us some examples of what would result in a high risk of bias? >> So for example, if the study is looking at an outcome that is a patient-reported outcome, such as pain or quality of life, and the study was not blinded, that's a big problem.
Now it may not be the same problem if the outcome was mortality, for example. It'll be less of a problem if there was no blinding. So blinding will be important in this context. Another important marker of the risk of bias is loss to follow-up. So if you lost 10% or 20% of the patients, and the rate of the outcome in your study is 1%, then that also represents a major problem. The other things that affect the risk of bias are allocation concealment, whether a study was stopped early for benefit, which sometimes can lead to exaggeration of the treatment effect, and there are some other factors.
>> When reading a systematic review or meta-analysis, there's always a discussion about the heterogeneity of the studies that are summarized within that analysis. Could you explain to me what heterogeneity is and why it's important? >> It's a very important issue in systematic reviews, and particularly meta-analysis. So meta-analysis is when we do the actual statistical pooling of the studies to generate one effect size. It's very important there that the studies are homogeneous, to some extent.
Now this homogeneity is determined by answering this question: do we expect that across a range of patients, which means across the range of the inclusion criteria of these studies, we will see the same effect, or a similar effect, of the intervention? It won't be an identical effect across the studies, of course, because there's random error and sampling error, and the studies often have somewhat different results because they are done in different settings. But if we expect a similar effect across a similar range of patients and interventions, then it makes sense to pool these studies together in a meta-analysis.
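[Editor's note: the statistical pooling described here can be sketched in code. The snippet below is a minimal illustration of one common approach, fixed-effect inverse-variance weighting; the interview does not specify a pooling model, and the study numbers are hypothetical.]

```python
# A minimal sketch of fixed-effect inverse-variance pooling.
# The per-study numbers are hypothetical, for illustration only.
import math

# Hypothetical per-study effect estimates (log odds ratios) and
# their standard errors.
effects = [-0.35, -0.20, -0.45, -0.10]
ses = [0.15, 0.20, 0.25, 0.18]

# Each study is weighted by the inverse of its variance, so more
# precise studies contribute more to the pooled effect size.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled estimate.
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled log OR = {pooled:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

Note that the pooled standard error is smaller than any single study's, which is the precision gain from aggregating trials that the speakers describe.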
And in the Users' Guides, we provide several methods for evaluating the heterogeneity in a particular meta-analysis. Some of them are visual, by looking at the forest plot and observing whether the point estimates from the different studies are close to each other, or whether the confidence intervals are overlapping, which means that the differences can be attributed to chance. And there are some statistical methods that we also discuss, such as the I-squared statistic and the Q statistic.
>> Could you explain those? >> The I-squared statistic is the proportion of variability that is not due to chance or random error. So it's a percentage; if the I-squared is 30%, that means 30% of the variability between the studies is not due to chance or random error and is due to actual real differences between the studies. So the higher the I-squared, the less comfortable we are in accepting the pooled estimate.
So the I-squared goes from 0 to 100%. If it's about 50%, 60%, 70%, you know then that these studies are actually different. They are measuring different things, and you develop some discomfort with accepting a pooled estimate as the best estimate. The Q statistic is essentially a chi-squared test that gives you a p-value -- a yes or no answer as to whether this heterogeneity is beyond chance or not. So you want a high p-value.
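[Editor's note: the Q and I-squared calculations described here can be sketched as follows. The study estimates are hypothetical, chosen to show visible heterogeneity; the interview gives no actual data.]

```python
# Hedged sketch of Cochran's Q and the I-squared statistic, using
# hypothetical study estimates chosen to show visible heterogeneity.
effects = [-0.60, -0.05, -0.50, 0.10]  # log odds ratios (illustrative)
ses = [0.15, 0.20, 0.25, 0.18]         # standard errors (illustrative)

# Fixed-effect pooled estimate via inverse-variance weights.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q: weighted squared deviations of each study from the
# pooled estimate. Under homogeneity, Q follows a chi-squared
# distribution with k - 1 degrees of freedom, which is where the
# p-value the speaker mentions comes from.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1

# I-squared: proportion of variability beyond chance, truncated at 0
# when Q is less than its degrees of freedom.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

With these illustrative numbers, I-squared comes out above 70%, the range the speaker flags as uncomfortable for accepting a pooled estimate.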
So a high p-value and a low I-squared indicate homogeneity and will make you more comfortable accepting a pooled estimate. >> So in the greater scheme of evidence, it seems like meta-analysis is at the top of the heap, followed by a systematic review, and then after that, randomized trials. But all of them answer relatively narrow questions, and physicians need to understand what those questions were when they're applying that synthesis of evidence to individual patients.
Is that a correct assessment? >> Correct, although with the issue of the pyramid, I think the message in the Users' Guides is to try to think of a systematic review in two steps, where you make two judgments. The first one is to evaluate the credibility of the methods of the review and say whether this is a reasonable review or not. The second one is to rate your confidence in the estimates of effect. This is why we try not to place the systematic review or meta-analysis in the pyramid itself, because sometimes you can have a systematic review of case series, and that shouldn't be at the top of the pyramid.
Conversely, you can have a very poorly done systematic review of randomized trials, and that should not be at the top either. So I think, from a pyramid perspective, thinking of a systematic review in these two steps is the message that I would like to leave our readers with. And then, of course, as you mentioned, when applying this to patients, you need to look at the criteria of the systematic review and ask, does this match the characteristics of my patient? >> How do you use a systematic review to answer questions about clinical care for patients who have characteristics that led them to be excluded from the clinical trials that are summarized in the systematic review?
>> A classic example we give is a patient who is elderly and has renal insufficiency and diabetes and presents to your office, and you want to give this patient a prescription for, say, a statin. You try to find a trial that included patients with these three characteristics: elderly, renal insufficiency, and diabetes. You won't find one, because the trials would either exclude the elderly, or exclude patients with diabetes, or exclude patients with renal dysfunction. So it's very hard to fit that patient into a trial.
However, if you have a systematic review with multiple trials that have slightly different inclusion criteria, you're more likely to find your patient among these patients and be more comfortable applying that evidence to the patient. >> I've been speaking with Dr. Hassan Murad from the Mayo Clinic about his chapter in the Users' Guides to the Medical Literature entitled "The Process of a Systematic Review and Meta-Analysis." Thank you for listening to JAMAevidence. This is Ed Livingston from JAMA.