Name:
Evidence-Based Medicine and the Theory of Knowledge: Interview With Dr Gordon Guyatt
Description:
Evidence-Based Medicine and the Theory of Knowledge: Interview With Dr Gordon Guyatt
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/c1093d18-7048-4132-be79-7123d6891f07/thumbnails/c1093d18-7048-4132-be79-7123d6891f07.jpg?sv=2019-02-02&sr=c&sig=%2BJscjNDvX4cmtg%2FX785eA6fxvV5gY1KcLt%2ByNN30chQ%3D&st=2025-01-15T11%3A30%3A27Z&se=2025-01-15T15%3A35%3A27Z&sp=r
Duration:
T00H18M21S
Embed URL:
https://stream.cadmore.media/player/c1093d18-7048-4132-be79-7123d6891f07
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/c1093d18-7048-4132-be79-7123d6891f07/13416590.mp3?sv=2019-02-02&sr=c&sig=w8HGUN7Uexv9MWc97de4xGyetxfPLu1An4tvyhWZgUI%3D&st=2025-01-15T11%3A30%3A27Z&se=2025-01-15T13%3A35%3A27Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
>> Hello and welcome to JAMAevidence our monthly podcast focused on core issues in evidence-based medicine. I'm Amy Thompson, Associate Editor at JAMA. Today, I'm speaking with Dr. Gordon Guyatt about the three key principles of evidence-based medicine. Dr. Guyatt is an editor of the Users' Guides to the Medical Literature, and co-author of Chapter 3 in that book entitled, "Evidence-Based Medicine and The Theory of Knowledge". He's also a Professor of Medicine and of Clinical Epidemiology and Biostatistics at McMaster University in Hamilton, Ontario, Canada.
Dr. Guyatt, I'd like to begin by asking about your interest in evidence-based medicine. How did you get involved in this field? >> I was fortunate enough to come to do my residency training at McMaster University, and it so happened that it was at the time when Dave Sackett and colleagues were trying to train everybody they could in what was then critical appraisal.
And so, as a resident, I did a course on critical appraisal, which seemed kind of interesting to me. But then I went back to my residency, and near the end of my residency it came time to do what for most people was a research year, and I looked around as to what I could do in that research year. And one of the possibilities was, as opposed to doing what people usually do, to take the master's in what is effectively clinical epidemiology; what they were then calling it was a master's in design, measurement, and evaluation.
And that's what I decided to do for this research year, and I found that this was fascinating stuff. And so, I ended up moving in that direction in my career, as a researcher and an academician. Initially, with critical appraisal, it was taking an article and evaluating the article, very much a classroom exercise. And then they said, we need to bring this to the bedside.
So, in other words, starting with a real-life patient with a problem that we need to address, and trying to address it. And bringing critical appraisal to the bedside gradually evolved into what we decided in 1990 to call evidence-based medicine. >> While reading this chapter, The Theory of Knowledge, I was surprised at part of the definition of evidence, as far as evidence-based medicine is concerned. I had assumed that evidence would be pretty narrowly defined to include only scientific observations made under strictly controlled circumstances, but you write that evidence includes unsystematic observations by clinicians.
Can you tell our listeners about the different types of data that qualify as evidence? >> Sure, we like a broad definition of evidence and clinical observation. One example that one could use is a surgeon, and a surgeon has been taught to do a procedure in a particular way. And then some bright, innovative guy or woman says, I'm going to do this another way.
And they start doing it another way, and their impression is that the outcomes are better, and then they say, I think the outcomes are better when I do it this way. And you say, well, how do you know that? Well, I did it the old way a hundred times and my impression was this happened. And now I've done it the new way for a while, and my impression is that it's working better. Well, maybe they're right. But the problem is that we're relying on their memory, and they may be biased.
It was their new idea, which may have colored their impression. Or it may be that other changes were going on that made the outcomes better, and it had nothing to do with the way they did it. So, in other words, there's all sorts of biases. So, how could it be made more trustworthy? Well, instead of relying on memory, somebody comes and says, well, I documented the outcomes very carefully in the hundred people I did it the old way, and then in the hundred people I did it the new way, and look, here's my documentation, and it looks like it works better now.
Well, you've improved things. Now you're not relying on the surgeon's memory of what happened. You have careful documentation; however, there are still biases in the way the assessment was made. The patients could have been different before and after, and other things could have been going on, other changes in care that are, in fact, responsible. So, now we could do an observational study where, in parallel, some patients get one procedure and some get the other procedure, with different surgeons, more standardized assessments, and documentation of co-interventions and other things that are done, and now it becomes more trustworthy.
And then you say, wait a minute, the patients might have been different in the two groups. Let's now randomize them so that the baseline characteristics are similar. Okay, good. Now, we become more trustworthy. And then we say, well, wait a minute, there still could be bias in the outcome assessment. Let's not only randomize people, let's make the outcome assessment blind. So, the principle here is that research is nothing more than systematic observation to try and reduce bias in getting at the truth.
And what I've tried to illustrate is that it's a continuum from a clinical impression to the most sophisticated randomized trial that introduces many, many strategies to reduce bias. At the point of the randomized trial that has introduced all those strategies, you have minimal risk of bias, and you can be pretty confident about the results. At the other end, with the unsystematic observation, it's much less trustworthy, but if it's all you've got, it's still a form of evidence.
>> So, the word evidence reflects a process of information gathering? >> The information gathering process, that's absolutely right, can be done with minimal safeguards against bias, which makes it less trustworthy, or very rigorous, extreme safeguards against bias, which makes it more trustworthy. >> Can you summarize for our listeners, the three epistemological principles in evidence-based medicine? >> Sure, to have evidence-based medicine work appropriately and optimally, we need to have a systematic summary of the best evidence, ensuring that it's representative.
We need then, once we've got the best evidence together, to decide whether that best evidence is high quality and highly trustworthy, or at the opposite extreme, very low quality and not at all trustworthy. And then you need to bring that evidence to the shared decision making so that the decision is consistent with the individual patient values and preferences. >> I'd like for you to walk us through these three principles using a topic in the current medical literature as an example. Let's start with principle number one, that the pursuit of truth is best accomplished by examining the totality of evidence rather than by selecting a limited sample of evidence, which is at risk of being unrepresentative and will certainly be less precise than the totality.
>> Okay, so let's say anticoagulation in patients with atrial fibrillation. And to make it current, let's say we're interested in the novel anticoagulants versus warfarin. So ultimately, evidence-based medicine believes in shared decision making. So ultimately, we want to sit down with our patient, we want to go over the pros and cons and we want to involve the patient in the decision and get the decision that's the best for the patient.
And clearly, that has to be based on an accurate presentation of the evidence. So, let's start from the premise that you need a representative sample of the best evidence. If you just picked one study about some novel or direct anticoagulant versus warfarin, that study may be unrepresentative for a number of reasons. It could have a higher risk of bias than other studies, or there's simply the play of chance, or the particular population that was enrolled; for many reasons it may be unrepresentative.
So, what you want is a representative sample of the best evidence, and that, we believe, we get by doing systematic reviews. So, what makes a systematic review systematic? You have explicit eligibility criteria. You have a comprehensive search. You assess risk of bias. You have your eligibility assessments and your risk of bias assessments made in duplicate, hopefully achieving a high level of agreement. You have an appropriate statistical strategy for combining the results, and so on.
So, we have a representative sample of the evidence, appropriately processed. >> And the second principle, that not all evidence is equal, and a set of principles can identify more versus less trustworthy evidence. >> We can have a wonderful systematic review, but it can be a garbage-in, garbage-out phenomenon. So, if we have a nice review, but in fact it is summarizing studies with high risk of bias, or not directly applicable to our patient at hand, or with inconsistent results, inconsistencies we can't explain, it becomes less trustworthy.
So, we've got the best evidence, and then we decide this is highly trustworthy evidence, or, unfortunately, this is not so trustworthy evidence, and there's still lots of uncertainty. >> The third principle is that evidence is necessary but not sufficient, and that clinical decision making requires the application of values and preferences. >> That information is then, ideally, and we're still working on this, summarized in ways that are optimal for presenting to the patient and involving them in the shared decision making.
And if we're talking about warfarin versus direct anticoagulants, the direct anticoagulants are much more convenient, and for most patients you're probably not going to have worse outcomes in terms of stroke and bleeding. They seem like a pretty good choice. But if it's an individual who's paying for the treatment themselves, and the direct anticoagulants are much, much more expensive, that individual could say, well, as it turns out, if you use the warfarin optimally, the outcomes are probably the same.
I don't mind the inconvenience. I'm not interested in paying all the extra. And the warfarin may still be the best choice. >> You mentioned shared decision making. After reading this chapter, I think it's really important to stress that evidence-based medicine is not separate from the patients who we're treating. It's not simply the data available to researchers and clinicians on which they can base their opinions and their practice. Shared decision making is a critical part of the process.
>> I could not agree with you more. When I present it to people, I say it's the key principle of evidence-based medicine, but a somewhat ironic one: evidence by itself never tells you what to do. It always has to be evidence in the context of the patient's values and preferences. >> How does theory contribute to EBM? >> So, in scientific inquiry there is a tension between theory and direct empirical evidence.
Historically, within medicine theory has been extremely important and by theory, I mean, physiologic reasoning. So, one may have experiments that go on in animals or in test tubes or physiologic experiments in human beings, and people then make inferences about what is going to happen when I apply the treatment in practice. So, let me give you some examples.
We know that if your bone density is lower, you're at greater risk of fractures, and a treatment came along called fluoride and when you gave the patients fluoride, their bone density went up. In theory, you would then say higher bone density, less fractures, this fluoride is going to work, and it's going to reduce fractures. Unfortunately, when it was tested in randomized trials, the fluoride increased the fractures despite increasing bone density.
The reason being that the bone that was made wasn't good bone; it was not the same kind of bone, with the same resistance to fractures. We only found that out after the randomized trials were done, and then, in a sense, the theory accommodates itself to the result of the trial. Oh, more fractures, we better go back and understand the biology better. Another example that I'll tell you about is two drugs called encainide and flecainide.
People after myocardial infarction who have nasty looking arrhythmias on their cardiograms when we monitor them are at high risk of sudden death. So, in such individuals with these really nasty looking arrhythmias, you want to lower the risk of sudden death. Encainide and flecainide were brilliant at obliterating the asymptomatic nasty looking arrhythmias, as all the biology indicated. And the theory, if you will, following from that indicated that, theoretically, these antiarrhythmic agents should reduce mortality from arrhythmias.
And with this one, on the basis of what I've just told you, the FDA licensed them, but fortunately there were enough skeptics that randomized trials were conducted. And unfortunately, contrary to what theory would predict, deaths were actually increased by the use of encainide and flecainide. So, there's a couple of examples where the theory following from the biology would predict one outcome, but what happens is a very different outcome.
>> Is there anything else that you'd like to share with our listeners about evidence-based medicine? >> There is a perception in some people's minds that EBM is what one might call cookbook medicine. And we believe that is as far from the truth as it is possible to be. So, one of the issues that the clinician has to consider is, is the patient before you sufficiently similar to the individuals who were included in the study that one is ready to apply the results with confidence to the patient?
So, I'll give you an example. I work as a general internist, and nowadays I'm seeing a lot of people over the age of 90. When I come to treat them, I can rely on randomized trials to guide my treatment, sometimes large and well done. But very few people over 90 were included in those trials. And the impact of the treatment, its benefits and adverse effects, may not be the same in my people over 90.
And that's one of the things that I have to consider. There are often uncertainties in evidence. Ideally, it would be nice if we had high-quality evidence, but it may be moderate or low-quality evidence, and then the interpretation of the evidence becomes extremely important. And people may reasonably disagree about how best to interpret the evidence. And even if we have high-quality evidence that applies directly to our patient, and we all agree on the interpretation of the evidence, people may have different attitudes toward the tradeoffs.
For instance, a patient who's had a deep venous thrombosis and six months later is deciding whether to continue their anticoagulant: we now have a pretty good idea of how much continuing the anticoagulant will lower the risk of subsequent thrombosis and how much it will increase the risk of bleeding. But one individual facing that choice may say, I'm very averse to a recurrent thrombosis. I can live with the bleeding risk; I will continue with the anticoagulant and bear the burden of continuing it.
Another person may say, no thanks. The bleeding sounds awful to me. I can live with the risk of thrombosis, and I'm not interested in the burden of the anticoagulant. So, when you think through that process and through the key principles of evidence-based medicine, it is the farthest thing from cookbook medicine that one could imagine. >> Thank you for speaking with me today, Dr. Guyatt. More information about this topic is available in the Users' Guides to the Medical Literature and on our website jamaevidence.com where you can listen to our entire roster of podcasts.
I'm Amy Thompson, and I'll be back with you soon for another edition of JAMAevidence.