Name:
W. Scott Richardson, MD, discusses differential diagnosis.
Description:
W. Scott Richardson, MD, discusses differential diagnosis.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/5248cae0-57ab-4ec4-95a6-851ff421bce4/thumbnails/5248cae0-57ab-4ec4-95a6-851ff421bce4.jpg?sv=2019-02-02&sr=c&sig=UXSmA6uD%2BzcS4cKQUuHw%2FTkUjqXOAVC1xrvaN7tAm4k%3D&st=2024-12-21T15%3A45%3A01Z&se=2024-12-21T19%3A50%3A01Z&sp=r
Duration:
T00H24M48S
Embed URL:
https://stream.cadmore.media/player/5248cae0-57ab-4ec4-95a6-851ff421bce4
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/5248cae0-57ab-4ec4-95a6-851ff421bce4/17071252.mp3?sv=2019-02-02&sr=c&sig=%2F3HwTvRxFCKwbu9Ht%2BxpAJyjpctgn%2FffEBd9aYk4JmI%3D&st=2024-12-21T15%3A45%3A01Z&se=2024-12-21T17%3A50%3A01Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
>> This is Gordon Guyatt from McMaster University. I'm going to be talking to Scott Richardson. Scott, do you want to introduce yourself to our audience? >> Sure. My name is Scott Richardson. I am a general internist. I serve as the associate dean for medical education at a medical school in Athens, Georgia. >> Okay, thanks. So what we're going to talk to you today about is one of our Users' Guides, particularly about disease probability for differential diagnosis.
Can you tell us, Scott, about this whole issue and about the literature that relates to disease probability for differential diagnosis? >> Sure. I got interested in this when, both in my clinical practice and my teaching, I would try to apply Bayes' theorem and test results to clinical scenarios, and I would ask myself, and my learners would ask me, where did you get the pretest probability from. And most of the answers I was given when I was first starting out didn't strike me as very evidence-based.
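The Bayes'-theorem application Scott refers to can be sketched numerically: a pretest probability is converted to odds, multiplied by a test's likelihood ratio, and converted back to a post-test probability. This is a minimal illustration; the numbers are assumed for the example and do not come from the interview.

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Update a pretest probability with a test's likelihood ratio.

    probability -> odds, multiply by LR, odds -> probability.
    """
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Illustrative only: a 30% pretest probability and a positive test with
# LR+ = 6 (both assumed numbers) yield a post-test probability of 0.72.
print(round(posttest_probability(0.30, 6.0), 2))  # 0.72
```

The sticking point Scott describes is the first argument: the likelihood ratio comes from diagnostic accuracy studies, but the pretest probability has to come from somewhere, which is where disease probability studies enter.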
So I started looking around for types of evidence that can inform my estimates of disease probability for differential. And I found two. One is the subject of a different chapter, clinical prediction rules, so we won't talk about that one more. But the other is the type of studies that are addressed in this Chapter 17 about disease probability for differential diagnosis. These are usually prospective cohort studies where all enrolled patients have the same defined clinical problem, such as chest pain or back pain, and they undergo a thorough, consistent diagnostic evaluation to figure out what sorts of underlying etiologies have caused that presenting clinical problem.
And then those etiologies are reported in number and percentage. And those frequencies of underlying disease then can be used to estimate pretest probability for patients with that same clinical problem. >> Does an example occur to you, offhand, of a problem that you confront and what you found from the literature about approximate diagnostic probabilities? >> Sure. I see a number of patients with chronic cough.
And as an example, there is a whole series of papers of this ilk that are about the careful evaluation of people with chronic cough. And in a stepwise fashion, people have found that in most series, patients with chronic cough are usually explained by just four conditions, five if you count, among those who smoke, smoker's cough itself. But in non-smokers, it's usually just four conditions: upper airway cough syndrome, which used to be called postnasal drip; asthma; medication effect; and usually those are the four, with smoking thrown in there if it needs to be.
Oh, and reflux as well. So that allows you to start with those four and work up those four, even do treatment trials of those four, before looking at the 50 or more other causes that are much less common. >> So is it important to know which of those four is the most common and which is the least, or is it just that they're common enough that you should think about them? >> Two ways to think about that. One way, when you're picking your leading hypothesis for your differential diagnosis, you might go with the most common.
But the other way is, just as you suggest, for the other things that you want to rule out, if it's common enough, you're probably going to want to rule it out as an active alternative, even if it's not your leading hypothesis. So you can use it both ways. The most common for the thing you start with, but then common enough for the things you want to rule out. >> So can you tell us what sort of studies you would be looking for to inform our decisions about disease probability for differential diagnosis and how such studies are typically conducted?
>> They are almost always prospective studies, occasionally retrospective, but almost always prospective. They would be cohort studies. They would be not necessarily named disease probability or differential diagnosis, but they might say in there that they aimed to describe the underlying disease frequencies of patients with a defined clinical problem. There should be somewhere a definition with criteria for the clinical problem.
They usually work prospectively through a standardized diagnostic evaluation, and they would report the frequencies of these. And it turns out there are lots of them available once you start looking. >> So you mentioned that the titles may not make it obvious that this is exactly what it's about. Can you give us any clues? If we're seeing a particular patient with a particular problem, be it the cough you mentioned, be it the back pain, be it whatever particular problem, how do we find these studies?
>> Though these can be challenging to find, to start with, the title may say something like the clinical spectrum of disease in patients with a chronic cough. Or it may say the etiology of back pain or the epidemiology of causes of fever of unknown origin, words like that. As always, it helps to have a librarian to assist you in finding this evidence.
And they know tricks like, for instance, using publication type, looking for publications that are cohort studies, sometimes using the MeSH term for longitudinal studies. And another aspect is that once you find one, you can often find newer ones using the related articles device in PubMed. >> So I presume we're not going to find such studies looking in UpToDate. >> Well, occasionally they may be cited.
Classic ones tend to be. For instance, if you were to look into the clinical problem of fever of unknown origin, they might refer to the one published in 1960 or '61 by Petersdorf et al. But there have been about 200 published since, and not all of those would be cited in a standard textbook. And that raises the question, how would you find them. And to me, the answer boils down to, eventually we're going to need to do systematic reviews of this. But right now, we're finding them individually.
>> And so you said go to a librarian. Are we stuck with a librarian, or can we go to PubMed or elsewhere ourselves to see if we can dig them out? >> I like trying it first myself. And what I usually do is use the MeSH heading and the key words for the presenting problem. So I would look up both MeSH and key words for, say, chest pain or back pain or cough. Using those, then I would search for the longitudinal studies or cohort studies, both as a MeSH term and as a publication type.
And somewhere in there I usually find at least one that looks like it reports it in the abstract. And once I've found at least one, I ask PubMed to show me all the MeSH terms it's coded with, and then I add those to my search. And once I have that, usually I find several. And with related articles, I can keep finding more. >> Okay. Well, those are great suggestions for how we might go about it if we're going to do it ourselves. So now you've found one of these articles and you want to say, is this any good or not.
Is this one I can trust or one I can't trust? So how would we go about making that decision about trustworthiness or risk of bias? >> The Users' Guide chapter poses two basic questions about trustworthiness, judging the risk of bias. The first one is, did the study patients represent the full spectrum of those with the clinical problem? And this is really all about getting the denominator right, making sure all the people in the study are people with the right clinical problem and they represent the universe of people with this clinical problem.
And the guide chapter explains how you would make that judgment. After you're sure that the denominator is right, the next step is judging whether the numerators are right. And that's done with a question: was the diagnostic evaluation definitive? You'd want to see that they'd done a careful evaluation, had sensible criteria for the disorders they diagnosed, and so forth, as outlined in the chapter, so that the numerators you get are credible.
So having considered both the denominators and the numerators, that gives you an approach to judging the risk of bias. >> So you've looked at an awful lot of these studies. What can we expect? Can we expect to be disappointed by failure to meet these criteria? Or can we expect that most of what we find has done a good job and satisfied the criteria? >> I would say it's probably a middle state. Many do a good job.
Some, though, and plenty enough, unfortunately, are disappointing. So in contrast to some bodies of studies where they're all terrible or others where they're all wonderful, this is one where it really matters to apply these Users' Guides in judging the risk of bias, because there are gems in there that can be used if you appraise them and find them. >> If they fail the criteria, do we give up and throw it out, or do we say we'll use it anyway but take it with a grain of salt?
>> So I find myself answering that question differently if I'm doing patient care or if I'm teaching. So for patient care, I would say sometimes we have to use it with a grain of salt if that's all that we can find. Because a somewhat informed estimate is sometimes better than no estimate. Having said that, for teaching examples, I usually like to find ones that do a pretty good job in terms of risk of bias so that we are more confident in the result.
>> So we're not going to throw them out, but maybe if we have two we might select between them on the basis of the one that's stronger. >> Yes, no question. >> Okay. Good. So now we've got one that we're going to use, either because it's not so good and it's all we could find or hopefully because it meets those risk of bias criteria. How can we use the results of these studies for clinical decision making?
>> The first way is the way we talked about, which is to say, use the frequency of the diagnosis, the numerator over the denominator. And that becomes a starting estimate of our pretest probability. And some people like to use that exact number, like 57 percent. I actually like the way you were suggesting earlier, and that is as probability ranges. Wow, these three conditions are high probability in most of these studies, you might say. And so we'll often look for all three conditions, whereas perhaps there's a list of many other conditions that are much lower probability and you would only look for them in selected patients.
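The numerator-over-denominator estimate described here, reported as a range rather than a single exact number, could be sketched as follows. The counts are invented for illustration, and the choice of a Wilson score interval to express the range is the author's assumption, not a method named in the interview.

```python
from math import sqrt

def wilson_interval(k: int, n: int, z: float = 1.96) -> tuple:
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical cohort of 200 non-smokers with chronic cough (invented counts,
# conditions taken from the four named in the interview):
causes = {"upper airway cough syndrome": 68, "asthma": 50,
          "esophageal reflux": 42, "medication effect": 16}
n = 200
for cause, k in sorted(causes.items(), key=lambda kv: -kv[1]):
    lo, hi = wilson_interval(k, n)
    print(f"{cause}: {k / n:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

Reporting the interval rather than the point estimate supports the "probability ranges" usage Scott prefers: conditions whose whole range sits high get worked up in everyone, while conditions whose range sits low are pursued only in selected patients.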
So I use it not just to estimate the probability for an individual diagnosis, but to help frame the whole differential diagnosis. And to return to the cough example, if you know that 90 percent or more of patients with chronic cough who are non-smokers have one of four conditions, upper airway cough syndrome, asthma, medication effect, or esophageal reflux, you would start with that as the shortlist differential before turning to the much less common things.
>> And I guess that might affect, first of all, your functional inquiry and the questions you ask patients. >> Oh, no question. So the review of systems becomes much more focused. Patients often have a qualitative sense of this, of looking for the common and serious before looking for the rare. Many people do appreciate that. In addition, you can offer therapeutic trials for those common and serious conditions before looking for the rare.
>> And then I guess it might also inform your initial diagnostic test selection. >> Yes. So very much so. We tend to think that we should select tests that confirm, or have high likelihood ratios when positive, for our leading hypothesis, the one that has the highest probability. And we want to choose tests that will help us exclude the other conditions that have lower probabilities but are still high enough to test for. So in a given patient with chronic cough, for instance, if we thought it was medication effect, we'd do the test, which is to say remove the medication.
>> And for that one, that serves to confirm it. >> So the clinician has a situation where they'd be interested in looking for a disease probability study. How often are they likely to come away frustrated, not being able to find anything, or how often are they likely to come away with something that they will be able to use? >> Well, we wondered that very same thing.
We kept finding people who could tell us that they thought they'd never seen this kind of information. So some years ago some colleagues and I did a study where we did a prospective consecutive series, with literature surveys and a librarian helping us search. In three months' time on an inpatient acute medicine service, we had 122 patients admitted for a diagnostic evaluation. And 111 of those patients had clinical problems for which there was disease probability evidence of good or better quality available in the literature.
So although you might think, ah, there's not that much out there, it turns out when we looked at our sample, we found quite a number of our patients had this type of literature available. Now, two things. One, it was an inpatient medicine service, so I don't know if anyone would find the same in other contexts. And also, I haven't seen this study repeated by anyone, so I don't know if that was just an unusual quirk of our service or if it would be repeated if looked at elsewhere.
>> So what you've just told me of the results makes me wonder about whether I fully understood one of your answers to a prior question. You said not only did you find 111 out of 122 for whom there was information, but you said that most of the time, if I've understood you, it actually satisfied the risk of bias criteria pretty well. >> Right. So we actually rated the quality of the evidence found in these studies.
So for all our patients we counted up their problems; the 122 patients had 45 problems. For 35 of those problems, we found this evidence, which linked to 111 patients. And for each of those 35, we found two or more studies, and we ranked the quality of the evidence that we found. And using an older scheme, not risk of bias but the previous scheme from the Oxford Centre for Evidence-Based Medicine, we found level one, two, or three evidence in 66 out of the 69 that we saw, or roughly 95 percent.
So it was credible within the confines of this kind of evidence. Again, you know, that means there's not only evidence out there but good quality evidence out there often, at least in the hospital medicine type of problems. Examples of those problems are, like, abdominal pain, chest pain, delirium, fever of unknown origin, and so forth. >> Okay. I'm still trying to put it together in terms of the distribution of the quality.
So the earlier answer made me think, oh, 50/50 it's going to meet criteria. The answer you just gave me, 95 percent is going to meet criteria. Can you help me understand that? >> Yes. So in this one sample, for these 35 problems, it was 95 percent. But I can't really claim that it's true for all settings or for all clinical problems. So I would guess the proportion might be lower in those others.
>> Yup. That makes sense. So tell me, many clinicians might feel that, on the basis of their training and experience, they have a pretty good idea of what the differential diagnosis is going to be, of what those three or four or five top differential diagnosis items are that are going to end up being the cause in most cases. Are they right, in which case maybe this is not the greatest use of time, or how often are they going to be surprised or find something that they weren't expecting?
>> So that's one of the things that got me started in this in the first place. There's been a series of work since the late '60s, early '70s and carrying forward showing that clinicians have difficulty estimating disease probability accurately. What we're good at, or relatively good at, is saying this condition seems more likely than that one. So in relative terms. But we have difficulty with absolute terms. And the reason this matters is because we may be better at thinking this is our most likely condition.
So good at estimating the leading hypothesis. What we have much more difficulty with is the lower probability ones. And the reason this matters is we're often ruling out serious disease with testing. Somebody comes in with chest pain, and we want to rule out tension pneumothorax, aortic dissection, acute coronary syndrome, even if those are lower probability, because they're so serious. Those conditions are serious and treatable. But that's not our most likely situation.
So we need help as clinicians on the lower probability conditions, some of which, though, are not sufficiently low that we should do no test. It's just that we need to do a test that can rule them out rather than rule them in. And this is not always what the diagnostic practice guidelines focus on. Many of them focus on how to confirm the leading hypothesis, in which case you could be right that clinicians could say, I already suspect this.
I need a test to help prove that. But for the times when we want to rule out a lower probability condition, knowing the disease probability and knowing how to select tests that are good at excluding those conditions, that's a different diagnostic strategy, and that's where we need the help a bit more. >> Can you remember -- you may not, but it might be interesting -- can you remember any time that you got a surprise from what your previous intuition was, something that changed the way you look at it when you actually looked at one of these studies?
>> Yes. One that pops to mind, among causes of hemoptysis, I didn't realize that bronchiectasis was still common in the world. Bronchiectasis, we're taught, as if now that we have antibiotics we don't see it all that frequently, but in most of the series that have been done, it's the fifth or sixth most common cause of hemoptysis. Everyone gets tumor and TB and maybe acute bronchitis, but it's those next few that are common enough that we should probably test for them rather than the rare thing.
And to me that's an example of something that, wow, now that I know it I'll be a bit more careful on that part of the differential, on the active alternatives. >> Okay. Thanks. Great example. In clinical care, how well is research evidence about disease probability used in clinical practice guidelines about diagnostic strategies? >> Well, I think it has the potential to be used in every one of them, but right now it's not yet so.
I mentioned earlier that many of them are focused on confirming your leading hypothesis. And that's where it is sometimes cited. So in recommendations about imaging for back pain, they usually say the most common is non-specific low back pain, and therefore, this is our recommendation for those patients. What isn't usually included is guidance about the lower probability but high seriousness conditions, whether and when to rule those conditions out in patients.
And that's where I think that adding disease probability evidence for differential diagnosis might be the most helpful, if practice guidelines were to more consistently incorporate the evidence for that side. Because differential diagnosis, as you know, is not only what we think the patient has but also what we want to make sure it isn't. And it's in that second part that we often need more help. >> Okay. Scott, you have given a great overview of this area, which, as I think you said earlier, many clinicians might not realize exists; sometimes people think that literature isn't there.
But you've told us very compellingly that it is there and can be of great use to us. Anything else that we haven't mentioned in the conversation that you would like to state to finish off? >> One thing is that I look forward to the day when electronic health records actually add another source, which is to say, well-maintained and curated practice databases. Because the literature has what might be rigorously defined probabilities, but they're often from other places.
Whereas if we had them in the EHR, the same prospective counting of all the things we find, those data might be more directly mappable to our own local experience. So I look forward to the day when we have both practice databases and literature resources available. >> Okay. That's great. And I think many physicians might welcome the idea that the electronic health record does something useful rather than frustrate us.
So thanks very much, Scott. That was super. I really appreciate you joining us. >> Okay. Thank you very much for having me. >> This has been Gordon Guyatt, the editor of the Users' Guides to the Medical Literature, talking to Scott Richardson about our Users' Guide on disease probability for differential diagnosis. If you want to learn more about this or other Users' Guides, you can go to jamaevidence.com where you will find the full collection of Users' Guides to the Medical Literature.