Name:
Anna E. McGlothlin, PhD, discusses minimal clinically important difference and defining what really matters to patients.
Description:
Anna E. McGlothlin, PhD, discusses minimal clinically important difference and defining what really matters to patients.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/70902835-0561-40f6-b2d3-605452b24870/thumbnails/70902835-0561-40f6-b2d3-605452b24870.jpg?sv=2019-02-02&sr=c&sig=1Di13sF3OEDh7tLMCIizu5KHIsfw9v5oynlJMJ88T3Q%3D&st=2024-12-30T17%3A42%3A22Z&se=2024-12-30T21%3A47%3A22Z&sp=r
Duration:
T00H11M42S
Embed URL:
https://stream.cadmore.media/player/70902835-0561-40f6-b2d3-605452b24870
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/70902835-0561-40f6-b2d3-605452b24870/18608207.mp3?sv=2019-02-02&sr=c&sig=BcWSlnYO1vJu72ROsD0%2B%2FNzwX37ho8A1RC6ohwmBv40%3D&st=2024-12-30T17%3A42%3A22Z&se=2024-12-30T19%3A47%3A22Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
>> This is Ed Livingston for JAMAevidence, and we're here to talk about an important concept presented in the JAMA Guide to Statistics and Methods: the minimal clinically important difference. We published a chapter on the subject in the JAMA Guide to Statistics and Methods book, and the author of that chapter is here with us today. So, Dr. McGlothlin, could you please introduce yourself? Tell us your name and title. >> Hi. My name is Anna McGlothlin. I'm a Senior Statistical Scientist at Berry Consultants.
>> Could you tell us why the minimal clinically important difference is important? >> Sure. The minimal clinically important difference is a concept for measuring whether a treatment is actually improving an outcome from the patient's perspective. This is especially important for things like patient-reported outcomes, where it may not be intuitive what a change in that outcome from before treatment to after treatment means.
So, the MCID, or minimal clinically important difference, is about trying to map the patient's experience to the value of the improvement that they see on that scale. >> The minimal clinically important difference is distinct from the minimal detectable difference, which is a term used mostly in the statistical literature. Could you tell us the difference between the two? >> When we're designing a statistical study, we typically choose the sample size for that study to be large enough to detect a change, or a difference from a control, that is unlikely to be due just to random chance, and the size of the study we create is very closely related to how likely we are to see a difference.
The smaller a treatment effect is, the larger a sample size we'll need. So, it's possible to create a study with a very large sample size that can detect that there is a difference between the treatment and no treatment, but if that difference is not clinically meaningful, that's what we would refer to as a minimal detectable change: not necessarily a change that has clinical importance to patients.
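As a concrete illustration of that trade-off, here is a minimal sketch using the standard two-sample sample-size formula for a difference in means; the effect sizes, standard deviation, alpha, and power values are assumptions chosen for illustration, not values from the interview.

```python
# Minimal sketch: patients per arm needed to detect a mean difference,
# assuming a two-sample comparison with known common standard deviation,
# two-sided alpha = 0.05, and 80 percent power (all illustrative).
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per arm to detect a mean difference `delta`."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = norm.ppf(power)           # quantile corresponding to power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Halving the difference to detect roughly quadruples the required size:
print(n_per_arm(delta=5.0, sigma=10.0))   # ~62.8 per arm
print(n_per_arm(delta=2.5, sigma=10.0))   # ~251.2 per arm
```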
>> The MCID relates to sample size calculations. Could you explain how it's related? >> What we would typically want to consider when determining a study's sample size is how many patients are going to experience this minimal clinically important difference. We would refer to this as a responder analysis: we want to detect whether more patients on the treatment experience this minimal important change relative to the proportion of patients with no treatment, or on the control treatment, who experience this change.
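A minimal sketch of such a responder analysis follows, assuming an illustrative MCID of 10 points and simulated change scores; none of these values come from the interview.

```python
# Minimal sketch of a responder analysis: each patient is a "responder"
# if their change score meets or exceeds an assumed MCID, and responder
# proportions are compared between arms.
import numpy as np
from scipy.stats import chi2_contingency

MCID = 10  # assumed minimal clinically important change, for illustration

rng = np.random.default_rng(0)
change_treatment = rng.normal(12, 15, size=100)  # simulated change scores
change_control = rng.normal(4, 15, size=100)

responders_treatment = int((change_treatment >= MCID).sum())
responders_control = int((change_control >= MCID).sum())

# 2x2 table of responders vs non-responders by arm
table = [[responders_treatment, 100 - responders_treatment],
         [responders_control, 100 - responders_control]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"treatment {responders_treatment}/100, "
      f"control {responders_control}/100, p = {p_value:.3f}")
```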
>> So anyone who's heard me talk, and I give a lot of talks about being a JAMA editor, has heard me say that one of the most common causes for papers to fail or give us trouble at JAMA is a poorly constructed or poorly described minimal clinically important difference. So, I tell prospective authors and investigators that probably the most important thing they can do when designing a study is to give very careful consideration to what the MCID is, to have a rational process for arriving at that MCID, and to come up with something they can defend.
It's amazing how often that's not done, and we find authors trying to explain away an MCID they came up with after the fact, when things have not gone as well as they expected and the differences they found in their study may be statistically significant but not clinically significant. So, it's a big problem. You describe three processes that investigators can follow to craft a minimal clinically important difference: a consensus process, an anchor-based process, and distribution-based methods.
Could you walk us through those three major approaches to determining an MCID? >> The consensus method for determining an MCID involves gathering a group of experts to individually assess what they consider to be a minimal clinically important difference, and they do that without knowledge of what the others consider to be an important difference. After those scores are revealed, the experts have the opportunity to reassess their scores and then come to a consensus, as a group, about what is considered to be the MCID.
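For illustration only, a minimal sketch of that blinded-then-revealed (Delphi-style) round, with made-up expert estimates; real consensus exercises use structured questionnaires and predefined stopping rules.

```python
# Minimal sketch of a Delphi-style consensus round with invented estimates.
import statistics

# Round 1: each expert submits an MCID estimate without seeing the others'.
round_1 = [8, 12, 10, 15, 9]
print(f"round 1 median: {statistics.median(round_1)}")

# The estimates are revealed and discussed; each expert may then revise,
# and the group converges toward a consensus value.
round_2 = [9, 11, 10, 12, 10]
print(f"consensus MCID: {statistics.median(round_2)}")
```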
In an anchor-based method, the idea is to take a group of subjects whose scores you can assess on the scale for which you're determining the MCID, typically a patient-reported outcome, and then anchor those scores to some other assessment of the patient's improvement. So, you might give the subjects a questionnaire, for example, that asks whether they feel about the same after treatment, a little bit better, or much better. You can then create a mapping between their scores on that qualitative scale and their scores on the numerical scale, which allows the score on the numerical assessment to be anchored to the degree of improvement the subject actually experiences.
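A minimal sketch of one common anchor-based estimate, taking the MCID as the mean change score among patients who rate themselves "a little bit better"; the anchor categories and data are illustrative assumptions.

```python
# Minimal sketch of an anchor-based MCID estimate using a global rating
# of change as the anchor. Data are invented for illustration.
import numpy as np

change_score = np.array([1, 3, 2, 9, 11, 8, 10, 22, 25, 24])
anchor = np.array(["same", "same", "same",
                   "little better", "little better",
                   "little better", "little better",
                   "much better", "much better", "much better"])

# MCID: mean change among the minimally improved ("little better") group
mcid = change_score[anchor == "little better"].mean()
print(f"anchor-based MCID estimate: {mcid:.1f}")  # 9.5 here
```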
Distribution-based methods, finally, are a more purely statistical concept: we look at the distribution of scores and the statistical properties of that distribution, such as whether the scores tend to vary widely across patients or are more consistent across patients. And with distribution-based methods, the term MCID is often replaced with minimal detectable change, as we discussed earlier, because the result is not necessarily tied to what the patient experiences as benefit, but really just to whether the change is big enough to be unlikely due to chance.
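A minimal sketch of common distribution-based quantities (the half-standard-deviation rule, the standard error of measurement, and the 95 percent minimal detectable change); the simulated scores and reliability value are assumptions, not from the interview.

```python
# Minimal sketch of distribution-based thresholds. These summarize the
# score distribution rather than patient-perceived benefit, which is why
# they behave more like a minimal detectable change than a true MCID.
import numpy as np

scores = np.random.default_rng(1).normal(50, 12, size=200)  # simulated scores
sd = scores.std(ddof=1)
reliability = 0.85  # assumed test-retest reliability of the instrument

half_sd = 0.5 * sd                     # half-standard-deviation rule
sem = sd * np.sqrt(1 - reliability)    # standard error of measurement
mdc_95 = 1.96 * np.sqrt(2) * sem       # 95% minimal detectable change
print(f"0.5*SD = {half_sd:.1f}, SEM = {sem:.1f}, MDC95 = {mdc_95:.1f}")
```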
>> Is any one of those three approaches better than the others, or do you choose among them based on the type of study you're doing, the kind of data you have, and what data have been published before that you can rely on? >> All three of the methods have their place, and which method you use certainly depends a little bit on what you're studying. Because distribution-based methods don't explicitly rely on the patient's experience, they usually would not be recommended as the only way of determining a minimal clinically important difference. So, usually the anchor-based or consensus-based methods would be preferable.
Those do certainly have drawbacks, though. The anchor-based method, for example, depends on the choice of the anchor, and it can also be subject to recall bias. When you ask a patient to quantify whether they felt a little bit better or much better, the answer certainly depends on when you ask that question, and if you're asking about a time significantly in the past, that could create bias when you're assessing your MCID.
>> One thing you stated in your article was that, to quote you, "Ideally, determination of the MCID should consider different thresholds in different subsets of the population." That really resonates with me, because I just finished handling a paper that will be published in JAMA on antibiotic treatment for pediatric appendicitis, and it has a beautiful description of how the authors determined their MCID. When they surveyed the surgeons who were making the decision whether to do an appendectomy or let a patient go on to antibiotic treatment, the surgeons had a very high threshold for patient safety and were only willing to accept a relatively small number of patients failing antibiotic therapy before they would say that the treatment is a failure, and that established their MCID.
But when they talked to patients or to non-surgeons, they got a very different value. The actual values were that surgeons felt any antibiotic treatment of appendicitis that failed more than 30 percent of the time was a failure, and that was their MCID. When you talked to patients and non-surgeons, they thought it was 50/50: if 50 percent of patients failed antibiotics, that was okay. The study came out with something like, I think, 75 percent of patients failing within a year, and because they powered the study on the surgeons' opinion, the most conservative one, it's essentially classified as a negative study. But if you were to look at it from the perspective of patients or non-surgeons, it's wildly positive, a wildly successful study. So, when you have very widely different perceptions of the MCID among different stakeholders, how do you design a study, and how do you wind up interpreting it? >> That's a great question, and I think what we do see is that people have very different experiences, and the degree of improvement that a patient feels may differ depending on where they started.
So, these are things that I think should be taken into consideration when we're determining what the minimal clinically important difference is. More severe patients may require a greater improvement to actually benefit than less severe patients. And when we're designing a study, it's really important to consider what types of patients are enrolling, and whether the treatment is intended to benefit patients across that scale of severity or should be focused more narrowly on a smaller subset of the population. That can determine how you design your study and whether you need to consider those groups separately or go after a broader indication.
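A minimal sketch of what severity-specific thresholds might look like in practice, estimating a separate anchor-based MCID within each baseline-severity stratum; the cut point and data are illustrative assumptions.

```python
# Minimal sketch: separate anchor-based MCIDs by baseline severity, since
# more severe patients may need a larger change to feel meaningfully better.
# Data and the severity cut point are invented for illustration.
import numpy as np

baseline = np.array([20, 25, 22, 30, 60, 70, 65, 75])  # baseline severity
change = np.array([6, 8, 7, 9, 15, 18, 16, 20])        # change scores among
                                                        # "little better" raters
mild = baseline < 50  # assumed cut point between mild and severe

print(f"MCID estimate, milder patients: {change[mild].mean():.1f}")        # 7.5
print(f"MCID estimate, more severe patients: {change[~mild].mean():.1f}")  # 17.2
```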
>> This is Ed Livingston. I've been talking with Dr. Anna McGlothlin from Berry Consultants, who wrote the chapter in the JAMA Guide to Statistics and Methods, found on JAMAevidence, on the Minimal Clinically Important Difference: Defining What Really Matters to Patients. You can find this and a great deal of content, such as the Users' Guides to the Medical Literature and the Rational Clinical Examination, at JAMAevidence.com.