Name:
Amy H. Kaji, MD, PhD, discusses noninferiority trials.
Description:
Amy H. Kaji, MD, PhD, discusses noninferiority trials.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/d48b8fde-e694-45f2-b6b2-91d81aae44c3/thumbnails/d48b8fde-e694-45f2-b6b2-91d81aae44c3.jpg?sv=2019-02-02&sr=c&sig=z8K2BI1vkVBDjh6xl9t1M2ugMmrXTt8htHlgSBdb0OM%3D&st=2025-05-11T18%3A44%3A03Z&se=2025-05-11T22%3A49%3A03Z&sp=r
Duration:
T00H10M52S
Embed URL:
https://stream.cadmore.media/player/d48b8fde-e694-45f2-b6b2-91d81aae44c3
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/d48b8fde-e694-45f2-b6b2-91d81aae44c3/18216944.mp3?sv=2019-02-02&sr=c&sig=tHBO%2FaErf0fiOLQa5XckklV6F5ot7MDisKnwtqiEzuQ%3D&st=2025-05-11T18%3A44%3A03Z&se=2025-05-11T20%3A49%3A03Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment: 0.
>> This is Ed Livingston, Deputy Editor for Clinical Reviews and Education at JAMA. I'm here with Dr. Amy Kaji, who wrote a chapter in the JAMA Guide to Statistics and Methods on noninferiority methods. So why don't we start with you introducing yourself and telling us your name and title. >> Sure, my name is Amy Kaji and I am currently faculty at Harbor-UCLA Medical Center. I am a professor of emergency medicine. >> You've written a chapter in the JAMA Guide to Statistics and Methods book on noninferiority trials, so could you explain to us what a noninferiority trial is?
>> Sure. It's basically a study design that's used to determine whether a new intervention, which might offer advantages such as decreased toxicity or cost, does not have lesser efficacy than an established treatment. The control group is an active control, unlike in a conventional trial, and it's the known effective treatment that serves as the control. >> And when you're referring to the normal control, you're referring to a placebo control?
>> Yes, a superiority trial. >> Yes. This is done in a circumstance where there's an established, accepted therapy and it's no longer ethical to do a placebo trial, so you want to find out if the new therapy is as good as the old therapy. Is that one way to say it? >> That's correct. Yeah, I mean the goal is to demonstrate that it's at least as good as, or almost as good as, the existing therapy, and that's because maybe this new therapy has other advantages like decreased cost, fewer adverse effects, or greater convenience, but similar efficacy to the standard treatment.
>> There's this really complicated concept, at least that I find complicated, called the minimal clinically important difference, which is that "just as good as" part. So could you explain what a minimal clinically important difference is, how people come up with that number, and how that factors into noninferiority analysis? >> So I think that the minimal clinically important difference is a clinical question, not a statistical question, so it should be determined a priori, before initiating the study.
And that is going to depend on prior evidence as well as what you as a clinician think is important. So things that would be important in determining this would be the expected event rates, regulatory requirements, the severity of disease, the known toxicity of both treatments, the inconvenience of the standard treatment, and the primary endpoint. So, for example, you might select a smaller noninferiority margin if the disease being studied is really severe or the primary endpoint is something like death.
>> So clinicians factor all those things you just mentioned into a number, and they decide that if the outcome differs between the two groups by more than that margin, the difference is clinically important. And that drives a noninferiority trial's statistical design, so could you tell us how that works out? >> So just say that you have a standard treatment and the success rate is about 95%, and someone says, OK, well, the pre-specified noninferiority margin should be 24%. Somehow they come up with that based on regulatory requirements as well as inconvenience, et cetera. Then when you test this hypothesis, the experimental treatment strategy would be noninferior if the lower limit of the confidence interval for the experimental treatment is at least 71%. So 95% is the active control rate, and 95 minus 24 is 71%. So if in your trial you find that the treatment success of the experimental arm is, say, 85%, and your 95% confidence interval is 79% to 92%, then you could state that the experimental treatment is noninferior, because the lower limit of the 95% confidence interval is 79%, and I just stated that it could be as low as 71%. The margin determines the sample size too; the noninferiority margin that you select will determine the sample size that you need. So you can, in general, state that the smaller the noninferiority margin that is selected, the larger the sample size that may be required.
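To make the arithmetic in that example concrete, here is a minimal Python sketch of the decision rule just described. The success rates, the 24-percentage-point margin, and the 79% to 92% confidence interval are the illustrative figures from the discussion, not data from an actual trial.

```python
# Minimal sketch of the noninferiority check described above.
# All numbers are the illustrative figures from the discussion.

control_success = 0.95               # success rate of the standard (active control) treatment
margin = 0.24                        # pre-specified noninferiority margin
threshold = control_success - margin  # 0.71: the lowest acceptable performance

experimental_ci = (0.79, 0.92)       # observed 95% CI for the experimental arm's success rate

lower_limit = experimental_ci[0]
if lower_limit > threshold:
    print(f"Noninferior: lower CI limit {lower_limit:.2f} exceeds threshold {threshold:.2f}")
else:
    print(f"Noninferiority not shown: lower CI limit {lower_limit:.2f} "
          f"does not exceed threshold {threshold:.2f}")
```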
>> One thing I always get confused about is whether you should do a one-sided or a two-sided statistical test when doing this. Does it matter? How do you go about that? >> Yes, so because you are only demonstrating noninferiority, you're not trying to distinguish it from superiority, you can use a one-sided confidence interval rather than the typical two-sided one, where negative values might demonstrate inferiority of the experimental arm. But noninferiority is demonstrated if the lower limit of the confidence interval lies above, or to the right of, the selected noninferiority margin, so it only needs to be on one side. >> So when it's on one side, do you make it 5% on that side or 2.5% on that side? How does that get calculated? >> Yes. So you alter the 95% confidence interval such that it would be 97.5%, or 2.5% on that one side. >> OK. Yeah, because I've heard people say it doesn't really matter. You can -- even if you're doing it one-sided you still calculate it with the two-sided .05 interval, and it just gets confusing, because it seems that if it's one-sided it should be .025, not .05. So I get confused by that and I've heard people say various things. >> Yeah.
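A brief sketch of the one-sided versus two-sided point discussed above, assuming SciPy is available: a one-sided test at alpha = 0.025 uses the same critical value as the lower limit of a conventional two-sided 95% confidence interval, which is why the two framings often coincide in practice, whereas a one-sided test at alpha = 0.05 is less strict.

```python
from scipy.stats import norm

# Critical z value behind the lower limit of a conventional two-sided 95% CI
two_sided_95 = norm.ppf(1 - 0.05 / 2)   # ~1.96

# Critical z value for a one-sided test at alpha = 0.025
one_sided_025 = norm.ppf(1 - 0.025)     # ~1.96, identical to the value above

# A one-sided test at alpha = 0.05 would use a smaller (less strict) value
one_sided_05 = norm.ppf(1 - 0.05)       # ~1.645

print(two_sided_95, one_sided_025, one_sided_05)
```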
>> OK, is there anything else? You covered it very quickly and very efficiently. Is there anything else you think we need to talk about to explain this concept to clinicians, who I think get confused by this all the time? >> I think one point would be that for a noninferiority trial, when you're doing both an intention-to-treat and a per-protocol analysis, in general people state that both approaches should demonstrate noninferiority; it shouldn't just be the ITT. And then it would also be important to note that a noninferiority trial is not the same as an equivalence trial; demonstrating noninferiority doesn't demonstrate equivalence, because it's a different study design.
>> And you need a lot more patients for equivalence because you're going in both directions, right? >> You are going in both directions. >> What about superiority? How does it relate to superiority trials? >> So I think one of the main misconceptions is that, you know, someone says, well, we failed to demonstrate a significant difference between the intervention and the control in a superiority trial. And then one says, oh, well, that means that there's no difference.
But failing to demonstrate a difference isn't the same as there not being a difference. In general, though, a superiority trial requires fewer patients because you have a bigger margin. >> So I'm a little confused by that. So you have a bigger margin with a superiority trial than with a noninferiority trial? >> Because you're going both ways. >> Uh-huh. >> So with a noninferiority trial, in general you're trying to demonstrate that you're within a smaller range, and so you often do require a larger sample size.
>> Oh, I see. >> Not always. And in fact, the equivalence trial would probably be somewhere in between the superiority and the noninferiority sample size. >> OK, and just for our listeners' sake, the smaller the margin, the more patients you need to do your trial. One of the things we see at the Journal all the time is that people pick unrealistically large -- >> Right.
>> -- minimal clinically important differences so that they wind up needing fewer patients in their trial. And then they get into trouble because they don't hit those margins. >> Yes. >> We see that a lot at the Journal. Very, very common. So it's very, very important -- when I talk to groups about publishing in JAMA, one thing I talk about every time is how important it is to carefully and realistically select the MCID when you're designing a trial.
In my view, it's the most common reason that trials get messed up and wind up not being publishable in JAMA. >> Yeah, I mean that is the first question that needs to be answered. And you know, as a statistical consultant, one of the first questions people come to you and ask is what sample size is warranted, and for that question to be answered you have to define the minimal clinically important difference, whether it's for a superiority, equivalence, or noninferiority trial, but it's especially important in a noninferiority trial.
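To illustrate the earlier point that a smaller noninferiority margin demands a larger sample size, here is a rough sketch using one common normal-approximation formula for comparing two proportions; the 95% success rate, 90% power, one-sided alpha of 0.025, and the margins shown are all assumed for illustration only.

```python
from scipy.stats import norm

def noninferiority_n_per_arm(p, margin, alpha=0.025, power=0.90):
    """Approximate per-arm sample size for a noninferiority comparison of two
    proportions, assuming both arms truly have success rate p (normal approximation)."""
    z_alpha = norm.ppf(1 - alpha)   # one-sided significance level
    z_beta = norm.ppf(power)        # desired power
    return (z_alpha + z_beta) ** 2 * 2 * p * (1 - p) / margin ** 2

# Illustrative only: a 95% success rate with progressively tighter margins
for margin in (0.10, 0.05, 0.02):
    n = noninferiority_n_per_arm(p=0.95, margin=margin)
    print(f"margin {margin:.2f}: about {n:.0f} patients per arm")
```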
>> Thanks for listening to this JAMAevidence podcast on the JAMA Guide to Statistics and Methods. For more information about this topic, go to JAMAevidence.com, where you'll find a complete array of materials that help you understand the medical literature. In addition to the JAMA Guide to Statistics and Methods, there's the JAMA Rational Clinical Examination, the JAMA Users' Guides to the Medical Literature, and JAMA Care at the Close of Life. You'll also find a variety of tools that will help you read articles, including a glossary of technical terms, learning tools, and calculators.
There's also a series of educational guides for all the content found on JAMAevidence.com. This is Ed Livingston, Deputy Editor of Clinical Reviews and Education for JAMA and co-author of the book the JAMA Guide to Statistics and Methods. Thanks for listening.