Name:
Harold C. Sox, MD, discusses pragmatic trials and how they can help address “real world” questions.
Description:
Harold C. Sox, MD, discusses pragmatic trials and how they can help address “real world” questions.
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/59e27297-c089-44db-8a45-73a6a106a92e/thumbnails/59e27297-c089-44db-8a45-73a6a106a92e.jpg?sv=2019-02-02&sr=c&sig=g%2BNUYhTfMIm94mf1LGAxwWXfQg%2B97aRZBPy%2BSnRFXTM%3D&st=2024-12-22T06%3A00%3A22Z&se=2024-12-22T10%3A05%3A22Z&sp=r
Duration:
T00H21M14S
Embed URL:
https://stream.cadmore.media/player/59e27297-c089-44db-8a45-73a6a106a92e
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/59e27297-c089-44db-8a45-73a6a106a92e/18563302.mp3?sv=2019-02-02&sr=c&sig=PJ6FD7JLamgoiZXVBBE6GP6gYmEN7Gow93oL7Oq%2BPuU%3D&st=2024-12-22T06%3A00%3A22Z&se=2024-12-22T08%3A05%3A22Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
>> This is Ed Livingston, Deputy Editor for Clinical Reviews and Education at JAMA. I'm here with Dr. Harold C. Sox, who wrote a chapter in the JAMA Guide to Statistics and Methods on Pragmatic Trials: Practical Answers to "Real-World" Questions. Can we start by having you tell us your name and title? >> My name is Hal Sox. I'm an internist and Director of Peer Review for PCORI, the Patient-Centered Outcomes Research Institute. I'm also Professor Emeritus at the Geisel School of Medicine at Dartmouth.
>> Thank you for joining us. You wrote a chapter in the JAMA Guide to Statistics and Methods book entitled Pragmatic Trials: Practical Answers to "Real-World" Questions. I'm sure our listeners will want to know, how did you wind up with this particular position at PCORI? >> The first 15 years of my career I worked at the Palo Alto VA Medical Center, mostly seeing patients and teaching, but also doing research in medical decision-making. Then I had a spell as chair of Internal Medicine at Dartmouth. And I then spent eight years as editor-in-chief of Annals of Internal Medicine.
And I think it was my experience as a journal editor that got me the job at PCORI, 11 years now. >> Wow. What do you do at PCORI? What exactly is your role there? >> Before answering that, why don't I just tell you a little bit about PCORI, because I think that'll make it pretty clear how what I do fits in. PCORI was created by the US Congress in 2010 to fund research in clinical comparative effectiveness.
Most of PCORI's funding is for studies that compare two or more interventions, often clinical interventions, but sometimes changes in the healthcare system to implement some clinical intervention. An example of the sort of thing that PCORI funds is a randomized trial that tested the effect of daily blood sugar checks versus no blood sugar checks on the control of diabetes in patients with non-insulin-dependent diabetes.
That was a rather typical study with actually quite an impressive result in its impact on guidelines for management of diabetes. So that's what PCORI does. And what do I do? Well, the same legislation that created PCORI required PCORI to guarantee peer review of all research results, and to make them publicly accessible within 90 days of completing peer review.
We're the only funding agency in the US that has this requirement. And PCORI decided to require a final research report in the form of a journal article, and then to create an external peer review process, just like a scientific journal has, to peer review it. And so my background at Annals made me the logical person to be in charge of developing and running PCORI's peer review program.
So I read each final report after the completion of external peer review, which is done by a contractor. So we function really just like a journal, except that we post every report on the PCORI website, whether the study is a spectacular success or not. >> So along the lines of what PCORI does, PCORI funds research projects that are closely aligned with clinical questions that clinicians deal with every single day.
So could you explain for us what a pragmatic trial is, since that's more closely aligned with the types of situations clinicians find themselves in, and how a pragmatic trial differs from the usual explanatory randomized clinical trial that we're used to seeing? >> Yeah, I'd be happy to try to make that distinction. The concept of a pragmatic trial was actually first proposed nearly 50 years ago as a study design philosophy that emphasizes answering questions in ways that are going to be most useful to decision-makers.
So pragmatic trials are intended to help typical clinicians and typical patients make difficult decisions about the tests and treatments that are used in typical clinical care settings. The real watchword there is typical medical care. So the goal of a pragmatic trial is to maximize the chance that the trial results will apply to patients who are usually seen in practice.
And just to be sure we're on the same page, that notion is called external validity. It contrasts with internal validity, which is basically whether the study conclusions are justified by the evidence. And we'll use those two terms a little bit during my response to this question. So, just to repeat, the most important feature of a pragmatic trial is that the researchers choose patients, clinicians, clinical practices, interventions, and clinical settings to maximize the applicability of the results to usual practice, i.e., external validity.
And this contrasts with explanatory trials, which are designed to maximize internal validity. I thought that the best way to explain this would be to take just a minute to talk about randomized trials. We take them for granted, but we don't always think about why they are so important for clinical practice. So, in comparing treatment A with treatment B, lots of things could influence the study outcome. The patients who get treatment A might be older or sicker than those who get treatment B.
If treatment B works better, perhaps the main reason is that the patients who got treatment B were younger or not as sick. So the factors other than treatment A or treatment B that could influence a study outcome are called confounders, because they make it much harder to decide if treatment B is the reason for better outcomes, or alternatively that the patients would've gotten better without treatment B, just because they were younger or less sick.
Or the trial results might be a mix of the effect of B and these other clinical factors. Now there are ways to adjust statistically for these factors that could confound, if you can measure them. The real problem is the confounders that aren't measured, such as socioeconomic status or poor adherence to a drug treatment regimen. A study could appear clearly positive, but the reason could be an unmeasured confounder.
Now so far we've been talking really about studies that are based on observational research, not randomized trials. The beauty of randomized trials -- and the word beauty is just perfect for describing it -- is that every possible confounder, measured and unmeasured, is, in expectation, equally distributed between those who get treatment A and those who get treatment B. I always get a shiver down my spine when I say that, because it is so, so beautiful and elegant.
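To make that concrete, here is a minimal simulation sketch. It is not from the chapter; the frailty variable, the selection rule, and all effect sizes are invented for illustration. It shows how an unmeasured confounder can make an ineffective treatment look beneficial in an observational comparison, while simple randomization balances the confounder in expectation.

```python
# Illustrative sketch (invented numbers): unmeasured confounding in an
# observational comparison vs. balance under randomization.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unmeasured confounder, e.g., underlying frailty (higher = sicker).
frailty = rng.normal(0, 1, n)

def outcome(treated_b, frailty):
    # True state of the world: treatment B has ZERO effect; the outcome
    # score worsens only with frailty (plus noise).
    return 0.0 * treated_b + 1.0 * frailty + rng.normal(0, 1, len(frailty))

# Observational allocation: frail patients rarely receive B, so B ends up
# concentrated in healthier patients.
p_b = 1 / (1 + np.exp(frailty))
obs_b = rng.random(n) < p_b
obs_y = outcome(obs_b.astype(float), frailty)
print("observational B-minus-A difference:",
      obs_y[obs_b].mean() - obs_y[~obs_b].mean())   # clearly nonzero

# Randomized allocation: a coin flip balances frailty across the arms.
rct_b = rng.random(n) < 0.5
rct_y = outcome(rct_b.astype(float), frailty)
print("randomized B-minus-A difference:",
      rct_y[rct_b].mean() - rct_y[~rct_b].mean())   # close to 0, the truth
```

Running it, the observational contrast is substantially negative, so B looks protective even though it does nothing, while the randomized contrast sits near the true null.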
I digressed to talk about randomization and confounding because an explanatory trial goes to great lengths to minimize factors that occur after randomization and might affect patients getting treatment B more than patients getting treatment A, which would then distort the results. So, here are some examples of what an explanatory trial might do. It might exclude patients with poor adherence to treatment.
It might keep follow-up as short as possible, since longer follow-up means more opportunity for events that affect the outcome more for treatment A than for treatment B, which, again, would make the results more difficult to interpret. An explanatory study might also institute intrusive efforts, like audiotaping clinic visits to measure the delivery of a psychologically mediated treatment, or participation in shared decision-making in a study comparing two methods of shared decision-making.
So in contrast to explanatory studies, pragmatic trials go to great lengths to reproduce precisely the conditions of daily community practice. This means having few, if any, exclusion criteria; embracing long periods of follow-up, which provides valuable information but, as we just said, increases the potential for confounders to occur during follow-up; and not measuring adherence, because study results that include those with poor adherence are actually what you want to know when you're deciding whether to start treatment in a patient whose adherence concerns you.
>> There are certain characteristics of pragmatic trials that were defined by Tunis. Can you tell us what those are? >> Sure. According to Tunis, the characteristic features of a pragmatic clinical trial are first, to compare clinically relevant alternative interventions. So pragmatic trials may compare classes of drugs, and allow the physician to choose which drug in the class to use, to choose the dose, and to feel free to use co-interventions, all of which are freedoms that mimic usual practice.
A second item is to include a diverse population of study participants. Eligibility could be defined by a presumptive diagnosis rather than a confirmed diagnosis, because treatments are often initiated when the diagnosis is uncertain, and sometimes the patient's response to a specific treatment actually gives more diagnostic insight. So a third element of pragmatic trials, according to Tunis, is to recruit patients from widely differing practice settings, hoping that the results will apply widely.
Thus an explanatory study comparing two surgical approaches to esophageal reflux might use only surgeons specializing in the esophagus, whereas a pragmatic trial would include surgeons who do just a few such procedures each year. So the last of Dr. Tunis' characteristics of pragmatic studies is to collect data on a range of health outcomes. The outcomes in a pragmatic trial are more likely to be patient-reported, global and broad in what they reflect, subjective, and patient-centered, such as self-reported quality-of-life measures, in contrast to the explanatory trial, which usually seeks objective endpoints, such as the results of laboratory tests or imaging procedures.
>> How does a pragmatic trial design influence the cost of performing a trial? >> That's a good question. And I don't think there's a definitive answer. But what we can do is talk about some of the measures that pragmatic and explanatory trials take, and what their effect is on the cost of running the study. So, explanatory trials control costs mostly by choices that reduce time in the study. So they keep follow-up as short as possible.
And they enroll patients who are very likely to accumulate the study endpoint in the near term, because then you can stop the study when you've reached your target sample size or your target number of endpoints. Pragmatic trials also have strategies for keeping the costs down. One of them is to use existing clinical data sources, such as electronic health records, Medicare claims records, and the National Death Index for the outcome of all-cause mortality.
Another approach is to simplify recruitment of participants by going to registries of patients with the target condition to get lists of patients to approach about participating. And, of course, explanatory trials can use this strategy as well to keep their costs down. Reducing the number of follow-up contacts is another strategy that fits in pretty nicely with the philosophy of the trial because it reduces the intrusion on the normal clinical practice.
And finally, avoiding measuring the dosage of the intervention. This is a really critical point. Many interventions, both clinical and administrative, are dose-dependent: the effect depends on the dose delivered. If the patient doesn't take the medication, or the clinic does not implement the intervention, then it doesn't have an effect. And that could confuse the interpretation of studies, because an intervention can look ineffective when, in fact, it just wasn't used much at all. So there's a risk to avoiding measuring the dose of the intervention.
>> When these types of trials come to JAMA, we tend to spend a lot of time talking about them. Pragmatic trials look a lot different from explanatory trials, and we're, sort of, locked into explanatory trials and how they should work. We sometimes struggle with pragmatic trials. Could you tell us what are some of the inherent limitations of pragmatic trials? >> In short, there's no free lunch, is there? The main limitation of a pragmatic trial is a direct consequence of choosing to conduct a study that puts few demands on patients and clinicians and mimics clinical practice faithfully.
The electronic health record as a data source means easy, inexpensive access to clinical information. But the lack of systematic data collection in clinical practice, except for height, weight, and blood pressure, means that there's going to be more missing data, which is a curse on the interpretation of any trial. A lower burden of data collection simply means fewer patient data with which, for example, to adjust the analysis of a randomized trial for differences in covariates between the two study arms.
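As a hedged illustration of that covariate adjustment (the variable names, effect sizes, and 1:1 design below are assumptions for the example, not from the interview), here is a sketch of an ANCOVA-style analysis that is only possible when the baseline covariate was actually collected:

```python
# Sketch of covariate adjustment in a randomized trial analysis.
# Invented data; illustrates why a collected baseline covariate is useful.
import numpy as np

rng = np.random.default_rng(1)
n = 2_000
baseline = rng.normal(0, 1, n)        # e.g., a baseline severity score
treat = rng.random(n) < 0.5           # 1:1 randomization
y = 0.3 * treat + 1.0 * baseline + rng.normal(0, 1, n)  # true effect: 0.3

# Unadjusted estimate: simple difference in means between arms.
unadjusted = y[treat].mean() - y[~treat].mean()

# Adjusted estimate: regress the outcome on treatment plus the covariate.
X = np.column_stack([np.ones(n), treat.astype(float), baseline])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"unadjusted: {unadjusted:.3f}   adjusted: {coef[1]:.3f}")
```

Both estimates are unbiased under randomization, but the adjusted one is more precise; when the baseline covariate is missing for many patients, as often happens with routine EHR data, that extra precision is simply unavailable.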
The same problem occurs when you use a wide range of clinicians rather than experts in the target condition. Data collection tends to be more inconsistent from patient to patient. And perhaps there is a wider range of ancillary treatments than there would be in expert practice, where specialty practice guidelines may narrow the spectrum of interventions. And then a third limitation is that it's much easier to get uniform execution of treatment protocols in explanatory studies than in pragmatic ones.
So these limitations lead to more variation in the study data, and can lead to less precise answers to the key questions that inspired the trial. So as I said at the beginning, there's really no free lunch here. The trade-off between the explanatory-ness of a study and its pragmatism is always going to be with us. >> Can you discuss the balance between clinical trial generalizability, a characteristic of pragmatic trials, and the ability to answer a narrow clinically relevant question such as occurs with explanatory trials?
>> Well, we've talked about the terms explanatory and pragmatic as if they were two categories of trials: the purely explanatory and the purely pragmatic. In fact, these two terms mark the ends of a spectrum of study designs. In practice, some features of a study are pragmatic and others are explanatory, because achieving internal validity generally comes at the cost of reduced external validity.
Successful researchers make good choices in the design of a trial, but whether authors label their study as pragmatic or explanatory, readers need to pay close attention to the study characteristics that maximize its applicability to their patients and to their practice style. >> Yeah, it's interesting, because I recently had an experience processing a manuscript of this sort of design where patients were offered surgery versus medical treatment for a particular condition.
And in the spirit of being a pragmatic trial, they gave patients the option of pursuing the standard treatment or being randomized into the trial. Two-thirds of the patients said they'd take the standard treatment. They wanted nothing to do with the clinical trial. They then randomized the remaining third into the two different interventions. And a criticism of that trial as it went through the system was, well, it's not really a clinical trial, it's really an observational study, because the patients aren't truly randomly allocated at the outset of the study; there's already a bias built into the patient populations.
And that may be true from a purist perspective. But on the other hand, as you've mentioned earlier, that's what happens in clinical practice. That's what patients do. And that's how clinicians interact with them. So I was really struck, because ultimately we wound up not classifying this as a clinical trial. But the investigators said it is. And it was an interesting problem, and this balance between trying to explain the result of a particular intervention versus this pragmatic approach, which is how it plays out in clinical practice.
And I'm not sure what the right answer is. It's very, very difficult reconciling some of these problems. >> Here's a thought, maybe useful, maybe not useful for you. One thing that I say frequently is that most decisions about practice ultimately come from a body of evidence. And systematic reviews are the tool to characterize a body of evidence and summarize the results. And practice guidelines and insurance company decisions to cover a practice are generally based on systematic reviews.
And when you get a heterogeneous body of evidence, it can be really difficult to interpret. And so any factors that contribute to heterogeneity are important to measure and to use, if possible -- for example, to stratify the analysis of the systematic review according to some factor that might be responsible for the heterogeneity. So where a study design lies on the spectrum from highly explanatory to highly pragmatic could be helpful in making sense of a difficult-to-interpret body of evidence.
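A hedged sketch of what that stratification could look like (the four trials, their effect estimates, standard errors, and design labels are all hypothetical): pool the trials within each design stratum with inverse-variance weights, then compare the strata.

```python
# Hypothetical numbers: stratifying a fixed-effect, inverse-variance
# meta-analysis by where each trial sits on the explanatory-pragmatic
# spectrum, to ask whether design style explains the heterogeneity.
import numpy as np

# (effect estimate, standard error, design label) -- all invented.
trials = [
    (0.45, 0.10, "explanatory"),
    (0.50, 0.12, "explanatory"),
    (0.20, 0.09, "pragmatic"),
    (0.15, 0.11, "pragmatic"),
]

def pooled(stratum):
    est = np.array([e for e, s, d in trials if d == stratum])
    se = np.array([s for e, s, d in trials if d == stratum])
    w = 1 / se**2                        # inverse-variance weights
    mean = (w * est).sum() / w.sum()
    return mean, np.sqrt(1 / w.sum())    # pooled effect and its SE

for design in ("explanatory", "pragmatic"):
    m, s = pooled(design)
    print(f"{design:12s} pooled effect = {m:.2f} (SE {s:.2f})")
```

If the two strata diverge, as in these made-up numbers, the trials' position on the spectrum becomes a candidate explanation for the heterogeneity in the body of evidence.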
But as far as I know, there's no good quantitative measure of where a study lies on that spectrum from purely pragmatic to purely explanatory. But it's something to be thinking about as we refine our methods for dealing with a body of evidence. >> Thank you, Hal. Thanks for listening to this JAMAevidence podcast on the JAMA Guide to Statistics and Methods. For more information on this topic, go to JAMAevidence.com where you'll find a complete array of materials that will help you understand the medical literature.
I'm Ed Livingston, Deputy Editor for Clinical Reviews and Education at JAMA. And co-author of the book JAMA Guide to Statistics and Methods. Thanks for listening.