Name:
Recommendations About Screening: Interview With Gemma Louise Jacklyn
Description:
Recommendations About Screening: Interview With Gemma Louise Jacklyn
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/4397e35d-44cf-4ede-a22a-e9260a7db2bd/thumbnails/4397e35d-44cf-4ede-a22a-e9260a7db2bd.jpg?sv=2019-02-02&sr=c&sig=ljmZqGwQadCMILn6z0kmyr0Rtdg777mpfln67rrC64c%3D&st=2024-11-05T05%3A49%3A22Z&se=2024-11-05T09%3A54%3A22Z&sp=r
Duration:
T00H33M43S
Embed URL:
https://stream.cadmore.media/player/4397e35d-44cf-4ede-a22a-e9260a7db2bd
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/4397e35d-44cf-4ede-a22a-e9260a7db2bd/12975326.mp3?sv=2019-02-02&sr=c&sig=Kr8RJ8SqJNAej7H3mCFTj4YIGHPClbQoRnq%2F0uA07V4%3D&st=2024-11-05T05%3A49%3A23Z&se=2024-11-05T07%3A54%3A23Z&sp=r
Upload Date:
2022-02-28T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
[ Music ] >> Hello. I'm Ed Livingston, Deputy Editor for Clinical Reviews and Education at JAMA. [ Music ] In today's JAMAevidence podcast, we're going to review disease screening and how well-intentioned, credible experts can look at the outcomes of the same studies and arrive at very different conclusions. Screening for disease seems like an intuitively obvious thing to do. Find a disease early in its course, then start some sort of a treatment so that the disease can be contained before it becomes a problem.
As it turns out, it's not so simple. As more is learned about the good and the bad of screening, more recommendations for disease screening are being reversed. This is because the downsides of screening are much less obvious than the upsides. In this podcast, we review how the benefits of disease screening can be influenced by a number of factors. Those factors include the harms of screening, overdiagnosis, lead time bias, and length time bias, and how all these factors influence experts' thinking so that different experts look at the same data and arrive at different conclusions.
>> I have no family history of breast cancer. This came out of the blue. It was an absolute shock. >> There was no history of breast cancer in my family so I didn't think it applied to me. >> If I had waited until I felt it or my OB/GYN felt it, it would have been huge. My prognosis would not be what it is because I had the screening mammogram. >> I had my mammograms yearly. I didn't have radiation or chemotherapy because early detection.
>> These women spoke on the Charlotte Radiology YouTube production, Living Proof That Screening Saves Lives. And they were all diagnosed with breast cancer, got treated, and are grateful that their lives were saved by mammography. It's easy to understand why these women feel this way. They underwent a test, it showed they had cancer that could kill them, they got treatment, and they're still alive. But does breast cancer screening really save lives? To these women it seems like it, and that perception comes from clinicians making very strong statements like these.
>> Let me make myself clear. The data is overwhelming that mammography done annually saves lives. >> That was Dr. Gilda Cardoso [phonetic] of the VCU Massey Cancer Center when interviewed by CBS. She's of the opinion that aggressive mammography saves lives. Okay, true. But how many lives? That's the trick and that's where all the controversy lies. We're going to use breast cancer as a platform to understand the various ways that screening can look better than it really is.
This is important because in a sense we may be misleading ourselves about the benefits of certain screening technologies and would be better served by paying more attention to the limitations of screening and better understanding how to detect and treat disease in ways where we can make meaningful differences. This is Otis Brawley, Chief Medical Officer of the American Cancer Society. >> All the science seems to show that screening decreases the risk of dying from breast cancer by 20 to 30%. That means among women in their 40s, 70 to 80% of those who were destined to die from breast cancer are still going to die from breast cancer, even if they get the best mammography and they get optimal treatment and optimal therapy.
And that translates into we shouldn't be satisfied with 20 to 30%. We need to do better. We need to do research such that we can develop things that are better. >> Dr. Brawley's point is that the vast majority of women who are destined to die from breast cancer will do so irrespective of any screening or treatment that we have to offer. And this is not a small number of women. In fact, it's the majority of women who have clinically important breast cancer. Again, from the Charlotte Radiology YouTube video.
>> You couldn't feel my lump. It had to be found with a mammogram to be able to catch it in that stage and then not have to go through radiation or chemo because of that, just by simply having a mammogram. If I had waited another year to have my mammogram, I am not sure I'd be here talking to you. I found it early, and so we have every opportunity to kick this thing to the curb. >> So each of these women honestly believes that her life was saved because she had breast cancer found by mammography. But to some extent their understanding of their disease isn't exactly right.
And why isn't that right? One way to look at this is to look at high-quality studies that show how much a screening test like mammography does in terms of saving women from dying from breast cancer. To do this, we're going to look at numbers presented in an article published by Lydia Pace and Nancy Keating in the April 2nd, 2014, issue of JAMA. They published an article called "A Systematic Assessment of the Benefits and Risks to Guide Breast Cancer Screening Decisions." If you look at a population of women aged between 39 and 49 who don't undergo breast cancer screening and follow them for some period of time, about 32 per 10,000 will die of breast cancer.
If they have mammographic screening, there are still going to be about 29 per 10,000 who die. In other words, screening saved about three women out of 10,000. After some adjustments are made, this amounts to about a 20% reduction in breast cancer mortality attributable to mammography. So far, so good. A woman's life is of incalculable value. If you screen a bunch of women and save three out of 10,000 lives, it should be worth it. But maybe it's not worth it because in order to save one woman's life, about 1,900 women need to undergo screening.
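The arithmetic behind these figures can be sketched in a few lines. This is an illustration using the unadjusted per-10,000 numbers quoted above; note that the article's adjusted analysis is what yields the roughly 20% relative reduction and the roughly 1,900 number needed to screen, so the raw numbers here give somewhat different values.

```python
# Screening benefit arithmetic, using the unadjusted per-10,000 figures
# quoted above. Pace and Keating's adjusted estimates (about a 20% relative
# reduction and ~1,900 screened per life saved) differ from these raw values.

deaths_unscreened = 32 / 10_000   # breast cancer deaths without screening
deaths_screened = 29 / 10_000     # breast cancer deaths with mammography

arr = deaths_unscreened - deaths_screened   # absolute risk reduction
rrr = arr / deaths_unscreened               # relative risk reduction
nns = 1 / arr                               # number needed to screen per death averted

print(f"Absolute risk reduction: {arr * 10_000:.0f} per 10,000")
print(f"Relative risk reduction (unadjusted): {rrr:.1%}")
print(f"Number needed to screen (unadjusted): {nns:.0f}")
```

The key point the code makes explicit is that a benefit quoted as a relative reduction ("20 to 30%") corresponds to a very small absolute reduction when the baseline risk is low, which is why the number needed to screen runs into the thousands.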
If screening were perfect and there were no harms associated with the screening, this would be okay. But there are harms. One of the most important limitations of cancer screening is a phenomenon known as overdiagnosis. This occurs when a screening test shows a cancer that in itself would have never resulted in a patient's death. For some diseases such as breast and prostate cancer, this is a surprisingly common phenomenon.
To understand what the phenomenon of overdiagnosis is all about, I spoke to one of the authors of the JAMAevidence textbook chapter on disease screening from the Users' Guides to the Medical Literature, Gemma Jacklyn. >> This phenomenon is called overdiagnosis and overtreatment, where you're finding cancers that would never go on to cause harm or death in a woman's lifetime, and we end up treating them because we can't differentiate between the cancers that would go on and cause harm and the ones that wouldn't. >> This is not a small problem because for the case of breast cancer it's likely that many women are given a diagnosis of cancer for disease that would have never harmed them.
Yet they get very aggressive and dangerous treatments for these cancers. >> There is a pattern, not just in the United States but internationally, of more overdiagnosis and overtreatment for every breast cancer death prevented. So we know in the UK that for every breast cancer death prevented, three women will be overdiagnosed and overtreated. So that's three women who receive unnecessary cancer treatment, so surgery, chemotherapy, radiotherapy, things like that.
With prostate cancer screening, it's not a close call. It's very clear that the harms outweigh the benefits, and men are not recommended to go for prostate cancer screening unless they're at very high risk of getting prostate cancer. >> Going back to our population of 10,000 women aged between 39 and 49, there'll be as many as 104 women with an overdiagnosis of breast cancer. That means for the three women who clearly benefitted as a result of this screening, up to 104 women will have been diagnosed with a breast cancer and received treatments such as surgery, chemotherapy, and radiation that they never needed.
These are not benign interventions, and this is a very significant problem, which is why many experts believe that screening for breast cancer in this age group is not appropriate. Getting back to our women who gave testimony on how breast cancer screening saved their lives, what some of them don't know, and we have no way to predict, is that some of them underwent treatment for cancers that would have never been a problem for them. They strongly believe that these cancers were going to be lethal, but that was not the case for some of them.
For breast and prostate cancer, overdiagnosis is a major issue and one of the more important reasons that screening for these diseases is controversial. False positives are also a problem. >> So we're talking now about false positives. So someone for whom the test will basically say that they have the disease, but actually, they don't have it. >> In the breast cancer example, about 6,100 women will have a mammogram showing cancer that will require a biopsy that, in turn, shows no breast cancer.
Overdiagnosis is the Achilles heel of cancer screening. It's a major problem for breast cancer and an even larger problem for prostate cancer. So to recap, to save the lives of three women in our group of 10,000 aged between 39 and 49, up to about 100 will experience overdiagnosis and get fully treated with surgery and radiation and chemotherapy for a cancer that never would have been a problem for them. Another 6,100 will be told they have cancer based on a mammogram and then require invasive biopsies only to find out that they don't have breast cancer.
Conceptually, overdiagnosis and false positives are easy to understand as limitations of disease screening. There are two sources of bias that are not so obvious, nor are they easy to understand. And these are lead time bias and length time bias. To help understand these, I'm bringing in the big guns. Gordon Guyatt is one of the principal authors of the Users' Guides and a leading figure in evidence-based medicine. Let's start with Dr. Guyatt's definition for lead time bias. >> Lead time bias is relatively straightforward.
I'll explain it by saying we have two patients whose fate is identical. They are patient one and patient two. And unfortunately, their fate is to get cancer, and 10 years after the cancer appears, they're going to be dead as a result of their cancer. And it doesn't matter if they are diagnosed early or late. Their fate is going to be the same. In other words, any screening test -- and the earlier treatment that results -- will not prolong their lives.
So that's their situation. In both cases, their cancer is going to become symptomatic at year seven. So it appears, but not until year seven does it get large enough or extensive enough to present with symptoms that will lead to the diagnosis. So if they wait until they are symptomatic, they're going to live for three years from the time of diagnosis. However, at year two, the cancer becomes large enough that if they underwent screening, the cancer would be detected.
Now, patient one does not go for screening. At seven years the cancer is detected when it becomes symptomatic, and the patient lives for three years beyond the diagnosis. Patient two attends for screening, as it turns out, two years after the cancer appeared, just at the beginning of the period when it might be diagnosed by screening. That patient then lives eight years after diagnosis.
So if you are just looking at what happens to people diagnosed on screening or diagnosed symptomatically, the screened patient lives eight years; the unscreened patient lives only three years. >> Lead time bias makes screening studies look better than they are. Gemma Jacklyn. >> Lead time bias is why survival always rises following screening. And the reason for that is when you think of survival as a period of time between diagnosis and death, that is survival time.
But what we often do is we think about that longer survival means delayed death, but in fact, it can just mean earlier diagnosis. So what lead time bias essentially is, is that people who are screened always seem to live longer because diagnosis is made earlier in screen-detected cases compared to people who are diagnosed when they're clinically symptomatic. So they're just living longer knowing they have the disease. They're not actually necessarily living longer in total. >> So lead time bias means that for two patients who have cancer and will die no matter what, after some fixed period of time after the cancer starts, patients who get screened look like they live longer because they have a lead on having the diagnosis of cancer before another patient who gets the diagnosis later on when the disease progresses enough to become clinically obvious.
When trying to avoid the prospect of lead time bias, study results should be assessed for a couple of things. >> The way to eliminate lead time bias is definitely to have a randomized controlled trial of the effect of screening on mortality or morbidity. So the key thing is measuring your outcomes. You don't measure your outcome by survival time because if you measure your outcomes in a randomized controlled trial of screening using survival time, it's always going to favor the screened group because they're always going to look like they're living longer if you use survival time.
Whereas if you look at overall mortality, so comparing deaths in the screened group compared to the unscreened group, you eliminate survival time. So you're not going to end up with lead time bias. You're eliminating lead time bias because lead time bias is essentially early detection with no improvement in outcome. So more diagnosis time, not more lifetime. >> To avoid the effects of lead time bias on screening, don't look at study results in terms of survival times, only as overall mortality.
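Dr. Guyatt's two-patient scenario can be put into a short sketch. The timelines below are the hypothetical ones from his example (years measured from when the cancer first appears): survival-from-diagnosis makes screening look beneficial, while the actual time of death is unchanged.

```python
# Lead time bias sketch: two patients with identical fates (death 10 years
# after cancer onset). Screening only moves the diagnosis earlier; it does
# not move the death later.

death_year = 10          # both patients die 10 years after the cancer appears
screen_detect_year = 2   # cancer becomes detectable by screening at year 2
symptomatic_year = 7     # cancer becomes symptomatic at year 7

# Patient 1: no screening, diagnosed when symptomatic.
survival_unscreened = death_year - symptomatic_year

# Patient 2: screened, diagnosed as soon as the cancer is detectable.
survival_screened = death_year - screen_detect_year

print(f"Survival after diagnosis, unscreened: {survival_unscreened} years")
print(f"Survival after diagnosis, screened:   {survival_screened} years")
print(f"Lead time: {survival_screened - survival_unscreened} years")
# Time of death is year 10 in both cases: the 5 extra "survival" years are
# entirely extra time spent knowing about the diagnosis.
```

Comparing survival-from-diagnosis shows an eight-versus-three-year difference even though neither patient lives a day longer, which is exactly why trials should compare mortality between randomized groups rather than survival time.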
The other mechanism for dealing with lead time bias is to do randomized clinical trials, ensure balance between the screened and non-screened populations, and measure the outcomes in terms of overall mortality and not survival time. How do people get misled? >> So a lot of the time in screening promotional material or advertisements that you might see, you hear that breast cancer screening, for example, increases survival time, and it might, in terms of the outcomes measured.
But it's a biased measure of the efficacy of screening. >> The other bias that is intuitively difficult to understand is length time bias. This involves a biased estimate of the benefits of disease screening related to the length of time of the disease process. Instead of the scenario that we used for lead time bias, where we considered what survival looks like in screening two patients who have cancer progressing from start to finish at the same rate, in length time bias, the effect of screening is considered in two very different patients who have different disease durations.
One patient gets the disease and dies very quickly from it. The other may have it for many years before it progresses enough for them to die from it. One obvious example of this is prostate cancer. Some, especially young men, get a virulent form of the disease and rapidly die from it. Others get a relatively benign form of the disease and live for many years after it's first found. Gordon Guyatt explains. >> I'm going to, once again, have two patients. The first patient is the same as in the scenario that I've just told you about.
So that person -- their cancer starts. Two years after their cancer starts, it gets large enough to be detected by screening. At year seven it becomes symptomatic. And at year 10, unfortunately, the patient dies as a result of the cancer. Once again, screening and the early treatment that results from screening doesn't help. Whether screened or unscreened, their fate is the same. Our second patient, in this case, is very different.
Whereas patient one had quite a slow-growing tumor, patient two has a much more rapidly-growing tumor. And in this case, from the time the cancer appears to the time where it might be picked up with screening is only a year. But the time from when it could be picked up from screening to when it becomes symptomatic, because it's a much more rapidly-growing tumor, is only three months.
And unfortunately, with this rapidly growing tumor, the patient is dead within, say, a two-year time point. So we have one slow-growing tumor with a long duration in which screening might pick up the tumor, and a second rapidly growing tumor where there's only a very brief duration between the time where it could be picked up by screening and the time it becomes symptomatic. >> The first patient develops a cancer that can first be detected by screening at about two years after the cancer first started.
It becomes clinically obvious at about seven years, and the patient is dead at 10 years. That means that a screening test could be positive anywhere between years two and seven. So there's a five-year window in which a screening test can be positive and the patient doesn't otherwise know they have the cancer. The second patient has a cancer that can be detected by screening at one year, becomes clinically obvious three months later, but the patient is dead at two years. This patient only has a three-month window in which a screening test can be positive. These two patients have very different lengths of time during which a screening test might be positive.
Thus, the term length time bias. What does this do to clinical trials of screening? >> Now, let's picture a study that looks at the apparent effect of screening. The people who are screened are much more likely to be like patient one, with slow-growing tumors and a substantial lifespan after the diagnosis by screening, because by nature the tumors are slow-growing. Patient two is very unlikely to be diagnosed by screening because there's only a short hiatus and a very aggressive tumor.
And that cohort appears to do terribly. Well, they do do terribly. Again, screening appears to be of benefit in terms of a long duration of life from screened diagnosis to death. But, in this case, it's because of the very much higher likelihood that a slow-growing tumor with a destiny to have a long duration from diagnosis to death appears in the screening cohort, and the aggressive tumors with a short duration from diagnosis to death are much less likely to appear in the screened cohort.
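Under stated assumptions -- a single screening visit at a uniformly random time over a 10-year horizon, and the detectable windows from Dr. Guyatt's hypothetical example -- a short simulation shows why screen-detected cancers end up dominated by slow-growing tumors:

```python
# Length time bias sketch: a tumor is screen-detected only if a screening
# visit falls inside its preclinical detectable window. Longer windows are
# far more likely to be "caught", so screen-detected cohorts are enriched
# with slow-growing tumors. Windows follow Dr. Guyatt's example above.

import random

random.seed(0)

# (start, end) in years from cancer onset: detectable by screening but
# not yet symptomatic.
slow_window = (2.0, 7.0)     # slow-growing tumor: 5-year window
fast_window = (1.0, 1.25)    # aggressive tumor: 3-month window

def screen_detected(window, n_trials=100_000, horizon=10.0):
    """Fraction of patients whose single random screen lands in the window."""
    start, end = window
    hits = sum(start <= random.uniform(0, horizon) <= end
               for _ in range(n_trials))
    return hits / n_trials

p_slow = screen_detected(slow_window)
p_fast = screen_detected(fast_window)
print(f"Slow tumor caught by screening: {p_slow:.1%}")   # ~50%
print(f"Fast tumor caught by screening: {p_fast:.1%}")   # ~2.5%
```

With these windows the slow tumor is roughly twenty times more likely to be screen-detected, so comparing outcomes of screen-detected versus symptomatic cases compares different diseases, not the effect of screening.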
>> Length time bias is controlled for in screening trials by performing randomized clinical trials with long time horizons, so that you can capture patients with both aggressive and less aggressive disease, and by using an intent-to-screen analysis. Again, Gemma Jacklyn. >> Randomized controlled trials where you end up with two groups where we're going to have an equal distribution of this heterogeneity of cancer, so we're going to have an equal distribution of the slow and aggressive cancers between the groups.
And then we can avoid length time bias by using something called the analysis by intention to screen. So that is analyzing the groups according to how they were randomized, not whether or not they turned up to screening -- so not whether they received the intervention. We analyze them according to the group that they were randomized into. So we're ending up with a comparison that's equal in terms of the heterogeneity of cancer, whereas if we split them up according to the people who attended screening versus the ones that don't, we're mucking up that randomization process, so we don't have comparable groups where the heterogeneity of cancer is equal between the two groups.
We might have, for example, more aggressive cancers in the people who didn't turn up for screening. >> Diseases like cancer are frightening. When screening was first started for these sorts of diseases, physicians honestly believed that finding them early could result in treatments that would save lives. Intuitively, this makes a huge amount of sense. But as we've learned, there are many ways to be misled by screening tests. >> The first general principle of screening is that it's different from diagnosis. With diagnosis and treatment, a patient comes to us and says, "I have a problem.
Can you help me?" In the case of screening, it's different. We're saying to the patient, "We think you have a problem" -- that is, you're at risk of a particular bad thing like cancer happening to you -- "and we think we can help you." "Oh," the patient says, "you do?" All right. But we are coming to the patient and telling them they have a problem that they didn't know they had.
So if anything, the responsibility for being sure that we're doing people some good is even greater in the screening situation than it is in the diagnosis and treatment situation. >> Another way in which screening can be made to look better than it really is, is called healthy volunteer bias. This is a really important concept. For screening, it means that people who are healthy and tend to take good care of themselves tend to show up for screening. I point this out because this concept is important and receives very little attention in other kinds of health outcomes research.
There are many patients who have very little interest in taking care of themselves. They tend to be in the lower socioeconomic strata, and they create the appearance that disparities exist because of socioeconomic conditions. The same sort of phenomenon occurs in screening trials and was demonstrated in a well-known trial of breast cancer screening. >> The healthy volunteer effect. People who attend for screening tend to be healthier compared to people who don't attend for screening. The best example of this is actually in one of the randomized controlled trials that was the first one ever done, which was in New York, in America, called the HIP trial, the Health Insurance Plan trial.
And what they found is that in terms of all-cause mortality in the intervention group, if you look -- they split up the intervention group between people who were screened and people who refused screening. And they found that all-cause mortality was much higher in the people who refused screening in the intervention group compared to the people who turned up, and also, again, higher than in the control group. But then they did something even more interesting. You could think, okay, well, maybe screening is great and it's preventing lots of breast cancer.
Therefore, all-cause mortality is going down in the people who were screened. But they looked at something called cardiovascular disease which shouldn't be affected by breast cancer screening. And they could have looked at something like stroke. They could have looked at any other disease, but they specifically looked at cardiovascular disease and compared it in the intervention group between the women who were screened and the women who refused. And again, the women who refused screening had a higher rate of cardiovascular death compared to the women who were screened.
So it was about double. And there's no reason why screening should prevent cardiovascular disease when you're screening for breast cancer. >> Healthy volunteer bias should be carefully considered when looking at screening trials because there's a much greater likelihood that patients who are quite fastidious about their own health will aggressively pursue screening and treatments, much more so than people who are less concerned about their health. We have a responsibility to carefully consider all the ways that screening can be successful or fail.
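The logic of that comparison -- using a cause of death that screening cannot plausibly affect as a negative control -- can be sketched with hypothetical counts. The numbers below are illustrative only, chosen to mirror the roughly twofold gap described; they are not the HIP trial's actual data.

```python
# Healthy volunteer bias sketch: within the invited (intervention) arm,
# compare cardiovascular (CVD) death rates -- an outcome breast cancer
# screening cannot affect -- between women who attended screening and
# women who refused. All counts are hypothetical, for illustration only.

attended = {"n": 20_000, "cvd_deaths": 60}   # hypothetical counts
refused = {"n": 10_000, "cvd_deaths": 60}    # hypothetical counts

rate_attended = attended["cvd_deaths"] / attended["n"]
rate_refused = refused["cvd_deaths"] / refused["n"]
ratio = rate_refused / rate_attended

# A ratio well above 1 on an outcome screening can't influence signals that
# the groups differed in baseline health, not that screening worked.
print(f"CVD death rate, attended: {rate_attended:.4f}")
print(f"CVD death rate, refused:  {rate_refused:.4f}")
print(f"Rate ratio (refused / attended): {ratio:.1f}")
```

This is why self-selected comparisons of attenders versus refusers overstate screening's benefit, and why analysis must follow the randomized groups instead.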
When counseling a patient about screening, it's important to ensure they understand both the positives and the negatives of the screening itself. There's got to be a benefit. As was shown for breast cancer, there's a benefit, but it may be offset by many harms. If a screening test is positive, it may then lead to a recommendation for very aggressive interventions. Patients must be prepared to accept these interventions, and if they're not, screening should probably not be offered.
>> We should be confident that the people who are diagnosed are, in fact, going to be interested in the treatment. Because if they aren't, then we haven't done any good diagnosing them early. So for instance, let's take prostate cancer screening with PSA. It's controversial whether it does extend lifespan. But let's assume that there is a small gain in lifespan in terms of prostate cancer mortality.
Well, a patient who is diagnosed with prostate cancer, say, on the basis of PSA, is now told, okay, if you undergo this surgery, there is a small benefit in terms of lifespan that you are going to get. But they are also told, as they would be, that this surgery has a substantial chance of making them impotent and also a substantial chance of making them incontinent of urine.
Patient may say, boy, for a small gain in lifespan, I'm not ready to undergo these adverse consequences. And if they make that decision, you did them no favor by screening in the first place. >> With cancer screening, there tends to be an overemphasis on cancer deaths. Cancer is a very scary disease and it seems obvious to our patients that if you find a cancer early in its course and you take it out, you'll save their life.
However, as physicians we know that this is not the case for many cancers. What physicians tend to consider less often are the limitations of the screening trials that we've reviewed here and the harms that can occur from screening. Issues such as overdiagnosis, false positives, and lead time and length time bias seriously limit the screening literature and the benefits that can be derived from screening. As we showed for breast cancer, for 10,000 women in their 40s, three will be saved from cancer death by mammography, but along the way, about 100 will be overdiagnosed and get treatment they didn't need and all the complications that go along with those treatments.
Another 6,100 will have false positive diagnoses and undergo invasive procedures to prove that they don't have breast cancer. In a sense, by emphasizing the potential benefits of screening for disease and placing little emphasis on the harms, we've done a disservice to our patients. Gemma Jacklyn explains. >> It's really important that they know about the harms as well as the benefits. And the reason why I emphasize harms there is because in the past a lot of screening programs or a lot of the promotion around screening has focused on the benefit.
People know that it can reduce your risk of dying from a specific disease. And that's fantastic and that's a big benefit. But there are also big harms that haven't been as widely acknowledged and discussed. And women, in order to make an informed choice, need to be aware of the harms as well as the benefits so that they can make a decision based on their own preferences and values. And that's really important. >> When thinking about disease screening, remember that if you find what you're looking for, you're giving someone a diagnosis of a disease that they didn't know they had.
Even though there's the potential for saving that person's life by treating the disease early, for many patients, the lesions that are found will have never caused a problem. Yet patients will get very major treatments with great potential for complications that they never needed because they were overdiagnosed. Large numbers of patients will have false positive tests, causing them to undergo procedures to confirm diagnoses, and many of these tests involve invasive biopsies that the patients didn't need. The literature can be misleading about all this because of issues like lead time and length time bias.
When assessing the literature for screening, it's important that studies are designed to minimize the risk of bias. There are four major characteristics of studies to look for in order to have confidence in the results. The first is that the trial should be randomized with good balance between the two groups. The second is that survival time should not be used as an outcome and the endpoint should be expressed as some proportion of the outcome such as overall mortality. Third, the threshold for a screening test that's considered as a positive test should be the same at all treatment sites.
For example, if a PSA test is considered positive at a threshold of three at one site, it should be considered positive at three at all the sites. This was a problem that occurred in the European prostate cancer screening trial where many of the centers had their own thresholds for what they considered a positive test. Last, the outcome assessor must be blinded to whether or not the patient was screened. Aside from ensuring that the literature is unbiased, clinicians have an obligation to consider the full spectrum of benefits and harms of any screening procedure.
>> I think it's really important that clinicians go there realizing that their own preferences and values can bias how they interact with that patient, and that it's important that they remember that screening is a personal decision. It's a personal choice. One woman might be willing to accept the harm, knowing that there's more risk of overdiagnosis and overtreatment compared to the benefit of having a breast cancer death prevented. And she might be willing to take that risk, and that's fine for her.
But there might be another woman whose preference is not to undergo unnecessary investigations and treatment. And she might be happy to kind of let sleeping dogs lie -- happy to be well and wait to see if clinical symptoms ever arise. And that's -- it's a personal choice based on their preferences and values. And it helps if they're provided with balanced information that helps them decide -- if the doctor can provide them with good quality, evidence-based, balanced information to help them, you know, increase their knowledge and make an informed decision.
Clinicians should go there with an open mind and know that a woman's choice is her own and not ours to make as clinicians, and I think that's really important. It's hard sometimes -- all clinicians go in with their own biases and their own ideas of what the benefit and harm is. And sometimes it's just easy to push someone towards a decision for an intervention when you know that the benefit far outweighs the harm. But with screening, it's a really close call. And so it's important that we don't push someone towards a certain decision, but we can help them make an informed decision.
So shared decision making is really important when it comes to screening because it is a close call. For example, with breast screening, it's not clear that the benefit outweighs the harm, and in fact, we think the harm outweighs the benefit. But whether a woman wants to win big -- in terms of preventing dying from breast cancer -- and accept the harms and still get screened, or whether she would rather just live her life kind of in ignorance and bliss, it's her choice, her choice whether to be screened or not.
>> We have a responsibility to fully inform our patients about screening's benefits and harms. >> It's a package, and you're setting yourself up by having that one test -- people just don't think about it a lot of the time because we're being taught not to. There's been a lot of persuasive messaging: you know, go and get it, it will save your life. And people don't really think about it. And then they don't realize that they get onto this whole cascade of further investigation, further treatment.
And you do read a lot of personal stories where women, once they've been diagnosed -- so they've gone through that cascade of investigation -- once they've been diagnosed, they start reading up on breast cancer screening and breast cancer and realize all this uncertainty that they were never told about. And they just feel duped. And they feel, you know, really betrayed by the health system and by medicine because they feel like they might have been diagnosed with something which would never have gone on to cause them harm. And they just don't know at an individual level.
And that's -- gosh, I find it traumatic just thinking about that for an individual woman having to go through it. [ Music ] >> I hope you now understand the nuances of screening and how screening can be both good and bad for patients. More information about this topic is available in the Users' Guides to the Medical Literature textbook and on our website, JAMAevidence.com, where you can listen to our entire roster of podcasts. [ Music ] I'm Ed Livingston, and I'll be back with you soon for another edition of JAMAevidence.
[ Music ]