Name:
Statistics for Orthopaedic Postgraduate Exams
Description:
Statistics for Orthopaedic Postgraduate Exams
Thumbnail URL:
https://cadmoremediastorage.blob.core.windows.net/d3122f9a-faa5-462e-a3de-4ab90ae97407/videoscrubberimages/Scrubber_1.jpg
Duration:
T00H33M37S
Embed URL:
https://stream.cadmore.media/player/d3122f9a-faa5-462e-a3de-4ab90ae97407
Content URL:
https://cadmoreoriginalmedia.blob.core.windows.net/d3122f9a-faa5-462e-a3de-4ab90ae97407/Statistics for Orthopaedic Postgraduate Exams.mp4?sv=2019-02-02&sr=c&sig=b2TfHFRwACsk3JtqURAfNwTLRkJfPafFP2jsQY%2FMN94%3D&st=2024-12-08T17%3A40%3A53Z&se=2024-12-08T19%3A45%3A53Z&sp=r
Upload Date:
2024-05-31T00:00:00.0000000
Transcript:
Language: EN.
Segment:0 .
Is this on? What is this teaching? So, today's talk is about statistics, and we've already run a couple of hot seat sessions. Obviously statistics is a massive topic, you could spend hours and days lecturing on it, and David has kindly taken on the task of trying to summarize some of it in a short presentation.
He obviously will not be able to cover everything, but we will try to cover more topics in the hot seat sessions. We are also blessed with having Shaun with us again, along with other mentors. Ramesh and another mentor sent their apologies; they were both called away. Thank you.
So go ahead, please, David. OK, so thank you, everyone. I'm going to talk about statistics. I think I got the short straw, but no, I volunteered to do this. Sorry, excuse me. Can you hear me properly, I hope? Yes, we can hear you.
OK, so as has been said, statistics will come up. It is quite a big topic, and you'll see quite a lot of people getting it wrong, but it is actually quite easy to get right. There are lots of definitions that you need to know, and I can't stress them hard enough: things like sensitivity and specificity. We'll try to cover those.
What I'm going to do is go through this the way I learnt it, through questions. One of the best questions I found, a question that came up in the exam, was from the Oxford basic science slide presentations. I'm going to start off with: how do you go about setting up a clinical trial? This can be asked in many different ways, but I think this is a key question.
So one of the key questions that Shaun asked was: tell me about the last research project you were involved in. This is almost a prompt to talk about how you set up a clinical trial, and this little diagram shows quite clearly the things they want you to talk about. In principle, there are four key things, PICO: patient or population, intervention, comparison and outcomes, preferably clinical outcomes in our situation.
So we need to identify a problem or topic of interest to be studied. We do a bit of literature research first to work out what the gold standard is to compare against. We also want to ask the question, what are we trying to prove, and ultimately set the null hypothesis. Then we design our study: we define the population that we want to study and we work out the methodology.
We may also employ a statistician, because we are not statisticians, to do a power analysis to work out the number of people that we need, to make sure the result can be applied to the population as a whole. We also want to define the outcome: what counts as positive and what counts as negative. And we need to look at getting ethics approval, which most trials need.
If you've been involved in setting up a trial, this is what you will have gone through. We also now talk about registering a trial, much like we have to register an audit at work. Then we conduct the trial, recruiting our patients and collecting and analyzing our data. Then you display your results, whether as a poster presentation or a paper in a journal, if you're lucky enough.
And hopefully this is something that will create change in clinical outcomes. So then they might ask: what is a hypothesis, seeing as you mentioned it? What they want to know is that it is the statement of the assumption you are making, that whatever you're testing is going to happen, whether that turns out to be true or not.
Ultimately, the null hypothesis is the primary assumption that any difference we see in our trial has occurred purely by chance. That is what they want you to say; that is the key definition in that situation. Generally, we perform a study to disprove and reject the null hypothesis: we want to show that there is a statistical difference, which is why we're doing the intervention.
So hip replacements work, for example. Another question they might ask, which we discussed earlier, is: talk about data, central tendency and dispersion. So the mean is the average of the data set, the median is the middle value, and the mode is the most frequently occurring value of the data set. In a normal distribution,
they are all equal. I don't know if you can see the line there, but this is a Gaussian curve, the bell shape, and that's what they're talking about in terms of normal distribution, where mean, median and mode are all equal. Then they might ask: what is the variability of the data set? If all the values are the same, the dispersion should be zero, and there are various ways of looking at dispersion in statistics.
We talk about quartiles, standard deviation and variance. For a set of data, the range is the difference between the lowest and highest values, and again that's something they may ask. These are all questions they'll try to build in; it's the basic statistics they want you to know. And they might press you a bit if you're doing well and ask when you use a parametric test
and when and why you use a non-parametric test. Parametric tests are statistical tests we use on data that is Gaussian, so normally distributed, and they are more powerful than non-parametric tests, which should be used on non-parametric data, where the data may be skewed positively or negatively. The previous diagram I showed you was a nice, lovely bell-shaped curve.
You might have skewed data that leans more towards the right or more towards the left, and that would be your non-parametric data. For Gaussian-distributed data, think of something like height or weight: it should follow a pattern across a population, so you would expect most people to be around the average weight,
say 70 kilos for a male, maybe 65 for a female. As for the consequence of using the wrong test: it depends on the sample size. If the sample size is large, you can maybe get away with it. However, if the sample size is small, the results, particularly the p value, are more likely to be inaccurate.
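The point above, that skew pulls the mean away from the median and pushes you towards a non-parametric test, can be sketched numerically. This is an illustrative sketch only: the weight values are invented, and the helper implements the standard adjusted Fisher-Pearson sample skewness formula.

```python
# Illustrative only: invented weight data and a hand-rolled skewness check.
import statistics

def sample_skewness(xs):
    """Adjusted Fisher-Pearson sample skewness: about 0 for symmetric data,
    positive when the tail is to the right, negative when it is to the left."""
    n = len(xs)
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return (n / ((n - 1) * (n - 2))) * sum(((x - m) / s) ** 3 for x in xs)

symmetric = [66, 68, 70, 72, 74]   # weights (kg), roughly Gaussian
skewed = [60, 61, 62, 63, 95]      # one heavy outlier drags the tail right

# In symmetric data the mean and median coincide; skew pulls them apart.
print(statistics.mean(symmetric), statistics.median(symmetric))
print(statistics.mean(skewed), statistics.median(skewed))
print(sample_skewness(symmetric))  # near 0: a parametric test is reasonable
print(sample_skewness(skewed))     # clearly positive: prefer non-parametric
```

With real data you would normally pair this with a formal normality test and a look at the histogram, but the mean-versus-median gap is a quick first check.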
OK, I think this is also a very common question; we touched on it in the hot seat questions. It's about meta-analyses and systematic reviews in terms of levels of evidence. This is what they want you to talk about when discussing these: think of the pyramid in terms of the levels. The meta-analysis is the gold standard: it's a mathematical and statistical analysis of the combined results of two or more studies, typically randomized controlled studies, where the studies addressed the same hypothesis in the same way.
So it has to be a marrying up of different studies that are very similar. That way you get a large population and hopefully a better ability to show that what happened in those studies will hold in the normal population. Then you have slightly lower levels of evidence. You've got systematic reviews, the critically appraised ones, which are an overview of primary studies.
They don't necessarily have to be randomized controlled trials; they can be cohort studies or observational studies, but they sit at the same level of evidence as the studies they review. So a systematic review of randomized controlled trials is at the level of randomized controlled trials, and a systematic review of cohort studies is at the level of cohort studies. Something just to remember.
And these are sort of the key things again, with expert opinion down at the bottom. They might ask: why is expert opinion important? We have to accept that there are some very senior, experienced surgeons out there whose views carry weight, and we do value their opinion because they've seen trends come and go over their careers.
Another popular question is: what is bias? And no, it's not that the referee was a Liverpool fan last night. Bias is a flaw in a study: partiality that introduces error into any part of it, from selection through to publication. So selection bias is something we will come across:
a non-random selection of patients from the population. Patients will sometimes select themselves, because they are the ones who come forward for a hip replacement; we don't go around telling people they need one. Then we've got experimental bias, for example errors in classification, which can be reduced by randomization.
Are we recording our data correctly? Then observational bias. Think of someone completing a hip or knee score: the form could be filled in by the patient, the physio or the doctor. And we also have to remember that people have two knees and two hips, so adjacent pathology can skew the score.
So although we're talking about the knee they had done, and they may be happy with it, it may be the other knee they haven't had done that's causing all the pain and keeping them up at night. Publication bias is only publishing positive findings, not negative ones. That is something we are all guilty of. We don't want to say that all these hundred operations we did went badly wrong;
we're not going to tell everyone about it. We tend to talk about the 100 operations that all went right. So how do we reduce bias? That's again a follow-up question. Randomization, and masking or blinding. We talk about double blinding, and even triple blinding. In double blinding, the patients don't know which treatment they're getting
and we don't know what we're giving; in triple blinding, the people analyzing the data also don't know which group they're looking at. We can also use age- or sex-matched groups to reduce confounding variables. But sometimes we can't fully eliminate bias, and we have to accept that. Now, this is something we haven't touched on yet, but it does pop up as a question:
name some outcome scores used in orthopaedics. We all know about the Oxford scores, the Oxford Knee, Hip and Shoulder scores; they are very, very popular. I would recommend having one at the back of your mind and knowing how to go through it. There are 12 questions, each scored from 0 to 4 in terms of how you're feeling, and the top score of 48 means someone has a normal knee or normal hip. It's very much a functional score, and it's reliable and validated.
That's a key thing: if you're going to talk about it, you want to mention that. It's also very good as a research tool, and we use it as a way of documenting change. In practice, patients tend to be given an Oxford knee score before their operation, six weeks after, six months after and then a year after, so you can look at the progression in terms of patient satisfaction.
Again, validity is important: it is the extent to which an experimental value represents the true value. Reliability means the result can be reproduced from patient to patient and surgeon to surgeon. I got this next one in my exam. Does anyone know what it is?
Would anyone like to talk about it, or shall I carry on? The reason I ask is that it's quite a dry subject. I don't mind talking about it, but you might be getting bored by now, and it's important you get a chance to talk about these things out loud, not just to yourselves but to others as well. So, know your box plot; it's something that comes up quite commonly.
Very popular. It's a convenient way of graphically depicting groups of numerical data through their five-number summaries: the lowest observation, the lower quartile, the median, the upper quartile and the largest observation. The box plot can be drawn horizontally, as in this case, or vertically. The box contains the middle 50% of the data.
The upper edge of the box represents the 75th percentile, so the top 25% of the data lie above it, and the lower edge represents the 25th percentile, with 25% of the data below it. The line in the middle is typically the median value, and it's not necessarily in the middle of the box if the data are skewed, as here. Remember, the median is the middle-most value of the range, not the mode, which is the most commonly occurring value, or the mean, which is the average.
The points outside the box represent the outliers. Again, they may show this to you and say: what's this? Describe it. You need to be able to talk about this in the exam. Hopefully you can do it in under a minute, and then they can move on to other things. OK, right.
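The five-number summary behind the box plot can be computed directly. A minimal sketch in Python, with invented data, using the common 1.5 times IQR rule for flagging outliers (the exact outlier rule varies between textbooks and packages):

```python
# Illustrative data; the outlier fence shown is the common 1.5 * IQR rule.
import statistics

data = [3, 5, 7, 8, 9, 11, 13, 14, 25]

q1, median, q3 = statistics.quantiles(data, n=4, method="inclusive")
five_number = (min(data), q1, median, q3, max(data))
iqr = q3 - q1  # the box spans this middle 50% of the data

# Points beyond 1.5 * IQR from the box edges are drawn as outliers.
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print(five_number, outliers)
```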
Another thing they might ask about (these are pictures from the Oxford slideshow, I haven't got rid of them) is: what is a power calculation? Again, we're not experts; this is something we talk to our statisticians about. It's a way to calculate how many subjects are needed in a study so that statistically valid
conclusions can be drawn and extrapolated to the rest of the population. That's the answer they want you to give. They might ask how you do it and what assumptions go into it. I never really understood it, I have to be honest.
Yeah, if I could say something: I don't think many people understand it, but I don't think you need to understand it for the exam. You could just say, I will ask a statistician to do the power analysis for me. Trust me. Exactly, yes. Someone tried to explain it to me once.
The key thing is that you need a power analysis at the beginning of any clinical trial; that's what they want you to appreciate. It tells you how big your study has to be to allow you to extrapolate your results to the rest of the population. I think that's the level we need to know it at.
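For anyone curious, the textbook normal-approximation formula for comparing two means can be sketched in a few lines. The effect size and standard deviation below are invented for illustration, and a real trial would still leave this to the statistician:

```python
# A sketch of the standard two-sample formula:
#   n per group ~ 2 * (z_alpha + z_beta)^2 * (sigma / delta)^2
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect a mean difference `delta`
    when the outcome has standard deviation `sigma`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# e.g. detecting a 5-point difference in an outcome score with SD 10:
print(n_per_group(delta=5, sigma=10))
```

Note how quickly the required number grows as the difference you want to detect shrinks: halving delta quadruples the sample size.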
But if they ask you how you calculate it, it probably means you're doing well. So, as I say, if you have an idea, keep it up your sleeve as your little ace card, not something to produce straight off. A very popular topic as well is screening programs. How do we go about setting up a screening program? We all know the World Health Organization criteria, I hope.
Well, I can barely remember them. The key things are: a disease that is important; a treatment that is available; a latent stage where the disease can be picked up and treated easily; a natural history that is understood; an agreed policy on who to treat; and a test that is available to everyone, acceptable to the patient, sensitive and specific, and cost-effective.
Those are the key nine points they want you to remember. It's a bit of a mouthful, but if you can reel them off quickly, you've got at least a 6 in that situation. The idea, of course, is to catch the disease as early as possible, so that any intervention doesn't cause too many problems and the patient can have a normal life afterwards. And the test itself must not cause more harm than the disease itself.
The next thing they might ask is to define sensitivity, specificity and accuracy, because those become very important when we're looking at tests. So sensitivity: does anyone want to say what that is? David wants to know if you have a simple, straightforward exam definition of sensitivity and specificity prepared.
OK, I would define sensitivity as the ability of the test to pick up all those with the disease, and specificity as the ability of the test to exclude those without the disease. Yes, and to be a little more precise: sensitivity is the ability of the test to exclude false negatives, and specificity is the ability of the test to exclude false positives.
So it's not very snappy, but you've got the gist of it. That's great. The next thing is they want you to draw the table, a 2 by 2 table. I would ask you to do it now, but that would be mean in this situation, and my computer doesn't let me.
So this is what they want you to produce. You have the truth along the top, has the disease and does not have the disease, and the test result down one side, positive and negative. Has the disease and tests positive: true positives. Does not have the disease but tests positive: false positives. Then underneath, in the negative row: false negatives and true negatives. So sensitivity, as we said, is true positives over true positives plus false negatives,
the ability of the test to exclude false negatives: it picks up everyone who actually has the disease. Specificity is true negatives over true negatives plus false positives, the ability of the test to exclude false positives. To be honest, I used to get positive predictive value and negative predictive value the wrong way around myself.
The key thing is that positive predictive value is true positives over true positives plus false positives, and negative predictive value is the same with the negatives: true negatives over true negatives plus false negatives. This is something they would like you to produce in the exam. If you practice drawing it, it's very quick; you can do it in 30 seconds and talk about it as you draw.
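The whole table reduces to a few ratios, which can be checked with hypothetical counts (the numbers below are made up for illustration):

```python
# Hypothetical 2x2 table counts for a screening test on 1,000 patients.
tp, fp, fn, tn = 90, 30, 10, 870   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)   # picks up those WITH the disease
specificity = tn / (tn + fp)   # excludes those WITHOUT the disease
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(sensitivity, specificity, ppv, npv, accuracy)
```

One point worth holding onto: PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which is why the same test can perform differently in a screening population and in clinic.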
You may sound mad if you're doing it in the library, but with any diagram it's good to talk as you draw, because you don't want to waste time. The next one is errors in a study. Again, that's something they might ask about; they may say, talk about the research project you did,
what type of errors did you encounter? They'll have different ways of asking the question, but believe me, they want you to talk about errors. So you've got the type 1 error, otherwise known as an alpha error or false positive: thinking there's a difference when there isn't a true difference, and incorrectly rejecting the null hypothesis.
This is usually guarded against by lowering the p value threshold. Then the type 2 error, or beta error, the false negative: thinking there wasn't a difference when actually there was a true difference, and incorrectly accepting the null hypothesis. This is usually caused by having a small study; with only ten people you cannot really extrapolate
the difference, whereas if you have 1,000 people you're more likely to have stronger power and hopefully overcome this. Then the type 3 error, something we rarely see, is correctly rejecting the null hypothesis but incorrectly attributing the cause. It can sometimes happen, and you often see it in the Daily Mail: one week the study says
HRT gives you heart attacks, and the next week, actually, no, it doesn't. So don't just go on the last paper; those are all things we have to be wary of. OK, and I think we talked about this earlier as well, talking about survivorship. It's quite a dry topic, but this is a Kaplan-Meier curve.
The solid part of the line is the survival curve for the sample subjects. Look at the axes: the x-axis is time, and the y-axis is the percentage of survivors with the implant. So say you put 100 hips in. Hopefully, at day one, none of the 100 hips needs revising. However, by year 20 in this situation only 60% are still surviving;
40% have needed revising. Not a good prosthesis in this situation. Then the dotted lines: typically these represent 95% confidence limits, so we can be 95% certain that the overall survival curve of the entire population lies within them. We will have some outliers as well.
The outcome of the intervention is plotted as we go along, so every time we get an event we lose some patients; it might be deaths or maybe revisions, and we have to include that when we're thinking about it. There are lots of textbooks that have very good ways of defining it.
So I would highly recommend just having a go at talking about it. Another question that does come up, again about studies: tell us about regression and correlation. These are mathematical techniques we use to work out how well one variable is associated with another, and hence how one changes with the other.
Linear regression and correlation are similar and easily confused, to be honest, and in some situations it makes sense to do both. Correlation describes the relationship, whether it is positive or negative, and the strength of that relationship between the variables. You may well be shown a scatter plot (I don't have one, apologies) with lots of points, and you will see that there is a correlation between the y and x axes.
You might have a cluster of points along a line, and there may be a correlation even without a perfect linear relationship. So correlation tells you whether there is a linear relationship between y and x and how strong it is,
and regression models how one variable changes with the other, giving you the line itself. I hope that comes across well. OK, those are the main questions. When I went through the different papers, these are the questions I think come up regularly, but there are still other things to think about. As I said, I can't recommend strongly enough knowing your definitions and saying them to yourself.
Practice them with a friend beforehand, because the exam is only a couple of weeks away. There are some important ones we've covered, sensitivity and specificity, and if you want you can also look at odds ratio, power, incidence and prevalence, and relative risk. Those could all come into it. The whole topic would take an hour or so, but we've done it in about 20 minutes.
We've got 8 minutes left, so I did have a hot seat question that we could do. Thank you, David, that's great. I think you've amazingly managed to squeeze a lot into less than half an hour. It's a dry topic and very challenging, and you did that very well.
Thank you, David. It really contained a lot more than I thought possible in the limited time, so that's very useful. So if you would like, we have a candidate, unless someone wants to add something. There were quite a lot of questions about the Kaplan-Meier, and a few other questions came up during the presentation.
So to reiterate the Kaplan-Meier: if you're looking at the data, for example you want to find out the survivorship of a hip replacement, you start off with X amount of patients, which is 100 percent, on the left side of the graph. So you always have 100% at the start. As you move through the years you get drops, and these drops are revisions of the prosthesis, OK?
They're not deaths, they're not other events; they're revisions of the prosthesis. Sometimes they put marks or dots in to signify other events, such as deaths or loss to follow-up, to show when a patient became lost to follow-up, and so on. You need to know what those dots mean, and usually there is an explanation at the bottom for them.
They're not always the same; it depends on what type of Kaplan-Meier you're looking at. So just to make sure you all understand: those drops are the endpoint being watched for. In the case of a prosthesis, it's revision. In cancer, for example, you could make it recurrence, and if a patient dies without recurrence, that counts as a successful cancer treatment.
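The stepping behaviour described above can be reproduced with a small hand-rolled estimator. This is a sketch only, with invented follow-up data: it handles one subject per time point and ignores tied event times, which real survival packages deal with properly.

```python
# Invented follow-up data: times in years, plus whether the hip was
# revised at that time (True) or the patient was censored (False),
# e.g. died unrevised or was lost to follow-up.

def kaplan_meier(times, revised):
    """Return (time, survival) points; the curve only steps down at revisions."""
    at_risk = len(times)
    survival = 1.0
    curve = [(0.0, 1.0)]               # everyone starts at 100%
    for t, event in sorted(zip(times, revised)):
        if event:                      # a revision: the curve drops
            survival *= (at_risk - 1) / at_risk
            curve.append((t, survival))
        at_risk -= 1                   # censored patients just leave the risk set
    return curve

times = [2.0, 3.5, 4.0, 6.0, 8.0]
revised = [True, False, True, True, False]
print(kaplan_meier(times, revised))
```

Notice that the censored patient at 3.5 years does not cause a drop, but their departure makes each later revision cost the curve proportionally more.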
Does that make sense, guys? I hope it does. Yep. It's a difficult topic, and you went through a lot; the way you got those terms across was very good, David, and Shaun explained and clarified the points clearly.
There's time for one more question, which seems quite relevant, although I'd be surprised if you get asked it in the exam. Just for the sake of explanation: the standard deviation describes how the patients are spread around the mean (or the median, in a standard curve). In a normal distribution, roughly 68% of the patients fall within one standard deviation of the mean,
and roughly 95% within two standard deviations. Variance is a measure of how different the patients are from the mean. So, for example, if you've got a broad variance, you've got a very broad graph, with the patients spread widely across the base; if you've got a narrow graph, the patients are all very close together. It tells you how distributed those patients are in relation to the mean.
And finally, the standard error of the mean is a measure of how accurately your study reflects the population; it's a reflection of where your study sample stands relative to the normal population. A very simple example: say you want to find out whether height is related to cancer. You decide to examine X amount of male patients, and you measure their height and the incidence of cancer in that group of patients.
A good way to check whether the group of patients you have taken as your study group reflects the population is to measure how far off your group's mean is from the population mean. So you've got a big population, all males, and you've got their mean height. You then take a selection of those males for a study to see what incidence of cancer occurs at certain heights.
You measure the heights and you've got a mean height for your group of patients; you also have the mean height for the general population. Comparing the two gives you a measure of how reflective your study is of the true population. Thank you, Shaun. I think now we're running out of time.
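The distinction above, between spread (standard deviation, variance) and the precision of the sample mean (standard error), can be sketched with an invented sample of heights:

```python
# Invented sample of heights (cm) for illustration.
import math
import statistics

heights = [168, 172, 175, 178, 180, 182, 185]

variance = statistics.variance(heights)   # spread of patients about the mean
sd = statistics.stdev(heights)            # square root of the variance
sem = sd / math.sqrt(len(heights))        # how precise the sample MEAN is

# A rough 95% confidence interval for the population mean height:
mean = statistics.mean(heights)
ci = (mean - 1.96 * sem, mean + 1.96 * sem)
print(round(sd, 2), round(sem, 2), ci)
```

The key contrast: the standard deviation stays roughly the same however many patients you recruit, while the standard error shrinks with the square root of the sample size, so bigger studies pin down the mean more tightly.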
So I'd like to thank David again, and Shaun, for all the time they took in preparation, and David for the nice presentation. Thanks to everyone who attended; we had 39 participants at this presentation. We ran a short session earlier, and we are going to run another one. I know organizing these takes a lot of effort on your part.
It's a lot of work, so thank you. Fine, thank you, everyone; we'll end the meeting now. Good luck, everyone. I'll keep you posted, and we will try to improve the technical side of the meeting next time. And please do fill in the feedback, because that feedback is quite important for us.
OK guys, see you.