Statistics and Methodology in Orthopaedics, with special guests Sameer Parpia and Sheila Sprague

OrthoJOE

Sep 10, 2025 | 00:18:01

Hosted By

Mohit Bhandari, MD, and Marc Swiontkowski, MD

Show Notes

In this episode, Mo and Marc are joined by special guests Drs. Sheila Ann Sprague and Sameer Parpia (Associate Professors at McMaster University and Senior Editors for Statistics and Methodology at JBJS) in a discussion on the importance of methodological rigor when designing clinical trials. 


Subspecialties:
 

  • Orthopaedic Essentials 

Chapters

  • (00:00:03) - OrthoJOE Podcast
  • (00:01:00) - Senior Editors Introduction
  • (00:02:18) - The Orthopaedic Trauma Trials
  • (00:04:12) - How to do a randomized trial in orthopedics?
  • (00:06:45) - The Leap of Proposals
  • (00:09:04) - Should Bayesian Analysis be Advanced in Orthopedic Research?
  • (00:12:03) - Sameer and Sheila on Submitting a Paper
  • (00:14:10) - Biostatistics and Methodology
  • (00:15:34) - The Science Editor's farewell

Episode Transcript

[00:00:03] Speaker A: Welcome to the OrthoJOE Podcast, a joint production of the Journal of Bone and Joint Surgery and OrthoEvidence. Join hosts Mohit Bhandari and Marc Swiontkowski as they discuss current topics and publications in the world of orthopedics and beyond. [00:00:19] Speaker B: Well, good morning, Marc. How are you this fine morning? It's a rainy morning in southern Ontario, so I'm hoping you're getting some sun where you are. [00:00:27] Speaker A: I'm just checking it out. No rain for the first time in many, many days. It's almost like Noah should be showing up around here lately. But yeah, it's good. Have you been to Tim Hortons yet, or is that after this? [00:00:40] Speaker B: I have, I have a little bit, a little bit. Just a little bit in my cup, but I'll do my Tim Hortons run after. [00:00:47] Speaker A: Yeah, yeah. So part of my delay in getting on, I did struggle with the Zoom link, but the espresso machine at home was also acting up a little bit. Anyway, it came through in the end. [00:00:59] Speaker B: Lovely. Well, you know, we've been doing a series of team introductions, and as we continue to globalize the Journal of Bone and Joint Surgery, we continue to engage a host of international experts. I thought it'd be appropriate for us to talk about something that I know you and I think very, very strongly of. And I know during your tenure as the journal editor, the importance of statistics and methodology was paramount, just as the importance of high-quality peer review has been paramount in the journal's history. And I thought it would be appropriate to mention and discuss and share two individuals I know you have a relationship with as well, historically as well as in the research: Drs. Sheila Sprague and Sameer Parpia, both of whom come to us with many, many decades of combined experience and knowledge in biostatistics and, I would say, health research methodology. So welcome to you, Sameer, and welcome to you, Sheila, for joining us. [00:02:03] Speaker C: Thank you. [00:02:03] Speaker B: And thank you again, also, if I can just jump in and say, for taking on, I think, very, very important roles as senior editors at the journal in statistics and methodology. So welcome to you both. [00:02:16] Speaker D: Thank you. Pleased to be here. [00:02:18] Speaker B: Well, let me start off, Marc, if I could. Sheila, you have, I mean, a pretty busy life. I know that a little bit firsthand from all the work that you and I did, and Marc, I think, has been involved in some of those large trials. You have a history of doing very, very large trials. You're currently an Associate Professor at McMaster University and run a relatively large clinical trials program that's brought in probably $50 to $60 million over the last several decades and randomized probably 50,000 patients worldwide, and that may be conservative. But with all the things going on, why, and why now, for you? [00:02:56] Speaker C: Yeah, I think it's a really great opportunity for me to be able to contribute to the literature through all the expertise that I've gained over the years, through the opportunity to conduct some of the largest trials in our field in orthopedic trauma. So the trials do allow for a good knowledge of the RCT: the design, the conduct, the implementation.
But along with that comes a lot of planning studies, including scoping reviews, systematic reviews, meta-analyses, and surveys, and then many of the other studies and publications that come along with it, such as a lot of patient engagement work and sharing patient stories. There are also a lot of secondary analyses looking at important clinical questions and framing the next questions for clinical trial design. So I feel like over the years I've been involved in virtually every type of clinical study that has been done, so I feel like I have a strong methodological understanding of pretty much all designs, and I can contribute to the literature by carefully reviewing and critiquing and ensuring that the publications meet the top methodological standards for the journal. [00:04:11] Speaker B: That's great. [00:04:12] Speaker A: Can I jump in here, Mo? Nice to see you, Sheila. [00:04:15] Speaker C: Nice to see you too, Marc. [00:04:17] Speaker A: Yeah. And I think all of us in the orthopedic research community are struggling a bit with these days of uncertainty and edicts and changes that are really causing us great difficulty to do our work. But I don't wish to talk about that, because there's really not much we can do about it at this time. But I do have a question that I know you have come across many, many times with your experience doing randomized trials: a clinician who says, I want to study condition X or injury X, and how do I know if it's possible to do a randomized trial in this condition? So just some very basic things for our younger research community in orthopedics. [00:05:03] Speaker C: Yeah, I think one of the biggest things is to go to the literature first and see if it is a question that needs to be addressed and that there is clinical uncertainty within your research question. Then we also tend to do a survey to see if there is clinical equipoise for the question being asked and if there's interest within the community to do the trial. And a lot of it is talking to colleagues, talking to experts, writing your one-pager. And I always have everybody start with the PICO question. That's kind of a good way to see if it is a clinical trial question that can be answered. Then, once you have your PICO, which is population, intervention, comparison, outcome, and time frame, you can frame that into a clinical question and see if it is indeed answerable by an RCT. And that kind of gives you the starting point to start talking to colleagues, clinical trialists, statisticians, and methodologists to build your idea from there.
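To make the PICO(T) framing concrete, here is a minimal sketch in Python of how a question might be structured before taking it to colleagues. The clinical scenario and every field value below are hypothetical illustrations, not examples drawn from the episode.

```python
# A hypothetical sketch of structuring a PICO(T) question before a trial.
# The scenario and all field values are invented for illustration.
from dataclasses import dataclass

@dataclass
class PICOT:
    population: str
    intervention: str
    comparison: str
    outcome: str
    time_frame: str

    def as_question(self) -> str:
        # Render the five elements as a single answerable clinical question.
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparison} improve {self.outcome} "
                f"at {self.time_frame}?")

question = PICOT(
    population="adults with displaced distal radius fractures",
    intervention="volar plate fixation",
    comparison="closed reduction and casting",
    outcome="patient-reported wrist function",
    time_frame="12 months",
)
print(question.as_question())
```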
[00:06:08] Speaker A: Can you come up with a ballpark estimate of how common a condition or an injury needs to be before it becomes possible, with all your years of experience? [00:06:19] Speaker C: Yeah, I think that's a pretty tricky question. I'm not an expert in rare disease trials, but I do think it is important that rare injuries do receive some methodological consideration as well. [00:06:40] Speaker B: The one thing I would jump in with, and maybe I'll pass that same question on to you, Sameer. So Sameer, again, Associate Professor at McMaster University, leading many, many large trial groups. Often you are the individual really thinking through the statistical analysis plan and helping, you know, investigators and teams ultimately be successful in what they do. Is there a point at which a trial is just not feasible? And how do you decide that early enough? Sheila's talked about making sure you've got a really thoughtful review, and you've probably done some pilot work. Anything else you would use as a decision to say, okay, this is just not going to work, that the trial may not be feasible? [00:07:18] Speaker A: Right. [00:07:19] Speaker D: It's a good point. Just building on Sheila, we do look at feasibility of recruiting patients, and that's the main thing. If it's a very rare disease, maybe a traditional randomized trial is not feasible, but there are alternative single-arm trials that could be done. The other thing that we look at is the event rate. So we may get the population, but the event rate is really low for us to detect the difference that we're looking to detect. And in that case also there are alternative designs that we may consider, such as single-arm trials or maybe some Bayesian adaptive methods. Ultimately, if we cannot recruit the patients to answer the question properly, then we deem the trial not to be feasible. And this is done by, you know, building networks and surveys and so forth to see if we can recruit the number of patients. And that's how we usually determine whether something is feasible. [00:08:13] Speaker C: I'd just add to that, we often do a pilot study of 40, 50, up to 100 patients to both test the protocol and see how quickly you enroll, because I often think people do tend to overestimate how many patients are seen and how many will agree to be in a clinical trial. The pilot phase or the vanguard phase is incredibly important to do. And it's also very important for getting grant funding to ensure that your trial can be completed. [00:08:47] Speaker D: Great point.
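Sameer's point about low event rates can be made concrete with the standard normal-approximation sample-size calculation for comparing two independent proportions, sketched below in Python. The event rates, alpha, and power are hypothetical choices for illustration only.

```python
# A minimal sketch of why a low event rate threatens feasibility: the
# standard normal-approximation sample size for comparing two independent
# proportions. All event rates and design targets below are hypothetical.
from math import ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    # Two-sided test, equal allocation between arms.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Both comparisons are 30% relative reductions, but the rarer event
# needs more than twice the enrollment.
print(n_per_arm(0.20, 0.14))  # about 612 patients per arm
print(n_per_arm(0.10, 0.07))  # about 1353 patients per arm
```

Halving the absolute event rates while keeping the same relative reduction roughly doubles the required enrollment, which is why a realistic event-rate estimate is part of any feasibility assessment.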
[00:08:48] Speaker B: Super helpful. Did you have something there, Marc? Did you want to say something? [00:08:51] Speaker A: I think we're kind of moving off that topic a little bit. Sample recruitment is extremely important, to see if the disease or the injury disappears, which it so commonly does, as we've all experienced. But Sameer, I think you're aware that in the orthopedic community there's, I would say, a rapidly increasing interest in Bayesian statistical approaches. And I just wonder if you could share with our audience: are there things we should be concerned about as we move into this type of analysis? [00:09:25] Speaker D: Sure, maybe I'll summarize. So Bayesian analysis looks at taking your prior knowledge, integrating it with the current observed data, and then giving a result combining those, what we call a posterior. Now, the biggest criticism of Bayesian methods is that your prior knowledge is something that we have to determine. It can be based on prior literature. It can be based on what individuals think. Now, you can determine that, and it can be skewed to what people believe and may not be the truth. And if you base it on poor data, then it can be biasing. So that is the biggest thing, because there's a lot of subjectivity in determining that prior. And so that would be one of the biggest Bayesian criticisms out there: how do you determine this prior knowledge, what is considered good data, what are people's beliefs? But people are working on this, to determine the best way to choose a prior. What I would say is, other than the prior, a Bayesian analysis doesn't preclude you from doing a good study. You still need enough patients, you still need good study conduct, you still need good measurement. All the standard things that you would do in a health research study are still required for Bayesian. Bayesian will not fix a flawed design or methodology. So, you know, if you see something Bayesian, I wouldn't say that it's better than anything frequentist. It still has to meet those high standards that we want for any health research. But what I would say is that they should pre-specify the prior and justify why they have picked that prior. And if the prior is very strong, you may want to include other priors for sensitivity analysis. So there's a neutral prior, for example, where it just assumes that we don't know what the treatment effect will be. Those are the things that you want to look out for: whether the prior was pre-specified. You can't look at data halfway through the trial and then update the prior based on what you see. You know, similar to what we do in traditional clinical trials, we don't want to look at data and then make decisions. And you want to still maintain very good, high study conduct, have adequate sample size, and minimize bias. So those things are standard whether you use frequentist or Bayesian, and they apply under both frameworks.
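To make the prior-plus-data idea concrete, here is a minimal sketch assuming a conjugate Beta-Binomial model; the event counts and priors are invented for illustration, and a real trial analysis would pre-specify all of this, as Sameer notes.

```python
# A minimal sketch of the prior + data -> posterior idea, assuming a
# conjugate Beta-Binomial model. Event counts and priors are invented
# for illustration; a real analysis would pre-specify all of this.

def posterior(prior_a: float, prior_b: float, events: int, n: int):
    # A Beta(a, b) prior on an event rate updated with binomial data
    # gives a Beta(a + events, b + non-events) posterior.
    return prior_a + events, prior_b + (n - events)

def beta_mean(a: float, b: float) -> float:
    return a / (a + b)

events, n = 12, 100  # hypothetical: 12 reoperations observed in 100 patients

# Neutral prior Beta(1, 1): assumes nothing about the event rate.
a1, b1 = posterior(1, 1, events, n)
# Informative prior Beta(5, 95): encodes a strong prior belief of about 5%.
a2, b2 = posterior(5, 95, events, n)

print(f"neutral prior     -> posterior mean {beta_mean(a1, b1):.3f}")  # 0.127
print(f"informative prior -> posterior mean {beta_mean(a2, b2):.3f}")  # 0.085
```

Comparing the neutral and informative posteriors is exactly the kind of sensitivity analysis described above: a strong prior pulls the estimate toward the prior belief, which is why reviewers look for pre-specified, justified priors.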
[00:11:48] Speaker A: That was extremely helpful, very clear. And I think our listeners can understand why we have people like yourself and Sheila on the review panels trying to decide what's best to publish in the journal. So thank you for that. [00:12:03] Speaker B: And maybe I'll just ask more of a broad question, something definitely for our viewers and our listeners who are thinking about submitting a paper, and we hope they are, to where we are right now at the journal. What advice would you give them? Maybe I'll start with you, Sameer, first, around the biostatistics of just moving forward and thinking about the statistical analysis component. And similarly, Sheila, when someone's at the point of submitting, what kind of things should they be thinking about regarding the methods and how they're going to present that? [00:12:32] Speaker D: I mean, this could be simplistic, but the first thing I look for is clarity of the research question, clarity of the statistical analysis plan and the design. You know, it's very challenging if these things are not clear and then we're trying to interpret what's happened. So that is the first thing that I look for: clarity in the research question, the design, and the statistical analysis plan. The second thing I would say is that the statistical analysis plan should match the design and the research question; they should correspond. And then for me, the last thing is that there should be objective interpretation of the results in the discussion. We're trying to avoid spin, and the discussion should relate the findings to what has already been published. [00:13:15] Speaker B: Anything to add to that, Sheila? [00:13:17] Speaker C: Yeah, I also look for very clear and concise writing, ensuring that the instructions to authors were followed, and also making use of many of the tools and checklists that have been developed by experts and methodologists over the years, including the CONSORT statement for randomized trials, which is commonly the big one. A lot of these are available on the journal's website. So taking the time to look at the journal's website, follow the instructions for authors, and make use of the tools will make your methods section a lot clearer and easier for your peer reviewers to evaluate and follow. Also proofread your paper and look for typos, formatting issues, and things like that, because that can be distracting for reviewers. So have a very polished and professional piece before you submit. [00:14:08] Speaker B: And maybe I'll just finish off here, and Marc, I'll give you the last word after I finish with this one. I think biostatistics and methodology, while they're often grouped together, and they should be, are actually quite separate in the way we might approach a manuscript, for example. And maybe you can help me make sure I'm getting this right. I think I have it right, but I'll get your insight on this. A paper that comes in with methodological flaws, Sheila, I do not think we can fix in revision, for the most part. If it's a true core methodology issue, either we accept it with limitations or we just basically say that it doesn't meet the criteria. Would that be reasonably fair to say? Can you make a study that's been conducted with poor methodology better somehow in a review process? [00:14:55] Speaker C: No, but I think you can be very transparent. Like if you didn't blind, or if randomization wasn't concealed, for example, if you stated it, at least it lets the reader know what the limitations are. [00:15:08] Speaker B: Right. So transparency would be. [00:15:09] Speaker C: Transparency, I think, is very, very important. But you can't go backwards and conceal your randomization if you didn't conceal your randomization. [00:15:18] Speaker B: So I think a great rule of thumb is: it's okay if things didn't work out quite the way you thought in the methods, but make sure that in the discussion we're more than transparent about each of those potential methodological limitations. So that would be a good way to handle that. Sameer, is there ever a situation where, let's just say, an inappropriate or incomplete statistical analysis couldn't be redone? In other words, to me, as long as the data is available, those analyses could be rerun. So a statistical analysis itself, if it's not done correctly, would never itself be a reason to reject a paper; rather, we would work with the authors and hopefully be able to get the appropriate analyses done. Would that be a fair statement? [00:16:02] Speaker D: I think that's completely right. So as long as you have the data, you can tweak the statistical analysis plan to match what's required in the study. But I think you're right that a methodological issue you can't go back and fix. So that's a bit more challenging. But we definitely can work with the authors to have a more appropriate statistical analysis plan and results shown in the paper. I think that's fair. [00:16:24] Speaker B: That's great. And I know both of you will be working closely together to help us think through how we work the flow of papers that come into the journal, to optimally allow the best information to get out and to help authors ultimately create a message that's strong and important. [00:16:40] Speaker A: Well, I think you gave me the last word, or last statement, Mo, so I'm happy to take it.
So I have been around the journal, on the board, since, I don't know, '92 or '93, something like that. And I was in the room when Jim Heckman, who was the editor-in-chief at the time, decided that we really needed to up the game of statistical analysis of manuscripts. And he selected Jeff Katz from the Brigham, who has been great in more than two and a half decades working with the journal, along with Elena Losina, Stephen Lyman from HSS, and Andrew Schoenfeld, also from Harvard. And I think our listeners got a great sense, with the addition of these two experts, of how important it is to get things right, because as we know, people actually treat patients based on what's published, and it needs to be right. So thank you for joining the team, Sameer and Sheila. Our readers are grateful, and even more importantly, our patients are grateful. So thanks very much. [00:17:45] Speaker D: Thank you both. Looking forward to working with you both. [00:17:51] Speaker B: Sam.
