Episode Transcript
[00:00:03] Speaker A: Welcome to the Orthojoe Podcast, a joint production of the Journal of Bone and Joint Surgery and Ortho Evidence.
Join hosts Mohit Bhandari and Mark Swankowski as they discuss current topics and publications in the world of orthopedics and beyond.
[00:00:18] Speaker B: Well, good morning, Mo.
[00:00:20] Speaker C: Morning.
[00:00:23] Speaker B: It's a real early morning where I am. I'm all the way on the west coast, so it's still dark out here in Seattle and the coffee shops haven't opened. But I'm sure you're well prepared, as always.
[00:00:39] Speaker C: I always have something with me. Yes, this is a warm cup of coffee. It's actually Tim Hortons coffee, but I've, you know, put it into something to class it up just a little bit.
[00:00:48] Speaker B: Yeah, that's great.
I hope to get into something like that very shortly when they open. But, you know, it's fall and there's lots of sports activity going on.
A dramatic end to the Ryder Cup in New York, with notoriously bad fan behavior, which is unfortunate for the US. But the Europeans won; they were dramatically ahead, and yet it still came down to the last few matches, so that was good. And then, you know, I've got several football teams that I follow, and my USC Trojans lost their first game and the Vikings lost again.
So there's a little bit of a downer in terms of the football part of my life. But there's always orthopedics and academics. Yes. And science can always uplift you, and all these things really change your attitude at 6 o'clock in the morning.
And this is an episode where one of us picks an article that we've published recently and decided to highlight. And I have my choice, which is a manuscript about patient-reported outcome measures, which have been a big part of my academic career, having developed and validated several PROM instruments. And this is a very interesting article that comes out of Beijing, and it's kind of a new concept in this area: the smallest worthwhile effect as a promising alternative to the MCID in estimating PROMs for adult idiopathic scoliosis. And this is jargon that may be over the heads of several members of our listening audience.
So this smallest worthwhile effect enables patients to evaluate the expected value of a treatment by weighing its benefits, risks, and costs. And it is proposed as an alternative to the MCID, the minimal clinically important difference, which is a PROM measure.
And their study was attempting to determine the SWE estimates and MCID thresholds in patients undergoing surgery for adult idiopathic scoliosis, which are huge surgeries and involve a fair amount of risk, both in postoperative complications and risks of poor outcomes, et cetera. And so they took a cohort of patients and examined them at two years postoperatively. They had 119 participants, with a mean age of 26 plus or minus 7, and they measured the SWE50 estimates and then compared them to the MCID thresholds for an instrument that was developed through the Scoliosis Research Society, called the SRS-22 questionnaire.
And they did the statistical analysis, which I'm going to ask you to comment on, and they concluded that this SWE can serve as an effective alternative to the MCID for interpreting PROMs at a minimum of two years.
And maybe I'll start with, can you clarify what the MCID is and how it's calculated?
[00:04:19] Speaker C: Yeah, and I might even take it back a bit if I could, Mark. You know, I think I met you, or I'd heard of you, at the time of your development of the SMFA and the MFA, the Musculoskeletal Function Assessment tool. And this was when Gordon Guyatt, back in the early 1990s, was really promoting evidence-based medicine at McMaster. In fact, he coined that term, you recall, in 1989, 1990. So back then it was always, you know, clinically important benefit, which, if you think about it from the point of view of physicians and surgeons, would have been: okay, I believe I understand the pathophysiology, therefore I believe, as the surgeon treating you, this is the important measure. Ten years later, Gordon wrote a paper, and I remember very specifically he had said, we've got to move away from clinically important outcomes to patient-important outcomes. Which gets back to this whole idea of the MCID and everything else that follows. It was a paper in something called the ACP Journal Club; the American College of Physicians had a journal club, and they published it 20 years ago or so.
And the bottom line was the argument that we have to really, ultimately, look at what is important to our patients and ultimately let them make the decisions. And so they came up with this idea of the minimal important difference, the minimal clinically important difference, and a host of permutations thereof. But the principle was: when you have a continuous score system, a score as a number rather than a yes-or-no outcome like infection.
It's a scale.
How do you know if 1 point versus 10 points versus 20 points means something important? So they developed these statistical techniques, and you can imagine, right, one way was what Gordon now calls an anchor. You compare it to something. So in the simplest case, you would, over a period of time, give people a scale and they'd get a score, 20 points, 30 points, 40 points, whatever it is, as they were improving.
But then they would also be asked independently at that time, you know, how do you feel? Do you feel better? Do you feel not so much better? And so they were trying to find an anchor, something that was patient driven, to say: the minute the patient says they feel better, what was the actual difference that the scale was picking up when they suddenly noticed something was better or worse? So it was this anchor-based method that became the methodology and sort of the gold standard, which is: you've got to find something you can anchor against. In this paper, that's exactly what they did. In fact, I think they use the term: we used an anchor.
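A minimal sketch of the anchor-based calculation described here, in Python. The data, the anchor categories, and the function name are all hypothetical, purely to illustrate the idea; the paper's actual anchor question and analysis may well differ:

```python
import numpy as np

def anchor_based_mcid(score_changes, anchor_responses,
                      minimal_improvement_label="slightly better"):
    """Estimate an anchor-based MCID as the mean PROM score change among
    patients who independently rated themselves as just barely improved."""
    changes = np.asarray(score_changes, dtype=float)
    anchors = np.asarray(anchor_responses)
    # The anchor is the external, patient-driven judgment; the MCID is the
    # score change that corresponds to the smallest noticeable improvement.
    just_improved = changes[anchors == minimal_improvement_label]
    return float(just_improved.mean())

# Hypothetical data: follow-up-minus-baseline score changes paired with
# each patient's independent global rating of change.
changes = [0.1, 0.4, 0.5, 0.9, 1.1, 0.3, 0.6, 1.4]
ratings = ["unchanged", "slightly better", "slightly better", "much better",
           "much better", "unchanged", "slightly better", "much better"]
print(anchor_based_mcid(changes, ratings))  # 0.5: mean change in the "slightly better" group
```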
But as you can imagine, as science evolves, there are other methods where they said, well, let's say we don't have an anchor. We're just going to use the data we have on this tool, on this scale, with whatever distribution it follows: the histogram, the normal curve, whatever that curve is.
We're going to use something called a distribution-based or statistical method. We're going to come up with standard errors and try to figure out what we think based on this.
Geoff Norman and a bunch of other scientists at McMaster came up with a whole bunch of theories. But one of them was, you know, that a minimal important difference can be roughly half of the standard deviation of the actual scale you're using. It was a rough estimate, and they seemed to think it was pretty robust. So now there's been this discussion around, well, which is better? I will tell you that at least the thinking at the university that I'm at, and among some of the individuals who are really prominent in this field, is that anchor-based methods remain the standard.
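And a one-line version of the distribution-based rule of thumb just mentioned (half the standard deviation), again as an illustrative sketch with made-up data rather than anything from the paper:

```python
import numpy as np

def distribution_based_mid(baseline_scores, fraction=0.5):
    """Distribution-based estimate of a minimal important difference:
    roughly half the sample standard deviation of the scale."""
    return fraction * float(np.std(baseline_scores, ddof=1))

# Hypothetical baseline scores on an SRS-22-style 1-to-5 scale
rng = np.random.default_rng(0)
baseline = rng.normal(loc=3.0, scale=0.7, size=119)
print(distribution_based_mid(baseline))  # about 0.35 for this simulated sample
```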
And as I understand it, this idea of the smallest worthwhile difference, the smallest worthwhile effect, really is an anchor-based tool that looks at patient-important anchors. But what it does is add one other element, which is the costs and the benefit. I'm sorry, the benefits and the risks. So it's actually doing the modeling. The MCID says: here's a scale, here's the minimal important difference we think would be clinically important, let's say, or patient important.
I believe what we're talking about in this paper is that they've actually modeled out the smallest worthwhile difference, or smallest worthwhile effect, in consideration of both the benefits and the harms.
You can see how that would actually play out. Now, you can get into the statistics of people arguing about whether it was done accurately, whether there was enough sample size. But in principle the idea is a good one. And I think in principle that's what we do in real life, right? You weigh out the benefits and the harms. You don't just say, okay, well, here's all the good that can happen; oh, by the way, you can also have a horrible event, and the patients have to weigh both of those out. So I think that is sort of an appraisal of how I see all of this fitting together.
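To make the distinction concrete, here is a hedged sketch of how an SWE50 estimate could be computed once each patient has stated, after weighing benefits against risks and costs, the smallest improvement that would make surgery worthwhile. The elicitation format, values, and function name are hypothetical; the paper's exact benefit-harm trade-off procedure may differ:

```python
import numpy as np

def swe50(smallest_worthwhile_improvements):
    """SWE50: the median of the smallest score improvements patients say
    would make the treatment worth its risks and costs."""
    return float(np.median(smallest_worthwhile_improvements))

# Hypothetical elicited values on an SRS-22-style scale: each number is one
# patient's answer to "how much improvement would make this surgery worth it?"
elicited = [0.3, 0.5, 0.4, 0.8, 0.6, 0.5, 0.7, 0.4, 0.5, 0.6]
print(swe50(elicited))  # 0.5: half of patients would require at least this much
```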
[00:09:00] Speaker B: Yeah, you make a very important point about the sample size in this particular investigation. And so often in the journal we publish manuscripts that are on the early part of the curve in introducing concepts, manuscripts that maybe aren't perfectly done, in this case because of sample size and a single diagnosis, et cetera. But the point is to put it out to the orthopedic academic community as a new concept for them to consider for further investigation and use of this tool.
I have had many times when manuscripts are presented or published on the MCID, and oftentimes the reader or the listener will conclude, well, it didn't meet the MCID.
You've outlined how it's calculated and it's always important to understand that the MCID is different for different patients.
And what we're talking about in the academic or scholarly environment is groups of patients in whom the MCID is established. But if you took the MCID for a professional athlete with Achilles tendinopathy, that would be very different from the MCID for a 60-year-old sedentary individual. And it's important for clinicians to understand that it's a calculation for groups, not for individual patients. So that's something I always try to point out when people are concluding that an intervention is not worthwhile because it didn't meet the overall MCID for that tool.
[00:10:46] Speaker C: Yeah, and that's a great point too, Mark, because what you're saying is that on average, looking at all different folks with all different experiences, it doesn't seem to have an effect, if in fact it didn't. Right? However, exactly to your point, you're interpreting an average. That's always the problem when you're trying to make these, you know, judgments. And what they've been doing now, and Jason Busse, who we've had on a few times, has been doing some of these analyses, is breaking the MCID into a proportion. So what they'll do is they'll say, okay, there's a study that has an average effect that was below what would be considered the average minimal clinically important difference threshold. But then they'll look at every person, and they'll say, well, actually, as it turns out, 30% of the patients in this study actually exceeded the MCID. And that's exactly what you're getting at. It may turn out that these 30% were an important subgroup of individuals, and you don't want to just throw the treatment out and say it doesn't have an effect; it has a varying effect, and the average effect may be misleading sometimes. And so I think breaking it down into the proportion of individuals, so what they'll do is take patients 1 to 119, let's say, as in this study, and ask: did this patient exceed it or not, yes or no? And then they can look at a proportion there as well and create a confidence interval around that, which I think is another way to look at information and data. And if you see differences, then you can start exploring why.
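A small sketch of the responder-style analysis described here: count the proportion of patients whose change exceeds the MCID and put a confidence interval around it. The Wilson score interval and the numbers below are assumptions for illustration, not the method or data of the analyses being discussed:

```python
import math

def responder_proportion_ci(changes, mcid, z=1.96):
    """Proportion of patients whose improvement meets or exceeds the MCID,
    with a Wilson score confidence interval (one common choice)."""
    n = len(changes)
    p = sum(1 for c in changes if c >= mcid) / n
    # Wilson interval: better behaved than the normal approximation
    # for proportions near 0 or 1 and for modest sample sizes.
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, (center - half, center + half)

# Hypothetical: 119 patients, 36 of whom exceeded an MCID of 0.5 points
changes = [0.7] * 36 + [0.2] * 83
print(responder_proportion_ci(changes, mcid=0.5))
# about 0.30 with a CI near (0.23, 0.39): the "30% exceeded it" idea above
```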
[00:12:12] Speaker B: Yeah. Well, let me conclude by pointing out to the audience that I've passed the hot seat on to you in terms of the person who signs off on what we actually publish in the journal. And these decisions are difficult because, you know, the vast majority of readers of the journal don't really get a whole lot of joy out of these statistical, analytic papers. But they have an important role, which is why we publish them. And it's a very difficult thing to decide what percentage of what we're going to publish should be in this really high scientific, statistical realm.
And can you just, I mean, you've been at this for over a year now, and you've sensed the struggles of how to balance things out. Where are you with your thinking on this?
[00:13:16] Speaker C: Yeah, well, there are two things that are truisms, right? If they accept my paper: boy, oh boy, they got it right. They got it right. Of course they did. If they reject my paper: man, they got it wrong. They got it so wrong. And I do appreciate that, you know, I write and I continue to write, and I have the exact same experiences that everyone does. So, you know, we're all biased, and we all think our work is worthy of praise and certainly of getting disseminated. And our job is to do everything we possibly can to take the work that is worthy and disseminate it as widely as we can. But there are always tough decisions made.
And I think, learning a lot from you too, Mark: you had a very strong team of people around you who were, you know, collaborating and helping balance all those considerations. The statistical side is really important, and so is having a strong appreciation of methods. I think the overall quality of methods understanding in our field, you know, go back to 1990 versus now, right, it's just completely different.
Just so much more knowledge. Right. And now what we're getting into is, in many cases, really complex modeling and modeling issues. And that is a different level up, and there's a different level of expertise required, which again speaks to how far we've advanced. So the more scrutiny we're putting on the analytics is because the analytics are certainly getting much more complex, and their results potentially have a huge, important impact. Right. So it's really about building a team and continuing to scratch your head when you think there are important issues that you want to make sure you get out, but you also want to get them out in a way that allows the Journal to communicate its message, you know, locally and abroad. And that's what we're trying to do. And it's certainly been a huge, huge learning curve for me on a personal note. But I know you're there, so, you know, you're just a call away.
[00:15:20] Speaker B: Yeah. Well, I'd also point out that you brought additional expertise in methodology and statistics into your leadership group and expanded that group, which really builds on the outstanding work of Jeff Katz and Elena Losina, et cetera.
Andrew Schoenfeld, who has really done great things, Stephen Lyman, et cetera. And it's really, really important that we have that kind of expertise figuring into the decisions made at the Journal. And my end comment is just to remind the audience that we practice double-blind review.
And so we do everything we can to try to remove bias from the initial review process before it moves to the next level. And finally, I'll point out that my last three personal submissions to the Journal were rejected. But I'm not angry about it. But you'll be hearing from me if the fourth one gets rejected.
[00:16:22] Speaker C: You have my email.
Everyone has my email.
They can find me. And trust me, I will tell you, Mark, I understand, you told me this, that many people do find you. I get found a lot. So I'm happy to have those discussions, obviously.
[00:16:37] Speaker B: Yeah. Well, that's great. And you're off to a great start, and congratulations. And I know the Journal is going upward and onward and into new horizons. And having just experienced the annual editorial meeting, we're all very excited about where we're headed as an organization. So congratulations to you, and enjoy the hot seat.
[00:17:00] Speaker C: It's a hot seat, but, you know, maybe the last cliché here is we're standing on the shoulders of giants. So there you go.
[00:17:06] Speaker B: Okay, well, cheers. Have a great day, Mo.
[00:17:08] Speaker C: You, too. Have a great day. Bye.