Improving MRI for Localized Prostate Cancer Diagnosis: The Need for a Standardized and Quantitative MRI Approach - Tyler Seibert

May 3, 2023

Matt Cooperberg interviews Tyler Seibert, a radiation oncologist and assistant professor at UC San Diego. They focus on the use of MRI for localized prostate cancer, which has drastically changed the diagnostic pathway for prostate cancer over the last decade. However, variability in how the MRI is conducted is a problem: each imaging center sets up its own protocol, and this lack of global coordination is problematic. While PI-RADS guidelines are helpful, they are not perfect, and protocols are not uniform. The goal is a more quantitative MRI, where everyone has a standardized frame of reference. The path forward is to ensure a more coordinated effort globally, to implement quality assessment, and to develop a more quantitative MRI.

Biographies:

Tyler Seibert, MD, PhD, UC San Diego Health, San Diego, CA

Matthew R. Cooperberg, MD, MPH, Professor of Urology; Epidemiology & Biostatistics, Helen Diller Family Chair in Urology, University of California, San Francisco, San Francisco, CA


Read the Full Video Transcript

Matt Cooperberg: Hi, I'm Matt Cooperberg. It's a pleasure to welcome you to another installment of the UroToday Localized Prostate Cancer Center of Excellence interview series. It is a real pleasure to be joined today by Tyler Seibert, whose day job is as a radiation oncologist at UC San Diego, where he's assistant professor, but who has really been shaking things up in his other roles in radiology and bioengineering, in the fields of both imaging and polygenic risk assessment before and after prostate cancer diagnosis. Today we're really going to focus on the first of those, I don't know if we should say hobbies or career [inaudible], in terms of really trying to get us to a point of doing better with MRI both before and after diagnosis. So Tyler, welcome. Thanks for joining us.

Tyler Seibert: Thank you so much. Happy to be here.

Matt Cooperberg: So lots of ground we can cover here, but maybe we can start by you just giving us a sense of where you think the field is, kind of the progress we've made in recent months, and where the remaining work to be done is. Where are the remaining problems in terms of how we're actually using MRI today for localized prostate cancer?

Tyler Seibert: Yeah, great, sure. So I mean the background, I think a lot of the audience will already be familiar: things have changed tremendously in the last decade or so, really not even a decade. It has completely changed the diagnostic pathway for prostate cancer, where MRI is commonly done before biopsy, with many large trials demonstrating that benefit. The important thing to remember, though, is that all of those trials were carefully designed, and they were done in places with a lot of experience. There is some involvement of less experienced readers in some of them, but there's a huge body of literature showing that prostate MRI is not easy and that it requires expert radiologists who then get better over time. And that's one of the issues. I think that issue is well known, but there are other issues that are not well known.

For example, how good is the camera, so to speak? The MR scanner is not really a camera, but these are highly complex pieces of equipment that are managed locally by each imaging center. It's almost bizarrely variable, in my opinion. It doesn't come with a default "this is your prostate MRI protocol" setting. Each place sets up its own protocol, and this is not a coordinated effort globally, and it really should be.

Matt Cooperberg: Maybe we can start there actually, because this is something I only became familiar with a few years ago. I always kind of assumed you took the shrink wrap off your GE or your Siemens and you pushed prostate. Why is that not the case?

Tyler Seibert: I think it's historical. I mean, part of it is that each site seems to set it up to the liking of its local radiologists, but it is super variable. The decision making could be happening with the physicist, it could be happening with requests from the radiologist, and it can be done by the techs or radiographers, depending on which country you're in, who are running the machine. And literally the number of ways you could make a prostate MRI with your machine is essentially infinite. So I like to use the analogy of a DSLR camera, and I'm not even a photographer so I shouldn't use that analogy, but I know it has the F-stop and the focus, and that you can put different lenses on, and it's too complicated for me, so I know that it's too complicated for me.

An MR scanner is far beyond that level of complexity. And if you wanted everybody to take a picture of an orange on a table, and make sure that the picture of the orange was quantitative so that you would get the same reproducible value every time, you wouldn't let everybody just play around with the settings and the lighting and do it however they wanted. You would make sure everybody did the same procedure, and we're not there yet with MRI.

Matt Cooperberg: Are there standards, either within the PI-RADS specification or otherwise, that should be helping here, or are those not adequately codified yet?

Tyler Seibert: Yeah, I mean, I don't want to make it sound like this is complete chaos. There are guidelines. PI-RADS is tremendously helpful because it tells the radiologist how to report the T2-weighted component, the DWI component, and so on, and come to a score. So it's much, much better than everybody just eyeballing it and deciding whether they think there's cancer there. And the PI-RADS documentation does come with instructions, saying the parameters for your protocols should do X, Y, and Z. But it's inadequate for getting you to where you can be quantitative, and it's not perfect either, and the people who wrote the guidelines would say the same.

Matt Cooperberg: Do you think there are questions that our listeners should be asking of their radiology colleagues in terms of how the machines are protocoled? Is there a good way to do it and a bad way to do it? Or are there just six relatively decent ways to do it that will allow you to calculate a PI-RADS score?

Tyler Seibert: Yeah, I don't think we know. I mean, to get to a PI-RADS score, you can follow the PI-RADS document and you can get to a score. The question then is, does a PI-RADS three at your imaging center mean the same thing as a PI-RADS three at the next imaging center? And we unfortunately know that it doesn't, because that's been published. There was a wonderful paper, both of our institutions participated in it, and it's anonymized, so you don't know which center is which, but it shows the positive predictive value at each institution. And for PI-RADS three and above, the positive predictive value is something like 20% at some institutions and 70% at other institutions. Now, there's a lot that goes into that. It could be: who's your population? What's the pre-test probability? How do you make your decisions about who gets an MRI? Who are the radiologists, and how good is the biopsy?

Are you targeting the lesion or are you not? Are you using a fusion approach or a cognitive approach? And you know these things better than I do, but there are a lot of things that could be contributing, including which scanner you're using. All of that goes in. But at the end of the day, what you have is this really broad variation in positive predictive value. And even within institutions, they show the error bars, the kind of spread on those positive predictive values at each institution, and some of them are gigantic. So if you're a patient or a physician and you're trying to decide where do I send my patient, or where do I go for my imaging, probably wherever your insurance company tells you, but did I get a good MRI? There are ways of doing quality assessment, and they've been standardized in some way. I really admire the work that's been done on this, PI-QUAL, so that you can score how good was the diffusion on this particular patient, how good was the T2 on this patient, how good is the institution.

And that's a good step in the right direction, but it still doesn't get you to: am I getting the same answer? Would I get the same answer if I got my prostate MRI done at center A versus center B, and with radiologist C versus D? And you won't, we know that. But how reproducible those things are is really unknown. And that's why, when we talk about our work, what we are trying to do is really getting to where we have something that's more quantitative, where you get a number. And that doesn't mean that the interpretation will always be identical, but at least your frame of reference is the same. That's the goal.
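As a back-of-the-envelope illustration of one factor Dr. Seibert lists, here is a minimal sketch, with entirely hypothetical sensitivity, specificity, and prevalence numbers, of how referral patterns alone can swing the positive predictive value of a PI-RADS three-or-above call between centers:

```python
# Illustrative sketch (not from the interview): how pre-test probability
# alone can move the positive predictive value (PPV) of "PI-RADS >= 3"
# across centers, even if scanner and reader performance were identical.
# The sensitivity/specificity values below are hypothetical placeholders.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(significant cancer | positive MRI)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Same hypothetical test characteristics at both centers...
SENS, SPEC = 0.90, 0.60

# ...but different referral patterns, hence different pre-test probability.
for center, prevalence in [("Center A", 0.15), ("Center B", 0.55)]:
    print(f"{center}: prevalence {prevalence:.0%} -> PPV {ppv(SENS, SPEC, prevalence):.0%}")
```

With identical made-up test characteristics, the change in pre-test probability alone moves the PPV from roughly 28% to roughly 73%, a spread on the order of the institutional variation described above, before scanner or reader differences even enter.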

Matt Cooperberg: So let's get to that, because it does seem like there are two levels of issue here. There's how the machine is set up, and then there's the human interface to the machine. The study that I think you're referring to is the one Antonio Westphalen wrote, with these five major centers, but I'm sure you've also seen Geoff Sonn's paper from Stanford, a high-volume MR program. At that institution, same machines presumably, the positive predictive value ranged from 40% to 80% depending on which radiologist pulled the study off the queue. And that's coming from a pretty high-volume institution.

Tyler Seibert: So that should also address the urology aspect to some extent, because you'd think that it's random in terms of who's going to do the biopsy at that institution, and then also, to your great point about referral patterns, which patients are you sending to MRI. That's one of the things that could definitely impact positive predictive value, but you'd think that that would be pretty much random within a single institution also.

Matt Cooperberg: So tell us now, what is your view of the path forward to do better with what is probably a phenomenal technology at the magnet level? How do we get that to better clinical actionability?

Tyler Seibert: Okay, yeah. So there are a few steps, I think. One is just, first of all, getting everybody using the same protocol. Now, you can't actually make everybody use the exact same protocol, because the protocol is specific to the scanner, the version of software you're using, the magnet strength, all kinds of things. How powerful are the gradients that you're using? There are all kinds of things that go into this. So you would have to have variation across vendors and scanners and centers. But if you and I buy the exact same scanner model and have the exact same version of software, and you have a different protocol than I do, there should be a reason why your protocol is different than mine. There should be a logical reason that's been empirically tested. It shouldn't just be that we each eyeball it and do what we want, because how are we learning anything that way?
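To make the idea of a shared, auditable protocol concrete, here is a hypothetical sketch of what a machine-readable protocol specification and a site-to-site comparison might look like. The field names and values are illustrative inventions; they are not any vendor's actual interface, nor the PI-RADS minimum technical requirements:

```python
# Hypothetical sketch of a shared prostate-MRI protocol specification.
# Every field name and value here is illustrative only.

PROSTATE_DWI_PROTOCOL = {
    "sequence": "diffusion-weighted EPI",
    "field_strength_T": 3.0,
    "b_values_s_per_mm2": [0, 500, 1000, 2000],  # acquired b-values
    "tr_ms": 4500,
    "te_ms": 75,
    "in_plane_resolution_mm": 1.5,
    "slice_thickness_mm": 3.0,
}

def diff_protocols(a: dict, b: dict) -> dict:
    """Report parameters where two sites' protocols disagree, so any
    deviation has to be explicit and justified rather than invisible."""
    return {k: (a.get(k), b.get(k)) for k in a.keys() | b.keys()
            if a.get(k) != b.get(k)}
```

The point of the comparison function is the discipline described above: any parameter where two sites disagree becomes an explicit, testable deviation rather than a local preference nobody can learn from.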

So one thing is getting everybody on the same protocol, and then seeing, if you do that, how much variation do we see? So we are testing those hypotheses with some studies that are coming. If you really standardize the technology, then what variation is left? I think it'll still be substantial. Then the next thing is, how do you address that substantial variation? And I think for that it is taking, like you said, this tremendous amount of information that you've acquired through your scan, and instead of having it filtered through the eyeball test, you put it through a quantitative modeling procedure that gives people the same, like I said, the same frame of reference. I still do think that the radiologist's interpretation will matter. A good analogy in the current era is PSMA PET.

So you have PSMA PET and it gives you an SUV. It's not a perfect quantitative value, but it is something where we can at least start on the same page, and we can say, yeah, this was a 20 or this was a three, and you have some sense for what that might mean. It might matter, though, how it was acquired. It certainly matters which tracer you're using, what the pre-test probability was, and what the shape of the lesion was. And is this a patient with a Gleason six who has a spot in a bone that looks sclerotic and has a little bit of PSMA uptake? Well, why did you even order the PSMA PET in the first place on a Gleason six? But if you did, you shouldn't believe that that's cancer in that bone. Whereas if they have a Gleason nine and a PSA of 45, then it changes the equation. So I think that there still needs to be some sensible interpretation, but getting to a number moves things a whole lot more toward the potential for being standardized.
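For readers who want the arithmetic behind that shared starting point, here is a minimal sketch of the standard body-weight SUV calculation. The example numbers are invented to land on the "20" mentioned above; decay correction, uptake time, and tracer choice are deliberately left out:

```python
# Minimal sketch of the standard body-weight SUV calculation that makes a
# PET voxel value comparable across patients. Decay correction, tracer
# choice, and uptake time still matter and are not handled here.

def suv_bw(tissue_kbq_per_ml: float, injected_dose_mbq: float,
           body_weight_kg: float) -> float:
    """SUV = tissue concentration / (injected dose / body weight),
    assuming tissue density of ~1 g/mL so the result is unitless."""
    dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0      # kg -> g
    return tissue_kbq_per_ml / (dose_kbq / weight_g)

# e.g., 50 kBq/mL in the lesion, 200 MBq injected, 80 kg patient -> SUV 20
print(round(suv_bw(50.0, 200.0, 80.0), 1))
```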

Matt Cooperberg: So can we get to that number with the existing T2 and diffusion-weighted type protocols that everybody's doing now, but just put a more sophisticated filter on top of that? Or does it require new protocols? I know you've been involved with restriction spectrum imaging and some other modifications to the protocols that actually go into the machine.

Tyler Seibert: Yeah, I think it's a yes and yes. So you could do better just with the current imaging by using the data more sensibly. We've done some work on this and I think it's compelling. But then there's moving to, if you know what you're trying to do, having a model. So we use RSI; restriction spectrum imaging is the framework that we're using. There are other ones heading in the same direction, working on the same problem. There's VERDICT, for example, which our colleagues at UCL in London are using, and hybrid multidimensional MRI at the University of Chicago. So there are various models out there that you can use, and they're all thinking about similar concepts in terms of measuring quantitative diffusion, and potentially T2 aspects as well, like luminal water imaging.

So you have these different models, and you could apply them to different data sets. But if you knew this model works, that it reproduces the biology of what we're trying to find, then it would only make sense to acquire the data in the way that gives you the best chance of fitting the parameters of that model and getting the most accurate output. So to me, you can start with any data and do a better job than what we're doing now, but better still is if you've learned to acquire your data so that you get the answer that you're looking for, which is: what is the probability that this patient has a high-grade cancer? I mean, that's what we want to know.
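As a rough illustration of what "fitting the parameters of a model" means here, below is a generic two-compartment (bi-exponential) diffusion fit to multi-b-value DWI signal. To be clear, this is not the actual RSI or VERDICT implementation, and every number in it is a made-up placeholder; it only shows why the choice of acquired b-values determines how well the compartment parameters can be estimated:

```python
# Illustrative sketch only: fitting a generic two-compartment model
#   S(b) = S0 * (f * exp(-b * D_restricted) + (1 - f) * exp(-b * D_free))
# to multi-b-value diffusion signal. NOT the actual RSI (or VERDICT)
# implementation; all values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def biexp(b, s0, f, d_restricted, d_free):
    return s0 * (f * np.exp(-b * d_restricted) + (1 - f) * np.exp(-b * d_free))

# Hypothetical acquisition: the b-values you choose to acquire (s/mm^2)
# determine how well each compartment's parameters can be estimated.
b = np.array([0.0, 500.0, 1000.0, 2000.0, 3000.0])
signal = biexp(b, 1.0, 0.4, 3e-4, 2e-3)
signal = signal + np.random.default_rng(0).normal(0, 0.01, b.size)  # noise

params, _ = curve_fit(
    biexp, b, signal,
    p0=[1.0, 0.5, 5e-4, 1.5e-3],
    bounds=([0, 0, 1e-5, 1e-4], [2, 1, 1e-3, 5e-3]),
)
s0, f, d_r, d_f = params
print(f"restricted fraction f = {f:.2f}")  # the quantitative readout
```

The design choice this sketch gestures at: once you commit to a model, the acquisition itself (here, which b-values are sampled) can be chosen to make the fit well conditioned, rather than being set by local preference.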

Matt Cooperberg: Last [inaudible 00:13:59] you mentioned work in London, in the UK, a couple of times. So the UK paradigm for a few years now has been no MR lesion, no biopsy. They're very... I don't want to say aggressive... but they're very comfortable using MR as pretty much the sole means of following patients on active surveillance, of guiding focal therapy decisions, et cetera. I know Mark Emberton has been on record saying this should be a radiographic diagnosis, not a tissue diagnosis. And low-risk cancer is down to 7% in the UK, partly as a result of this. Are we there? In other words, with all these issues about variation in MR performance, and particularly going back to that paper of Antonio's, that 25 to 75% range, that's the best-case scenario. These are all high-volume academic centers.

Those are not the community site where they read a mammogram in the morning and an MR of the prostate an hour later. So how far are we from being able to implement a UK model here? And let's ignore the cost issue, here versus the UK versus everywhere else in the civilized world; let's forget about cost. How far are we from doing something like what the UK does, and how are they doing it in the UK? How are they dealing with standardization there?

Tyler Seibert: So the standardization is not clear to me. I mean, I think they still have a lot of variation in terms of how they're acquiring the images, and certainly in the machines that are available; there's a lot of variation there. There are a lot of differences from one center to the next. It's also, I think, different if you're in a place with more resources than less in the UK; that's my impression. It makes sense. London is different than rural Northern England. But they're doing well. Like you said, they're not diagnosing a lot of low-grade disease. They're reducing the number of biopsies. And it seems they can implement MRI at a lower per-person cost than we can, because of our complex payment model. That's another conversation for another day.

It's not perfect, but it does well. And with the long-term results of ProtecT out, it's hard to make an argument, I think, that you're missing a lot of really deadly disease. That's highly unlikely. At the same time, I always say I will get my PSA tested, and I'm okay with that because I'm not very afraid of the downstream consequences. I will be very reasonable. I will not be over-treated. I have a lot of knowledge to protect me from making really poor decisions. But I also would never get a biopsy without an MRI, because why would I want somebody to put needles in? And then, you're right that maybe you can miss something if you don't get a biopsy and your MRI is negative. On the other hand, if you have a lesion and it doesn't get a needle into it, then what was that procedure for?

And the worst is, and I see it a lot, patients will come to me with a low-grade cancer diagnosed in the community without an MRI, on systematic biopsy alone. And I'm looking at the PSA density and their overall trajectory, and I'm thinking, this just doesn't really add up. I had a patient who was Gleason six with a PSA of 50, and they sent him to me for long-term ADT and radiation. I'm thinking, for Gleason six alone, that sounds tough. It's a tough sell. You're going to tell this guy he's going to be on ADT for two years? But on the other hand, with a PSA of 50, do we really think that we found the disease? And of course we hadn't. So I sent him for an MRI, and you know, there's an anterior lesion. It's not surprising. This is a very standard story, but it happens still.

So how far away are we? Well, we need to get everybody to do MRI before biopsy; I think that's important. What we do with that information, the details, need to be worked out. It's not totally clear how we then implement it. How do we deal with the quality? Is there grade inflation that you have to... and what is a PI-RADS three? But part of the problem is just having the bandwidth in terms of radiologists who are good and who have the experience to do it, because it's not like they're inherently bad radiologists; they just don't focus on this thing. They focus on other things. So we need to train them. But if you have the demand, then you'll have the radiologists. If you start ordering MRIs for everybody with a prostate, you'd better have enough scanners and things to do it. That is a problem in the UK, because it becomes a competition for MRIs for other things, but I think that the expertise will get better.

But there's another aspect beyond even having the radiologist be good, one that you and I have talked about before, and that's making images that... you and I are not radiologists, but we work with imaging to plan our treatment, and we talk to our patients about it, and we'd like to show them the images. Having images that aren't so difficult that only radiologists can look at them would be a big step forward, so that urologists could feel comfortable that when they order it, they're not getting some kind of black box where only the radiologist's score comes back. All I get is a three, a PI-RADS three, because I can't look at it myself and know what I'm looking at, so that's all I get.

That's not as good as if they order a CT scan and look at the pelvis. We know how to look for lymph nodes, for example. So I can do a spot check on my own and kind of feel like I know what's going on. If we get people to where they look at an MRI and they know what to do with that information themselves to some extent, then it becomes, I think, more appealing for the ordering physician and for the patient.

Matt Cooperberg: That point about interpretability at the patient's bedside is a really important one. Actually, Mike Leapman and one of his trainees, I think, have a [inaudible 00:20:11] where they compared biomarkers, liquid markers, to MR in helping patients make biopsy decisions. And even in scenarios where the test characteristics, the negative predictive value, were on paper better for the marker than the MR, they did a couple of focus groups, and the patients really liked the picture. And I use it all the time, but as you say, it's a little hard to go over the diffusion-weighted pictures with the patients, where I say, "Now this is the spotted cow in the snowstorm, and you've got to trust me, it's a little darker here or brighter in this spot."

Tyler Seibert: Yeah, and that's where, I think, we've made a lot of progress with RSI. Now we've got these really nice overlays that look kind of like a PET scan overlaid on a CT, or a PET on an MRI, and so the lesion just pops out. It's very easy for the patient to recognize where the cancer is and to have a conversation about it. Pictures are powerful. I think it makes sense that patients make decisions based on biopsy. That's to be expected. Also, when it comes to active surveillance, if the MRI is showing no change, for better or worse, because we don't actually know how to best use MRI in active surveillance, but if you had MR that was working really well, then I think you could keep patients on active surveillance and have them feel comfortable. The PSA bounces around a little bit, and it makes people nervous.

But if you can say, "I did a scan and look, that tumor, there's where it is, it hasn't changed. It looks exactly the same," then you could convince people to not be treated, to not get repeated biopsies that might lead to an upgrade that is a little bit artificial. I think there's a lot of potential there, but it needs to be accurate too. Like you said, their confidence exceeded the empiric results compared to the other biomarkers. I think that's probably true too.

Matt Cooperberg: Any last thoughts? What's the next exciting study to watch out for coming up?

Tyler Seibert: What's exciting? I think the next exciting things are really doing careful validation for the location of the lesion. So we've done some nice work at the patient level. The common approach is to say you first identify a lesion and then apply some kind of quantitative marker to that lesion and decide whether it's real. But I disagree with that approach, because then you had to first find the lesion, which required an expert radiologist, and the quality of that expertise is going to determine whether you even found the right spot. So I'm not sure that that's the best approach, a priori, as a non-radiologist engineer. I think it makes more sense to say: I have an auto-segmentation of the prostate. I don't even need a human to do that. It's very easy.

We find the prostate, and you tell me: what's the worst quantitative value, what's the highest probability of cancer in the probability map, whatever that is? That is what tells you whether you need to biopsy the patient, at the patient level. Then the second question is, where do you put the needle? Do you still do it systematically only, or do you have a target? But more interesting to me even than that question, as a radiation oncologist, is: where do you direct your therapy? So the FLAME trial showed this great oncologic result, I'd say it's a home run in oncology. It was a randomized trial in the Netherlands. They did treatment of the whole gland versus treatment of the whole gland with a boost of radiation dose to the lesion on MRI. And that is done simultaneously, so it doesn't even extend the length of treatment.

The patient doesn't even know it's happening, really. I mean, they wouldn't feel any different. And what they found was increased disease-free survival, so fewer recurrences. And on patterns-of-failure analysis, they found that local failures were reduced by two-thirds, and distant metastasis, or regional plus distant, because that's what they had, a composite endpoint, metastatic disease, whether nodal, regional, or distant, was reduced by almost half, with no increase in toxicity. So that's the key. I mean, this wasn't overall survival, but you have a clinically meaningful benefit with no increase in toxicity. The catch is that that trial used three centers, with a radiologist who was an expert in prostate cancer who sat next to the radiation oncologist and found the lesion with them. And we don't even really know how they did it. So they have this beautiful trial, but to reproduce those results for our own patients, radiation oncologists practically need to know how to find the tumor.

And we have a really nice study where we gave radiation oncologists images and told them where the tumor was, and some did okay and lots didn't. 93% of the 44 participants completely missed at least one of the cases. Out of an average of 18 tries to find 18 different tumors, the median number completely missed, meaning 0% overlap with the expert-defined tumor, was three. That's not very reassuring at all. And you're not doing much benefit if you're hitting the wrong spot. But if you give them images where you can see things, the RSI images, suddenly their performance dramatically improved. So that's the thing I'm working on, and the next step is to... and I'm glad that you're involved in this, I'm excited this is a UCSF/UC San Diego collaboration... checking to see how well our quantitative probability map correlates with the actual whole-mount histopathology from prostatectomy specimens. That's our next big thing.
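To make the two quantities in this discussion concrete, the patient-level readout (the highest cancer probability inside an auto-segmented prostate) and the overlap score behind phrases like "0% overlap", here is a hypothetical sketch using dummy arrays in place of real co-registered images. The 0.5 biopsy threshold and all array shapes are invented for illustration:

```python
# Hypothetical sketch of the patient-level readout described above, plus a
# Dice overlap of the kind used to compare a drawn lesion contour against
# an expert-defined tumor (0.0 = complete miss). Dummy data throughout.
import numpy as np

def patient_level_score(prob_map: np.ndarray, prostate_mask: np.ndarray) -> float:
    """Highest cancer probability anywhere inside the prostate."""
    return float(prob_map[prostate_mask > 0].max())

def dice(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Dice overlap between two binary masks; 0.0 means a complete miss."""
    a, b = contour_a > 0, contour_b > 0
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

# Dummy volumes standing in for real co-registered images.
rng = np.random.default_rng(0)
prob_map = rng.random((16, 64, 64))          # voxelwise P(high-grade cancer)
prostate = np.zeros_like(prob_map)
prostate[4:12, 16:48, 16:48] = 1             # auto-segmented gland

if patient_level_score(prob_map, prostate) > 0.5:  # threshold is illustrative
    print("flag for biopsy")

# A "drawn" contour vs an "expert" contour, offset to show partial overlap.
drawn = np.zeros_like(prob_map)
expert = np.zeros_like(prob_map)
drawn[6:10, 20:30, 20:30] = 1
expert[6:10, 24:34, 24:34] = 1
print(f"Dice overlap: {dice(drawn, expert):.2f}")
```

Note the order of operations in the sketch: no human lesion call is needed to produce the patient-level number; the contour comparison only enters when validating where a reader or a model says the tumor is.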

Matt Cooperberg: Because of course we deal with the exact same issue when it comes to focal therapy, which is being used more and more. And it is exactly the same issue: does the urologist know where the tumor is at the time of the focal treatment?

Tyler Seibert: And how do you document that? Was the spot hit, right? I saw a patient this week who has a recurrence after focal therapy. But we do this all just kind of anecdotally; it's easy to nitpick about the recurrences that come to you in clinic. But what we really need is: why can I not see where the thing was in the first place? How exactly was it treated? What was the dosimetry, so to speak, of that thing? And can we systematically and empirically figure out if there was a reason for this failure, and whether it could have been predicted in the first place? Was it a training issue? Was it just bad biology, whatever? So that we can avoid these kinds of things.

Matt Cooperberg: Truth and data. Thank you so much for your time. It's been great.

Tyler Seibert: Thank you. Sounds great.

Matt Cooperberg: Good luck. Bye-bye.