I work in radiology with MRI as a tech. We use AI slightly differently to the examples here, but it’s changing a lot of what we do. It’s more about enhancing images than directly about diagnosing.
The image is denoised ‘intelligently’ in k-space, and then the resolution is doubled via another AI process in the image domain (or quadrupled, depending on how you measure it: the pixel count doubles in both the x and y dimensions).
These are two distinct processes which we can turn on or off, and each has some parameters with which we can alter the process.
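For anyone outside MRI, here's a toy sketch of how those two stages hang together; `denoise_model` and `upscale_model` are placeholders of mine for the vendor's proprietary networks, not anything real:

```python
import numpy as np

def reconstruct(kspace, denoise_model, upscale_model,
                denoise_on=True, upscale_on=True):
    # Stage 1 (optional): 'intelligent' denoising applied to raw k-space.
    if denoise_on:
        kspace = denoise_model(kspace)

    # Conventional reconstruction: inverse 2D FFT from k-space to image.
    image = np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

    # Stage 2 (optional): image-domain super-resolution,
    # 2x in x and y, i.e. 4x the pixel count.
    if upscale_on:
        image = upscale_model(image)
    return image
```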
The result is amazing and image quality has gone up a lot.
We haven’t got a full grasp of it yet and have a few theories. The vendors are also still getting to grips with it.
We think the training data set turns out to have some weird influences on the required acquisition parameters. For example, parallel imaging factor 4 works well, while 3 and 2 work less well, which is not intuitive. More acceleration being better for image quality is not how MRI used to work (except in a few edge cases).
Bandwidth, averages, square pixels, turbo factor and an appropriate TE matter a bit more than they did pre-AI.
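To make the acceleration trade-off concrete, here's the usual back-of-the-envelope scan time calculation for a 2D turbo spin echo (my simplification; it ignores parallel imaging reference lines, partial Fourier and the like):

```python
def tse_scan_time(tr_s, phase_steps, averages=1, turbo_factor=1, pi_factor=1):
    # Each TR acquires turbo_factor echoes, and parallel imaging
    # skips a fraction (pi_factor - 1)/pi_factor of the phase steps.
    shots = phase_steps / (turbo_factor * pi_factor)
    return tr_s * shots * averages

# e.g. TR 4 s, 320 phase-encode steps, turbo factor 16:
print(tse_scan_time(4.0, 320, turbo_factor=16))               # 80 s, unaccelerated
print(tse_scan_time(4.0, 320, turbo_factor=16, pi_factor=4))  # 20 s at factor 4
```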
Images are now acquired faster and look better, and sequence selection can be better tailored to the patient as we are under less time pressure.
I’d put our images up against almost anything I’ve seen before as examples of good work. We are seeing anatomy and pathology that we didn’t previously appreciate. Sceptics ask if the things we see are really there, but after some time with the images the concern goes away and the pre-AI images just look broken.
In the link below, ignore Gain (it isn’t that great); Boost and Sharp are the vendor names for the good stuff. The brochure undersells it.
https://www.siemens-healthineers.com/magnetic-resonance-imag...
My partner had a clinician review her paperwork and ask "why are you here?", explaining that the enhanced imaging was leading to tentative concerns being raised about structural change so small it was below the threshold for safe surgical treatment.
Moral of the story: the imaging has got so good that diagnostics is now on the fringe of overdiagnosis, and the stats need to catch up.
This has been a thing for a long time, with MRI in particular.
It gets quite philosophical. To diagnose something you need some pattern on the images. As resolution and tissue contrast improves you see more things, and the radiologist gets to decide if the appearance is something.
When a clinician says there is a problem in some area of anatomy and there is something on the scan, the radiologist has to make a call.
The great thing about being a tech is that making the call isn’t my job. I have noticed that keeping the field of view small tends to make me more friends.
A half-imaged liver haemangioma, a thyroid nodule or a white matter brain lesion as an incidental finding is at least a daily occurrence.
I think it cuts both ways though, over-testing vs under-testing: the question is when people actually get access to imaging, and whether there is more proactive screening imaging we should have done.
A good friend recently had an unrelated routine surgical procedure go awry, which led to a CT scan to check on the damage. The CT scan ended up finding stage 2 cancer, larger than a billiard ball, in an organ that is going to be surgically removed. Our friend had absolutely no symptoms of any kind related to the cancer. There is no reason he would have gotten a CT scan other than the unrelated surgical accident. Imagine if, in 5 years, he had finally had some symptoms, they did the scan, and they found it was stage 4. Sorry.
The fact that we only have routine screening regimens for a handful of cancers (breast, colon, prostate, skin) is something that I've been thinking about a lot lately.
Yes. And the balance between under-testing, with some loss of early, treatable detection, and over-testing, with some people incurring unneeded operations, is a hard one. Everyone tends to the over-testing side. For some things, like knee operations, the evidence appears strong that surgery is the worst path to take in most cases; that surgery stems from improving imaging of knee joint tissue. Treatment regimes need to catch up to return to a sweet spot of detection and remediation.
My partner feels her breast tissue calcification, detected through improving imaging on annual checks, should have been left alone, and that she incurred discomfort and scarring she didn't need. But we both know breast cancer survivors who owe their lives to detection and intervention.
One of the first things AI will be really good at will be image post-processing. Even then, in the case of medical diagnosis, I'd prefer to have an actual person compare the "RAW" image to whatever the AI came up with, simply because post-processing can create artifacts that can throw you off quite a bit.
Regarding the quality of imaging: I tend to agree, and the better imaging gets, the more we will have to rely on humans to judge whether or not treatment is necessary or recommended. That judgement alone is, IMHO, in the same league as full self-driving and requires general AI.
The raw image is not good and isn’t usable. AI denoising is what makes it usable; then it doubles the resolution.
There is no point in reviewing the raw image as it doesn’t add anything. A study is generally 300-1000 images. If you’re going to review the raw as well, and therefore look at 600-2000 images, you’ve just wasted everything the AI gained and you might as well not use it.
I acquire the images and I look at what I get and re-run anything that’s got artifact. This is not unusual and generally happens due to movement, excessive image noise or incorrect parameter selection.
I don’t review the raw, but I do review the output. Keep in mind that MR images have always been very heavily processed at every stage of image formation, as any way of squeezing more out of the acquisition has time benefits.
There are a ton of ways a tech can create a misleading or faulty image through parameter selection, leading to an incorrect diagnosis. This could happen quite easily, and AI being part of an error is not something that keeps me awake at night, unlike other parameters with which I have seen mistakes made, and have made them myself.
I did my Masters in NMR. Can confirm a lot of ML-based plug-and-play solutions are helping denoise k-space.
Trivia: I am also one of the pulse sequence developers affiliated with the Siemens LiverLab package on the Syngo platform :) [specifically the multi-echo Dixon fat-water sequence]. SNR improvement was a big headache for rapid Dixon echoes.
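In case "plug-and-play" reads as marketing speak to anyone: in the reconstruction literature it's a specific trick where an off-the-shelf denoiser stands in for the prior inside an iterative reconstruction. A minimal single-coil Cartesian sketch of the idea (my assumptions, not anyone's product):

```python
import numpy as np

def pnp_recon(kspace, mask, denoiser, iters=20, step=1.0):
    # kspace:   measured samples (zero where not acquired)
    # mask:     boolean k-space sampling mask
    # denoiser: image-domain denoiser acting as the learned prior
    F, Finv = np.fft.fft2, np.fft.ifft2
    x = Finv(kspace)  # zero-filled starting image
    for _ in range(iters):
        # Gradient step on the data-fidelity term ||M F x - y||^2
        residual = mask * F(x) - kspace
        x = x - step * Finv(mask * residual)
        # Prior step: the denoiser replaces a proximal operator
        x = denoiser(x)
    return np.abs(x)
```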
Ha, small world. Thanks for your work, I used to use this daily until a year ago, now my usage is less frequent.
I guess Dixons are still a headache with their new k-space stuff, as Boost (the denoising) isn’t compatible with it yet. Gain is, but it looks distinctly lame when you compare it to Boost.
We have yet to see the tech applied to breath-hold sequences (HASTE, VIBE, etc.), Dixon, 3D, gradient sequences and probably others.
I’m looking forward to seeing it on HASTE and 3D T2s (SPACE) in particular. MRI looks very different today compared to how it looked just 6 months ago.
I’d compare it to the change we saw going from 1.5T to 3T, just accelerated in how quickly progress is being made.
I have long since left the collaboration with the team at Cary, NC, but all I can say is there was a great deal of interest in 3D sequence improvement by interpolation with known k-space patterns, as in the GRASE or PROPELLER sequences for example. They also learned a good deal from working with NYU's fastMRI.
I just went through this with my knee surgeon. He was raving about the quality now coming out of the 3T scanner, saying that after the latest software update it gives better quality than the experimental 7T scanner, to the point where they are considering prematurely abandoning the 7T scanner study they were doing.
AI denoising may be making information that actually is in the image easier to spot, but AI upscaling is just inventing detail that doesn't actually exist in the source image, which seems rather dangerous for this use case.
It’s ‘only’ converting each pixel into 4, so the starting point is not going to change a lot. Also keep in mind that interpolation has been part of image reconstruction for 20+ years. MR suffers from slow acquisition times, so we cheat, and image resolution is rarely symmetrical in pixel dimensions when you compare the x, y and z directions.
Previously we just made the pixels square (made-up data), then doubled the pixel count (more made-up data). It was that dumb.
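For the curious, that ‘dumb’ interpolation is typically zero-filling k-space, which amounts to sinc interpolation in the image domain: more pixels, zero new information:

```python
import numpy as np

def zero_fill_upscale(kspace, factor=2):
    # Pad centred k-space with zeros so the reconstructed matrix is
    # `factor` times larger in each direction. No data is added; the
    # result is a sinc-interpolated version of the original image.
    ny, nx = kspace.shape
    padded = np.zeros((ny * factor, nx * factor), dtype=kspace.dtype)
    y0, x0 = (ny * (factor - 1)) // 2, (nx * (factor - 1)) // 2
    padded[y0:y0 + ny, x0:x0 + nx] = np.fft.fftshift(kspace)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(padded)))
```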
I’ve tried acquiring an image and upscaling it 2x, then acquiring the same image at double the resolution and not upscaling.
It’s hard to compare as the longer acquisition is often hampered by patient movement.
When I set up a scan with the AI, I’d estimate it as adding about 30% signal. I set up a scan that will look good, then turn the AI on, then shorten the scan or increase the resolution such that it’s 30% or so down on signal, and then I press go.
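The back-of-the-envelope behind spending that 30% budget (my numbers, using the standard SNR ∝ voxel volume × √(acquisition time) scaling):

```python
import math

# Spend the ~30% SNR headroom on speed: SNR scales with the square
# root of acquisition time, so 0.7x the SNR allows ~0.49x the time.
print(f"scan time: {0.7 ** 2:.0%} of original")          # 49%

# Or spend it on resolution: SNR scales with voxel volume, so the
# in-plane pixel area can shrink to 70%, i.e. ~84% pixel edge length.
print(f"pixel edge: {math.sqrt(0.7):.0%} of original")   # 84%
```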
We are seeing things we didn’t previously, particularly with cartilage injuries.
The same way we do for everything. We scan in multiple planes and image weightings (T1, T2 FS, etc.): axial, sagittal, coronal. We do other angles for various things too, e.g. for knees we do dedicated views of the patellar cartilage and ACL.