How AI is Changing Medical Imaging
Traditionally, doctors and radiologists have read the medical images produced by X-rays, MRIs, and other types of scans. It's always been done that way, and some doctors believe it should never change. However, in some hospitals, artificial intelligence (AI) is actually taking over this job.
Artificial intelligence is now capable of some diagnostic interpretation of images. This is an area that has some doctors feeling nervous, though, leading to calls for regulation of AI medical image reading.
Doctors Aren’t Always Right
In radiology practice, it's estimated that 3-5% of images are misread or misdiagnosed… by doctors themselves. People are not infallible, even after years of medical school, and misdiagnosis is a common problem rather than an isolated one. While AI won't necessarily eliminate the margin of error, it can definitely reduce it. It's been shown that machines, when trained correctly, can be very helpful in determining which images need a closer look from a doctor.
If you have a machine that reads images just as well as a human, could it be beneficial to use it for medical imaging? Perhaps yes. First of all, the machine can often detect abnormalities too small for a human to notice. This can lead to earlier diagnosis, meaning the patient can seek treatment long before the issue becomes a bigger problem. We know that treating cancer in its earliest stages gives the patient a better chance of survival, so this would be an enormous benefit.
Secondly, it’s a great time saver. A machine can read images while you go about your day, and if the system flags something, you get a notification. It’s a simple way to reduce the time burden of scanning images, and it allows doctors to focus more on their patients.
How Artificial Intelligence Works to Read Medical Images
AI medical imaging is not as crazy as it sounds. We’ve been using AI for years to identify images in non-medical settings, so why not use it to detect tumors or abnormalities in scans?
Essentially, AI uses computerized algorithms to work through and make sense of the data it's given. Since diagnostic images are really just large amounts of complex numerical data, it makes sense to have a machine analyze them. And since AI learns as it goes, it can be fine-tuned and taught to recognize more and more abnormalities over time.
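The idea that a scan is "really just data" can be made concrete with a toy sketch. The Python snippet below is purely illustrative: a real system would use a trained neural network, and the `flag_scan` function and its threshold are invented for this example. It treats a scan as a grid of pixel intensities and flags anything whose brightest region exceeds a threshold, standing in for the parameters a model would learn from labeled examples:

```python
def flag_scan(pixels, threshold=0.6):
    """Flag a scan whose brightest 2x2 region exceeds a threshold.

    `pixels` is a 2D list of intensities in [0, 1]. The fixed
    threshold stands in for what a real model would learn from
    labeled training scans.
    """
    best = 0.0
    for r in range(len(pixels) - 1):
        for c in range(len(pixels[0]) - 1):
            # Average intensity of the 2x2 patch at (r, c).
            patch = (pixels[r][c] + pixels[r][c + 1] +
                     pixels[r + 1][c] + pixels[r + 1][c + 1]) / 4
            best = max(best, patch)
    return best > threshold

# A scan with a small bright region gets flagged for a doctor's review.
normal = [[0.1] * 4 for _ in range(4)]
abnormal = [[0.1] * 4 for _ in range(4)]
abnormal[1][1] = abnormal[1][2] = abnormal[2][1] = abnormal[2][2] = 0.9

print(flag_scan(normal))    # False
print(flag_scan(abnormal))  # True
```

The point is only that "reading" an image reduces to computing scores over numbers; real systems learn which patterns matter rather than relying on a hand-written rule like this one.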
Until recently, the majority of AI imaging was performed only to determine whether there was an issue, not the extent of the issue. It would be beneficial to have AI rate the severity of a tumor or lesion and then determine its priority status. The sensitivity of the AI also needs to be calibrated. AI can detect very minor changes in an image, which could mean doctors catch a cancer or other disease earlier than usual. However, AI generally can't determine what a specific change means or predict outcomes from an image. That may be something it could eventually be taught, but in most cases, when the AI flags an image, a doctor follows up.
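The sensitivity question described above can be illustrated with a small hedged sketch. The scores, labels, and `confusion_counts` helper here are all made up for this example, not drawn from any real system, but they show the trade-off: a more sensitive setting catches every abnormality yet also flags healthy scans, while a stricter one misses cases.

```python
def confusion_counts(scores, labels, threshold):
    """Count true and false positives when flagging scores >= threshold.

    `labels` is 1 for truly abnormal scans, 0 for healthy ones.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    return tp, fp

# Hypothetical model scores: healthy scans cluster low, abnormal ones high.
scores = [0.1, 0.2, 0.35, 0.4, 0.7, 0.8, 0.9]
labels = [0,   0,   0,    1,   0,   1,   1]

# A sensitive threshold catches all 3 abnormal scans but also
# flags 1 healthy scan (a false positive).
print(confusion_counts(scores, labels, 0.4))   # (3, 1)
# A stricter threshold avoids the false positive but misses a case.
print(confusion_counts(scores, labels, 0.75))  # (2, 0)
```

Choosing where to set that threshold is exactly the calibration decision a hospital deploying such a system would have to make, with a doctor reviewing whatever gets flagged.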
Studies have shown that a brain MRI with early signs of an ischemic stroke is far more likely to be read accurately by a machine than by a human. There are subtle findings that human eyes are likely to miss, and in a case like this, that could be the difference between life and death.
If a machine can reliably detect very small changes in MRIs and other types of scans, patient survival rates could genuinely improve. Again, it all comes down to training the AI correctly, with proper protocols in place.
Where AI Gets It Wrong
One of the biggest potential issues is not artificial intelligence missing a diagnosis, but rather over-diagnosing. A doctor looking at an unusual image can recognize that it simply reflects normal variation from one person to the next. AI is far more likely to become overly sensitive to such differences and produce a false positive. The medical liability implications are concerning, and this is likely why the world of AI in medicine is moving so slowly.
We need specific regulations in place to prevent mistakes from impacting people’s lives. Imagine if multiple people received false positives for potential brain tumors … there would be panic. It could even result in unnecessary treatment.
To prevent this, scientists supervise the AI’s training period and help it distinguish between images. While the machine is capable of learning and “reasoning,” it can still end up learning the wrong things, much like a small child. It’s important to have a human expert directing the learning.
Another challenge arises when an AI is shown poorly developed or blurry images, which can drastically affect how it reads future images. This was demonstrated by a study that used X-rays even radiologists found difficult to read. The AI misinterpreted those same images, which corrupted the data it was learning from. When incorrect information is presented, the machine cannot learn correctly.
In the future, we can expect to see more and more AI in medical imaging. As the machines become better trained, the risk of false positives will drop. The earlier health problems can be detected, the faster we can treat them, and that may save lives. It’s essential that medicine continues to move forward, and technologies like artificial intelligence and machine learning can help it do so.