University tests AI-powered ‘radiology assistant’
There are various known issues with screening mammography.
Probably the most important is that, even though relatively few women actually develop breast cancer, many women are called back for additional imaging after screening mammography (such as diagnostic mammography, ultrasound and MRI), which is costly both in money spent and in the stress it causes patients.
Putting AI to work in radiology
Dr. Krzysztof J. Geras, assistant professor, department of radiology, at the NYU School of Medicine, led an AI-powered effort to tackle this challenge.
“Our intention is to decrease the amount of additional imaging, and AI is the means to achieve that,” he stated. “It is also known that a small fraction of cancers are missed by radiologists during a screening mammography exam. We were also hoping that our AI tool would help catch these cases, which could potentially save lives.”
The proposed technology is named “ResNet-22,” a type of deep convolutional neural network. It works by learning from a very large number of image/label pairs.
“In this case, we trained the network by showing it approximately 800,000 examples with the correct diagnosis outcome many times,” Geras explained. “The whole training process took approximately three weeks. In order for this to be possible, we needed to use a very powerful computer with a graphics processing unit.”
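ResNet-22 itself is a deep network far beyond what fits in a short listing, but the supervised-learning loop Geras describes, repeatedly showing the model image/label pairs and nudging its weights toward the correct diagnosis, can be sketched with a toy stand-in. The sketch below trains a simple logistic classifier on synthetic “images”; every name and number in it is illustrative, not the study’s actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a mammography dataset: 64-pixel "images" with labels
# 1 (cancer) or 0 (no cancer). A hidden pattern separates the classes.
n_samples, n_pixels = 1000, 64
true_pattern = rng.normal(size=n_pixels)
images = rng.normal(size=(n_samples, n_pixels))
labels = (images @ true_pattern > 0).astype(float)

# A single-layer logistic model: vastly simpler than ResNet-22, but trained
# the same way -- show examples, compare prediction to label, update weights.
weights = np.zeros(n_pixels)

def predict(x, w):
    """Predicted probability of cancer for each image."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

learning_rate = 0.1
for epoch in range(200):  # "showing it ... examples ... many times"
    probs = predict(images, weights)
    gradient = images.T @ (probs - labels) / n_samples
    weights -= learning_rate * gradient

accuracy = np.mean((predict(images, weights) > 0.5) == labels)
print(f"training accuracy: {accuracy:.2f}")
```

The real network differs in scale (millions of parameters, 800,000 mammograms, weeks on a GPU) but not in this basic loop of prediction, error and weight update.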
AI becomes an assistant to radiologists
The way the NYU School of Medicine envisages using it in practice at NYU Langone Health is as an assistant to a radiologist.
“That is, the radiologist would see the images the way they currently see them,” he said. “Then, if they deem it necessary, they could ask the AI for its opinion. The AI can give a radiologist a predicted probability that the patient has cancer and point to the parts of the images that it considers the most suspicious – if there are any such parts of the image.”
The team hopes that in this way the technology can increase the confidence of radiologists and decrease the number of patients that are asked for additional imaging.
“Our AI is currently not deployed in our hospital,” Geras explained. “The results in a paper we published are from a retrospective reader study. In the reader study, the radiologists were asked to give their predictions of probabilities that the patients had breast cancer based on screening mammography images. We asked the AI to do exactly the same.”
An interesting finding emerged when the team averaged the predictions of the radiologists and the AI: the average was more accurate than either of the two separately. This suggests that the AI and the radiologists are using different features of the data, he said.
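This effect, where the average of two imperfect readers beats either one, is easy to reproduce whenever the two make their errors on different cases. A small NumPy sketch with synthetic scores (illustrative values only, not the study’s data):

```python
import numpy as np

def auc(labels, scores):
    """Probability that a random positive case outscores a random
    negative one (ties count half) -- the AUC measure used in the study."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=2000)

# Two noisy "readers": each sees the true signal plus its own independent
# noise, so their mistakes fall on different cases.
radiologist = labels + rng.normal(scale=1.0, size=labels.size)
ai = labels + rng.normal(scale=1.0, size=labels.size)
ensemble = (radiologist + ai) / 2  # averaging cancels uncorrelated noise

print(f"radiologist AUC: {auc(labels, radiologist):.3f}")
print(f"AI AUC:          {auc(labels, ai):.3f}")
print(f"average AUC:     {auc(labels, ensemble):.3f}")
```

Because the two noise sources are independent, averaging halves the noise variance, so the ensemble ranks cases more accurately than either reader alone. If the AI merely mimicked the radiologists (fully correlated errors), averaging would gain nothing, which is why the improvement is evidence that the two rely on different features.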
Looking toward a pilot
“It could be relatively easily integrated with the real clinical pipeline,” he stated. “We currently are considering a pilot study at NYU Langone to further validate it in a clinical setting.”
The most important hard result the NYU School of Medicine team achieved was increasing radiologists’ AUC (a measure of accuracy) in discriminating between patients who do and do not have cancer from approximately 0.8 to approximately 0.9 by using the AI as a second reader.
“A random predictor achieves an AUC of 0.5 and a perfect predictor achieves an AUC of 1.0,” Geras explained. “The main factor that allowed us to achieve such strong results was the size of the data set that we used for training our neural network. It would be extremely difficult for any individual radiologist to see that many images paired with the final diagnosis over their entire career.”
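The AUC scale Geras describes can be verified numerically: a scorer that ignores the labels lands near 0.5, while one that ranks every cancer case above every non-cancer case reaches exactly 1.0. A quick check on synthetic labels (illustrative only):

```python
import numpy as np

def auc(labels, scores):
    """Probability that a random positive case outscores a random
    negative one (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(7)
labels = rng.integers(0, 2, size=5000)

random_scores = rng.random(labels.size)  # ignores the labels entirely
perfect_scores = labels.astype(float)    # ranks all positives first

print(f"random predictor:  AUC = {auc(labels, random_scores):.3f}")
print(f"perfect predictor: AUC = {auc(labels, perfect_scores):.3f}")
```

On this scale, lifting radiologists from roughly 0.8 to roughly 0.9 closes about half of the remaining gap to a perfect reader.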
The neural network learned tirelessly for weeks, and as the team continues to improve the design of the network and accumulate more data, the results will keep improving, he added.
AI advice straight from the expert
Dr. Geras has some advice for healthcare provider organizations working with AI and related technologies to improve care and gain efficiencies.
“Perform extensive testing of any AI technology before deploying it in clinical practice,” he advised. “Even though the results that many groups achieve with AI are very promising, the deep neural networks that we use are known to be sensitive to changes in the test data distribution. We cannot currently give any guarantees that the AI would be as accurate if it were applied to a significantly different population or to images acquired with different imaging equipment.”
There currently are many companies that are aggressively marketing AI as a silver bullet to solve healthcare’s many challenges, but it is important to consider the safety aspect of this technology before going all in, he added.
“In reality, achieving clinical value and routine use is still a few years away,” he predicted.
Finally, it is very important to remember that AI is not one monolithic thing, he advised.
“Different AI solutions applied to the same task might be completely different in their performance and robustness,” he explained. “They also differ a lot in terms of the level to which they can explain their predictions. We still need a lot of research in neural networks and medical imaging to realize the full potential of this technology.”