The Future of Radiology and Artificial Intelligence


What if an algorithm could tell you whether you have cancer based on your CT scan or mammography exam? While I am confident that radiologists’ creative work will remain necessary to solve complex cases and supervise diagnostic processes, A.I. will definitely become part of their daily routine, diagnosing simpler cases and taking over repetitive tasks. So rather than feeling threatened by it, we should familiarise ourselves with how it could help change the course of radiology for the better.

Radiologists who use A.I. will replace those who don’t

There is a lot of hype and plenty of fear around artificial intelligence and its impact on the future of healthcare. Many signs point toward A.I. completely transforming the world of medicine. As the buzz around deep learning algorithms and narrow A.I. grew, especially in the field of medical imaging, many radiologists went into panic mode.

Bradley Erickson, Director of the Radiology Informatics Lab at Mayo Clinic, told me that the hype from some machine learning and deep learning experts claiming A.I. will replace radiologists treats radiologists as if all they did was look at pictures. That, he added, would be like saying that all programmers do is type, so we can replace them with a speech recognition system. Langlotz compared the situation to that of the autopilot in aviation. The innovation did not replace real pilots; it only augmented their work. On very long flights, it is handy to turn on the autopilot, but it is useless when rapid judgment is needed. So the combination of humans and machines is the winning solution, and it will be the same in healthcare.

Thus, I agree with Langlotz completely when he says that artificial intelligence will not replace radiologists. Yet, those radiologists who use A.I. will replace the ones who don’t. Let me show you why.

What do cat intestines, X-ray lamps and the history of medical imaging have in common?

The field of clinical radiology began with the accidental discovery of the X-ray by Wilhelm Conrad Röntgen on 8 November 1895 in Würzburg, Germany. Within two months, X-ray mania had swept the world. Sensational newspaper headlines heralded the “new light seeing through flesh to bones”, while one inventor even speculated that “soon every house will have a cathode-ray machine”. Any similarities to today’s hyped technologies coming to mind?

Thomas Edison became so excited about the new discovery that he wanted to create a commercial “X-ray lamp” (unfortunately, his efforts failed) and tried to get an X-ray of the human brain in action (sadly, that was not a success either). The latter endeavour made story-hungry reporters go wild: they allegedly waited outside his laboratory for weeks in vain. Some went as far as to fabricate images of the human brain. One of them turned out to be a pan of cat intestines radiographed in 1896 by H. A. Falk!

While some early efforts turned out to be dead ends and impossible projects, X-rays became established in medicine. Something similar will soon happen with A.I. and healthcare. With fewer cat intestines, I hope.

Radiology has been the playing field of technological development since its beginnings

In the TV series The Knick, which depicts the first decades of modern surgery and healthcare, an inventor visits the hospital manager in his office to present a new idea, the X-ray machine. It turns out that it takes an hour or so for the brand-new machine to take a single picture! Today, if you go to the hospital for an annual check-up on your lungs, the X-ray procedure takes a couple of minutes in a fortunate case, plus some more time until you get the results.

Plenty has changed since those experiments with the ‘X-ray lamp’, but one thing has remained constant: rapid technological development in radiology.

A wider range of tools and higher precision

Approximately half a century after the discovery of the X-ray, ultrasound joined the methods of medical imaging. From the mid-sixties onwards, the advent of commercially available systems allowed wider dissemination. Rapid technological advances in electronics and piezoelectric materials brought further improvements, from bistable to greyscale images and from still images to real-time moving images. And it is amazing to see how we went from room-sized, clumsy ultrasound machines to portable ones within roughly another half-century! In 2016, Clarius Mobile Health introduced the world’s first handheld ultrasound scanner with a mobile application. A doctor can carry the personal ultrasound device around for quick exams and to guide procedures such as nerve blocks and targeted injections.

Now, let’s look at body scanners. The first CT scanners were introduced in 1971 with a single detector for brain studies, under the leadership of Godfrey Hounsfield, an electrical engineer at EMI (Electric and Musical Industries, Ltd). The very first MRI scanner was built by hand by Raymond Damadian in the 1970s, assisted by his students at New York’s Downstate Medical Center. He achieved the first MRI scan of a healthy human body in 1977 and of a human body with cancer in 1978. The first functional MR imaging of the human brain was produced in the early 1990s. By the early 2000s, cardiac MRI, body MRI, fetal imaging and functional MR imaging had become routine exams in many imaging centers.

Along with precision comes automation

Thus, the history of radiology so far shows an expansion of tools as well as an increase in precision. While the latter is still in focus, there is also a visible shift towards making radiologists’ lives easier through automation. As radiologists need to go through more and more images every day, it becomes inevitable that part of their job will be automated. If we can train algorithms to detect many types of abnormalities on radiology images, why wouldn’t we let them do the time-consuming work so radiologists can dedicate their precious focus to the hardest cases?

Once deep learning becomes possible and algorithms can teach themselves while radiologists rate their effectiveness, they will get better simply by doing more work. This is an opportunity we have to grab. This way, radiology could become one of the most creative specialities, in which problem-solving and a holistic approach are key.

So it certainly does not mean that A.I. would take over all the tasks of radiologists. As Erickson put it, if you look at the frequency of findings and diagnoses on medical images, there are the common ones where A.I. could help, but there is also a really long tail of uncommon but really important things that we cannot miss. He believes it will be difficult for deep learning algorithms to identify those. But where do we stand with the technology at the moment?

Could A.I. predict whether you would die soon?

Scientists at the University of Adelaide have been experimenting with an A.I. system that is said to be able to tell whether you are going to die. By analysing CT scans from 48 patients, the deep learning algorithms could predict whether a patient would die within five years with 69 percent accuracy, which the paper describes as “broadly similar” to scores from human diagnosticians. It is an impressive achievement. The deep learning system was trained to analyse over 16,000 image features that could indicate signs of disease in the patients’ organs. The researchers say their goal is for the algorithm to measure overall health rather than spot a single disease.
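To give a sense of what such a pipeline can look like in practice, here is a minimal, hypothetical sketch: a pretrained CNN serves as a generic image feature extractor, and a simple classifier is cross-validated on the resulting features. This illustrates the general approach only, not the Adelaide group’s actual method; the data, model choice and dimensions are all stand-ins.

```python
# Illustrative sketch only -- NOT the University of Adelaide group's actual
# pipeline. A pretrained CNN is used purely as a generic feature extractor.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Pretrained CNN with its classification head removed -> feature vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # output: 512-dim feature vector per image
backbone.eval()

def extract_features(ct_slices: torch.Tensor) -> np.ndarray:
    """ct_slices: (N, 3, 224, 224) batch of preprocessed CT slices."""
    with torch.no_grad():
        return backbone(ct_slices).numpy()

# Hypothetical data: one feature vector per patient plus a 5-year label.
X = extract_features(torch.randn(48, 3, 224, 224))  # stand-in for real scans
y = np.random.randint(0, 2, size=48)                # stand-in mortality labels

# A simple classifier over the image features, evaluated by cross-validation.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```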

This is just one of numerous initiatives developing artificial intelligence applications to support the field of radiology. You can take a look at the article published by The Medical Futurist Institute in the journal npj Digital Medicine, or the online database we have kept updating ever since. It currently has 79 entries, of which 39 belong to the field of radiology. That is undisputedly the medical field with the highest number of A.I. initiatives.

However, the ongoing research does not mean that we are already at the stage where average patients will be confronted with their exact life expectancy based on their medical images when they go to the hospital.

What are the challenges in introducing A.I. to the radiology department?

To estimate when machine learning might be introduced on a wider scale, we have to look at how machine learning works in radiology. The process usually goes like this: the algorithm is fed thousands, if not millions, of images and learns to spot differences between tissues, just as computers learn to recognise images of dogs and cats. If the algorithm makes a mistake, the researcher notices it and adjusts the model. It is thus a rather lengthy process that needs tons of available data. Erickson believes the result will look like this: we will do the high-volume exam, and the algorithm will probably create a structured, minable, preliminary report. So it will do the quantification that most humans hate to do, and it will do that very well, he noted.
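To make that training process concrete, below is a minimal sketch in PyTorch of the supervised loop described above: the model is fed labelled images, its predictions are compared with the radiologist-supplied labels, and its weights are adjusted on every mistake. The folder layout, model and hyperparameters are illustrative assumptions, not a production radiology system.

```python
# Minimal supervised-learning sketch of the process described above,
# assuming a folder of labelled images ("normal" vs "abnormal").
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumes images are sorted into data/train/normal and data/train/abnormal.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # two classes: normal / abnormal
criterion = nn.CrossEntropyLoss()        # penalises wrong predictions
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)  # how wrong was the model?
        loss.backward()                          # trace the error back
        optimizer.step()                         # adjust the weights
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```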

Anna Fernandez, Health Informatics/Precision Medicine Lead at Booz Allen Hamilton, told me that there are several challenges in building these discovery and analytics platforms: acquiring access to and ingesting the data, sufficiently annotating it, the storage strategy, governance and policy for its use, and the types of analysis the platform enables. The biggest challenge is sufficiently annotating the data to allow different views of it (full rights to owners, a restricted subset to others) and to enable discovery across the connected data sets in the platform.
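As a toy illustration of those “different views”, here is a hypothetical sketch of an annotated-scan record that exposes a full view to the data owner and a de-identified subset to everyone else. The field names and access rules are invented for illustration only.

```python
# Hypothetical sketch of tiered data views: owners see the full annotated
# record, others a restricted, de-identified subset.
from dataclasses import dataclass, asdict

@dataclass
class AnnotatedScan:
    scan_id: str
    modality: str          # e.g. "CT"
    finding: str           # radiologist's annotation
    patient_name: str      # identifiable -> owners only
    patient_dob: str       # identifiable -> owners only

    def full_view(self) -> dict:
        """Complete record, available to the data owner."""
        return asdict(self)

    def restricted_view(self) -> dict:
        """De-identified subset, shareable for cross-dataset discovery."""
        public_fields = ("scan_id", "modality", "finding")
        return {k: v for k, v in asdict(self).items() if k in public_fields}

scan = AnnotatedScan("CT-0001", "CT", "pulmonary nodule", "Jane Doe", "1960-01-01")
print(scan.restricted_view())
# {'scan_id': 'CT-0001', 'modality': 'CT', 'finding': 'pulmonary nodule'}
```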

Moreover, hospitals also need to be convinced that A.I. algorithms work. Fernandez believes it will be a step-wise process, for example taking advantage of hybrid internal and external “crowdsourcing” with sufficiently anonymised data.

For example, a vendor can establish data science algorithms based on anonymised data from its hospital network; a new hospital can then employ the algorithm and refine it further on its own anonymised “local” data sets (which may include additional patient variables) to customise it to its population. As the hospitals see a “win”, they may be encouraged to release a more restricted anonymised data set to contribute back to the vendor solution. It is a little like easing into cold water on a hot summer day: first you watch other people doing it, then you realise it’s safe, so you put your toes in before going all the way under.
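As a minimal sketch of the anonymisation step, the snippet below uses the pydicom library to blank a few direct identifiers in a DICOM file. Real de-identification follows the DICOM standard’s confidentiality profiles and covers many more tags; the file names here are hypothetical.

```python
# A minimal anonymisation sketch using pydicom. Production de-identification
# follows DICOM PS3.15 profiles and handles far more tags than this.
import pydicom

def anonymise(in_path: str, out_path: str) -> None:
    ds = pydicom.dcmread(in_path)
    ds.PatientName = "ANONYMOUS"    # blank direct identifiers
    ds.PatientID = "0000"
    ds.PatientBirthDate = ""
    ds.remove_private_tags()        # drop vendor-specific private tags
    ds.save_as(out_path)

# anonymise("scan.dcm", "scan_anon.dcm")  # hypothetical file names
```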

We get to have A.I. analysing our CT scans

How can we depict, in a concise and easy-to-grasp way, how human-A.I. collaboration will unfold in the field of medicine in the years and decades to come? Andrew Ng, founder of deeplearning.ai, described five levels of automation. The Medical Futurist adapted this concept to medicine and explained these levels with current examples and future scenarios in this article.

The infographic below helps visualise the spectrum of automation in medicine, ranging from human-only (level 1) to fully automated (level 5).

Future radiology is expected to work with level 3 (A.I. assistance) and level 4 (partial automation) algorithms. What do these mean?

At the third level, the A.I. system supports physicians in clinical decision-making with suggestions. For example, after being trained on a database of chest CT scans, the A.I. examines the chest CT of the patient being investigated and highlights suspicious signs, which the physician then investigates further.

At level four, with partial automation, the A.I. system can come up with its own diagnosis, but if it is not confident enough about it, it turns to physicians for help. Several companies are working on such solutions today. A practical example is Behold’s Class IIa CE-marked platform, used by NHS trusts in the UK to help clear radiology backlogs in lung cancer screening. It processes adult frontal chest X-ray examinations and has two key outputs: it either flags the image as “suspected lung cancer” and prioritises the patient for a radiologist consultation, or it identifies the image as normal, although the image is still audited by a radiologist.
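A back-of-the-envelope sketch of this level-4 logic might look like the following: the system issues its own call when its confidence clears a threshold and defers to a radiologist otherwise. The threshold and output labels are hypothetical, not Behold’s actual implementation.

```python
# Illustrative sketch of level-4 "partial automation" triage: the system
# issues its own call when confident, otherwise defers to a radiologist.
# Threshold and labels are hypothetical, not Behold's actual logic.

def triage(p_cancer: float, threshold: float = 0.95) -> str:
    """p_cancer: the model's predicted probability of suspected lung cancer."""
    if p_cancer >= threshold:
        return "suspected lung cancer -> prioritise radiologist consultation"
    if p_cancer <= 1 - threshold:
        return "normal -> routine radiologist audit"
    return "uncertain -> defer to radiologist"

for p in (0.99, 0.02, 0.60):
    print(f"p={p:.2f}: {triage(p)}")
```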

Palo Alto-based Nines developed an A.I. system that can identify potential cases of intracranial haemorrhage and mass effect on CT scans, then flags those cases for radiologists to review.

Radiology’s Future is A.I.

All in all, research trends and experts underline how A.I. will revolutionise radiology in the long term. Thus, rather than neglecting it or feeling threatened by it, the medical community should embrace its achievements.

Yes, it is possible that a big chunk of the tasks radiologists do today, namely the repetitive, data-based ones, will be automated. That will free up capacity for more meaningful assignments, and the A.I.-based technologies themselves will be designed and controlled by radiologists.

As Erickson put it, rather than pushing off machine intelligence as a threat to their jobs, radiologists should engage with it, because it is something that can really help patients. He is sure it will dramatically change what radiologists do over the next ten years, but we should also keep in mind that radiology ten years ago was nothing like what it is today. So it is one of those things where we need to make sure we stay at the forefront, and remember that what matters most is taking care of patients. I could not agree more, nor could I express it better.
