Radiologists are often in short supply and overworked – deep learning to the rescue

Deep learning, the AI subset of machine learning that uses neural network algorithms to identify patterns in data, is proving to be extremely effective at analyzing digital representations of sensory information: images, sounds, even odors.

The recent explosion in deep learning research is partly due to significant investments by online services like Baidu, Facebook, Google and Microsoft aimed at providing automatic identification and categorization of photos and videos to improve search accuracy, ad targeting, marketing data collection and other commercial uses.

However, research into image processing and pattern recognition has origins that long precede the era of cat videos and vacation pictures. Early efforts focused on military applications that provide a glimpse at the far-reaching ramifications of recent research. Now, the flow of technology is reversed, from often trivial consumer uses to life-saving implementations of deep learning in medicine, agriculture and search and rescue.

Radiology has emerged as one of the most significant uses of deep learning, where the same algorithms that can tell a beagle from a boxer in your photo album can also identify tumors, tuberculosis and heart disease in medical images. Employing AI to increase the efficiency of radiologists could solve significant problems in both emerging and developed economies, each of which, ironically, faces a productivity crisis in its imaging labs, albeit with different origins.

In the US, the volume of imaging performed, and hence the payments to hospitals, is dropping as health insurers strive to cut costs. According to one hospital administrator,

Everyone’s being asked to be more productive than what they’ve ever been. We are all being asked to work smarter with fewer resources.

Furthermore, the proliferation of imaging providers has created the perception that radiology is a commodity, further pressuring radiologists to increase productivity to compete. In contrast, developing nations like China, with rapidly growing healthcare systems, face the opposite problem: more demand than the supply of radiologists can handle. According to Chen Kuan, founder and CEO of Infervision, the shortage means that radiologists in China must work 12 hours or more a day just to finish their work. Unfortunately, when dealing with medical imaging, cutting corners to save time can cost lives through misdiagnosis.

Automating stroke identification through CT scan analysis

Medical imaging is a natural application for deep learning, as the sessions on AI in healthcare at this year’s Nvidia GTC events demonstrate. For example, one presentation detailed the use of deep neural networks for the early detection of pancreatic cancer, while another described algorithms to diagnose cardiovascular disease. Strokes are one of the more widespread, yet tricky, conditions to spot, and after a cursory physical exam, CT scans are the first line of emergency room diagnosis. Of the two ways brain tissue can lose blood flow, and with it oxygen, arterial hemorrhages are much easier to spot than blockages, which explains why Infervision chose the former as the follow-on application to its initial diagnostic software targeting lung cancer.

As with so many tasks, studying a medical image involves a significant trade-off between speed and accuracy. The standard diagnostic technique estimates hemorrhage volume by multiplying the largest diameters in two dimensions on a CT slice by half the number of CT slices showing the hemorrhage.

Kuan says that this so-called ABC/2 measurement takes a skilled radiologist about 35 seconds to calculate and another 30 seconds or so to enter into a standard report form. The estimation technique is a time-saving rule of thumb that almost always spots a stroke, but Kuan says it also overestimates hemorrhage size 45% of the time. Using Infervision’s deep learning algorithms on the CT images can cut hemorrhagic stroke diagnosis time to 3 seconds while also feeding results to a digital report almost instantaneously.
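
For readers who want to see the arithmetic, here is a minimal sketch of the ABC/2 estimate in Python. The function name, parameters and example measurements are purely illustrative, and folding in slice thickness is an assumption about how the slice count becomes a length; this is not Infervision's code.

```python
def abc_over_2(a_cm, b_cm, slice_count, slice_thickness_cm):
    """Estimate hemorrhage volume (cm^3) with the ABC/2 rule of thumb.

    a_cm: largest hemorrhage diameter on the reference CT slice
    b_cm: largest diameter perpendicular to A on the same slice
    slice_count: number of CT slices on which the hemorrhage is visible
    slice_thickness_cm: thickness of each slice (assumed known)
    """
    c_cm = slice_count * slice_thickness_cm  # approximate vertical extent
    return (a_cm * b_cm * c_cm) / 2.0


# Illustrative example: a 4 cm x 3 cm bleed visible on six 0.5 cm slices
print(abc_over_2(4.0, 3.0, 6, 0.5))  # ~18 cm^3 estimated volume
```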

A complicating factor in stroke diagnosis is the location of the hemorrhage or blockage, which can make incidents in certain parts of the brain easy to miss. Indeed, speaking from personal experience with a family member, some strokes can be missed entirely on a CT scan, requiring an MRI and closer study by a radiologist specializing in the brain to spot. Such cases are another area where deep learning can help, according to Kuan, since the software can highlight abnormal areas on the scans and let radiologists focus their attention on potential trouble spots.

Unlike consumer applications, where some errors can be tolerated, AI algorithms must be held to the highest standards when dealing with life-and-death situations. In tests comparing the segmentation accuracy of Infervision’s software against human radiologists using the “gold standard” of a four-way ANOVA analysis, Kuan says there is no statistical difference in accuracy and that the variance of the software’s errors is actually smaller. Furthermore, he says that minimizing false positives is difficult for radiologists under pressure to read images quickly, meaning that AI-assisted CT diagnosis could help reduce their number without significantly increasing the radiologist’s workload.
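
As a rough illustration of what such a comparison looks like, the sketch below runs a one-way ANOVA across four hypothetical "readers" (three radiologists and the model) using SciPy. The measurement values, group structure and use of scipy.stats.f_oneway are assumptions made for illustration, not Infervision's actual study design.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical segmentation volumes (cm^3) for the same 8 scans, as measured
# by three radiologists and the deep learning model. Values are made up
# purely to illustrate the shape of the comparison.
reader_1 = np.array([12.1, 8.4, 15.0, 6.2, 9.9, 11.3, 7.5, 13.8])
reader_2 = np.array([11.8, 8.9, 14.6, 6.0, 10.2, 11.0, 7.9, 13.5])
reader_3 = np.array([12.5, 8.1, 15.3, 6.5, 9.6, 11.7, 7.2, 14.1])
model    = np.array([12.0, 8.5, 14.9, 6.3, 9.8, 11.4, 7.6, 13.7])

# One-way ANOVA across the four "readers": a large p-value means the model's
# measurements cannot be distinguished from the humans' on this data.
f_stat, p_value = f_oneway(reader_1, reader_2, reader_3, model)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# The article also notes the model's error variance is smaller; with a known
# ground truth you could compare np.var(model - truth) against each reader.
```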

Deep learning has many other medical applications

Over time, deep learning will be applied to harder and harder cases of medical imaging that require more time and expertise for radiologists to correctly diagnose. For example, ischemic strokes, the other major category that results from blockages in blood vessels, are more difficult to spot on CT scans.

These are next on Infervision’s roadmap, as is the ability to analyze MRI scans, which provide much more detailed images. However, the company’s initial focus was on the ‘easiest’ cases of radiologic analysis using the most common diagnostic tool available around the world: CT scanners. As Kuan points out, although MRIs are the imaging gold standard in the U.S. and Western Europe, they are not as common in less developed countries. That said, the company has published research applying its technology to MRI and sees it as a natural evolution of the technology.

A recent research paper summarizing the use of deep learning in medical imaging noted the technique’s utility in both image segmentation (partitioning images into different biologically similar regions and extracting key features) and registration (combining and analyzing multiple images of the same area for better accuracy). CNNs (convolutional neural networks) are particularly useful at image “segmentation of the lungs (41), tumors and other structures in the brain (42,43), biological cells and membranes (27,44), tibial cartilage (45), bone tissue (46), and cell mitosis.” Mirroring its use to tag photos on consumer sites, deep learning is also being used to automatically caption medical images. The paper states that,

Studies have recently introduced the use of CNNs and RNNs (recurrent neural networks) (60,61,62,63,64,65,66,67,68) to combine recent advances in computer vision and machine translation, and thus automatically annotate chest radiographs with diseases and descriptions of the context of a disease (e.g., the location, severity, and the affected organs).

The paper notes that medical image recognition software can also automatically generate standard reports, further streamlining the radiologist’s workflow. Indeed, Kuan says that Infervision’s software already can feed results to the electronic forms systems used in most hospitals.
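
To make the CNN-plus-RNN captioning pattern quoted above more concrete, here is a minimal PyTorch sketch of a convolutional encoder feeding a recurrent decoder that emits report tokens. The architecture, layer sizes and vocabulary are illustrative assumptions, not the design used in the cited studies or in Infervision's software; real systems rely on far larger pretrained backbones.

```python
import torch
import torch.nn as nn

class CaptionNet(nn.Module):
    """Toy CNN encoder + RNN decoder for annotating a radiograph with tokens
    (e.g., disease, location, severity)."""

    def __init__(self, vocab_size=500, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Tiny CNN encoder for a single-channel (grayscale) radiograph
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # collapse to a global image descriptor
            nn.Flatten(),
            nn.Linear(32, hidden_dim),
        )
        # RNN decoder that generates caption tokens conditioned on the image
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image, caption_tokens):
        h0 = self.encoder(image).unsqueeze(0)   # image feature as initial RNN state
        emb = self.embed(caption_tokens)        # (batch, seq, embed_dim)
        rnn_out, _ = self.rnn(emb, h0)
        return self.out(rnn_out)                # per-step vocabulary logits


# Shape check with dummy data: one 128x128 scan and a 10-token caption prefix
model = CaptionNet()
logits = model(torch.randn(1, 1, 128, 128), torch.randint(0, 500, (1, 10)))
print(logits.shape)  # torch.Size([1, 10, 500])
```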

My take

As I first discussed last January, we are on the cusp of an explosion in the use of AI in healthcare, with radiology as the ideal pilot test, as a quote I cited from a medical school professor emphasizes,

I predict that within 10 years no medical imaging study will be reviewed by a radiologist until it has been pre-analyzed by a machine.

Work by Infervision and others demonstrates that the professor is too conservative. Don’t be surprised if AI-assisted radiology is commonplace by the end of the decade.

Unlike fears of automation-fueled job losses in other industries, AI in medicine is a great example of technology augmentation, not substitution. While AI will undoubtedly cost jobs in some areas, the net effect should be positive, as the technology allows skilled doctors and technicians to spend more time on the harder cases, consulting with patients and refining treatment options. Indeed, a recent Gartner report indicates that AI should be a net producer of jobs and business value by 2020.

The lesson for businesses across industries is that AI should be seen as a force multiplier that improves productivity and allows your best, most valuable employees to focus their time and energy on creatively solving problems, developing new products and services and deeply engaging with customers, not performing rote tasks that sap their intellectual energy. As with any tool, the benefits of AI will accrue most to those who learn how best to wield its unique capabilities.
