When AI Looks at X-Rays: Interview with Qure.ai CEO, Prashant Warier

By Mohammad Saleh | January 3, 2019

If you follow the recent advances in medical technology and artificial intelligence, you may have heard people make bold claims that AI will replace tomorrow’s doctors. While technology still has a long way to go before reaching these sci-fi levels, many companies are actively designing AI systems that will accompany doctors or assist them with their daily tasks. One particularly challenging task has been to enable algorithms to examine medical images and draw intelligent conclusions, create reports, or provide recommendations. Medgadget recently had the chance to ask Qure.ai’s CEO, Prashant Warier, about the strides his company has made in automating the analysis of head CTs and chest X-rays.

Mohammad Saleh, Medgadget: Can you tell us about your background and how you came to be a part of Qure.ai?

Prashant Warier, Qure.ai: I have been a data scientist for pretty much my whole career. My previous company, Imagna, used AI to target consumers with tailored digital advertising. Imagna was acquired by Fractal Analytics in 2015. At Fractal, I took on the role of Chief Data Scientist and in that role started incubating an artificial intelligence startup in the healthcare field, which eventually became the foundation for Qure.ai.

Medgadget: Give our readers an overview of what your company’s mission is and how you’re working towards achieving it.

Warier: Our mission is to make healthcare affordable and accessible through the power of AI. Working towards this mission, we have automated the interpretation of chest X-rays and head CT scans using AI, making faster, more accurate diagnoses available to everyone.

Medgadget: What distinguishes Qure.ai from other companies using machine learning for radiology?

Warier: We have a strong research orientation and have innovated several deep learning techniques for medical image processing, as well as natural language processing on radiology reports. In the last two years, we have had 5 peer-reviewed publications and 14 scientific abstracts at medical conferences. Our most recent publication in The Lancet was a big achievement, as it was the first AI paper in The Lancet, which is among the top three science journals.

One of the challenges with deep learning is that it does not generalize very well to data from new sources. Our solutions have been trained on millions of scans from more than a hundred medical centers and from a wide range of medical devices, making them very robust and accurate. We have also validated both sets of algorithms against new datasets from multiple hospital systems, and several of these studies are available as publications or conference abstracts. Two of these abstracts, from studies conducted by Massachusetts General Hospital and Max Hospitals in India, were recently presented at the Radiological Society of North America conference in Chicago.

As a strategy, we have chosen to go deep and comprehensive on specific imaging modalities (chest X-rays and head CT scans) rather than work on many different types of scans. For example, our chest X-ray algorithm can detect the top 18 abnormalities on chest X-rays and create an automated report that is similar to a radiologist’s report. Similarly, our head CT solution can detect all types of critical abnormalities (bleeds, infarcts, midline shift, fractures, etc.) and can create an automated report. Because of this comprehensive reporting capability, our solutions have been deployed in hospitals for multiple use cases such as triaging, pre-reporting, auditing, tuberculosis screening, and even lung cancer screening.

We have also focused on building and adapting our solutions to problems affecting under-developed countries, where an extreme lack of radiologists leads to delayed reads of radiology images. Sometimes X-rays might be read by general physicians or even technicians. Tuberculosis is a disease that affects almost 11 million people annually. Our chest X-ray solution for TB screening has now been deployed in several countries. Also, to support deployment in low-resource geographies, our algorithms are equipped to run on cloud hardware, limiting the onsite hardware requirement to something as small as a $50 Raspberry Pi.

qXR detects abnormal chest X-rays, then identifies and localizes 15 common abnormalities.

Most medical AI companies use a deep learning approach called “segmentation,” which requires radiologists to mark out abnormal regions on the radiology image (a process called annotation). Marking out abnormal regions does not scale: our database of 1.2 million X-rays might need about 80 years of annotation by a radiologist (at, say, ten minutes per scan, 1.2 million scans comes to roughly 200,000 hours of work).

To avoid this, we have done two things. First, we focused on another type of deep learning approach called “classification,” which does not require abnormal regions to be marked out; it can work with a label indicating which type of abnormality is seen on a medical image. Labeling the abnormality on a scan is a much simpler task, but it would still take a long time for 1.2 million scans. Second, to enable automated labeling, we have created natural language processing techniques that extract abnormalities and their locations from existing radiology reports, allowing us to label millions of scans in a few minutes.
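
To make that labeling step concrete, here is a rough sketch in Python of how a keyword-based extractor over report text could work. This is an invented illustration, not Qure.ai’s pipeline; the finding vocabulary, negation rule, and sample report are all assumptions for the example.

import re

# Hypothetical finding vocabulary; a production NLP system is far richer.
FINDINGS = ["pleural effusion", "consolidation", "cardiomegaly", "fibrosis"]
NEGATION = re.compile(r"\b(no|without|negative for)\b")

def extract_labels(report):
    """Return abnormality labels mentioned affirmatively in a report."""
    labels = []
    for sentence in report.lower().split("."):
        if NEGATION.search(sentence):
            continue  # skip negated findings, e.g. "no pleural effusion"
        labels.extend(f for f in FINDINGS if f in sentence)
    return labels

print(extract_labels("Cardiomegaly is noted. No pleural effusion."))
# -> ['cardiomegaly']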

Sample of an automated head CT report

Medgadget: In simple terms, how do you get the algorithm to show its reasoning behind making diagnoses?

Warier: Typically, deep learning techniques are a black box, especially these classification-type algorithms. A classification algorithm will attach labels to a radiology scan, labels such as ‘abnormal’, ‘pleural effusion’, ‘intracranial bleed’, and so on. However, for a radiologist to have confidence in the algorithm’s ‘abnormal’ classification, she needs to see why the algorithm has classified that scan as abnormal. This is where attribution methods come into play. There are several different attribution methods we use, which have been described on our blog. One method, called occlusion, blacks out a small portion of the input image and measures how the probability of the image being labeled ‘abnormal’ changes. If you black out the exact region where the abnormality lies, the probability of the scan being labeled abnormal drops to almost zero. In this way, we can figure out which part of the scan caused it to be labeled ‘abnormal’ and then highlight that region for the radiologist.
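
A minimal sketch of the occlusion idea, assuming a hypothetical model(image) function that returns the probability of the ‘abnormal’ label for a 2D scan; the window size and stride below are illustrative choices, not Qure.ai’s settings.

import numpy as np

def occlusion_map(model, image, window=32, stride=16):
    """Slide a blacked-out patch across the image and record how much the
    'abnormal' probability drops; large drops mark the salient region."""
    base = model(image)  # probability of 'abnormal' for the intact scan
    h, w = image.shape
    rows = (h - window) // stride + 1
    cols = (w - window) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - window + 1, stride)):
        for j, x in enumerate(range(0, w - window + 1, stride)):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0  # black out one patch
            heatmap[i, j] = base - model(occluded)    # drop in probability
    return heatmap  # cells with the largest drops localize the finding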

Medgadget: Is that how you’re able to then generate automated reports?

Warier: For both of our solutions, we are able to generate automated reports because a) our solutions can detect all the common abnormalities seen on chest X-rays and head CT scans, b) using our attribution techniques, we can localize the anatomy affected by the abnormality in the scan, and c) we can quantify the abnormal region.
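
Putting those three pieces together, a toy version of the report-drafting step might look like the following. The findings, wording, and measurements are invented for illustration and do not reflect Qure.ai’s report format.

# Hypothetical structured findings standing in for the detect/localize/
# quantify outputs described above: (finding, location, measurement).
findings = [
    ("intracranial hemorrhage", "left frontal lobe", "volume 8 mL"),
    ("midline shift", "to the right", "4 mm"),
]

def draft_report(findings):
    """Turn structured findings into short report-style sentences."""
    if not findings:
        return "No abnormality detected."
    return " ".join(
        f"{finding.capitalize()}, {location}, {measurement}."
        for finding, location, measurement in findings
    )

print(draft_report(findings))
# -> Intracranial hemorrhage, left frontal lobe, volume 8 mL. ...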

Medgadget: Qure.ai recently published some work in The Lancet. Can you summarize the findings that you reported?

Warier: The study compared the performance of our algorithms at interpreting head CT scans against a panel of three radiologists. We were able to show that deep learning algorithms can accurately interpret head CT scans, identifying five types of hemorrhage, fractures, midline shift, and mass effect. These algorithms can be used to pre-read head CT scans as they are acquired and to triage them in order of criticality.

qER is designed for triage/diagnostic assistance in head injury or stroke. The most critical scans are prioritized on the radiology worklist so that they can be reviewed first.

Medgadget: Does the algorithm get smarter the more cases it sees? 

Warier: The algorithm can get smarter the more cases it sees provided it gets the scan as well as its corresponding radiology report. Usually, this is possible only if the solution is hosted on the cloud.

Medgadget: How do you ensure the safety of this data/patient information on the cloud?

Warier: We are HIPAA compliant and have implemented a variety of measures to protect data on our cloud. For our cloud deployments, we also implement one more level of security: we anonymize any scans that are processed on the cloud, ensuring that there is no personally identifiable information on our cloud servers.
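
For a sense of what scan anonymization can involve, here is a minimal sketch using the pydicom library to blank identifying DICOM tags; the tag list is illustrative and far from exhaustive, and this is not Qure.ai’s actual process.

import pydicom

# Illustrative list of identifying tags; real de-identification follows
# the full DICOM confidentiality profile and is far more thorough.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "ReferringPhysicianName", "InstitutionName"]

def anonymize(path_in, path_out):
    """Blank common identifying DICOM tags before a scan leaves the site."""
    ds = pydicom.dcmread(path_in)
    for tag in PHI_TAGS:
        if tag in ds:
            setattr(ds, tag, "")   # blank the identifying value
    ds.remove_private_tags()       # drop vendor-specific private tags
    ds.save_as(path_out)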

Medgadget: What do you think the future of AI in medicine will look like, ten or twenty years from now?

Warier: We’ll have ingestible sensors, handheld ultrasounds, wearable vital sign monitors, and much more. In the future, there will be several ways in which devices continuously monitor data from the human body. Artificial intelligence will play a crucial role in finding meaning in this data, correlating it to disease events, and helping pre-empt disease by medicating appropriately or suggesting lifestyle changes. This will improve the lives of people with chronic conditions, but also help many others avoid these conditions altogether.

Focusing on radiology, I like to draw an analogy to pathology, specifically to a test called the complete blood count. In the 1980s, this test was completely manual, with pathologists having to count the different cellular components in a blood sample under a microscope. It used to take 15-20 minutes per test. Today, machines can run hundreds of these tests in minutes, and pathologists can spend more time on tasks such as detecting or grading cancer. I believe radiology is headed in a similar direction, where basic modalities such as X-rays could be automated using AI and serve as a screening tool for further testing. AI will help screen and triage the normal cases, allowing radiologists to focus on the critical and complex cases.

For more information, check out the Qure.ai website or their Lancet publication.
