Drawing Capital Newsletter
September 11, 2020
This week, we delve into artificial intelligence (“AI”), its promising uses for medical imaging, and catalysts for improving healthcare.
A Crisis in Medical Imaging
The global radiology crisis has been steadily worsening: demand for cross-sectional imaging keeps growing while the supply of trained, experienced radiologists remains flat.
In a report published by the Association of American Medical Colleges (AAMC), the data highlighted an urgent need for physicians, projecting a shortage of nearly 122,000 by 2032 (1).
The UK is facing similar challenges: the Royal College of Radiologists reports that only 2% of radiology departments can fulfill their imaging reporting requirements within contracted hours, while CT/MRI scan volumes have been increasing 10% year-over-year since 2013 (2).
These problems are further exacerbated by complex funding issues, an aging population, and the number of people living with long-term conditions.
AI as a Solution
In the 2010s, deep learning became practical through advancements in modern GPUs, the growing availability of data, and a massive increase in computing power. Consequently, AI solutions are being implemented across industries as a means to improve the speed and accuracy of skilled professionals in their day-to-day jobs.
In the current clinical radiology workflow (depicted below), there are dozens of opportunities for AI to improve end-to-end logistics. Today, we will be exploring three key image-based tasks that artificial intelligence can optimize in the radiology workflow: 1) abnormality detection, 2) characterization, and 3) subsequent monitoring.
[Figure: the clinical radiology workflow (3)]
Abnormality Detection
Currently, radiologists draw on years of schooling and experience to manually recognize specific patterns and abnormalities in patient images. Relying on an individual to conduct abnormality detection introduces a few high-impact variables: the radiologist’s focus and attention to detail, daily patient volume and time constraints, experience in recognizing a broad array of abnormalities, experience in recognizing variance within specific types of abnormalities, and other exogenous factors.
With an AI-based solution, models can be trained on millions or billions of images and records from patient diagnoses around the globe, improving with each individual analysis. This dramatically reduces the turnaround time on processed images, which increases the bandwidth of healthcare professionals and decreases the time that patients must wait for their results. It also lays the groundwork for a platform that can store, access, and utilize exponentially more data to draw more accurate conclusions.
Example: In late 2018, the Google Research health team, DeepMind Health, and a team from Google’s hardware division focused on healthcare-related applications combined forces to create Google Health. 2019 was an exciting year for the Google Health team, which showed that using deep learning in mammography can assist physicians in better spotting breast cancer, a disease that affects 1 in 8 women in the US. Deep learning models were trained on de-identified data from a healthcare system based in the United Kingdom, yet generalized accurately to the diagnosis of patients in a US-based healthcare system (10).
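To make the detection step concrete, below is a minimal sketch (in PyTorch) of how a deep learning classifier might score a single chest X-ray for abnormality. This is purely illustrative and is not Google Health’s system: the file name is hypothetical, and the binary “abnormal” head shown here is untrained, whereas a production model would be fine-tuned and validated on a large labeled dataset.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing; real radiology pipelines use
# modality-specific windowing and normalization instead.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Backbone pretrained on natural images, with a binary "abnormal vs. normal"
# head. The head is untrained here -- in practice it would be fine-tuned on
# a large set of labeled studies before any clinical use.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

def abnormality_score(image_path: str) -> float:
    """Return a probability-like score that the study is abnormal."""
    img = Image.open(image_path)
    x = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(x)
    return torch.sigmoid(logit).item()

# Hypothetical usage:
# print(abnormality_score("chest_xray_0001.png"))
```

The value, of course, comes less from the handful of lines above than from the scale of curated training data and the clinical validation behind the model.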
Characterization
After the detection of a general abnormality, characterization takes place - a broad term referring to the segmentation, diagnosis, and staging of a disease. This entails the review of quantifiable features such as size, shape, and texture.
Segmentation, in particular, has a strong use case for an AI-driven approach. Non-diseased organs can be segmented fairly easily, but identifying the extent of any diseased tissue is significantly more difficult and of vital importance. For example, within clinical radiation oncology, tumor and non-tumor tissues must be accurately segmented to ensure appropriate radiation planning. Typically, physicians are supported in this task by computer-aided diagnosis (CADx), the younger sibling of computer-aided detection (CADe).
CADx is based on a foundation of engineered assumptions and previous segmentation results, which, if affected by even a minor degree of inaccuracy, can compound the inaccuracy of future results. For example, if the assumptions about the makeup of healthy anatomy are based on a small dataset or sample size, the CADx model may apply an unrepresentative generalization to a broader population of future patients. This could lead to inaccurate segmentation and subsequent treatment.
When CADx systems are enhanced using deep learning methods, the model is able to continuously learn from diverse patient populations and form a general understanding of variance in anatomy and health conditions. In proper practice, this model can provide accurate outcomes of both common and uncommon cases.
Example: Researchers trained, tested, and evaluated their convolutional neural network on over 210 MRI scans. After training, it takes only a few seconds to segment lesions on a new image, according to the researchers (7).
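As a rough sketch of what a learned segmentation model looks like, the snippet below defines a deliberately tiny encoder-decoder network (a simplified, skip-connection-free cousin of the U-Net architecture commonly used in medical imaging) and produces a per-pixel lesion mask for a synthetic MRI slice. It illustrates the structure only; the network is untrained and the input is random, whereas the systems described above learn from large sets of expert-annotated scans.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A deliberately small 2D encoder-decoder for lesion segmentation.
    Real clinical systems use full (often 3D) U-Nets with skip connections,
    trained on thousands of annotated scans."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                          # downsample 2x
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2),  # upsample back to input size
            nn.ReLU(),
            nn.Conv2d(16, 1, 1),                      # per-pixel lesion logit
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = TinySegNet().eval()

# One synthetic 256x256 single-channel MRI slice, batch of 1.
slice_2d = torch.randn(1, 1, 256, 256)
with torch.no_grad():
    mask = torch.sigmoid(model(slice_2d)) > 0.5       # boolean lesion mask

print(mask.shape, mask.float().mean().item())         # fraction of pixels flagged
```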
Subsequent Monitoring
Monitoring is an essential part of the overall treatment and measurement of efficacy. It can provide physicians with an accurate assessment of results and any need for additional treatments or effective post-treatment care.
The workflow for monitoring involves a comparison whereby images of the diseased tissue are aligned across a series of scans, followed by a routine evaluation using predefined protocols. In oncology in particular, these protocols center on the analysis of tumor size. In an effort to provide a more streamlined process, protocols such as the Response Evaluation Criteria in Solid Tumors (RECIST) take a fairly simplistic approach, illustrated in the sketch below. The issue with a simplistic approach is that it may not adequately capture subtle variations such as texture and homogeneity within the object. It also becomes difficult to deal with multiple objects and physiological changes over time.
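To see why RECIST-style measurement is simple and fast but blind to texture and heterogeneity, here is a small, simplified sketch of the core arithmetic: summing the longest diameters of target lesions and bucketing the change into response categories. Real RECIST 1.1 includes additional rules (lymph-node criteria, new-lesion handling, and so on) that are omitted here, and the example measurements are hypothetical.

```python
# Simplified RECIST-style response assessment: compare the sum of longest
# lesion diameters (SLD, in mm) across timepoints.

def classify_response(baseline_diameters, followup_diameters, nadir_sld=None):
    baseline_sld = sum(baseline_diameters)
    followup_sld = sum(followup_diameters)
    if nadir_sld is None:
        nadir_sld = baseline_sld
    nadir_sld = min(nadir_sld, baseline_sld)   # smallest SLD seen so far

    if followup_sld == 0:
        return "Complete Response (CR)"        # all target lesions disappeared
    if followup_sld <= 0.7 * baseline_sld:
        return "Partial Response (PR)"         # >= 30% decrease from baseline
    if followup_sld >= 1.2 * nadir_sld and (followup_sld - nadir_sld) >= 5:
        return "Progressive Disease (PD)"      # >= 20% increase and >= 5 mm absolute
    return "Stable Disease (SD)"

# Hypothetical example: two target lesions shrink from 42 mm and 28 mm to
# 25 mm and 18 mm. The SLD drops from 70 mm to 43 mm (about a 39% decrease).
print(classify_response([42, 28], [25, 18]))   # -> Partial Response (PR)
```

Note that nothing in this arithmetic sees shape, texture, or internal heterogeneity - exactly the information a learned model can exploit.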
Any subsequent change analysis requires a multi-part procedure that combines different datasets, and as datasets change, the sequence becomes prone to registration errors. Computer-aided change analysis built on deep learning architectures eliminates manual feature engineering and allows a joint data representation to be learned across scans.
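Registration itself can be illustrated with an intentionally tiny example. The sketch below brute-forces the integer translation that best aligns a synthetic follow-up image to a baseline image by minimizing mean squared error. Clinical registration is deformable, three-dimensional, and often multi-modal, but the point is the same: if this step drifts, every downstream change measurement inherits the error.

```python
import numpy as np

def register_translation(fixed, moving, max_shift=10):
    """Brute-force search for the integer (dy, dx) shift that best aligns
    `moving` to `fixed` by minimizing mean squared error."""
    best_shift, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            err = np.mean((fixed - shifted) ** 2)
            if err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

# Synthetic follow-up scan: the baseline image shifted by (3, -2) pixels.
baseline = np.random.rand(64, 64)
followup = np.roll(np.roll(baseline, 3, axis=0), -2, axis=1)
print(register_translation(baseline, followup))   # expected: (-3, 2)
```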
Example: Researchers at the National Institutes of Health, the Ping An Insurance company, and a researcher presently at NVIDIA developed a deep learning-based method that can automatically annotate tumors in cancer patients. As the researchers note, “Measuring tumor diameters requires a great deal of professional knowledge and is time-consuming. Consequently, it is difficult and expensive to manually annotate large-scale datasets.” (8)
Noteworthy Market Participants
Qure.ai (private company)
Founded in 2016, Qure.ai is a healthcare technology company aiming to make healthcare more accessible through the power of deep learning. Its qXR product detects abnormal chest X-rays, then identifies and localizes 15 common abnormalities. The company also offers a product for head CT scans, which are a first-line diagnostic modality for patients with head injury or stroke.
Total Funding: $16M in Feb 2020, led by Sequoia
HQ: Mumbai, India
Website: https://qure.ai/
Nvidia (NVDA)
Although Nvidia is not an immediate participant in the healthcare space, it provides the hardware backbone on which AI, machine learning (ML), and deep learning (DL) models are built. As a leader in the GPU space, its progress and advancements in hardware have a direct and immediate impact on the capabilities of AI in the medical imaging space.
NVDA YTD: +110.45%, vs VGT (Vanguard Information Tech ETF): +24.45%, vs S&P 500: +4.63%
Google Health, Verily, and DeepMind (GOOG/GOOGL)
From DeepMind’s announcement: “Over the last three years, DeepMind has built a team to tackle some of healthcare’s most complex problems - developing AI research and mobile tools that are already having a positive impact on patients and care teams. Today [09/18/2019], with our healthcare partners, the team is excited to officially join the Google Health family. Under the leadership of Dr. David Feinberg, and alongside other teams at Google, we’ll now be able to tap into global expertise in areas like app development, data security, cloud storage, and user design to build products that support care teams and improve patient outcomes.”
GOOGL YTD: +13.35%, vs VGT (Vanguard Information Tech ETF): +24.45%, vs S&P 500: +4.63%
Conclusion
Like many technologies, artificial intelligence does not come without its limitations - especially in the arena of medical imaging. The heart and soul of AI-based solutions is data, and for some diseases - especially those that are less common - professionals lack the data required to support automation. In the case of rare diseases, these issues are compounded by a lack of experienced individuals who can verify the accuracy of results.
While we are on the verge of very exciting developments in AI in healthcare, we are still in the near-infancy stages, and we estimate greater traction over a five- to ten-year time frame. One out of every four Americans receives a CT scan annually, and one in ten receives an MRI scan. With the millions of images produced every year, and with the implementation of systems such as the Picture Archiving and Communication System (PACS), we can further grow and scale data in an organized fashion.
Today we explored abnormality detection, segmentation, and subsequent monitoring as three promising use cases for artificial intelligence in radiology. However, on a parting note, there are vast possibilities and applications for AI as a general tool in medicine, as illustrated in Accenture’s analysis (9).
Given our focus on innovation over a five- to ten-year period, we expect very promising platforms and applications to arise in the AI space in the near future. As regulatory and scalability concerns are alleviated, we believe that the opportunities in this field will proliferate across every sector and industry, creating significant competitive advantages for successful early adopters.
References
"Artificial intelligence in medical imaging: threat or opportunity ...." 24 Oct. 2018, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6199205/. Accessed 9 Sep. 2020.
"The Radiologist Shortage and the Potential of AI - Aidoc." 1 Apr. 2020, https://www.aidoc.com/blog/is-radiologist-shortage-real/. Accessed 9 Sep. 2020.
"Artificial intelligence in radiology - NCBI - NIH." https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6268174/. Accessed 9 Sep. 2020.
"IBM News room - United States - IBM." 13 Oct. 2015, https://www-03.ibm.com/press/us/en/pressrelease/47839.wss. Accessed 9 Sep. 2020.
"Adapting to Artificial IntelligenceRadiologists and ... - Sirm." 29 Nov. 2016, https://areasoci.sirm.org/uploads/Documenti/SDS/911147f73cc1ecf3ea91e43eb0f3ba04810dd2ea.pdf. Accessed 9 Sep. 2020.
"Developing Deep Learning Models for Chest ... - Google AI Blog." 3 Dec. 2019, http://ai.googleblog.com/2019/12/developing-deep-learning-models-for.html. Accessed 9 Sep. 2020.
"NIH Uses AI to Detect Multiple Sclerosis with Human Level ...." 3 Apr. 2018, https://news.developer.nvidia.com/nih-uses-ai-to-detect-multiple-sclerosis-with-human-level-accuracy/. Accessed 9 Sep. 2020.
"AI Helps Monitor Cancer Treatment – NVIDIA Developer News ...." 2 Jul. 2018, https://news.developer.nvidia.com/ai-helps-monitor-cancer-treatment/. Accessed 9 Sep. 2020.
Forbes Insights: AI And Healthcare: A Giant Opportunity." 11 Feb. 2019, https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/. Accessed 9 Sep. 2020.
"Google Research: Looking Back at 2019, and ... - Google AI Blog." 9 Jan. 2020, http://ai.googleblog.com/2020/01/google-research-looking-back-at-2019.html. Accessed 9 Sep. 2020.
_______________
This letter is not an offer to sell securities of any investment fund or a solicitation of offers to buy any such securities.
An investment in any strategy, including the strategy described herein, involves a high degree of risk. Past performance of these strategies is not necessarily indicative of future results. There is the possibility of loss and all investment involves risk including the loss of principal.
The information in this letter was prepared by Drawing Capital, is believed by Drawing Capital to be reliable, and has been obtained from sources believed to be reliable. Drawing Capital makes no representation as to the accuracy or completeness of such information. Opinions, estimates, and projections in this letter constitute the current judgment of Drawing Capital and are subject to change without notice.
Any projections, forecasts, and estimates contained in this document are necessarily speculative in nature and are based upon certain assumptions. In addition, matters they describe are subject to known (and unknown) risks, uncertainties, and other unpredictable factors, many of which are beyond Drawing Capital’s control. No representations or warranties are made as to the accuracy of such forward-looking statements. It can be expected that some or all of such forward-looking assumptions will not materialize or will vary significantly from actual results. Drawing Capital has no obligation to update, modify or amend this letter or to otherwise notify a reader thereof in the event that any matter stated herein, or any opinion, projection, forecast, or estimate set forth herein, changes or subsequently becomes inaccurate.