Is artificial intelligence really coming to clinical healthcare?

By Ranjani Srinivasan

Artificial intelligence (AI) algorithms pervade our world. Weather predictions and strategies for sports teams, apparel shopping, and even our Saturday night movie recommendations all have AI built into them. AI scientists and tech companies have long eyed healthcare, with its variety and complexity in problems, as fertile ground for disruptive technology. But have they gotten anywhere? Is AI really taking over our hospitals?   

AI in healthcare is a fairly large umbrella term for a range of technologies, a market estimated to be valued at over US$150 billion by 2026. Clinical diagnostics including imaging and pathology, clinical drug discovery, data-driven surgical robotics, hospital management systems and patient experience, and personalized medicine are some of the largest areas where AI is touted to make a dent. However, it is often unclear how much the technology has moved beyond research settings and into the clinic. So here, we take a deep dive into whether artificial intelligence is being employed in clinical healthcare in practice, especially in diagnostics and predictive systems, and what it implies for clinical infrastructure and decision-making today and in the future.

What is artificial intelligence?

“Artificial intelligence” is a term that has been misused and abused over the years, but at its core, refers to systems that emulate human thinking and decision-making. It is often used interchangeably with machine learning (and more recently, deep learning), although these approaches are really subsets of AI that specifically refer to statistical models that can learn from data.

In the context of this article, AI refers to algorithms that pick up on trends in data and then perform a variety of tasks, such as predicting a future data point, classifying a data point as belonging to one type versus another, or developing a policy that strategizes what actions to take toward optimal outcomes.
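As a toy illustration of the "classify" task, the sketch below learns a single decision threshold from labeled one-dimensional data. The "biomarker" readings and labels are invented for illustration only; real clinical classifiers learn far higher-dimensional patterns.

```python
# Toy sketch: a one-feature classifier that "learns" a decision
# threshold from labeled data (all values are invented).

def fit_threshold(values, labels):
    """Pick the threshold that best separates label 0 from label 1."""
    best_t, best_acc = None, -1.0
    for t in sorted(values):
        preds = [1 if v >= t else 0 for v in values]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical "biomarker" readings and disease labels
readings = [0.2, 0.4, 0.5, 0.9, 1.1, 1.3]
labels   = [0,   0,   0,   1,   1,   1]

threshold = fit_threshold(readings, labels)
print(threshold)  # 0.9: everything at or above is classified as "disease"
```

The same learn-from-examples loop, scaled up to millions of parameters, underlies the deep learning systems discussed below.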

AI in diagnostics:

With the abundance of electronically stored medical images and the parallel rise of computer vision technology, AI-assisted image-based diagnosis has risen to prominence. Annotated X-rays, CT scans, and pathology images are often used to train algorithms to learn patterns and anomalies. In 2016, IBM Watson Health developed a radiology solution code-named Avicenna.

As an early adopter in the space, IBM Watson Health generated a lot of hype, which has been difficult to live up to completely. For example, IBM was reportedly seeking to sell Watson Health in early 2021 due to lack of profitability. However, as John Frownfelter puts it, “although the era of clinical AI may have started with Watson, it certainly doesn’t end with Watson.” The technology served as an example and impetus for the development of many other technologies that have contributed to today’s AI ecosystem.

Image-based AI diagnostics are beginning to be deployed elsewhere: Microsoft's InnerEye technology helps segment and identify tumors at Addenbrooke's Hospital in Cambridge, and IDx-DR is a diabetic retinopathy lesion detection device first used at University of Iowa Health Care before expanding to 20 other institutions. IDx-DR is the first fully autonomous AI diagnostic system to be approved by the FDA for use.

The majority of AI systems in healthcare are decision-support tools, with the ultimate diagnosis made by a clinician rather than by the algorithm itself. The FDA's exhaustive list of approved AI-enabled medical devices shows 343 devices (as of Jan 26, 2022), with diagnostic devices, most of them applied to radiology, making up a significant share. Hematology, neurology, and ophthalmology are some areas that have also adopted AI-assisted diagnostics. However, adoption has been justifiably slow and cautious.

AI in predictive analytics:

Another avenue where AI has received a lot of attention is in prediction and preventative care. Researchers build complex predictive models based on data from electronic health records (EHRs) to assess impending patient risk and the odds of negative outcomes like septic infections or acute organ damage, among other things. The hope is to alert clinicians to developing risks to prevent or treat the condition in time. 
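The flavor of such a risk model can be sketched as a simple logistic score over a few EHR-style vitals. The feature names, weights, and alert threshold below are invented for illustration and bear no relation to any real clinical model.

```python
import math

# Hedged sketch: a logistic risk score over hypothetical EHR features.
WEIGHTS = {"heart_rate": 0.03, "temperature": 0.8, "lactate": 0.9}
BIAS = -35.0  # invented intercept

def sepsis_risk(record):
    """Map a patient record to a probability-like risk score."""
    z = BIAS + sum(WEIGHTS[k] * record[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

patient = {"heart_rate": 110, "temperature": 38.5, "lactate": 3.2}
risk = sepsis_risk(patient)
if risk > 0.5:  # hypothetical alert threshold
    print("alert clinician")
```

In deployed systems, the choice of that alert threshold is itself consequential: too low and clinicians drown in false alarms, too high and deteriorating patients are missed.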

EHR giant Epic Systems has a built-in early warning system that has been used in many hospital systems in the United States but has come under a lot of scrutiny for performing poorly. Sepsis early warning systems have also been built independently by many hospitals around the country, but little is known about their performance and reliability. Warning systems have likewise been developed for pulmonary disease and kidney disease, the latter being pursued by Alphabet.

AI-enabled predictive systems are not yet deployed widely for specific conditions, and wherever they have been deployed for generic risk scores, they have come under scrutiny. On a population level, researchers have more recently grown enthusiastic about warning systems for disease outbreaks and pandemics. Meanwhile, the impact of early warning systems has been called into question, with one study suggesting that treatment does not typically escalate per protocol and questioning the assumptions behind the alarm systems. These technologies are still nascent, and much has yet to be done in the way of policy, protocol, and behavior to ensure patient safety and improved care.

Challenges to adoption of artificial intelligence in healthcare:

Even as artificial intelligence technologies for healthcare evolve, they face several challenges to adoption in clinical settings, stemming from issues of reliability and accuracy as well as privacy and security concerns.

Data curation, bias, and transportability:

AI algorithms learn from data. And what they learn is often opaque to the very scientists who designed the algorithm; scientists validate the algorithm’s outputs, but the promise of AI lies in detecting complex patterns that humans do not immediately discern. This presents challenges of two main types: (1) data-related, and (2) algorithm-related. 

First, data have to be well curated. For example, if disease labels are used to train the algorithm, they should be error-free. This is not straightforward, as opinions on labeling tumors and scans differ among radiologists. There are many other data-related challenges as well, such as imbalanced datasets and systematically missing values.
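Two of the curation problems mentioned above, class imbalance and missing values, can at least be screened for mechanically before training. The toy records below are invented for illustration.

```python
from collections import Counter

# Invented mini-dataset with one missing value and imbalanced labels.
records = [
    {"label": "benign",    "age": 54, "marker": 1.2},
    {"label": "benign",    "age": 61, "marker": None},  # missing value
    {"label": "benign",    "age": 47, "marker": 0.9},
    {"label": "malignant", "age": 70, "marker": 3.4},
]

# Check 1: label balance (ratio of most- to least-common class)
counts = Counter(r["label"] for r in records)
imbalance = max(counts.values()) / min(counts.values())

# Check 2: fraction of records missing the "marker" field
missing = sum(r["marker"] is None for r in records) / len(records)

print(imbalance, missing)  # 3.0 0.25: flag both before training
```

Checks like these catch only the mechanical problems; label disagreement among expert annotators requires human adjudication.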

Second, the algorithm might learn undesirable patterns that do not capture the true underlying patterns of disease, even though validation performance might be very good. There have been instances of racial or gender biases that have crept into algorithms due to badly learned patterns. The algorithm might do well in one hospital but not in another, due to transportability issues. The field of interpretable machine learning strives to eliminate a black-box approach to these algorithms. 
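For a simple linear scoring model, interpretability can be as direct as reading off each feature's contribution to the score. The weights and features below are hypothetical, chosen to show how a proxy variable can expose a bias the model has silently learned.

```python
# Hypothetical linear risk model; all weights and features are invented.
weights = {"age": 0.02, "blood_pressure": 0.01, "zip_code": 0.4}

def explain(record):
    """Per-feature contribution to the raw (linear) risk score."""
    return {k: weights[k] * record[k] for k in weights}

patient = {"age": 60, "blood_pressure": 140, "zip_code": 7}
contributions = explain(patient)

# If a proxy variable like zip_code dominates, the model may be
# encoding socioeconomic or racial bias rather than physiology.
top = max(contributions, key=contributions.get)
print(top)  # zip_code
```

Deep models do not admit this direct read-off, which is why the interpretability field has developed post-hoc explanation methods instead.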

Ethical concerns around autonomy of decision making also abound. Exercising caution about the possibilities of AI is not just necessary, but is a fundamental responsibility for all stakeholders to ensure equity and safety in practice.

Privacy and data security:

Privacy and confidentiality laws (HIPAA) are meant to protect patient information from vulnerabilities such as data breaches. Strict measures are taken to de-identify patient information and limit access. Nevertheless, hacking incidents, unauthorized access, and theft leave tens of millions of records exposed each year. In 2020, the number was estimated at over 29 million in the United States alone, and HIPAA settlements and penalties exceeded $13 million the same year. These costs are an added deterrent to the adoption of AI-assisted healthcare, yet healthcare institutions cannot move toward the future without active collaborations with major technology companies.

Some of the biggest perceived risks of AI in healthcare include technological, ethical (trust factors), and regulatory concerns. Researchers are becoming vocal about understanding sources of bias instead of marching forward with a one-size-fits-all approach. 

So, where is AI taking off in the healthcare space?

While diagnostics and preventative systems are just beginning to be incorporated into mainstream health systems, other kinds of artificial intelligence for healthcare — mainly hospital management and administrative workflow technologies — have met less resistance. Many of these are not regulated by the FDA, as they are considered low-risk software functions exempted under the 21st Century Cures Act.

Virtual nursing assistants like CareAngel, Sensely, and Ocuvera are widely used for messaging, calling patients about appointments, and triage. AI-enabled administrative workflow platforms aim to reduce clinician burden by automating data entry. Nuance in partnership with Microsoft Teams, the Mayo Clinic in partnership with IBM, and Johns Hopkins in partnership with GE are all working on different ecosystems to make hospital administration more efficient.

The promise of artificial intelligence in clinical healthcare:

AI holds great promise to uncover the complexities of human health and disrupt the delivery of care. However, the current healthcare ecosystem is fragmented, and AI systems are only beginning to be tested in clinics, where they often fall short of expectations. Regulatory concerns are ever-evolving; it is imperative to have equitable, transparent, and reliable technologies in our healthcare institutions. While explainability has started to occupy center stage in clinical AI discourse and efforts to develop new, less-opaque algorithms are underway, the technology is nascent and still largely confined to academic settings.

If done right, there is potential for AI to bring equity to healthcare in serving historically underserved populations. Regulatory bodies must play an active role in these conversations, adapt to new developments, and ensure that any use of technology is done with care and deliberation.

With cautious optimism, one might expect that diagnostics and preventative systems will be in more frequent use in a few years, but it remains to be seen how many of them will be truly autonomous. It is almost certain that AI will not replace clinicians in the foreseeable future.

However, AI tools can help clinicians go beyond their own training and experience, by augmenting their capabilities with data and collective wisdom across a corpus of similar cases around the world. AI can also dramatically reduce the administrative burden in clinical systems. In this way, it can free up clinicians’ time to return healthcare to what it should be about — human interactions. 

If you have any questions or would like to know if we can help your business with its innovation challenges, please leave your info here or contact Jeremy Schmerer, Healthcare & Life Sciences Lead, directly at jschmerer@prescouter.com or Linda Cohen, Strategic Accounts Manager at lcohen@prescouter.com.
