
How Accurate Are Hospital AI Tools? The Truth Might Surprise You



Every day, news reports claim that artificial intelligence (AI) or machine learning algorithms are ‘revolutionizing’ healthcare through hospital AI tools. The latest AI models uncover patterns in genetic data, for example. Or cognitive technology improves patient care.

These news reports tell the same narrative:

Hospital AI tools make it easier to predict and analyze trends in modern healthcare.

But how accurate are these tools, really?

Those news reports often ignore the methodologies used during the design, development, and deployment stages of the AI life cycle. For example, you never hear about how developers tested AI tools for healthcare. Or what sample sizes they used. Or how they analyzed data during the development and testing phases. You also never hear how effective these tools are in a real-world healthcare setting.

And that’s a problem.

The media makes it appear like all AI healthcare predictive models will be successful, but this isn’t always the case. Some data professionals are now calling for greater documentation and transparency during AI project cycles. Otherwise, they say, some AI tools could do more harm than good to patients and your healthcare system.

How Accurate Are Hospital AI Tools?

Many patients presume AI tools are accurate because they have highly complex algorithms for solving problems, such as diagnosing diseases, automating administrative tasks, or reducing healthcare operational costs.

The consensus?

AI tools use algorithms that clever data scientists have developed. How can they possibly be wrong?

But there are multiple cases of AI tools failing to deliver the results their developers promised. One of the most famous examples is the Epic Sepsis Model (ESM), which hundreds of hospitals across the United States use to predict the onset of sepsis, a condition that kills roughly 11 million people worldwide every year. More than 100 health systems rely on the algorithm to flag at-risk patients so providers can begin treatment early.

However, an independent investigation led by Karandeep Singh, an assistant professor at the University of Michigan, found that hospital AI tools like the ESM fall short of expectations. In a study of nearly 30,000 patients, Singh’s team found that the ESM missed two-thirds of sepsis cases while also issuing false alarms about sepsis infections.

“This external validation cohort study suggests that the ESM has poor discrimination and calibration in predicting the onset of sepsis,” the researchers write. “The widespread adoption of the ESM despite its poor performance raises fundamental concerns about sepsis management on a national level.”
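The researchers’ critique rests on two standard evaluation concepts: discrimination (does the model rank patients who develop sepsis above those who don’t?) and calibration (do predicted risks match observed rates?). As a rough illustration of the first, here is a minimal pure-Python sketch of the AUROC metric commonly used to measure discrimination. The labels and scores below are hypothetical, purely for illustration, and are not drawn from the ESM study.

```python
# AUROC = probability that a randomly chosen positive case receives a
# higher risk score than a randomly chosen negative case. A value of
# 0.5 means the model ranks patients no better than a coin flip.

def auroc(labels, scores):
    """Compute AUROC by pairwise comparison (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical outcomes (1 = developed sepsis) and model risk scores.
labels = [1, 0, 1, 0, 0, 1, 0, 0]
scores = [0.9, 0.2, 0.4, 0.3, 0.5, 0.8, 0.1, 0.6]

print(round(auroc(labels, scores), 2))  # prints 0.87
```

A model with “poor discrimination,” in the study’s terms, is one whose AUROC sits close to 0.5 on patients it was never trained on, no matter how well it scored during development.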

Read more: Strengthening Healthcare With AI.


Why Are AI Tools Often Wrong?

ESM isn’t the only model that might not perform as expected in your healthcare environment. Other hospital AI tools look promising during the design, development, and testing stages of the AI life cycle, but they fail to deliver results once implemented in a real clinical setting.

Why is that?

There are various reasons:

  • No healthcare body independently assesses these tools before deployment.
  • No universally accepted evaluation system guarantees the accuracy of these technologies.
  • Healthcare providers sometimes lack the knowledge or skills to apply complex AI and machine learning algorithms to their hospital processes. Again, there are no clear standards on how providers should use these technologies.
  • Developers don’t always publish the methodologies they use when developing and testing AI healthcare tools. There’s little documentation or transparency about the effectiveness of predictive models.
  • Some vendors provide healthcare organizations with little guidance and support. That means hospital employees cannot use these programs without extensive training, which takes time and resources.

Now some healthcare professionals are calling for a shake-up of artificial intelligence in hospitals.

Laure Wynants, an epidemiologist at Maastricht University who analyzes AI tools for hospitals, studied 232 algorithms that diagnose patients with COVID-19 symptoms and predict how sick these patients might get. She concluded that none of these algorithms were suitable for clinical use, and only two showed enough promise for future testing.

“It’s shocking,” Wynants told MIT Technology Review. “I went into it with some worries, but this exceeded my fears.”

These findings differ from those previously published in the media, which often commend the accuracy of AI in healthcare settings. A review in The Lancet Digital Health, for example, says AI can detect diseases from medical imaging with the same accuracy as human doctors.

Read more: Seven Ways Personalized Medicine Revolutionizes Healthcare and Pharmaceutical Research.


Increasing Transparency

So how can developers improve accuracy? It’s all about transparency.

Creators of these tools should disclose the methodologies they used to design and develop algorithms; they should also work with more clinicians. Those are the thoughts of Derek Driggs, a machine learning researcher at Cambridge University. He, like Wynants, examined algorithms used to predict and diagnose COVID-19, and concluded that none of the 415 AI tools he studied were suitable for clinical use.

Driggs, Wynants, and other data science professionals believe AI and machine learning provide multiple benefits for healthcare providers. These technologies can reduce the time to diagnose patients, identify healthcare risks, and potentially save lives. First, however, providers need a deeper understanding of the artificial intelligence they integrate into their hospital workflows. Knowledge like how many patients these tools have been validated on. Or how accurate they really are.

Read more: How AI Will Improve Our Work and Life.

Final Word

The media is often quick to announce the latest AI healthcare technology and how it might save lives. But there’s little information about the accuracy of these tools in real-world clinical environments. Now, some data professionals are urging developers to improve transparency before hospitals invest in predictive models.

For more insights into hospital AI tools and other healthcare technologies, check out Intech.Media’s blog.

