
Artificial Intelligence Insights for Healthcare and Government Leaders

Resource Hub for the Safe & Innovative Use of AI in Healthcare

Healthcare organizations are increasingly looking to AI to streamline workflows, ease clinicians’ burdens, and cut costs. To meet the evolving needs of the healthcare landscape, ECRI is publishing a collection of AI resources and expanding its hands-on AI support services available to industry partners.

Key Takeaway

AI-enabled tools have benefits spanning the healthcare field, from identifying at-risk patients and enabling earlier interventions; to assisting clinical decision-making by surfacing relevant insights that might otherwise go unnoticed; to automating administrative tasks, leaving more time for direct patient care. Additionally, predictive AI is being tested and used in care delivery, with its scope set to expand into more applications.

However, AI also poses significant risks to patient safety if it is not properly assessed and managed. These systems depend on high-quality data, robust clinical validation, and a clear understanding of their intended use. Inadequate training data, poor integration, or lack of transparency can lead to inappropriate outputs and degraded care. Additionally, evolving regulatory requirements and clinician trust must be addressed.

The resources listed here and additional insights from ECRI address how to procure, integrate, and monitor AI solutions safely in healthcare.

AI Resources

  • ECRI’s AI Position Paper – This includes a seven-point plan with recommendations to use AI safely and mitigate risk, plus answers to the most pressing questions on regulatory clearance assurances and how to assess functionality.
  • White House AI Action Plan recommendations – ECRI submitted practical recommendations to the Office of Science and Technology Policy to support their Artificial Intelligence Action Plan.
  • AI is the #1 hazard in ECRI’s Top 10 Health Technology Hazards for 2025 report – Download the report for recommendations to implement AI safely through a Total Systems Safety framework.

The resources listed below are excerpts from ECRI’s member-only website that have been made available to non-members to address an unmet industry need for insights into the safe, responsible use of AI applications in healthcare.

To learn more about accessing other member-only resources from ECRI, contact us.

  • Managing the Risks of AI in Healthcare – This article addresses AI-enabled medical devices that use machine learning, including those that are "configuration locked" and those that are subject to ongoing updating.
  • Ethical Use of AI in Healthcare – This article provides guidelines for data management and privacy; bias and fairness; governance; monitoring and evaluation; and provider and patient education.

Evidence Analysis: AI Software for Improving Outpatient Scheduling and Patient Chart Management – This report focuses on the risks and benefits of using AI for administrative functions. AI applications may help predict patient behaviors and appointment lengths, creating efficiencies in appointment scheduling.

AI Technology Evaluations

ECRI has evaluated the effectiveness and clinical evidence surrounding numerous AI tools in healthcare, including:

Imaging

A software application that runs on medical imaging systems to analyze echocardiograms and reduce time spent on measurements and report creation

Therapy

An AI-based cognitive behavioral therapy conversation app designed to improve symptoms for patients with depression or anxiety

Colonoscopy

AI software added to video colonoscopy systems to aid colorectal cancer screening by helping endoscopists detect adenomas during colonoscopies

Falls

An AI camera monitoring system that detects and records videos of patient falls, notifying staff immediately, for use in residential care facilities

Diabetes

An AI tool that takes high-quality retinal pictures and real-time assessments of retinal lesions to screen diabetes patients for diabetic retinopathy

Heart Failure

Automated AI-based interface that analyzes echocardiography images to aid in the diagnosis of heart failure with preserved ejection fraction

Anesthesia

Image-processing software that connects to ultrasound machines to highlight anatomic structures of interest for regional anesthesia assistance

Experts: AI in Health Tech

These healthcare safety and technology experts from ECRI are often called on to share insights on the safe and strategic use of AI in care delivery and coordination.

To interview or request an AI expert from ECRI as a speaker, contact Yvonne Rhodes at YRhodes@ECRI.org

Award-Winning AI Innovation

ECRI has honored several healthcare organizations for the innovative use of AI with the Health Technology Excellence Award, including examples featured in TechNation.

Timeline of Red Flags

ECRI has sounded the alarm about potential risks in the use and misuse of AI. AI-related topics were addressed in ECRI’s Top 10 Health Technology Hazards report in four of the last five years, and they have also been covered in ECRI’s Top 10 Patient Safety Concerns report.

2021

#8 Health Technology Hazard

ECRI identified "Artificial Intelligence (AI) Applications for Diagnostic Imaging May Misrepresent Certain Patient Populations" as the number 8 concern in the Top 10 Health Technology Hazards for 2021 report. This ranking highlighted growing concerns within the medical community about potential biases in AI-driven diagnostic tools, particularly how these systems may produce inaccurate or misleading results for patients from underrepresented or diverse demographic groups. The report underscored the urgent need for more equitable data collection and algorithm training to ensure that AI applications in healthcare serve all populations effectively and fairly.

2022

#7 Health Technology Hazard

For the Top 10 Health Technology Hazards for 2022, ECRI once again emphasized the potential risks associated with artificial intelligence in healthcare by naming “AI-Based Reconstruction Can Distort Images, Threatening Diagnostic Outcomes” as one of the top hazards of the year. This designation brought attention to a critical issue: the use of AI algorithms in reconstructing medical images—such as those from CT scans or MRIs—can sometimes lead to image artifacts or distortions that may not be immediately apparent to clinicians. These distortions have the potential to obscure or mimic clinical findings, ultimately jeopardizing diagnostic accuracy and patient safety.

2024

#5 Health Technology Hazard

In the Top 10 Health Technology Hazards for 2024 report, ECRI pinpointed insufficient governance of AI in medical technologies as a potential source of danger and offered practical recommendations for reducing risks. The report emphasized that the rapid integration of AI into clinical environments—ranging from diagnostic tools to decision-support systems—has outpaced the development of appropriate oversight structures, regulatory frameworks, and institutional policies. Without robust governance mechanisms in place, healthcare organizations face increased risks related to data bias, lack of transparency, algorithmic errors, and the misuse or misunderstanding of AI outputs.

2024

#4 Patient Safety Concern


In the Top 10 Patient Safety Concerns for 2024 report, ECRI identified "Unintended Consequences of Technology Adoption including AI" as one of the most pressing challenges facing healthcare organizations. As hospitals and health systems increasingly turn to advanced technologies—such as AI-powered tools, electronic health records, and remote monitoring devices—to improve care delivery, they also face new and often unforeseen risks. ECRI warned that without thoughtful planning, training, and oversight, these technologies can inadvertently introduce safety hazards, such as clinician overreliance on AI recommendations, workflow disruptions, alert fatigue, or missed diagnoses due to algorithmic errors.

2025

#1 Health Technology Hazard

ECRI ranked “Risks with AI-Enabled Health Technologies” as the number one health technology hazard in the Top 10 Health Technology Hazards for 2025 report. As AI becomes increasingly embedded in diagnostic tools, decision-support systems, and patient monitoring platforms, ECRI warned that the potential for harm has escalated—particularly when these technologies are deployed without sufficient oversight, validation, or understanding of their limitations.

2025

#2 Patient Safety Concern

In the Top 10 Patient Safety Concerns 2025 report, ECRI highlighted “Insufficient Governance of AI in Healthcare” as the second most critical concern impacting patient safety across care settings. ECRI emphasized that while AI offers transformative potential—enhancing diagnostic accuracy, streamlining workflows, and supporting clinical decision-making—the absence of robust governance structures can lead to significant risks.

Get in Touch

Learn more about how our impactful work can benefit you and your organization.