AI in Healthcare Applications and the Potential for Preventable Harm
The advent of artificial intelligence (AI) in healthcare promises transformative changes that could significantly improve clinical outcomes, reduce operational costs, and enhance the quality of care provided to patients. By leveraging AI’s ability to process and analyze vast amounts of data with speed and accuracy, healthcare organizations are increasingly turning to AI for tasks ranging from medical diagnoses and personalized treatment plans to improving administrative processes. Yet, as with any groundbreaking technology, AI also brings a host of potential risks—especially when it comes to its application in clinical settings. When not properly managed, AI could inadvertently introduce preventable harm to patients, exacerbate health disparities, or undermine trust in the healthcare system.
The Promise and Pitfalls of AI in Healthcare
AI’s potential in healthcare is undeniable. The technology can support clinical decision-making by identifying patterns and offering predictions that might elude even the most seasoned healthcare professionals. In areas such as medical imaging, diagnostic tools, and predictive analytics, AI has shown substantial promise in improving the precision of diagnoses, accelerating treatment plans, and even assisting in robotic surgery.
However, despite these positive applications, AI’s integration into healthcare also poses significant challenges. One of the most pressing concerns is that AI models are only as good as the data they are trained on. If the data used to train AI models is flawed, incomplete, or biased, the results can be misleading. AI systems can perpetuate these biases, which could result in disparate health outcomes across different demographic groups, further exacerbating existing health disparities. For instance, AI models trained primarily on data from white patients might struggle to accurately diagnose conditions in people of color, leading to incorrect or delayed treatments.
Moreover, AI systems can experience "hallucinations," a term used to describe false or misleading outputs generated by AI. These hallucinations can undermine clinical decision-making, creating a false sense of confidence in AI’s recommendations. AI models are also susceptible to "data drift," which occurs when the data a model encounters in production shifts away from the data it was trained on, eroding its accuracy over time. This is particularly problematic in continuous learning models, where the AI adapts based on new data but may inadvertently lose the effectiveness it had at the outset.
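To make data drift concrete, the short sketch below compares the distribution of a single input feature (patient age, in this hypothetical example) at training time with the distribution seen in production, using the population stability index (PSI). The data, function, and thresholds here are illustrative assumptions, not part of any specific clinical system.

```python
# Sketch: detecting data drift with the Population Stability Index (PSI).
# Hypothetical example -- feature values and thresholds are illustrative.
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare the distribution a model was trained on ('expected')
    with the distribution it now sees in production ('observed')."""
    # Bin both samples using the training data's range.
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Avoid log(0) and division by zero in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    obs_pct = np.clip(obs_pct, 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

rng = np.random.default_rng(seed=42)
training_ages = rng.normal(55, 12, 5000)  # patient ages at training time
current_ages = rng.normal(63, 14, 5000)   # an older population at inference time

psi = population_stability_index(training_ages, current_ages)
# A common rule of thumb: PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift.
print(f"PSI = {psi:.3f}" + ("  -> investigate drift" if psi > 0.25 else ""))
```

A monitoring job that runs a check like this on each key input feature can surface drift long before it shows up as patient harm.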
The Importance of Proper AI Implementation
The benefits AI offers in healthcare cannot be realized without a careful and thoughtful implementation strategy. To minimize the risks of preventable harm, it is essential that healthcare organizations take a proactive approach in managing the deployment and use of AI technologies. Missteps in the implementation phase can lead to poor performance, missed opportunities for improvement, and even adverse events.
Even AI systems that are non-clinical in nature (e.g., those that help with scheduling) could end up adversely affecting patient care. Several factors must be considered when adopting AI systems into healthcare operations or clinical workflows, including:
- Unrealistic Expectations: One common pitfall is setting expectations that exceed what the AI system can deliver. Healthcare organizations must have a clear understanding of the AI’s capabilities and limitations, ensuring that the technology is being used for the right purpose. Whether the goal is to improve accuracy, speed up processes, or reduce errors, it is important to set realistic benchmarks for success and make sure the AI solution aligns with those objectives.
- Excessive Trust in AI: Trusting AI blindly can lead to harmful outcomes. If healthcare professionals overly rely on AI-generated predictions without scrutinizing the results, they may miss critical errors or fail to notice when the system provides misleading information. To avoid this, it is crucial to maintain human oversight and ensure that AI is seen as a tool to aid, not replace, clinical decision-making.
- Lack of Governance: AI implementation must be accompanied by strong governance structures. Clear rules and oversight mechanisms must be in place to ensure AI models are functioning as expected and that patient privacy is maintained. Ongoing monitoring is essential to detect any performance degradation, such as brittleness (failure to generalize to new patient populations) or data drift.
- Data Quality and Accessibility: Before implementing AI, healthcare organizations must ensure that their data infrastructure is prepared for the task. AI systems often require large, well-organized datasets to perform optimally, and poor-quality data can lead to inaccurate predictions. Healthcare institutions should invest in data management practices that ensure data accuracy, consistency, and compliance with privacy regulations.
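As a rough illustration of what data readiness can look like in practice, the hypothetical sketch below runs a few basic quality checks (missingness, duplicates, and demographic representation) on a toy patient dataset. All column names and thresholds are assumptions made for the example.

```python
# Sketch: basic data-quality checks before using a dataset for AI training.
# The DataFrame and column names (age, sex, race, hba1c) are hypothetical.
import pandas as pd

def data_quality_report(df: pd.DataFrame, demographic_cols: list[str]) -> None:
    # 1. Missingness: high missing rates can silently bias a model.
    missing = df.isna().mean().sort_values(ascending=False)
    print("Fraction missing per column:\n", missing[missing > 0], sep="")

    # 2. Duplicates: repeated records overweight some patients.
    print(f"\nDuplicate rows: {df.duplicated().sum()}")

    # 3. Representation: flag demographic groups that are rare in the data,
    #    a common precursor to disparate model performance.
    for col in demographic_cols:
        shares = df[col].value_counts(normalize=True)
        rare = shares[shares < 0.05]  # illustrative 5% threshold
        if not rare.empty:
            print(f"\nGroups under 5% of '{col}':\n{rare}")

records = pd.DataFrame({
    "age":   [67, 54, None, 71, 54],
    "sex":   ["F", "M", "F", "F", "M"],
    "race":  ["White", "White", "Black", "White", "White"],
    "hba1c": [7.1, 6.4, 8.2, None, 6.4],
})
data_quality_report(records, demographic_cols=["sex", "race"])
```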
A Governance Framework for AI in Healthcare
To mitigate these risks and maximize the potential benefits of AI in healthcare, the governance of AI systems must be robust, proactive, and inclusive. ECRI, a leading authority in healthcare technology, offers a series of recommendations for healthcare organizations looking to integrate AI into their operations:
- Establish an AI Governance Committee: One of the first steps in implementing AI responsibly is to form an AI governance committee that includes a wide range of stakeholders—administrators, clinicians, IT specialists, and others who understand the goals and workflows the AI will enhance. This committee should oversee the entire AI lifecycle, from defining goals to continuously monitoring performance.
- Define Clear AI Goals: Before selecting an AI solution, healthcare organizations must clearly define what they want the technology to achieve. Is it improving diagnostic accuracy? Reducing treatment time? Enhancing patient outcomes? Clearly articulated goals will help determine which AI solutions are most appropriate and ensure alignment between the AI's capabilities and the organization's needs.
- Validate AI Performance: It is critical to validate the AI model's performance using real-world data. Testing should be conducted by independent bodies to ensure that the system is functioning correctly and meeting the expectations set by the healthcare organization (the validation-and-monitoring sketch after this list illustrates one simple approach).
- Focus on Transparency: Healthcare organizations must insist on transparency from AI vendors. Vendors should provide detailed information about the data used to train the model, including its size, diversity, and sources. The AI’s performance metrics under ideal conditions should also be made available as a baseline against which organizations can assess its ongoing, real-world performance.
- Data Preparation: AI solutions require high-quality data to perform effectively. Healthcare organizations should invest time and resources into preparing their data, ensuring it is organized, accurate, and accessible for AI training. Data must also be structured to comply with privacy and governance policies.
- Monitor AI Performance Over Time: AI oversight doesn’t stop once the system is implemented. It’s essential to continuously track AI performance to ensure it remains effective and doesn’t degrade over time. This includes assessing changes in data quality, AI behavior, and its impact on patient outcomes; see the sketch after this list for one way to track a performance metric month over month.
- Adverse Event Reporting: As AI becomes more integrated into healthcare, organizations should establish a process for reporting and tracking AI-related adverse events. This system should capture details about the incident, such as the AI’s involvement, any questionable predictions made by the model, and the resulting patient outcomes. Ensuring that users understand how to identify and report AI-related issues is crucial to maintaining a safe healthcare environment. A minimal record structure for such reports is sketched after this list.
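To ground the validation and monitoring recommendations above, the hypothetical sketch below first establishes a baseline AUROC on retrospective local data, then recomputes the same metric on each month's cases and flags degradation. The synthetic data, the `synthetic_batch` helper, and the 0.05 alert threshold are all illustrative assumptions, not clinical standards.

```python
# Sketch: baseline validation followed by month-over-month monitoring.
# All data here is synthetic; the thresholds are illustrative only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=0)

def synthetic_batch(n_patients, separation):
    """Hypothetical batch of cases: true outcomes plus model risk scores
    whose discriminative power is controlled by 'separation'."""
    y = rng.integers(0, 2, n_patients)                  # observed outcomes (0/1)
    scores = rng.normal(loc=y * separation, scale=1.0)  # model's risk scores
    return y, scores

# Step 1: validate on retrospective local data to establish a baseline.
y_val, s_val = synthetic_batch(2000, separation=1.5)
baseline_auc = roc_auc_score(y_val, s_val)
print(f"Baseline AUROC on local validation data: {baseline_auc:.3f}")

# Step 2: after deployment, recompute the same metric on each month's cases.
ALERT_DROP = 0.05  # illustrative alert threshold
for month, sep in enumerate([1.5, 1.4, 1.2, 0.9], start=1):  # simulated degradation
    y, s = synthetic_batch(2000, separation=sep)
    auc = roc_auc_score(y, s)
    flag = "  <-- investigate" if auc < baseline_auc - ALERT_DROP else ""
    print(f"Month {month}: AUROC = {auc:.3f}{flag}")
```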
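For adverse event reporting, a structured record makes AI-related incidents easier to aggregate and trend over time. The sketch below shows one possible shape for such a record; the field names and example values are hypothetical and would need to be aligned with an organization's existing incident-reporting workflow and privacy policies.

```python
# Sketch: a minimal structured record for an AI-related adverse event.
# Field names and example values are illustrative, not a reporting standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAdverseEventReport:
    ai_system: str                # which model or vendor product was involved
    model_version: str            # exact version, for reproducibility
    description: str              # what happened, in the reporter's words
    ai_output: str                # the questionable prediction or recommendation
    clinician_action: str         # whether the output was followed or overridden
    patient_outcome: str          # resulting harm or near miss
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

report = AIAdverseEventReport(
    ai_system="sepsis-risk-model",  # hypothetical system name
    model_version="2.3.1",
    description="High-risk patient scored as low risk; escalation delayed.",
    ai_output="risk_score=0.12 (low)",
    clinician_action="Alert not triggered; nurse escalated manually.",
    patient_outcome="Near miss; ICU transfer delayed ~2 hours.",
)
print(report)
```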
Closing Thoughts
Artificial intelligence has the potential to revolutionize healthcare, making processes more efficient and improving patient outcomes. However, the technology must be implemented with caution. Healthcare organizations must take steps to manage AI risks effectively, ensuring that the technology complements human decision-making and enhances—not undermines—the quality of care. With careful planning, continuous monitoring, and strong governance, AI can play a critical role in shaping the future of healthcare, driving innovation, and improving patient safety.
By keeping patient welfare at the forefront and integrating AI thoughtfully into clinical workflows, healthcare organizations can ensure that the benefits of AI are realized without compromising safety or exacerbating existing disparities. The future of healthcare may be increasingly AI-driven, but the human element will always remain central to effective, compassionate care.
Download ECRI's Top 10 Health Technology Hazards for 2025 Executive Brief to learn more about the potential sources of danger we believe warrant the greatest attention in the coming year, along with practical recommendations for reducing risk and preventing harm.