

The Misuse of AI Chatbots in Healthcare: Risks, Realities, and Responsible Innovation

January 28, 2026 | 12:00 p.m. ET

Overview

AI chatbots and other large language models (LLMs) are increasingly being used for healthcare-related advice, despite not being designed or regulated for such purposes. Many individuals working in healthcare—or those with health concerns—are turning to tools such as ChatGPT, Claude, Copilot, Gemini, Grok, and other LLMs for guidance on medical conditions, treatments, or less clinical questions, such as how to use a medical device or what supplies to purchase. Chatbot responses are fast and often useful; however, at times they are incorrect—sometimes dangerously so.

This webcast examines how LLMs can be used—or misused—in healthcare applications, and what healthcare organizations, as well as individuals who use these tools, must understand to prevent unintended harm. Through case examples, technical insights, and ethical considerations, we will explore the gap between what AI chatbots and other LLMs can do and what they should do in healthcare settings. Participants will leave with a clearer understanding of the risks involved and the practical safeguards needed to support responsible, evidence-aligned use of these tools.

Learning Objectives

By the end of this session, participants will be able to:

  • Recognize the limitations of LLM technologies that can lead to unreliable responses.
  • Identify risks associated with the inappropriate use of AI chatbots for patient care–related purposes, including exposure to misinformation, overreliance on unvetted tools, and improper delegation of clinical or professional judgment. These risks can lead to patient safety issues, privacy vulnerabilities, workflow disruptions, and erosion of trust.
  • Analyze real-world examples of chatbot failures to understand the technical, ethical, and human-factor contributors behind them.
  • Differentiate between appropriate and inappropriate use cases for AI chatbots in healthcare environments.
  • Describe key safeguards and governance strategies that healthcare organizations can implement to reduce misuse and support responsible adoption.
  • Evaluate emerging regulatory and ethical considerations that shape how AI chatbots should be integrated into clinical practice.

Register Now

Moderator

Rob Schluth

Principal Project Officer I, Device Safety, ECRI

Rob Schluth is a project leader focusing on content development and program management for ECRI's Device Safety group. During his 30 years at ECRI, Rob has contributed to hundreds of the organization's device evaluations, problem reports, and guidance articles spanning a wide range of health technologies. Rob currently manages special initiatives for the device evaluation & safety team and leads the development of the organization's annual Top 10 Health Technology Hazards report.

Panelists

Marcus Schabacker, MD, PhD

President and Chief Executive Officer, ECRI

Dr. Marcus Schabacker became President and Chief Executive Officer of ECRI in January 2018. ECRI is an independent, trusted authority on the medical practices and products that provide the safest, most cost-effective care. A board-certified anesthesiologist and intensive care specialist, Dr. Schabacker has 35 years of healthcare experience, including 20 years in senior leadership roles serving the medical device and pharmaceutical industries. He has held leadership positions in medical affairs, preclinical and clinical development, regulatory affairs, quality, research and development, and patient safety. After receiving his medical and academic training at the Medical University of Lübeck, Germany, Dr. Schabacker served as senior medical officer at the Mafikeng General Hospital in South Africa as part of a humanitarian aid program supporting the African National Congress government under Nelson Mandela. Dr. Schabacker is an affiliate assistant professor at the Stritch School of Medicine at Loyola University Chicago.

Francisco Rodriguez-Campos, PhD

Principal Project Officer, Device Evaluation, ECRI

Francisco Rodriguez-Campos, PhD, is responsible for evaluating medical imaging technologies such as CT and breast tomosynthesis for ECRI's Device Evaluations group. Before joining ECRI, Francisco was a neuroscientist and instructor at the University of Pennsylvania, where he performed image-guided (CT and MRI) surgeries to place chronic implants in Old World macaques and taught about medical devices in the biomedical engineering program. He has also served as project manager for a medical technology assessment project for the El Salvador Social Security Administration, as a consultant to PAHO/WHO on the deployment of medical technology projects in El Salvador and Nicaragua, as director of the clinical engineering graduate program at Universidad Don Bosco, and as a professor of medical imaging in the biomedical engineering undergraduate program.

Christie Bergerson, PhD 

Device Safety Analyst, ECRI

Dr. Christie Bergerson is a consultant in the medical device space, currently working exclusively with ECRI on a range of topics, including AI-enabled medical devices. Prior to partnering with ECRI, Dr. Bergerson worked as a consultant at Exponent and as a systems engineer in Abbott Laboratories' R&D Diagnostics Division. Across these roles, she has gained expertise in in vitro diagnostics, orthopedics, and software development, with artificial intelligence serving as a common thread among the three. Dr. Bergerson has published extensively on these topics and enjoys guest lecturing at institutions including Johns Hopkins University and Texas A&M University. See her LinkedIn profile for more information.