AI isn’t just approaching doctors; it’s already an active presence in patient care.

November 10, 2025

As healthcare facilities embrace AI, patients might increasingly be uncertain about who—or what—is influencing their medical diagnoses and treatments. As emergency room doctors, we observe how AI-driven guidance is redefining the experience of visiting an ER.

This situation isn’t about policy; it’s a cultural phenomenon: what does it mean to place confidence in your doctor when that “doctor” could be an algorithm? According to a recent report, AI tools are already in use in emergency departments for managing triage, predicting risks, and creating staffing models—plans that ensure hospitals have the appropriate number and mix of doctors, nurses, and other personnel available at the right times for patient care. Patients might not know whether their caregiver is a human physician or an AI-assisted hybrid. This can feel smooth or unsettling, depending on the seriousness of their condition. We are experiencing a silent transformation of the ER, where economic pressures, staff shortages, and AI support are reshaping what it means to consult a doctor, and what it means to have faith in one.

The transition from human physicians to AI is more than just a staffing solution; it’s a profound alteration in how medical decisions are reached. Each approach carries distinct advantages and disadvantages. AI can process vast amounts of data in mere seconds, yet it cannot meet a patient’s gaze to perceive fear, empathize with subtle human suffering, or detect the unspoken cues conveyed by holding the hand of someone in pain. A significant part of our more than 10,000 hours of medical training to become ER doctors involves developing that innate sense that something is wrong, even when a patient’s vital signs and lab results appear normal. It involves recognizing the subtle indicators—a slight confusion, a faint slurring of speech, the quiet apprehension in their eyes—that a patient might not articulate and that no algorithm can detect. The human element, which encompasses trust and empathy, is precisely where AI falls short.

Technology companies are rapidly working to integrate AI into clinical environments by developing digital triage systems, diagnostic assistants, and decision-support tools intended to enhance or even take over aspects of physician supervision. Hospitals are swiftly adopting these solutions, drawn by the promise of reduced costs and enhanced diagnostic precision. In one recent study, AI performed comparably to non-specialist physicians, showcasing the rapid progress of algorithms in clinical settings. OpenAI, Google, and Microsoft are actively testing AI-based healthcare applications. For instance, Open Evidence AI, a company developing an AI-powered tool to provide clinicians with quick, evidence-based answers to medical questions, is already valued at $3.5 billion.

Undoubtedly, there are areas where AI truly excels. It can identify patterns invisible to even the most seasoned clinician, connecting a lab result from months past with a current medication list and a cluster of symptoms to flag a serious infection risk before anyone else notices it. It can pinpoint rare drug interactions, assist in decision-making, and accelerate documentation, freeing up physicians to spend more time with patients and experience less burnout. When used appropriately, AI is less a substitute for human intuition and more a powerful enhancer of it.

Perhaps more important, what is genuinely new is that both patients and doctors are now using AI, though in different ways.

A few nights ago, a young woman arrived in the ER experiencing chest pain. All her tests were normal, but she remained visibly anxious. When I inquired if something was bothering her, she confessed to having extensively researched symptoms on ChatGPT after noticing some skipped heartbeats. The chatbot suggested she might have a rare, life-threatening heart condition. (She did not.) The resulting panic likely triggered the symptoms that brought her in.

Another patient, a young man, presented convinced he had appendicitis because ChatGPT had informed him so. In this instance, he was correct. His symptoms were classic, and the medical student examining him independently reached the same diagnosis. The AI helped the patient identify his condition sooner and seek treatment. Nevertheless, the skilled hands of a surgeon were still required to remove his appendix.

That is the inherent contradiction of our current era: the same technology that can breed confusion and anxiety can also sharpen insights and accelerate care. It’s not just altering how we diagnose; it’s also changing how patients seek care and who provides it. The interplay of costs, staffing challenges, and technology has blurred the distinction between human and automated care, ushering in a new medical paradigm: patients are treated by clinicians whose most influential collaborator might be an algorithm.

The concern isn’t solely that AI might make an incorrect diagnosis; it’s also that prolonged reliance on AI could compromise a clinician’s own diagnostic acumen. In one study, doctors were less effective at detecting potentially cancerous growths during colonoscopies after they had become accustomed to using an AI tool. The authors theorized that increased dependence on an algorithm reduced the exercise of human judgment.

This move towards incorporating AI or non-physician practitioners into the ER isn’t inherently negative, but it’s often not transparent to patients. And that lack of transparency is the core issue.

Patients have a right to know when AI is guiding their care, who ultimately holds responsibility for the decisions being made, and what safeguards are in place when an algorithm might be the “doctor in the room.”

Transparency will not halt the advancement of technology, but it could help safeguard something medicine cannot afford to lose: trust.