Rebecca Handler
Stanford Medicine
Originally posted 15 Jan 26
Artificial intelligence is no longer a speculative force in medicine. It is already embedded in everyday care. AI systems flag hospitalized patients at risk of deterioration, assist radiologists reading mammograms, draft clinicians’ notes, route patient messages, and increasingly interact directly with patients through chatbots and digital assistants.
In recent months, the pace and visibility of these deployments have accelerated sharply. OpenAI announced ChatGPT for Health, positioning a general-purpose language model as a tool for health-related information and patient interaction. Utah recently began piloting AI-supported prescribing and clinical decision support systems, raising questions about how algorithmic recommendations intersect with clinician judgment and liability. OpenEvidence, an AI-powered medical evidence platform designed primarily for clinicians and health professionals, has become a dominant player in point-of-care decision support, underscoring that clinicians often bypass traditional IT gatekeepers to bring AI into clinical care. At the federal level, the FDA signaled a loosening of regulatory oversight for certain categories of clinical decision support software, shifting more responsibility to developers and health systems to ensure safety and effectiveness.
Here are some thoughts:
A January 2026 report called The State of Clinical AI, from the ARISE Network and led by researchers at Stanford and Harvard, examines where AI is genuinely improving clinical care and where it falls short in real-world settings.
The report confirms that AI has become deeply embedded in everyday medicine: flagging patients at risk of deterioration, assisting radiologists, drafting clinical notes, and interacting with patients through chatbots. However, it finds a significant gap between performance in controlled studies and performance in actual clinical practice: AI systems struggle with uncertainty, incomplete information, and complex reasoning, often performing closer to the level of medical students than of experienced physicians.
The report also raises concerns about patient-facing AI, noting that patients may over-trust systems that sound confident but lack full clinical context, and that pathways for escalation to human care are often unclear when guardrails are poorly defined.