Dr. Sarah Chen thought she knew her radiology department inside and out. Then one Tuesday morning, she discovered three different AI tools were flagging the same chest X-ray with conflicting recommendations. No one had told her they were running simultaneously.
Stories like this are multiplying across American hospitals. Physician AI use nearly doubled in 2024, rising from 38% to 68%, but a troubling pattern has emerged: the technology is advancing faster than the oversight meant to keep it in check.
The American Medical Association sees what’s coming. Its new governance framework, released this month, reads less like standard policy guidance and more like an urgent playbook for health systems that realize they’re flying blind.
The Problem Nobody Saw Coming
Here’s what happened. Hospitals embraced AI tools piecemeal, department by department, vendor by vendor. Marketing promised seamless integration. Reality delivered something messier.
“There are genuine risks in implementing these technologies,” Dr. Margaret Lozovatsky told a packed webinar of health system leaders last week. As the AMA’s chief medical information officer, she’s fielding calls from executives who suddenly realize they don’t know which AI systems are making decisions in their buildings.
The AMA deliberately calls it “augmented intelligence” now, not artificial intelligence. The distinction matters legally. When something goes wrong, courts will ask whether the human doctor remained in control.
“Clinical decision-making must still lie with clinicians,” Dr. Lozovatsky explained. “AI simply enhances their ability to make those decisions.”
That’s the theory. Practice gets complicated when you have multiple AI systems offering different suggestions, each trained on different datasets, each with different blind spots.
The stakes keep rising. These aren’t just diagnostic aids anymore. AI influences treatment plans, resource allocation, and staffing decisions. When algorithms shape patient care at scale, governance becomes a survival skill.
What Smart Hospitals Are Doing Now
The AMA’s eight-step framework starts with a simple question: who’s actually accountable when your AI makes a mistake?
Most health systems discover they don’t have a clear answer. That’s changing fast among the organizations that see trouble ahead.
Smart leaders are pulling together what Dr. Lozovatsky calls “true C-suite engagement.” Chief medical officers sit next to chief technology officers. General counsel joins the conversation early, not after problems surface.
“Engaging the C-suite is critical,” she noted. “All of their areas will be impacted, so buy-in from those leaders is imperative.”
The successful hospitals are building three-tier governance structures. Executive leadership sets strategy and delegates authority. Advisory councils handle technical reviews and interoperability concerns. Specialty departments ensure front-line staff actually trust and use these tools properly.
This isn’t about slowing down innovation. It’s about making sure innovation doesn’t outrun responsibility.
The Confidence Gap
Dr. Lozovatsky, a pediatric hospitalist who’s spent years studying health informatics, sees a pattern emerging. The hospitals moving fastest on governance are pulling ahead of competitors still debating committee structures.
Clinical informatics experts are becoming hot commodities. These professionals understand both the technical capabilities of AI and the clinical realities of deploying it. They can spot problems before they reach patients.
The organizations getting this right are asking harder questions upfront. How does AI support our mission? What happens when external partnerships go sideways? Who gets blamed when automated systems conflict with clinical judgment?
Some health systems are discovering their existing technology review processes work fine with minor adjustments. Others are realizing they need entirely new oversight structures.
What This Really Means
Think beyond individual patient encounters. When AI becomes embedded in hospital workflows, it shapes institutional behavior. The algorithms that seem helpful today could create blind spots that emerge months later.
Patient trust hangs in the balance. So does physician confidence. When doctors don’t understand how AI reaches its recommendations, they either ignore the technology or rely on it too heavily. Neither outcome serves patients well.
“Health care organizations must ensure that AI is implemented in a safe, thoughtful manner,” Dr. Lozovatsky said. “We must prove that we’re supporting care for our patients and our clinicians in their ability to deliver that care.”
The AMA isn’t just worried about individual hospitals. It’s watching an entire industry transform without adequate guardrails. Its governance toolkit, along with its published AI advocacy principles, represents an attempt to get ahead of problems that could undermine public confidence in medical AI.
The Race Against Complexity
The hospitals that establish strong governance frameworks now will likely dominate the next phase of healthcare innovation. Those that don’t may find themselves managing crises instead of opportunities.
We’re watching healthcare’s equivalent of the early internet boom. The technology promises enormous benefits, but the winners will be organizations that master both innovation and risk management simultaneously.
The question isn’t whether AI will transform healthcare. That’s already happening. The question is whether healthcare institutions can govern these changes wisely enough to maintain the trust that makes medical care possible.
How confident are you in your hospital’s AI oversight? The gap between adoption and governance is widening every month.
