The prior authorization workflow is one of the most universally despised processes in healthcare. Physicians hate it. Care coordinators hate it. Even the administrators who run it will tell you, off the record, that it consumes resources disproportionate to its clinical value. So when a health system deployed an AI product designed to automate a significant portion of the prior auth process, the clinical case was obvious, the operational case was compelling, and the leadership team had signed off.
Six months in, the product was being used by roughly 20% of the staff it had been deployed to.
Not because it didn’t work. It worked well enough in the facilities where it had been adopted. But in most departments, it had quietly stalled. Coordinators had reverted to their existing workflows. A few vocal physicians had raised concerns about liability that hadn’t been formally addressed. The IT team had deprioritized a configuration issue that was causing intermittent errors in one module. Middle management, caught between a leadership mandate they hadn’t been part of designing and a staff that was skeptical, had chosen the path of least resistance: not pushing.
The initiative wasn’t cancelled. It just stopped moving.
Then something changed. The CMO assigned a named physician champion with protected time, explicit authority to resolve the liability question, and a direct line to the vendor’s implementation team. A small working group was formed with representation from the coordinator team, IT, and compliance. The configuration issue got fixed in two weeks. The liability concern got a written clinical governance policy in four. Three months later, adoption was at 78%.
The technology hadn’t changed. The organizational structure around it had.
The Invisible Wall Founders Keep Hitting
If you are a founder selling an AI product into health systems, you have probably experienced a version of this. The demo goes well. The clinical champion is enthusiastic. Legal clears the contract. Implementation starts. And then, somewhere between go-live and the renewal conversation, the product stops gaining traction without anyone being able to tell you exactly why.
The instinct is to look at the product. Is the UX wrong? Is a feature missing? Is the workflow integration not deep enough? Sometimes those things are true. But more often, what you are hitting is not a product problem. It is an organizational one. And it is one that no amount of product improvement will solve, because the barrier isn’t functional. It is structural.
Healthcare organizations, particularly large health systems, are not neutral environments for innovation. They are complex, risk-averse institutions with established workflows, established hierarchies, and established ways of absorbing new things without fundamentally changing. When an AI product enters that environment without structural protection, it doesn’t usually get rejected outright. It gets absorbed. Gradually deprioritized. Worked around. Adopted superficially by the enthusiasts and ignored by everyone else until the contract renewal conversation becomes awkward.
This is not malice. It is organizational physics. Change in complex institutions requires more than permission. It requires protection.
Reality check: If your product’s adoption depends entirely on the enthusiasm of a single clinical champion with no protected time, no formal authority, and no organizational backing, you don’t have an implementation. You have a hobby project with a contract attached.
Why Verbal Support Is Not Structural Protection
The distinction that most founders, and most enterprise leaders, fail to make clearly enough is between a leadership team that supports an AI initiative and one that has structurally protected it.
Verbal support sounds like: “We are committed to AI adoption.” “This is a strategic priority.” “We have full leadership buy-in.” It shows up in kick-off meetings, in board presentations, and in the contract signing ceremony. It is real, in the sense that the people saying it mean it at the time they say it.
Structural protection looks different. It is a named executive sponsor who has skin in the game, meaning their performance evaluation is connected to the initiative’s success. It is a physician or clinical champion with protected time, not someone doing this on top of a full clinical load. It is a governance structure that can make decisions, resolve conflicts, and unblock issues without routing every problem back to a leadership team that has seventeen other priorities. It is a clear escalation path for when middle management resistance surfaces, because it will. And it is a defined timeframe inside which the initiative is shielded from the normal organizational pressure to revert to what worked before.
The prior auth story above succeeded when it moved from verbal support to structural protection. The CMO didn’t add budget. He added structure. Named authority. Protected time. A decision-making body with the right people in it. That is what changed the trajectory.
For founders, the implication is direct: before you commit to a full deployment, ask the health system who owns this initiative, what authority they have, and how much of their time is protected for it. The answers to those three questions will tell you more about your renewal probability than anything in the product roadmap.
Decision rule: Verbal buy-in gets you a signed contract. Structural protection gets you a renewal. Know which one you have before you scope the implementation.
What Organizational Antibodies Look Like in Practice
The resistance that kills AI initiatives inside health systems is rarely visible in the way that a formal objection is visible. It operates through more subtle mechanisms, and founders who don’t recognize them tend to misdiagnose the problem.
Middle management neutrality. Department heads and team leads who were not involved in the decision to adopt the AI product have no particular incentive to champion it. They have plenty of incentive to avoid the friction that comes with pushing a skeptical team toward a new workflow. Neutrality, in a change management context, is functionally the same as resistance. If middle management isn’t actively pulling adoption forward, it won’t happen at the pace the contract assumes.
Clinical skepticism without a resolution mechanism. Physicians are trained to question. When an AI product enters a clinical workflow, questions about accuracy, liability, and clinical responsibility are legitimate and need formal answers. If there is no governance structure to address them, they don’t get answered. They circulate. They become reasons not to engage. The skeptical physician who raises a concern in a staff meeting and never gets a response doesn’t go away. They become the reason twelve other physicians stay cautious.
IT deprioritization. The IT team that supported the procurement process is usually not the same team responsible for ongoing configuration, maintenance, and issue resolution. Post-go-live IT support for AI products competes with every other system maintenance priority in the queue. Without a named IT owner and a clear SLA, minor configuration issues that should take two weeks to fix take four months, and clinical staff lose confidence in the product in the meantime.
Workflow reversion under pressure. Clinical and administrative staff revert to familiar workflows under pressure, particularly during high-census periods, staffing shortages, or system disruptions. If the AI product hasn’t been adopted deeply enough to become the default before the first major operational stress event, the reversion becomes permanent for most users.
The Incubation Argument
The reason innovation needs to be structurally protected inside organizations is not that organizations are hostile to good ideas. It is that organizations are optimized for stability, not change. Every system, process, and incentive inside a functioning health system is designed to produce reliable, consistent outcomes. That is appropriate. Reliability in clinical operations is not a bug. It is the point.
But that same optimization creates a gravitational pull toward the existing way of doing things. A new AI product, however well-designed, is asking the organization to absorb uncertainty: new workflows, new failure modes, new liability questions, new training requirements, new dependencies. The organization’s natural response is to minimize that uncertainty, which in practice means minimizing the change.
Incubation, in the organizational sense, means creating a protected environment where the new thing is shielded from that gravitational pull long enough to prove its value. Not permanently. Not indefinitely. Just long enough for the evidence to accumulate, the skeptics to see real outcomes, and the new workflow to become familiar enough that reverting to the old one feels like the harder choice.
This is why the most successful AI deployments in healthcare are almost never the ones with the biggest budgets or the most sophisticated technology. They are the ones where someone with organizational authority decided that this initiative was going to be protected, resourced, and driven to a real outcome, and then did the unglamorous work of making that happen.
If you only remember one thing: AI readiness is not a technology assessment. It is an organizational assessment. The question is not whether the health system has the infrastructure to run your product. It is whether they have the organizational structure to change because of it.
What Founders Should Assess Before Committing to a Full Deployment
These are not questions to ask at the demo stage. They are questions to get answered before you finalize your implementation scope and commit your customer success resources. A rough sketch of how to turn the answers into a score follows the list.
- Is there a named executive sponsor whose performance evaluation is connected to this initiative’s success, or is sponsorship symbolic?
- Does the clinical or operational champion have protected time for this initiative, or are they doing it alongside a full existing workload?
- Is there a governance structure (a working group or steering committee) with the authority to make decisions, resolve conflicts, and unblock issues without escalating everything to the C-suite?
- Has middle management been included in the design of the deployment, or were they informed after the decision was made?
- Is there a formal process for clinical staff to raise concerns and receive documented responses, particularly around liability and accuracy questions?
- Is there a named IT owner for post-go-live support, and is there a defined SLA for issue resolution?
- Has the organization defined what success looks like at 90 days and 180 days, with metrics that are specific enough to be unambiguous?
- Is there a plan for what happens when adoption stalls, including who has the authority and the mandate to intervene?
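If it helps to make the checklist operational, here is a minimal sketch of how a diligence team might force binary answers to those questions and weight them into a coarse verdict. Everything in it is an assumption for illustration: the `Check` and `readiness` names, the weights, and the thresholds are hypothetical, not a validated instrument.

```python
# Hypothetical pre-deployment readiness rubric. The questions mirror the
# checklist above; weights and thresholds are illustrative, not validated.
from dataclasses import dataclass


@dataclass
class Check:
    question: str  # the assessment question, phrased as yes/no
    weight: int    # higher weight = more predictive of stalled adoption (assumed)
    passed: bool   # answer gathered from the health system


CHECKS = [
    Check("Executive sponsor's performance evaluation tied to this initiative?", 3, False),
    Check("Clinical/operational champion has protected time?", 3, False),
    Check("Governance body can decide and unblock without C-suite escalation?", 2, False),
    Check("Middle management included in the deployment design?", 2, False),
    Check("Formal process for clinical concerns with documented responses?", 2, False),
    Check("Named post-go-live IT owner with a defined SLA?", 1, False),
    Check("Unambiguous 90- and 180-day success metrics defined?", 1, False),
    Check("Named owner with the mandate to intervene when adoption stalls?", 1, False),
]


def readiness(checks: list[Check]) -> str:
    """Return a coarse verdict: the score is just the weighted yes-answers."""
    score = sum(c.weight for c in checks if c.passed)
    total = sum(c.weight for c in checks)
    if score == total:
        return f"{score}/{total}: structurally protected; scope the full deployment"
    if score >= total - 3:
        return f"{score}/{total}: close; resolve the gaps before go-live"
    return f"{score}/{total}: verbal support only; renegotiate scope or timeline"


if __name__ == "__main__":
    for c in CHECKS:
        c.passed = input(f"{c.question} [y/N] ").strip().lower() == "y"
    print(readiness(CHECKS))
```

The arithmetic is not the point. The point is that forcing a yes-or-no answer to each question before you scope the implementation exposes exactly where verbal support is standing in for structure.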
Closing
The technology problem in clinical AI is largely solved, or at least solvable. The models exist. The infrastructure exists. The integration pathways, imperfect as they are, exist. What doesn’t automatically exist is an organization ready to change because of the technology.
That readiness is not a precondition you can assess in a demo or a procurement process. It is something you have to evaluate deliberately, before you commit to a deployment that your customer success team will be trying to rescue six months later.
The health systems that deploy AI successfully are not the ones with the most sophisticated technical environments. They are the ones where someone decided that innovation was worth protecting, built a structure around it, and held that structure in place long enough for the evidence to speak for itself.
For founders, the implication is this: your job does not end at go-live. In healthcare, go-live is when the real work starts. The product has to survive the organization. And whether it does depends less on what you built than on what your champion built around it.
Verbal support gets you in the door. Structural protection keeps you there.