Production Is Up. Control Is Not.
More than half of enterprises — 54%, up from just 11% two years ago — now run AI agents in production across core business operations. That number, from Ampcome’s 2026 mid-year enterprise AI agents report, marks a genuine inflection point. Agentic AI is no longer a pilot program. It is live, it is taking actions, and in most organizations, it is doing so with almost no governance infrastructure watching over it.
The same report puts the governance reality in sharp relief: only one in five companies — 21% — has a mature model for overseeing autonomous AI agents. That gap is not a minor oversight. It is a structural problem that will define the next 18 months of enterprise AI adoption.
What “In Production” Actually Means in 2026
The 54% figure is striking, but the details behind it matter. Ampcome defines “in production” as agents with live, bidirectional access to operational systems — ERP, CRM, HRIS, ticketing, and customer-facing workflows — taking actions without a human confirming each step. These are not assistants generating draft emails. They are agents approving expense reports, routing support tickets, triggering procurement workflows, and updating customer records.
The speed of this shift is explained partly by infrastructure maturity. By mid-2025, every major cloud platform — AWS, Azure, Google Cloud — had a managed agent runtime. The barrier to deployment dropped sharply. What had taken a dedicated ML team months now takes a few days with an off-the-shelf agent framework. Organizations that had been running pilots in 2024 graduated to production almost automatically once the tooling caught up.
The problem is that governance infrastructure did not evolve at the same pace. Agent frameworks ship fast. Oversight frameworks ship slow.
The Governance Numbers Are Difficult to Ignore
A range of 2026 surveys converge on the same uncomfortable finding. ESecurity Planet’s State of AI Risk Management report found that 86% of organizations claim to maintain a complete AI inventory — yet 59% of the same respondents admit ungoverned shadow AI is present. Those figures overlap: at least 45% of respondents must be claiming both a complete inventory and the presence of ungoverned AI, which tells you something about the quality of the inventory claims.
On access and control: Microsoft’s February 2026 security report found 80% of Fortune 500 companies now run active AI agents — but only 12% have centralized visibility into what those agents are doing. The other 88% have fragmented, departmental deployments with no single team accountable for agent behavior.
Only 24% of enterprises have a dedicated AI security governance team, according to the ArmorCode State of AI Risk Management report. That leaves three-quarters of organizations relying on existing security and compliance teams to govern a category of system those teams were not designed to handle.
Gartner adds a harder deadline to the picture: over 40% of agentic AI projects are at risk of cancellation by 2027 — not because the technology does not work, but because organizations cannot demonstrate sufficient control to satisfy their own security review processes. Many teams that piloted agents in 2025 are now rebuilding permission systems and audit logging from scratch before they can pass internal security audits.
Why Traditional Governance Frameworks Break Down
The governance challenge is not simply one of policy — it is structural. Existing enterprise governance was designed for human-speed operations. Approval workflows, quarterly access reviews, manual audit sampling: these are calibrated for processes where a decision takes days and an action takes hours. An AI agent operating in a CRM can make hundreds of decisions per hour, each with real downstream consequences.
Shadow AI compounds the problem in a new way. CIO Magazine’s analysis from April 2026 describes how shadow AI has evolved: it is no longer just employees using unauthorized ChatGPT accounts. It is departmental agents — often built by a motivated engineer over a weekend using an internal LLM API — operating against production data with no identity registration, no policy enforcement, and no audit trail. These agents are invisible to the governance team because they were never registered with one.
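One way to make the detection problem concrete: cross-reference the callers observed in gateway or API logs against the registered agent inventory. The sketch below is purely illustrative — the registry contents, log format, and the `agent-` naming heuristic are assumptions, not a real product's behavior; production systems would rely on stronger signals such as token type or call rate.

```python
# Hypothetical sketch: flag shadow agents by comparing observed API callers
# against a registered agent identity inventory. All identifiers are invented.

REGISTERED_AGENTS = {"agent-crm-router", "agent-expense-approver"}

def find_shadow_agents(gateway_log_lines):
    """Return caller IDs that behaved like agents but were never registered."""
    observed = set()
    for line in gateway_log_lines:
        caller, _, _action = line.partition(" ")
        # Naive heuristic: automated callers self-identify with an 'agent-'
        # prefix. Real detection would use stronger signals.
        if caller.startswith("agent-"):
            observed.add(caller)
    return sorted(observed - REGISTERED_AGENTS)

logs = [
    "agent-crm-router update_record",
    "agent-weekend-bot write_payroll",  # built over a weekend, never registered
    "jsmith read_dashboard",
]
print(find_shadow_agents(logs))  # ['agent-weekend-bot']
```

The point of even a naive version is that it forces two prerequisites into existence: a canonical agent registry and centralized access logs — exactly the artifacts most organizations lack.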
The EU AI Act enforcement deadline of August 2026 adds regulatory urgency. Any system that materially influences decisions — hiring, credit, customer service prioritization — will require documented oversight. Agents that cannot produce an audit trail of their actions will need to be suspended or substantially rearchitected before that deadline.
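What "an audit trail of their actions" means in practice is a structured, append-only record per agent action. The sketch below assumes a JSON-lines log with a simple hash chain so tampering is detectable; the field names are illustrative, not a schema prescribed by the EU AI Act or any regulator.

```python
# Hypothetical sketch of a per-action audit record for an AI agent.
# Field names and the hash-chain design are illustrative assumptions.
import datetime
import hashlib
import json

def audit_record(agent_id, action, target, inputs_digest, prev_hash):
    """Build one append-only audit record, chained to the previous one."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "inputs_sha256": inputs_digest,   # digest of the inputs the agent saw
        "prev_hash": prev_hash,           # chains records; breaks if tampered
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

rec = audit_record(
    agent_id="agent-ticket-router",
    action="route_ticket",
    target="ticket/4821",
    inputs_digest=hashlib.sha256(b"ticket body").hexdigest(),
    prev_hash="0" * 64,
)
print(json.dumps(rec))
```

A log like this answers the questions an auditor will actually ask: which agent acted, on what, based on which inputs, and whether the record sequence is intact.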
What the Organizations Getting It Right Are Doing Differently
The Ampcome report is not only a warning — it identifies a cluster of enterprises scaling AI agents successfully. They share a common pattern: governance infrastructure was built before agent autonomy was expanded, not retrofitted afterward.
Concretely, this means three things. First, agent identity: every agent has a registered identity in the organization’s IAM system with explicitly defined scopes — what systems it can access, what actions it can take, what data it can read or write. Second, observability: real-time dashboards track every agent action, not sampled logs but continuous telemetry. When an agent’s behavior deviates from its expected pattern, a human is notified immediately. Third, least privilege: agents start with the minimum permissions needed to complete their task, with escalation paths requiring human approval.
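The identity and least-privilege halves of that pattern can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the scope strings, agent names, and the `authorize` helper are all invented for the example. The key design choice is that anything not explicitly granted escalates to a human rather than failing silently or executing anyway.

```python
# Hypothetical sketch: agents carry a registered identity with explicit
# scopes; out-of-scope actions escalate to a human instead of executing.
# Names and scope strings are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    scopes: frozenset  # e.g. {"tickets:read", "tickets:route"}

def authorize(agent, scope):
    """Return ('allow', scope) if explicitly granted, else ('escalate', scope)."""
    if scope in agent.scopes:
        return ("allow", scope)
    # Least privilege: the default answer is a human approval path, not a yes.
    return ("escalate", scope)

router = AgentIdentity(
    agent_id="agent-ticket-router",
    scopes=frozenset({"tickets:read", "tickets:route"}),
)
print(authorize(router, "tickets:route"))   # ('allow', 'tickets:route')
print(authorize(router, "payments:issue"))  # ('escalate', 'payments:issue')
```

In a real deployment the scopes would live in the IAM system and the escalation path would page an owner, but the contract is the same: every action is checked against a registered identity before it runs.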
ServiceNow’s Knowledge 2026 conference centered almost entirely on what they call an “AI Control Tower” — a governance layer sitting above all agentic workloads, providing a unified view of agent activity across an enterprise. Whether that specific product wins or loses market share is secondary; the fact that the category exists signals that this is now a real procurement consideration for enterprise buyers.
This maps to the broader trend that vortx.ch has tracked throughout 2026: the enterprises actually seeing ROI from AI are the ones treating deployment as a systems engineering problem, not a technology adoption problem. The bottleneck is never the model. It is the surrounding infrastructure — the organization’s capacity to absorb AI into processes that have controls attached.
The Next Six Months Will Separate the Ready from the Exposed
The trajectory from here is predictable. As EU AI Act enforcement begins in August, organizations will face their first real external audit of AI governance. Some will pass with well-documented agent inventories and access logs. Many will scramble. A meaningful number will quietly shut down agents they cannot yet defend to a regulator.
The governance gap is not going to close by itself. The organizations closing it now are building infrastructure that will become a competitive moat: once agents are properly governed and trusted internally, autonomy can be expanded safely and quickly. The organizations waiting — assuming governance can be bolted on later — are accumulating a debt that compounds with every new agent they deploy.
54% of enterprises are in production. The question is no longer whether to deploy agents. It is whether the governance infrastructure can keep pace with the agents already running.
Further Reading
- Enterprise AI Agents 2026: Mid-Year Report — Ampcome’s data-backed survey covering adoption rates, governance maturity, and ROI benchmarks across industries.
- AI Agent Risks & Guardrails: 2026 Enterprise Security Guide — Practical framework for implementing least-privilege access, observability, and audit trails for agentic systems.
- Shadow AI Morphs into Shadow Operations — CIO Magazine on how ungoverned agents have moved from individual tools to full departmental workflows operating outside any oversight structure.