Why Most Enterprise AI Still Delivers No ROI


The Numbers Are In — And They’re Uncomfortable

A Gartner survey of 782 infrastructure and operations leaders, published on April 7, 2026, found that only 28% of AI projects in that domain fully deliver on their ROI expectations. Twenty percent fail outright. Fifty-seven percent of I&O managers reported experiencing at least one AI failure in the past year. These aren’t startup experiments — these are enterprise deployments with real budgets and executive sign-off.

The broader picture is no better. McKinsey's March 2026 Global AI Survey found that while 88% of companies now use AI in at least one business function, only 39% report any measurable impact on EBIT. MIT's GenAI Divide report puts the failure rate at 95%, defining failure as a project that has not shown measurable financial returns within six months of launch. IBM's February 2026 enterprise AI report found that just 5% of organizations achieve what IBM classifies as "substantial ROI."

The disconnect is glaring: 86% of enterprises increased their AI budgets in 2025, yet only 29% of executives say they can reliably measure what they’re getting back, according to McKinsey. Boards are demanding proof — 98% of surveyed tech leaders report board pressure to show ROI — but most CIOs are standing in front of those boards with little to show.

What’s Actually Going Wrong

The Gartner research is worth reading carefully because it names the causes rather than just describing the symptoms. The most common failure mode is unrealistic expectations: teams assumed AI would immediately automate complex tasks, slash costs, or fix long-standing operational problems. When those results didn’t materialize in weeks, confidence collapsed and projects stalled.

Poor data quality or insufficient data was cited by 38% of I&O leaders as a reason for failure — a figure that tracks with what practitioners report in the field. AI models are only as useful as the data you feed them, and most enterprises have years of under-invested data infrastructure they’re now trying to paper over with models. You can’t fine-tune your way out of a bad data warehouse.

Skill gaps affected 38% of cases. The constraint isn’t just data scientists — it’s people who understand both the business process and the AI tooling well enough to scope a project that will actually ship. That combination remains rare.

Leadership gaps compound everything else. Less than 30% of companies report their CEOs directly sponsor their AI agenda, according to McKinsey. Only 15% of U.S. employees say their organization has communicated a clear AI strategy at all. When strategy is fuzzy at the top, problem definition at the project level is almost certainly worse. Teams end up solving technically interesting problems that have no path to a P&L line.

The Pattern Among Companies That Do See Returns

The 28% — or the 5%, depending on which study you weight — aren’t doing something magical. They’re applying project management discipline that the rest of the organization abandoned in the rush to “move fast on AI.”

The consistent pattern: workflow redesign before tool selection. MIT, McKinsey, and Wharton research all reach the same conclusion: AI transformation fails when it is treated as a technology rollout rather than a process change. Organizations that captured ROI started by mapping the workflow they wanted to improve, defined what "better" looked like in concrete terms (time saved, error rate, conversion rate), and only then chose tools to fit that spec. They didn't start with "let's try GPT-4 on this" and see what happened.
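The discipline described above can be reduced to a small sketch: agree on the metric, the baseline, and the target before any tool is chosen, so "did the pilot work?" has a pre-committed answer. This is an illustrative outline, not taken from any of the cited studies; the `WorkflowSpec` name and all numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    """A pre-agreed success spec for one AI pilot (hypothetical structure)."""
    name: str
    baseline: float          # value measured before the pilot (e.g. avg resolution minutes)
    target: float            # value the pilot must reach to count as a win
    lower_is_better: bool = True

    def met(self, observed: float) -> bool:
        """Compare the observed post-deployment value against the pre-agreed target."""
        if self.lower_is_better:
            return observed <= self.target
        return observed >= self.target

# Illustrative numbers: customer-service resolution time, in minutes
spec = WorkflowSpec("cs_resolution_minutes", baseline=42.0, target=30.0)
print(spec.met(27.5))   # True: the pilot hit its target
print(spec.met(38.0))   # False: an improvement over baseline, but short of the agreed bar
```

The point of the structure is political as much as technical: because the target is fixed before the vendor conversation starts, a result like 38.0 cannot be retroactively reframed as success.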

Successful deployments also tend to be narrow and specific. Rather than broad “AI transformation” programs, leaders pick one process where autonomous decision-making creates immediate, measurable value — customer service resolution time, invoice processing accuracy, inventory forecasting — and prove the model there first. The specificity matters both for measurement and for building the organizational confidence to scale.

Gartner’s data shows a 53% success rate in more mature applications like IT service management and cloud operations — significantly higher than the 28% overall. The lesson: newer, more ambitious use cases fail more often. Reliability improves as deployment patterns become established and the tooling matures.

The Trough Has a Floor — But Where Is It?

Gartner explicitly places 2026 inside the Trough of Disillusionment for AI. That framing is useful because troughs eventually end — the question is what survives the descent. The companies that emerge with competitive advantage will likely be those that used this period to build actual data infrastructure, develop internal AI capability that doesn’t depend on a single vendor, and establish measurement practices that let them evaluate what’s working.

The pressure is real: 71% of CIOs surveyed by McKinsey believe their AI budget will face cuts or a freeze if targets aren’t met by mid-year 2026. That’s forcing a useful reckoning. Budgets that flowed freely toward experimentation are now requiring a business case. Projects that can’t articulate a measurable outcome within a reasonable timeframe are getting cut — which, painful as it is, is the right outcome. The 95% that isn’t delivering returns shouldn’t continue to consume resources indefinitely.

What this means practically: organizations that have been running AI pilots for 12+ months without a credible path to production should treat that as a signal, not a sunk cost justification. The companies seeing real returns have moved beyond pilots entirely. That gap — between the experimenting majority and the scaling minority — is where the next 18 months of enterprise AI competition will be decided.
