544 Enterprises. One Number That Matters: 22.8%
Steven Dickens did something unusual. He published the raw data.
HyperFRAME Research just dropped their 1H 2026 State of the Enterprise AI Stack. 544 enterprise respondents. 119 questions. 644 rows of results. No paywall. No lead form. No “download after you give us your email and consent to 47 follow-up calls.” Just the data, sitting on a public URL.
That’s a deliberate bet on Generative Engine Optimization — the idea that open, citable, structured data gets picked up by AI systems and surfaces in the answers your buyers are reading before they ever talk to an analyst. It’s a bold move and I think it’s the right one. Steven is building discoverability the way the next decade actually works.
I’m going to take him up on the offer.
Because buried in that dataset is the number that matters most:
22.8%.
That’s the percentage of AI/ML projects launched in the past 12 months that are successfully deployed and meeting their original ROI objectives. Nearly eight out of ten enterprise AI projects are not delivering what they promised.
What the Other 77% Are Doing
The breakdown is worth spelling out:
- 25.7% stalled or still in process
- 20.7% deployed, but not meeting ROI
- 13.8% failed or abandoned
- 16.9% still pending or in planning
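As a quick sanity check on the breakdown above, the four non-success categories plus the 22.8% success rate should account for essentially every project, allowing for rounding. A minimal sketch, using the percentages as reported here:

```python
# Project-outcome shares from the HyperFRAME survey, in percent
# (figures as cited in this article).
outcomes = {
    "deployed, meeting ROI": 22.8,
    "stalled or still in process": 25.7,
    "deployed, not meeting ROI": 20.7,
    "failed or abandoned": 13.8,
    "pending or in planning": 16.9,
}

total = sum(outcomes.values())
not_delivering = total - outcomes["deployed, meeting ROI"]

print(f"all categories: {total:.1f}%")       # 99.9% -- rounding, not a gap
print(f"not delivering: {not_delivering:.1f}%")  # 77.1%
```

The categories sum to 99.9%, so the survey's buckets are exhaustive, and the "not delivering" share lands at 77.1% — the nearly-eight-in-ten figure.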
HyperFRAME calls this the Execution Gap — the distance between AI ambition and operational reality. I’ve been describing the same pattern for the better part of two years. The AI gold rush rewarded experimentation. The AI factory era rewards execution discipline. The names are different. The problem is the same.
The question that matters isn’t why enterprises are struggling. It’s where specifically the failure is occurring. That’s what the rest of the data tells you — if you have a framework to read it through.
The Data Foundation Is the Ceiling
The model selection conversation dominates vendor marketing and analyst coverage. Which LLM? Open source or proprietary? Build, buy, or customize?
HyperFRAME’s data suggests that’s the wrong conversation for most of these organizations.
Only 14% of enterprises describe their core data architecture as fully modernized for AI workloads. Twenty-three percent are still running a legacy on-premises data warehouse. Half of respondents rate their data platform as less than 75% ready to support AI/ML workloads.
And when asked what the top barrier to broad AI adoption is — not infrastructure, not talent, not governance — Data Quality ranks #1, with 27% of respondents putting it at the top of their list.
This is the core prediction of the 4+1 Layer AI Infrastructure Model: the intelligence layer cannot outrun the data foundation layer. You can select the best foundation model in the world, build a sophisticated orchestration plane, and deploy agents across your enterprise — and the output will be constrained by the quality, accessibility, and governance of the data feeding it.
The 22.8% success rate isn’t a model problem. It’s a foundation problem that’s being diagnosed as a model problem.
The Governance Gap Behind the AI Push
Here’s the tension I keep coming back to when I read this data.
78% of respondents say AI is strategically important to their organization’s overall success.
Only 40% have a dedicated AI governance committee.
79% anticipate agentic AI playing a significant role in their strategy within the next 12 months.
Do you see what’s about to happen?
Enterprises are preparing to deploy autonomous AI systems — systems that take actions, trigger workflows, and make operational decisions without human review — into organizations where decision authority over AI hasn’t been formally placed. Not 30% of them. Not a concerning minority. The majority.
The Shadow AI data makes this more concrete. 37.3% of organizations describe their position as “strictly prohibited — with enforcement issues.” They’ve claimed authority over AI usage. They cannot exercise it. That gap — between claimed authority and exercised authority — is exactly what the Decision Authority Placement Model (DAPM) describes as unplaced decision authority.
The enforcement issues aren’t a policy problem. They’re a structural problem. Someone decided where authority lives on paper. Nobody built the system to make it real. And now they’re adding agents.
The 24-Month Optimism Math Doesn’t Work
The implementation stage data is the sharpest internal contradiction in the entire dataset.
Today, 15.1% of respondents are at mass deployment. Within 24 months, 66.4% project they’ll be there. That’s more than a 4x increase in organizations at full-scale production AI deployment — in two years.
With the same data infrastructure. The same talent gaps (64.7% acknowledge a skills gap as a barrier today). The same MLOps immaturity (fewer than 7% rate their MLOps practices at 10/10 today). The same 60% without a governance structure.
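The projection math itself is easy to check. A one-liner, using the two deployment figures cited above:

```python
# Mass-deployment figures from the survey (percent of respondents).
current_share = 15.1    # at mass deployment today
projected_share = 66.4  # projecting mass deployment within 24 months

multiple = projected_share / current_share
print(f"implied growth: {multiple:.1f}x in 24 months")  # roughly 4.4x
```

A 4.4x jump in two years, with the supporting systems unchanged, is the contradiction the rest of this section unpacks.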
The aspiration is real. The systems required to support it are not yet in place.
I’m not predicting it won’t happen. I’m pointing out that the path from here to there runs directly through every unsolved problem the survey already identified. Data quality. Governance structure. MLOps maturity. Execution process — only 37% have a structured AI evaluation and deployment process today.
If you’re building the infrastructure strategy, the gap between 15% and 66% is not an AI strategy question. It’s an AI factory design question.
The Market Is Already Solving the Problem
Here’s what I find most telling about that 14% data architecture figure: the startup ecosystem saw it coming before the survey confirmed it.
Three companies I work with as sponsored content partners — and I’m disclosing that relationship directly because it’s relevant — are each attacking the data foundation problem from a different angle. None of them are competing with each other. All three exist because the same structural gap the HyperFRAME data describes is real enough to build a company around.
Articul8 (an Intel spinout led by CEO Arun Subramaniyan) starts from the premise that general-purpose models with RAG don’t work for enterprises with mission-critical, domain-specific data. Their answer isn’t a better model — it’s a platform built to make enterprise data usable by AI in the first place, inside the security perimeter, without moving it. They don’t do POCs. They start with production pilots. That discipline is a direct response to the 22.8% problem.
Kamiwaza is attacking what they call data gravity — the reality that enterprise data can’t be moved to where AI runs, so the compute has to go to where the data lives. Their Distributed Data Engine processes information in place across clouds, on-prem systems, and edge infrastructure without requiring centralization. The HyperFRAME data shows 29.5% of enterprises cite integration and governance of siloed data as their #1 modernization driver. Kamiwaza is building for that exact customer.
UnicornIQ, founded by Joe Onisick, is focused on what he calls the “Private Data Paradox” — the pattern where unstructured, contradictory data gets fed into AI systems and produces weak inference. Their automated data hygiene engine is built to create a source of truth before the AI ever touches it. It’s the furthest upstream solution of the three, and arguably the most foundational. You can’t orchestrate what you can’t trust.
Three different entry points. Three different architectural bets. One shared diagnosis: the data foundation problem is the real AI problem, and it’s not getting solved by model selection.
When practitioners with real enterprise deployment scars all converge on the same problem independently, that’s not a coincidence. That’s a market signal. The HyperFRAME survey puts numbers on what the builders already knew.
What This Data Tells Buyers
If you’re an enterprise leader reading this data looking for a signal, here’s what I take from it:
Your model strategy is not your primary constraint. Sixty-eight percent of enterprises are evaluating new foundation models at least quarterly. The market for model selection advice is saturated. The constraint is everything below the model — the data that feeds it, the orchestration that runs it, the governance that controls it.
The deployment process is a differentiator. Only 37% have a structured process for evaluating, testing, and deploying AI technologies. If you’re in that 37%, you have a systematic advantage over the majority of your peers — independent of which models you’re using.
Governance is not a compliance exercise. It is the mechanism by which AI decision authority gets placed in your organization. The organizations that figure out where decisions live — and build the structures to match — will not be the ones dealing with enforcement issues or abandoned pilots two years from now.
On Steven’s Bet
I want to come back to where I started, because it matters.
Releasing primary research without a wall is a bet that the value of the data comes from its reach, not its restriction. It’s a bet that being cited — by practitioners, by AI systems, by the next wave of enterprise buyers doing research — is worth more than the leads generated by a gated PDF.
I think he’s right. And I think the data is strong enough to hold up to the scrutiny that comes with open access.
The 22.8% figure will make it into decks, into briefings, into AI-generated answers to the question “what percent of enterprise AI projects succeed.” That’s the whole point.
What I’ve tried to do here is give it context that pure data can’t provide. The number tells you how bad the execution gap is. The frameworks tell you where it lives and why it persists.
If you want to go deeper on the data, Steven’s full raw dataset is here. No wall. Worth reading.

Keith Townsend is a seasoned technology leader and Founder of The Advisor Bench, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.