You Know What’s Dumb? Using AI When It’s Not Needed.
You Know What’s Dumb?
There’s a well-documented process for using OCR to scan a set of documents and convert them into an editable Word document. That’s not the dumb part. The dumb part is reverting to a manual process without any clear business driver. You know what’s even dumber? Telling AI to do it instead of a human. This is the mismatch driving AI projects today: reaching for a model where a proven, deterministic process already exists.
At Nutanix Next, I sat in a session and listened to a very frustrated CIO relay this story about his internal users. The AI evolution has caused an immense amount of confusion among enterprise AI stakeholders. These business process owners now see what looks like a natural user interface to computers and assume AI has both intelligence and consistency. Neither is true, and this is one of the main reasons AI projects fail.
Nothing New Under the Sun
This isn’t a new phenomenon in computing. We’ve misused new abstractions for decades.
I was chatting with Kelsey Hightower at dinner (name drop). He shared how microservices drove a whole generation of overly complex architectures built on the same misplaced trust in a new abstraction. Cloud bills exploded when teams discovered that 70% of their application cost was the communication between services, not the business logic inside them. Applications became more expensive and less reliable as complexity increased. The lesson wasn’t to abandon microservices; it was that boundary placement matters: put the boundaries in the wrong place and the overhead consumes the benefit.
LLMs are following the same path.
Preparing vs. Analyzing Tax Returns
Daniel Vassallo went viral for using OpenClaw to prepare his own tax returns. OpenClaw is one of the tools getting significant attention right now precisely because it makes agentic workflows accessible enough that an individual can stand one up for a personal use case. If you look at the system he built, the architecture was actually smart for his workflow. He consolidated his data locally, defined the logic for the agentic loop, and used a public model for the reasoning. For a one-time personal use case, that’s a reasonable design.
Now invert the workflow. If the IRS wanted to analyze millions of tax returns, calling a public cloud LLM for every reasoning decision would break the system. At that scale you’re looking at inference costs that run into the millions per processing cycle, latency that makes batch windows unworkable, and a hard dependency on external availability for a mission-critical government function. Calling an LLM to process every return doesn’t just create a cost problem — it creates an architectural one. The correct design makes reasoning the exception, not the rule. You invoke a reasoning model when the system encounters a decision that can’t be resolved by a known pattern or filter. Everything else runs deterministically.
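
To make the shape of that design concrete, here’s a minimal sketch in Python. Everything in it is hypothetical: the field names, the rule thresholds, and the reasoning_review stand-in for an actual inference call. What matters is the control flow: deterministic rules own the routine cases, and the reasoning model is only invoked when no rule fires.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TaxReturn:
    filer_id: str
    reported_income: float
    withholding: float
    deductions: float
    has_schedule_c: bool  # self-employment income, a common complexity flag

def deterministic_review(ret: TaxReturn) -> Optional[str]:
    """Resolve routine cases with fixed rules; return None when the
    return needs escalation to the reasoning path."""
    # Obviously inconsistent numbers get flagged without any model call.
    if ret.withholding > ret.reported_income:
        return "flag-for-audit"
    # Standard W-2 return with deductions under a simple threshold.
    if not ret.has_schedule_c and ret.deductions <= 0.3 * ret.reported_income:
        return "accept"
    return None  # no rule fired: this is the exception path

def reasoning_review(ret: TaxReturn) -> str:
    """Stand-in for the expensive path: an LLM call, invoked only when
    deterministic logic could not decide."""
    # A real system would call an inference endpoint here; the sketch
    # just returns a conservative default.
    return "route-to-human"

def process(ret: TaxReturn) -> str:
    decision = deterministic_review(ret)
    if decision is not None:
        return decision           # a rule owned the decision: zero model cost
    return reasoning_review(ret)  # exception: reasoning gets authority

if __name__ == "__main__":
    routine = TaxReturn("A1", 90_000, 12_000, 14_000, has_schedule_c=False)
    unusual = TaxReturn("B2", 90_000, 12_000, 60_000, has_schedule_c=True)
    print(process(routine))  # accept (deterministic, no model call)
    print(process(unusual))  # route-to-human (exception path)
```

The design choice worth noticing is the None return: the deterministic layer explicitly signals “I can’t decide” rather than guessing, which keeps the expensive path to a measurable fraction of total volume.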
This is the difference between a demo and a production system, and it’s why solid systems architecture isn’t going anywhere regardless of how capable models get.
The Deeper Point
This is DAPM — Decision Authority Placement Model — in practice. Every system makes decisions. The design question is where those decisions get made and by what mechanism. In the IRS example, routing every return through a reasoning model means the model holds decision authority for the entire process. That’s the wrong placement. Deterministic logic should own the routine cases. Reasoning gets invoked for the exceptions. Getting that placement wrong at the system design level compounds at scale in ways that are expensive to unwind.
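
One way to treat that placement as a design artifact rather than an accident is to write it down explicitly. The sketch below, again with hypothetical decision names, maps each decision in the pipeline to the mechanism that owns it and records who approved the assignment, which is what makes the placement auditable.

```python
from enum import Enum

class Authority(Enum):
    RULES = "deterministic rules"
    MODEL = "reasoning model"
    HUMAN = "human reviewer"

# Hypothetical placement table: every decision the pipeline makes is
# explicitly assigned an owning mechanism, and the assignment records
# who approved it. The table, not the code path, is the design artifact.
DECISION_PLACEMENT = {
    "math-consistency-check":  {"owner": Authority.RULES, "approved_by": "compliance"},
    "deduction-plausibility":  {"owner": Authority.RULES, "approved_by": "compliance"},
    "ambiguous-filing-status": {"owner": Authority.MODEL, "approved_by": "chief counsel"},
    "final-audit-selection":   {"owner": Authority.HUMAN, "approved_by": "chief counsel"},
}

def authority_for(decision: str) -> Authority:
    # An unplaced decision is a governance gap; default to the most
    # conservative owner rather than silently granting it to a model.
    entry = DECISION_PLACEMENT.get(decision)
    return entry["owner"] if entry else Authority.HUMAN
```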
Who gets to define that placement matters as much as the placement itself. That’s a governance question as much as an architectural one, and most AI projects aren’t treating it that way yet.

Keith Townsend is a seasoned technology leader and Founder of The Advisor Bench, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.