Why Your AI Talent Doesn’t Want to Shovel: Blackwell, H100s, and the Cost of Enterprise Inertia
Let’s talk about shovels and backhoes.
Recently, I compared NVIDIA’s H100 GPUs to shovels—indispensable tools, but not exactly built for scale when you’re digging serious holes. My analogy sparked a deeper conversation. A friend offered a twist: what if we think of the new NVL72 Blackwell-based systems not just as a bigger, better shovel—but as the backhoe?
Suddenly, the conversation shifts.
Everyone Needs a Shovel, But Not Everyone Should Be Shoveling
H100s have been the go-to for enterprise AI workloads. And make no mistake: there's still plenty of work that doesn't require more than a "shovel's worth" of compute. Fine-tuning small models, running inference on legacy workloads, standing up initial PoCs: for all of these, H100s are perfectly serviceable.
But here’s the kicker: even when the job can be done with a shovel, increasingly no one wants to do it that way.
Ever watch a contractor rent a backhoe to dig a hole that you and I could knock out in a day with a shovel? That’s not laziness—it’s economics. Labor is expensive. Time is limited. And talent? It’s nearly impossible to find people who are both willing and able to “shovel” in a world where the high-value talent wants to operate machinery.
The same is happening in enterprise AI.
The Talent Problem No One Talks About
Everyone’s chasing AI engineers, MLOps professionals, and data scientists. You’re paying top dollar to bring them in. And the first thing they ask? “Where’s the GPU cluster?”
They don’t want to SSH into half a rack of underutilized H100s buried in a colocation facility. They want a platform. They want scale. They want to run RLHF fine-tuning jobs without negotiating with IT for compute access. They want something that just works—like an NVL72 pod, with software-defined networking, optimized power and cooling, and integrated developer tooling.
In other words, they want a backhoe. And increasingly, so do the workloads.
The Inertia of the Enterprise
Here’s where things get real.
Most enterprises just signed off on their H100 purchases. CapEx committees reviewed them. Data center teams are still figuring out power and cooling. You think they’re going to turn around six months later and approve a shift to Blackwell?
Not likely.
This is where enterprise inertia kicks in. It’s not that they don’t see the value in Blackwell—it’s that they haven’t even digested the last meal. The shovel hasn’t been paid off yet. Meanwhile, the AI team is eyeing the construction site next door, where the competition just pulled up with three backhoes and a dump truck.
Being Right Doesn’t Mean Being Ready
This reminds me of what I wrote after the Southwest Airlines meltdown. The CTO probably saw the risk a mile away. But if leadership says no, you don’t stop preparing. You start prototyping, modeling, building relationships, and laying the groundwork for when the shift becomes inevitable.
The same applies here. Smart CTOs aren’t waiting for the green light to move to Blackwell. They’re prototyping new workloads. They’re engaging with NVIDIA, Dell, HPE, and cloud providers. They’re learning the operational model of NVL72 systems. They’re ready to move the moment leadership realizes that it’s not about shovels anymore; it’s about getting the job done faster, with less of your scarcest resource: skilled people’s time.
It’s Bigger Than GPUs
Look, the Blackwell announcement isn’t just a hardware refresh. It’s a signal, the same way DB2 on RDS wasn’t just “DB2 in the cloud” but a shift in how we manage enterprise workloads. A backhoe isn’t just a faster shovel. It’s a different job site, with different risks, economics, and talent requirements.
That’s what Blackwell is. It’s a statement about where AI infrastructure is going. And it’s a challenge to enterprise IT leaders:
Are you going to keep asking your people to shovel? Or are you going to give them the machinery they need to move at the speed of business?
Keith Townsend is a seasoned technology leader and Chief Technology Advisor at Futurum Group, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.