AI in the Data Center: When Faster Isn’t Better
AI is changing the data center—but not always in the ways enterprises expect.
In this episode, Keith Townsend is joined by Intel’s Lynn Comp for Part Two of their conversation, shifting the focus squarely to AI infrastructure realities. They explore why many AI workloads never justify GPUs, how CPU-based deployments often meet or exceed real-world SLAs, and why chasing peak performance can actually increase cost, risk, and operational complexity.
The discussion digs into architectural portability, governance, data sovereignty, and the growing tension between AI acceleration and enterprise control. They also examine emerging technologies like CXL, the memory-bound nature of many AI workloads, and how organizations are measuring AI success beyond the hype—focusing instead on efficiency, time saved, and business outcomes.
This episode is a grounded look at what it really takes to run AI in the enterprise data center today—and why “faster” isn’t always “better.”
