When Scale isn’t Infinite
In the past, I’ve argued that multi-cloud as a method for vendor diversity isn’t worth the effort. Have COVID-19-related capacity constraints in Azure changed my mind?
One of the advertised advantages of moving to the public cloud is transformed procurement. The pitch: companies would no longer need to pre-purchase capacity and could better distribute capital. IT managers could take a just-in-time procurement approach to the effectively infinite capacity offered by the hyperscale cloud providers. In the worst case, if your preferred cloud provider ran out of capacity, you’d leverage your multi-cloud capability and spin up additional capacity or workloads elsewhere.
Not happy with @Azure these days.
Microsoft gives Teams away for free, overburdens their data centers, and blocks paying customers from spinning up new servers.
For days we have been trying to spin up servers to go live. Critical support ticket (120040122000742) is ignored! pic.twitter.com/u3FsPZYgCk
— Thomas Jespersen (@thomento) April 3, 2020
Up until the COVID-19 pandemic, this was all theory. There have been anecdotal stories of customers unable to deploy beyond their quota in some Azure regions. Analysts have inferred that the significant increase in Microsoft Teams usage has resulted in capacity constraints for Azure. I haven’t heard of similar constraints for other hyperscale cloud providers.
What have we learned?
What does this mean for public cloud procurement strategy? Should enterprises spread risk by leveraging multiple cloud providers? No. What we learned is that abstracting the supply chain via hyperscale cloud providers doesn’t change the laws of physics.
I believe data gravity makes multi-cloud an impractical solution for managing the supply chain. In the case of this pandemic, having capacity available in another cloud provider wouldn’t solve most challenges. Engineers have to weigh egress data transfer costs and latency between cloud providers against the available cloud resources.
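The egress side of that trade-off is easy to put in rough numbers. A minimal back-of-the-envelope sketch, where the $0.09/GB rate is an illustrative assumption (actual egress pricing varies by provider, region, and volume tier):

```python
# Estimate the one-time cost of moving a dataset out of a cloud provider
# so its compute can run elsewhere. The rate is an assumed, illustrative
# figure, not any specific provider's published price.

def egress_cost_usd(dataset_gb: float, rate_per_gb: float = 0.09) -> float:
    """One-time egress transfer cost for a dataset of the given size."""
    return dataset_gb * rate_per_gb

# Moving a modest 50 TB dataset to another provider's compute:
cost = egress_cost_usd(50 * 1024)  # 50 TB expressed in GB
print(f"${cost:,.2f}")  # $4,608.00 at the assumed rate
```

Even at this scale, the one-time transfer bill alone can rival a month of compute, before latency between providers is even considered.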
There are high-availability designs in which customers could fail over to their private data center or a different cloud provider. Again, cost and latency are limiting factors. Not to mention that, today, differences between cloud control planes severely impact steady-state operations. We’ve consistently preached against making significant changes to your operational processes as part of an emergency response. So, unless your services rely on an abstraction such as Kubernetes or VMware vSphere, I don’t consider failover a great mitigation strategy for cloud provider capacity constraints.
Are you still determined to leverage multiple cloud providers to mitigate availability and capacity risk? Look to abstract your data from your cloud provider’s infrastructure. By placing data on cloud storage platforms that live in cloud-adjacent co-location facilities, many IT operations could take advantage of compute from multiple cloud providers. However, this takes considerable planning and isn’t something I’d recommend implementing in the middle of a pandemic.
What’s old is new
So, what have I seen that works? How are some of Microsoft’s largest Azure customers managing capacity during a constrained period? According to Azure’s website, one of the advantages of reserved instances and reserved capacity is prioritized compute capacity in Azure regions. IT leaders I’ve engaged purchase reserved instances not only for the expected cost savings but also as mitigation against capacity constraints during a disaster or emergency event.
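The cost-savings half of that argument is simple arithmetic. A sketch under assumed numbers, where both the $0.20/hr on-demand rate and the 40% reserved discount are illustrative (real discounts depend on term length, instance family, and region):

```python
# Compare pay-as-you-go vs. reserved pricing for a steady-state VM.
# Both the hourly rate and the discount below are assumptions for
# illustration, not actual Azure prices.

HOURS_PER_YEAR = 8760

def annual_cost(hourly_rate: float, discount: float = 0.0) -> float:
    """Annual cost of running one VM 24x7 at the given rate and discount."""
    return hourly_rate * HOURS_PER_YEAR * (1 - discount)

payg = annual_cost(0.20)            # on-demand, assumed $0.20/hr
reserved = annual_cost(0.20, 0.40)  # same VM on an assumed 40%-off reserved term
print(f"On-demand: ${payg:,.2f}  Reserved: ${reserved:,.2f}  Savings: ${payg - reserved:,.2f}")
```

The point is that the reservation pays for itself on steady-state workloads, and the capacity prioritization during a crunch comes along as a side benefit.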
The infinite scale of hyperscale providers is infinite until it isn’t. The need to pre-purchase capacity highlights how the public cloud service delivery model doesn’t always solve traditional enterprise IT challenges. Customers should also consider separating their cloud storage strategy from their cloud compute strategy. By decoupling your storage from your cloud provider, you begin to build the process and technology muscle memory to pivot compute from one provider to another.
Keith Townsend is a seasoned technology leader and Chief Technology Advisor at Futurum Group, specializing in IT infrastructure, cloud technologies, and AI. With expertise spanning cloud, virtualization, networking, and storage, Keith has been a trusted partner in transforming IT operations across industries, including pharmaceuticals, manufacturing, government, software, and financial services.
Keith’s career highlights include leading global initiatives to consolidate multiple data centers, unify disparate IT operations, and modernize mission-critical platforms for “three-letter” federal agencies. His ability to align complex technology solutions with business objectives has made him a sought-after advisor for organizations navigating digital transformation.
A recognized voice in the industry, Keith combines his deep infrastructure knowledge with AI expertise to help enterprises integrate machine learning and AI-driven solutions into their IT strategies. His leadership has extended to designing scalable architectures that support advanced analytics and automation, empowering businesses to unlock new efficiencies and capabilities.
Whether guiding data center modernization, deploying AI solutions, or advising on cloud strategies, Keith brings a unique blend of technical depth and strategic insight to every project.