Will the DPU kill the Storage Array?
Will the rise of the DPU mean the fall of the traditional storage array? The Data Processing Unit (DPU) has surged in visibility thanks to successful systems such as AWS Nitro, and VMware recently announced Project Monterey. The industry is slowly waking up to the power of the SmartNIC for typical data center services. Soon, HCI solutions such as VMware VSAN will offload storage functions to the SmartNIC, reducing CPU demand. But what happens when you use the DPU to bypass the concept of HCI altogether? That's the promise from startup Nebulon.
44 Zettabytes of Data
Let's take a moment to focus on the data part of the data processing unit. According to a 2012 IDC prediction, by 2020 there would be 44ZB (44 million petabytes) of data in the world. That number continues to grow. As enterprises continue to ingest and process this data, it's clear the legacy model of centrally storing the data can't keep pace with data creation.
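For a sense of scale, here's a quick back-of-the-envelope conversion. This is a minimal sketch assuming decimal (SI) units, where 1 ZB equals one million PB:

```python
# Quick scale check, assuming decimal (SI) units: 1 ZB = 1,000,000 PB.
ZB_IN_PB = 1_000_000

total_zb = 44                      # IDC's projected 2020 global datasphere
total_pb = total_zb * ZB_IN_PB

print(f"{total_zb} ZB = {total_pb:,} PB")   # 44 ZB = 44,000,000 PB
```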
DPUs enable a distributed model for collecting and analyzing this ever-growing data. To call it traditional seems a bit early. Still, the conventional method is the approach VMware is taking with Monterey: VMware leverages the SmartNIC to change the fundamental VSAN architecture. SmartNIC-enabled VSAN looks very much like any other HCI solution.
Disaggregation of storage
During Storage Field Day 20, Nebulon gave the delegates another look at distributed storage services. Nebulon proposes replacing the storage controller in each standard server in your data center. The Nebulon storage controller then communicates with a centralized control plane to deliver essential storage services.
Data replication is one example of these essential services. Operating systems access data via standard local storage controller drivers. Nebulon allows multiple controllers on separate servers to replicate data over the network, providing redundancy. In the case of a drive failure, the solution should allow failover to the replicated data set.
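To make the replication and failover idea concrete, here's a minimal sketch of network-mirrored local storage. This is my own simplified illustration with hypothetical names and structure, not Nebulon's actual implementation:

```python
# Conceptual sketch of network-mirrored local storage -- my own illustration,
# not Nebulon's implementation. All names here are hypothetical.

class Drive:
    """A trivial stand-in for a locally attached drive."""
    def __init__(self):
        self.blocks = {}
        self.failed = False

    def write(self, block_id, data):
        if self.failed:
            raise IOError("drive failure")
        self.blocks[block_id] = data

    def read(self, block_id):
        if self.failed:
            raise IOError("drive failure")
        return self.blocks[block_id]


class MirroredController:
    """Writes go to the local drive and to a peer controller on another server."""
    def __init__(self, drive, peer=None):
        self.drive = drive
        self.peer = peer   # replica controller reached over the network

    def write(self, block_id, data):
        self.drive.write(block_id, data)
        if self.peer:
            self.peer.write(block_id, data)   # replicate for redundancy

    def read(self, block_id):
        try:
            return self.drive.read(block_id)
        except IOError:
            # Local drive failed: fail over to the replicated copy.
            return self.peer.read(block_id)


# Two servers, each with a local drive; server A mirrors to server B.
b = MirroredController(Drive())
a = MirroredController(Drive(), peer=b)
a.write("block-1", b"payload")
a.drive.failed = True                    # simulate a drive failure on server A
assert a.read("block-1") == b"payload"   # served from the replica on server B
```

The point of the sketch is that the operating system only ever talks to its local controller; the mirroring and failover happen behind the standard driver interface.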
The architecture isn't a new concept. Software-based solutions exist on the market today, but software-only solutions come with a heavy impact on system performance and limited protection. Nebulon wraps this in a nice hardware-based package.
Packaging
Of note, Nebulon plans to sell these solutions as part of the server OEM's standard configuration SKU. So, OEMs such as Lenovo install the controllers as part of the server build. My fellow delegate Jason Benedicic called it Shadow Storage. The target customer isn't the enterprise storage group; it's the server team.
It makes for a curious set of questions around support. The representatives made clear that the server OEMs provide support.
Enterprise server hardware customers have become familiar with this model through chassis-based server platforms. Server chassis include complicated network stacks, yet customers maintain the same single point of contact for support for these network modules as for any other server hardware incident.
Keith’s take
Should you abandon your VMAXes, FlashBlades, and Nutanix clusters? No. Nebulon is a unique take on the low end of the market. I'd recommend experimenting with backup or low-priority workloads such as analytics. If I were in the market for some 2U servers, I'd buy a couple of these as a way to learn more about a new distributed storage model. The relative cost of a storage controller isn't material for most enterprise server customers.