A year ago, at Nutanix’s 2024 .NEXT user conference, the Nutanix team announced they were working with Dell on bringing disaggregated storage to their platform. This past week, at .NEXT 2025, came the formal announcement of a similar partnership with Pure.
I’m going to switch up my general article order and push the technical elements to the end of this blog – frankly, the technical aspects are surprisingly straightforward. Instead, I’m going to start with some industry notes, opinion, and current caveats, which together should help you understand why this is happening, what the impact is, and how to see through some of the FUD that’s followed the announcements.
Converged & Hyperconverged
First things first: understanding converged, hyperconverged, and where Nutanix comes in.
Without diving into history with pedantry, much of the post-mainframe tech boom was built on infrastructure one might call converged or disaggregated. Companies found that using shared storage, where hard drives sat in large storage arrays and storage was presented over a shared network, brought efficiencies and technical benefits vs. isolating drives in individual servers.
There was, and some would say still is, a level of additional management complexity that comes with piecing multiple pieces of infrastructure together. So folks started to build solutions that brought compute and storage closer together, which is what we called hyperconverged infrastructure (or HCI). At a high level, multiple servers have compute and storage which are grouped together to create a unified resource pool.
This HCI approach is what Nutanix has been best known for over the past decade: pulling storage closer to the hypervisor, with all the various pros/cons that entails. While first joined at the hip with VMware, Nutanix began to build their own hypervisor, AHV. Nutanix has always been all about the benefits of HCI, and many within the organization see it as a core tenet of the solution.
Enter Broadcom
In 2023 Broadcom completed its acquisition of VMware and the whole hypervisor industry shifted dramatically as Broadcom implemented many licensing and sales practices that upset a large portion of their customer base. With this shift companies began to look at other hypervisors, and this presented both an opportunity and an inflection point for Nutanix. In the years leading up to the Broadcom acquisition AHV had steadily increased in competency and adoption, by some metrics standing alongside VMware in enterprise readiness.
Nutanix has now found itself in a position of having a modern, well-received hypervisor, but one still limited to an HCI architecture. While HCI has very vocal fans, the majority of VMware customers are still running converged ecosystems – ecosystems where storage and compute are often on different refresh cycles, or where there’s a heavy imbalance between compute and storage requirements, making a full rip and replace impractical or difficult, on top of the benefits that come with a converged ecosystem.
Myself, many of my colleagues, and many within Nutanix saw an opportunity for Nutanix to open up their architecture to external storage. This would help customers running converged VMware infrastructures by giving them another alternative hypervisor. At the same time, many within Nutanix still see HCI as a core part of their DNA, a badge built from years of hard-fought technical development and competitive battling to differentiate themselves in the market.
It’s Still Early
While Nutanix and Dell announced their partnership a year ago, general availability (GA) of Nutanix on PowerFlex is only a few days old as I write this – a full year after that initial partnership announcement.
The Pure partnership was only formally announced a few days ago as well, but GA of this solution stack is not expected until the end of 2025 at the earliest. Early access is coming this summer, but limited to Cisco FlashStack.
I need to emphasize that these solutions are still in an early stage. There are still many questions that have not been addressed. While these solutions have been running in lab environments for a long time there hasn’t been the kind of wide scale customer adoption needed to flesh out all the little technology gremlins. There are still business and partnership discussions taking place between Nutanix and Dell/Pure which are going to change their go-to-market approach over the next year. A whole bunch of core Nutanix capabilities (acronym soup time) including NUS, NDB, NCM, and NKP, are not yet supported in these infrastructures, nor has Nutanix finalized how they’re going to price converged licensing (NCI-C).
A Note On Core Minimums
And this leads me to a bit of FUD I’ve seen floating around: the minimum licensed core count. Depending on who you ask, you’ll get a different minimum number of Nutanix cores required for a Dell PowerFlex or Pure Storage converged ecosystem. The fact is that everyone is both right and wrong. High limits have been set up, adjusted downward, and disregarded… something that I suspect is going to happen quite a bit as these solutions mature. What you’ve heard was probably true at some point, likely isn’t true anymore, and is almost certainly going to be different at GA.
In my opinion these limits are the result of Nutanix, Dell, and Pure trying to grapple with the change they’re going through, while creating an artificial barrier to entry as they align to a specific market segment and shake things out. As I have many customers dealing with Broadcom uncertainty, and in many cases outright hostility, I would love the ability to talk about disaggregated Nutanix with small and mid-market customers. While other alternatives such as Hyper-V, Proxmox, containers, etc. offer value, there are also limitations which AHV can overcome.
Nutanix CVM
Before I get into some of the more technical elements I want to cover the Nutanix CVM quickly. The Controller Virtual Machine, or CVM, wears a lot of hats in the Nutanix technical stack. One of those responsibilities is aggregating all the physical storage within a Nutanix cluster and presenting it back out to AHV.
In the Dell and Pure converged architectures, the CVM’s responsibilities shift from managing local storage, to presenting remote storage. Some of the tasks the CVM would do otherwise, such as storage efficiencies, also shift back to the attached array. Pretty straightforward, right?
I also want to note that Nutanix’s replication factor no longer applies in these environments. Where Nutanix would build RF2 or RF3 and build data redundancy across nodes, that resiliency responsibility has also been shifted over to the storage array.
One element that hasn’t shifted is replication, which is still a Nutanix hypervisor-to-hypervisor task.
All the CVMs need to be on the same page and have the same responsibilities, meaning that the Nutanix cluster needs to be pulling resources from the same location. Today it’s all either HCI storage, Dell, or Pure on a per-cluster basis.
Nutanix & Dell PowerFlex
PowerFlex is a scale-out storage architecture built around multiple nodes aggregating x86-based compute and storage together and presenting them back out as a unified cluster. It’s been around for a long time, albeit under multiple names, going back to 2011 as ScaleIO, which was consumed by EMC, then brought under Dell when Dell in turn consumed EMC.
From a technical perspective there are a couple architecture components/key terms to understand. I call these out because they show up often in marketing, diagrams, and technical documentation and having a base understanding will make everything click into place.

First is the Metadata Manager, or MDM, which acts as the PowerFlex cluster’s overseer, maintaining the cluster configuration, layout, and resiliency.
Next is the Storage Data Server, or SDS. The SDS is responsible for taking the individual storage drives from the different PowerFlex cluster nodes and presenting them back as a unified storage pool.
Last is the Storage Data Client, or SDC. The SDC is an agent driver that gets installed on the end device to allow it to consume PowerFlex resources (to put it another way, you install the SDC on a Windows server so Windows can mount and use PowerFlex volumes).
When building a Nutanix cluster supported by PowerFlex, the SDC is configured within the Nutanix CVM allowing Nutanix to integrate with, and consume, PowerFlex volumes. Pretty simple right?
There’s a lot more going on from a management and configuration perspective, but I’m going to take the lazy way out and say that’s outside the scope of this blog.
There are a couple things I do want to call out though, mainly as helpful references. There’s a 1-to-1 relationship between a virtual machine disk/snapshot and a PowerFlex volume. Networking is NVMe/TCP at either 25 or 100 Gbps.
In addition there’s currently a minimum of 5 Nutanix compute nodes in a cluster, alongside a minimum of 4 nodes of PowerFlex. The same PowerFlex cluster can be used to present storage to other components in the ecosystem. However, a single PowerFlex cluster only supports data presentation to a single Nutanix cluster.
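To make those minimums concrete, here’s a quick illustrative pre-flight check in Python. The thresholds mirror the launch-time numbers above (at least 5 Nutanix compute nodes, at least 4 PowerFlex nodes, one Nutanix cluster per PowerFlex cluster); the function and its names are my own sketch, not an official Nutanix or Dell tool, and I expect the numbers themselves to shift over time.

```python
# Illustrative only: validates a proposed layout against the launch-time
# minimums described above. Not an official Nutanix/Dell utility.

MIN_NUTANIX_NODES = 5     # minimum Nutanix compute nodes per cluster
MIN_POWERFLEX_NODES = 4   # minimum PowerFlex storage nodes

def powerflex_preflight(nutanix_nodes: int, powerflex_nodes: int,
                        nutanix_clusters_on_array: int) -> list[str]:
    """Return a list of violations; an empty list means the layout passes."""
    issues = []
    if nutanix_nodes < MIN_NUTANIX_NODES:
        issues.append(f"Need >= {MIN_NUTANIX_NODES} Nutanix compute nodes, got {nutanix_nodes}")
    if powerflex_nodes < MIN_POWERFLEX_NODES:
        issues.append(f"Need >= {MIN_POWERFLEX_NODES} PowerFlex nodes, got {powerflex_nodes}")
    if nutanix_clusters_on_array > 1:
        issues.append("A PowerFlex cluster can present storage to only one Nutanix cluster")
    return issues
```

Note the third check: the PowerFlex cluster can keep serving other clients, but only one Nutanix cluster can consume it.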
Nutanix & Pure Storage

Presumably you’ve read through the Dell section just above, so I can cheap out and say Nutanix and Pure is basically the same thing. In this environment the Nutanix CVM is consuming volumes from Pure via NVMe over TCP. Right now the requirements call for a minimum of 25 GbE, and a minimum of 5 Nutanix compute nodes. In addition only the FlashArray //X and //XL are supported.
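The Pure launch constraints can be sketched the same way. Again, this is just an illustration of the requirements quoted above (FlashArray //X or //XL only, at least 25 GbE, at least 5 Nutanix compute nodes) – the function is hypothetical and, as I argue below, the model restriction in particular will likely loosen.

```python
# Illustrative pre-flight check for the launch-time Pure requirements.
# The constraints are this blog's numbers, not a Pure or Nutanix tool.

SUPPORTED_MODELS = {"//X", "//XL"}   # //C and //E are not supported at launch

def pure_preflight(model: str, nic_gbe: int, nutanix_nodes: int) -> list[str]:
    """Return a list of violations; an empty list means the config passes."""
    issues = []
    if model not in SUPPORTED_MODELS:
        issues.append(f"FlashArray {model} is not supported at launch")
    if nic_gbe < 25:
        issues.append(f"Need >= 25 GbE networking, got {nic_gbe} GbE")
    if nutanix_nodes < 5:
        issues.append(f"Need >= 5 Nutanix compute nodes, got {nutanix_nodes}")
    return issues
```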
In my opinion these limitations are to direct initial customer adoption and validation and will likely expand with development. There’s no technical limitations I’m aware of which would preclude //C or even //E support as they’re all running the same software and use the same protocols and API integration.
As with the PowerFlex solution, there’s also a 1-to-1 mapping between volume/snapshot and Pure volume.
One of the nice things is that there’s no need to go out and buy a new Pure array to support this environment. As long as you can update Purity FA and get Ethernet ports into the array, you’re good to go.
What’s really compelling about this solution for me is the high level of storage density you can get from Pure. From what I’ve experienced Pure gets best-in-class storage efficiencies, and when you combine that with 36.6 TB DFMs (or 75 TB DFMs once the //C gets support) you can pack a lot of storage in a handful of RU.
Nutanix & Pure Storage & Cisco FlashStack
While general support for Nutanix and Pure is coming for other hardware vendors (HPE, Dell, etc.) at GA, Cisco is coming to play early with their FlashStack solution. “FlashStack” is a name for a validated hardware infrastructure that’s built around Cisco UCS and Pure Storage, so in many ways this isn’t anything new, just the inclusion of Nutanix as a validated hypervisor.
As I write this there’s broad support for M6 and M7 C-Series servers, UCS X-Series, and even older B200 M5 and M6 blades. There’s also a minimum requirement of 5 compute nodes for the initial release.
Additional Resources
- Nutanix’s Pure Storage press release
- Pure’s blog on the Nutanix integration
- Pure’s solution brief on the Nutanix integration
- Nutanix’s 2024 Dell press release
- Dell’s 2025 Nutanix press release
- Dell PowerFlex with Nutanix Cloud Platform Technical Overview
- Nutanix Cloud Platform with Dell PowerFlex Deployment Guide
Publication History
- May 9, 2025 – Initial Publication
- May 11, 2025 – Fixed a typo
- Aug 21, 2025 – Fixed a typo, again