HPE has partnered with Nvidia to launch a host of AI-focused offerings, including new servers that support the chipmaker’s Blackwell and Blackwell Ultra architecture.
At Nvidia’s GTC conference in San Jose, California, HPE also announced a new modular data center offering, an expansion of its Private Cloud AI to support the new Nvidia AI Data Platform, and enhancements to its data management and storage portfolio.
"AI is delivering significant opportunity for enterprises, and requires a portfolio of streamlined and integrated solutions to support widespread adoption," said Antonio Neri, president and CEO, at HPE. “To fully harness the power of AI, HPE, and Nvidia bring to market a comprehensive portfolio of AI solutions that accelerate the time to value for enterprises to enhance productivity and generate new revenue streams."
HPE says the new AI servers have been designed to support the “full range of AI model training, fine-tuning, and inferencing” with Nvidia’s Blackwell and Blackwell Ultra platforms. Built on the Blackwell architecture introduced last year, Blackwell Ultra combines two reticle-sized GPUs, has 15 petaflops of FP4 performance, and 288GB HBM3e.
The servers due for release by HPE include the Nvidia GB300 NVL72, which connects 72 Blackwell Ultra GPUs and 36 Arm Neoverse-based Nvidia Grace CPUs in a single liquid-cooled rack; the HPE ProLiant Compute XD servers, which will support Nvidia’s HGX B300 platform; the HPE ProLiant Compute DL384b Gen12 with the Nvidia GB200 Grace Blackwell NVL4 Superchip; and the HPE ProLiant Compute DL380a Gen12, a PCIe-based data center solution containing Nvidia’s new RTX PRO 6000 Blackwell Server Edition GPU.
The GB300 NVL72 will be available in the second half of 2025, with the ProLiant Compute DL380a Gen12 with Nvidia RTX to follow in Q3, and the GB200 NVL4-supported offering in Q4.
Meanwhile, HPE also unveiled AI Mod POD, an offering described by the company as a “fully integrated solution in a container-based AI data center,” optimized for AI and HPC workloads.
The company claims the solution will enable 3x faster deployment, with each module supporting up to 1.5MW whilst providing a PUE of under 1.1. AI Mod POD supports HPE AI and HPC servers in addition to the company’s Private Cloud AI, and offers HPE’s Adaptive Cascade Cooling technology, a single system that supports both air and hybrid liquid cooling to address the energy demands of data-intensive workloads.
"The AI mod pod is redefining what's possible with AI infrastructure, delivering the speed, scalability, and sustainability in one solution as AI scales," said Trish Damkroger, SVP and CPO, HPC & AI business, HPE.
AI Mod POD is available now.
HPE enhances AI-optimized storage offerings, expands its Private Cloud AI
The company also announced advancements to its AI-optimized storage portfolio.
Updates to Alletra Storage MP X10000 will allow organizations to create AI-ready object data with new automated, inline metadata tagging, accelerating ingestion by downstream AI applications. By collaborating with Nvidia, HPE said it expects to further accelerate the performance of X10000 by providing a direct data path for remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000.
HPE is also expanding its Alletra Storage MP B10000 to simplify data management and help customers address more diverse workloads via unified file access. The updated B10000 offering will additionally have enhanced ransomware protection and allow for the easy movement of data between on-prem data centers and public clouds.
The updates to the B10000 and the X10000 will be orderable in May 2025.
Finally, HPE is expanding its Private Cloud AI to support Nvidia’s new AI Data Platform and “transform data into actionable intelligence through continuous data processing that leverages Nvidia’s accelerated computing, networking, AI software, and enterprise storage.”
Other updates to the Private Cloud AI offering include a new developer system powered by Nvidia, with end-to-end AI software and 32TB of integrated storage; edge-to-cloud access provided by HPE Data Fabric; and support for a host of agentic and physical AI applications.
HPE Private Cloud AI developer system is expected to be generally available in the second quarter of 2025, while HPE Data Fabric within HPE Private Cloud AI is expected to be generally available in the third quarter of 2025.