Hewlett Packard Enterprise announces its new high-performance computing (HPC) and artificial intelligence (AI) infrastructure portfolio.
It includes HPE Cray Supercomputing EX solutions and two systems optimised for large language model (LLM) training, natural language processing (NLP) and multi-modal model training.
“Service providers and nations investing in sovereign AI initiatives are increasingly turning to high-performance computing as the critical backbone enabling large-scale AI training that accelerates discovery and innovation,” said Trish Damkroger, senior vice president and general manager of HPC AI Infrastructure Solutions at HPE.
“Our customers turn to us to fast-track their AI system deployment to realize value faster and more efficiently by leveraging our world-leading HPC solutions and decades of experience in delivering, deploying, and servicing fully integrated systems,” Damkroger added.
HPE Cray Supercomputing EX
The new offerings cover HPE’s entire HPC portfolio and provide a choice of air cooling or what HPE describes as the industry’s first 100% fanless direct liquid cooling system architecture. They span every layer of HPE’s supercomputing solutions, including compute nodes, networking, and storage.
New HPE ProLiant Compute XD server family
HPE is also expanding its portfolio of servers optimised for high-end AI training and tuning workloads.
HPE’s new category of servers, the HPE ProLiant Compute XD family, aims to help customers streamline the deployment of large, high-performance AI clusters.
Customers can also take advantage of optional HPE Services covering building, customisation, integration, validation, and full solution testing within HPE’s manufacturing facility to expedite on-site deployment.