Pure Storage, in collaboration with NVIDIA, has announced new validated reference architectures for generative AI use cases, aiming to accelerate AI adoption by helping users manage high-performance data and compute requirements.
“Embracing our long-standing collaboration with NVIDIA, the latest validated AI reference architectures and generative AI proofs of concept emerge as pivotal components for global enterprises in unraveling the complexities of the AI puzzle,” said Rob Lee, chief technology officer at Pure Storage.
New validated designs and proofs of concept
Retrieval-Augmented Generation (RAG) Pipeline for AI Inference is designed to improve the accuracy, currency, and relevance of inference capabilities for large language models (LLMs).
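As background, a RAG pipeline retrieves relevant documents at query time and supplies them to the LLM as context, so answers stay grounded in current data rather than only the model's training set. A minimal, generic sketch of that retrieve-then-augment step might look like the following (the toy keyword-overlap retriever, corpus, and prompt format are illustrative assumptions, not Pure Storage's actual pipeline):

```python
# Illustrative RAG step: retrieve the most relevant documents for a
# query, then prepend them to the prompt sent to an LLM.
# Corpus, scoring method, and prompt layout are toy examples only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Q2 revenue grew 12% year over year, driven by subscription services.",
    "The cafeteria menu changes every Monday.",
    "Operating margin for Q2 was 18%, up from 15% in Q1.",
]

# The augmented prompt would then be passed to an LLM for generation.
print(build_prompt("What was Q2 revenue growth?", corpus))
```

In production systems the keyword retriever would typically be replaced by vector-embedding similarity search over an index, which is where high-throughput storage for embeddings and source data becomes relevant.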
Pure Storage has also achieved OVX Server Storage validation, providing flexible storage reference architectures and a strong infrastructure foundation.
Vertical RAG Development accelerates AI adoption across vertical industries by creating a financial services RAG solution to summarise and query massive datasets.
New partnerships forged with Run.AI and Weights & Biases optimise GPU utilisation through advanced orchestration and scheduling and enable ML teams to build, evaluate, and govern the model development lifecycle. Additionally, Pure Storage is working closely with ePlus, Insight, WWT, and others to operationalise joint customer AI deployments.
“NVIDIA’s AI platform reference architectures are enhanced by Pure’s simple, efficient, and reliable data infrastructure, delivering comprehensive solutions for enterprises navigating the complexities of AI, data analytics, and advanced computing,” said Bob Pette, vice president of Enterprise Platforms at NVIDIA.