Nutanix Enterprise AI Now Extends to Public Cloud
Nutanix Enterprise AI provides an easy-to-use, unified generative AI experience on-premises, at the edge and now in public clouds.
Nutanix, a leader in hybrid multi-cloud computing, has announced that it has extended its artificial intelligence (AI) infrastructure platform with a new cloud-native offering, Nutanix Enterprise AI.
Nutanix Enterprise AI can be deployed on any Kubernetes platform: at the edge, in core data centres, and on public cloud services such as AWS EKS, Azure AKS, and Google GKE. It delivers a consistent hybrid multi-cloud operating model for accelerated AI workloads, enabling organisations to run their models and data in a secure location of their choice while improving return on investment (ROI).
Leveraging NVIDIA NIM for optimised performance of foundation models, Nutanix Enterprise AI helps organisations securely deploy, run, and scale inference endpoints for large language models (LLMs), supporting generative AI (GenAI) applications in minutes, not days or weeks.
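To make the idea of an inference endpoint concrete, the minimal Python sketch below sends a chat request to an OpenAI-compatible API of the kind NIM LLM microservices expose. The endpoint URL, model name, and API key are illustrative placeholders, not values from this announcement.

```python
import requests

# Placeholder values: substitute the endpoint URL, model name, and API key
# issued for your own deployment. These are assumptions for illustration.
ENDPOINT = "https://nim.example.internal/v1/chat/completions"
API_KEY = "YOUR_API_KEY"
MODEL = "meta/llama-3.1-8b-instruct"  # example model identifier

def ask(prompt: str) -> str:
    """Send a single chat completion request to an OpenAI-compatible endpoint."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarise this customer feedback in one sentence: ..."))
```

Any OpenAI-compatible client would work equally well here; the point is that applications talk to a standard HTTP endpoint wherever the model is actually running.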
Generative AI is an inherently hybrid workload: new applications are often built in the public cloud, models are fine-tuned on private data on-premises, and inferencing is deployed closest to the business logic, which could be at the edge, on-premises, or in the public cloud. This distributed hybrid GenAI workflow can present challenges for organisations concerned about complexity, data privacy, security, and cost.
Get a Secure and Optimised Way to Deploy LLMs with Nutanix Enterprise AI
Nutanix Enterprise AI provides a consistent multi-cloud operating model and a simple way to securely deploy, scale, and run LLMs with NVIDIA NIM optimised inference microservices as well as open foundation models from Hugging Face. This enables customers to stand up enterprise GenAI infrastructure with the resiliency, Day 2 operations, and security they require for business-critical applications, whether on-premises or on Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), or Google Kubernetes Engine (GKE).
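As a small illustration of the Hugging Face side of that choice, the sketch below pulls an open foundation model's files with the huggingface_hub library; the repository ID shown is an assumption for the example, not a model named in the announcement.

```python
from huggingface_hub import snapshot_download

# Example only: the repository ID below is an illustrative assumption,
# not a model named in this announcement.
local_dir = snapshot_download(repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
print(f"Model files downloaded to: {local_dir}")
```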
Additionally, Nutanix Enterprise AI delivers a transparent and predictable pricing model based on infrastructure resources, in contrast to hard-to-predict usage- or token-based pricing. This is important for customers looking to maximise ROI from their GenAI investments.
Nutanix Enterprise AI is a component of Nutanix GPT-in-a-Box 2.0, which also includes Nutanix Cloud Infrastructure, Nutanix Kubernetes Platform, and Nutanix Unified Storage, along with services to support customer configuration and sizing needs for on-premises training and inferencing. For customers looking to deploy in the public cloud, Nutanix Enterprise AI can be deployed in any Kubernetes environment while remaining operationally consistent with on-premises deployments.
“With Nutanix Enterprise AI, we’re helping our customers simply and securely run GenAI applications on-premises or in public clouds. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their secure location, with a predictable cost model,” said Thomas Cornely, SVP, Product Management, at Nutanix.
Nutanix and NVIDIA Collaboration
Nutanix Enterprise AI can be deployed with the NVIDIA full-stack AI platform and is validated with the NVIDIA AI Enterprise software platform, including NVIDIA NIM, a set of easy-to-use microservices designed for secure, reliable deployment of high-performance AI model inferencing. Nutanix GPT-in-a-Box is also an NVIDIA-Certified System, ensuring reliable performance.
“Generative AI workloads are inherently hybrid, with training, customisation, and inference occurring across public clouds, on-premises systems, and edge locations,” said Justin Boitano, Vice President of Enterprise AI at NVIDIA. “Integrating NVIDIA NIM into Nutanix Enterprise AI provides a consistent multicloud model with secure APIs, enabling customers to deploy AI across diverse environments with the high performance and security needed for business-critical applications.”
What Customers Stand to Gain
Nutanix Enterprise AI can help customers:
- Address AI skill shortages. Simplicity, choice, and built-in features mean IT administrators can become AI administrators, while data scientists and developers accelerate AI development by quickly adopting the latest models and NVIDIA accelerated computing.
- Remove barriers to building an AI-ready platform. Many organisations looking to adopt GenAI struggle to build the right platform to support AI workloads, including maintaining consistency across their on-premises infrastructure and multiple public clouds. Nutanix Enterprise AI addresses this with a simple UI-driven workflow that can help customers deploy and test LLM inference endpoints in minutes, and offers choice by supporting NVIDIA NIM microservices that run anywhere, ensuring optimised model performance across cloud and on-premises environments.
- Mitigate data privacy and security concerns. Nutanix Enterprise AI helps mitigate privacy and security risks by enabling customers to run models and data on compute resources they control. It also delivers an intuitive dashboard for troubleshooting, observability, and resource utilisation for LLMs, as well as quick and secure role-based access controls (RBAC) to ensure LLM access is controllable and understood (a generic sketch of this pattern appears after this list). Organisations requiring hardened security will also be able to deploy in air-gapped or dark-site environments.
- Bring enterprise infrastructure to GenAI workloads. Customers running Nutanix Cloud Platform for business-critical applications can now bring the same resiliency, Day 2 operations, and security to GenAI workloads for an enterprise infrastructure experience.
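The announcement does not describe how Nutanix Enterprise AI implements its role-based access controls, but because the platform runs on Kubernetes, a generic Kubernetes RBAC sketch gives a sense of the pattern referenced above: a namespaced Role limited to reading Services (such as LLM inference endpoints). All names below are assumptions for illustration and not Nutanix APIs.

```python
from kubernetes import client, config

# Assumed names for illustration only; these are not part of Nutanix Enterprise AI.
NAMESPACE = "genai-inference"
ROLE_NAME = "llm-endpoint-reader"

def create_endpoint_reader_role() -> None:
    """Create a namespaced Role that can only read Services (e.g. LLM inference
    endpoints). A RoleBinding would then attach it to specific users or groups."""
    config.load_kube_config()  # uses the local kubeconfig context
    rbac = client.RbacAuthorizationV1Api()

    role = client.V1Role(
        metadata=client.V1ObjectMeta(name=ROLE_NAME, namespace=NAMESPACE),
        rules=[
            client.V1PolicyRule(
                api_groups=[""],          # core API group
                resources=["services"],   # read-only access to Service objects
                verbs=["get", "list"],
            )
        ],
    )
    rbac.create_namespaced_role(namespace=NAMESPACE, body=role)

if __name__ == "__main__":
    create_endpoint_reader_role()
```

Binding such a Role to specific users or groups is what makes access to inference endpoints controllable and auditable, which is the spirit of the RBAC capability described above.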
Key use cases for customers leveraging Nutanix Enterprise AI include:
- Enhancing customer experience with GenAI through analysis of customer feedback and documents
- Accelerating code and content creation with copilots, intelligent document processing, and models fine-tuned on domain-specific data
- Strengthening security, including leveraging AI models for fraud detection, threat detection, alert enrichment, and automatic policy creation
- Improving analytics by leveraging fine-tuned models on private data
Nutanix Enterprise AI, running on-premises, at the edge, or in the public cloud, and Nutanix GPT-in-a-Box 2.0 are currently available to customers. For more information, please visit Nutanix.com/enterprise-ai.