NVIDIA and Hugging Face Connect on Generative AI Supercomputing Integration
As part of the collaboration, Hugging Face will offer a new service—Training Cluster as a Service—to simplify the creation of custom generative AI models for the enterprise.
August 8, 2023
During SIGGRAPH, NVIDIA and Hugging Face announced a partnership that will make generative artificial intelligence (AI) supercomputing accessible to developers building large language models (LLMs) and other advanced AI applications.
The integration gives developers access to NVIDIA DGX Cloud AI supercomputing within the Hugging Face platform to train and tune advanced AI models. The companies report that it will help advance industry adoption of generative AI by enabling LLMs custom-tailored with business data for industry-specific applications, including intelligent chatbots, search and summarization.
“Researchers and developers are at the heart of generative AI that is transforming every industry,” says Jensen Huang, founder and CEO of NVIDIA. “Hugging Face and NVIDIA are connecting the world’s largest AI community with NVIDIA’s AI computing platform in the world’s leading clouds. Together, NVIDIA AI computing is just a click away for the Hugging Face community.”
As part of the collaboration, Hugging Face will offer Training Cluster as a Service, a new offering that simplifies the creation of custom generative AI models for the enterprise. Powered by NVIDIA DGX Cloud, the service will be available soon.
“People around the world are making new connections and discoveries with generative AI tools, and we’re still only in the early days of this technology shift,” says Clément Delangue, co-founder and CEO of Hugging Face. “Our collaboration will bring NVIDIA’s most advanced AI supercomputing to Hugging Face to enable companies to take their AI destiny into their own hands with open source to help the open-source community easily access the software and speed they need to contribute to what’s coming next.”
LLM Customization and Training
The Hugging Face platform lets developers build, train and deploy AI models using open-source resources. Over 15,000 organizations use Hugging Face, and its community has shared over 250,000 models and 50,000 datasets.
The DGX Cloud integration with Hugging Face will bring one-click access to NVIDIA’s multi-node AI supercomputing platform. With DGX Cloud, Hugging Face users will be able to connect to NVIDIA AI supercomputing, providing the software and infrastructure needed to train and tune foundation models with data to drive a new wave of enterprise LLM development. With Training Cluster as a Service, powered by DGX Cloud, companies will be able to use their own data to create efficient models in record time.
DGX Cloud Speeds Development
Each instance of DGX Cloud features eight NVIDIA H100 or A100 80GB Tensor Core GPUs for a total of 640GB of GPU memory per node. NVIDIA Networking provides a high-performance, low-latency fabric that ensures workloads can scale across clusters of interconnected systems to meet the performance requirements of advanced AI workloads.
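The per-node figure follows directly from the GPU count and per-GPU memory. As a minimal Python sketch of that arithmetic (the node count in the cluster example is hypothetical, not from the announcement):

```python
# Per-node GPU memory for a DGX Cloud instance, per the figures above:
# eight H100 or A100 80GB Tensor Core GPUs per node.
GPUS_PER_NODE = 8
GB_PER_GPU = 80

def node_memory_gb(gpus: int = GPUS_PER_NODE, gb_per_gpu: int = GB_PER_GPU) -> int:
    """Total GPU memory in a single node, in GB."""
    return gpus * gb_per_gpu

def cluster_memory_gb(nodes: int) -> int:
    """Aggregate GPU memory across a multi-node cluster (node count is illustrative)."""
    return nodes * node_memory_gb()

print(node_memory_gb())      # 640, matching the 640GB per node cited above
print(cluster_memory_gb(4))  # 2560, for a hypothetical four-node cluster
```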
Support from NVIDIA experts is included with DGX Cloud to help customers optimize their models and resolve development challenges. DGX Cloud infrastructure is hosted by leading NVIDIA cloud service provider partners.
About Hugging Face
Hugging Face is the collaboration platform for the machine learning community. The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source machine learning.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].