AWS and NVIDIA Extend Collaboration
The plan is to advance generative artificial intelligence innovation.
March 26, 2024
Amazon Web Services (AWS), an Amazon.com company, and NVIDIA announced that the new NVIDIA Blackwell GPU platform—unveiled by NVIDIA at GTC 2024—is coming to AWS. AWS will offer the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, extending the companies’ strategic collaboration to deliver the most secure and advanced infrastructure, software and services to help customers work with new generative artificial intelligence (AI) capabilities.
NVIDIA and AWS continue to integrate their technologies, including NVIDIA’s multi-node systems featuring the next-generation NVIDIA Blackwell platform and AI software; the advanced security of the AWS Nitro System and AWS Key Management Service (AWS KMS); Elastic Fabric Adapter (EFA) petabit-scale networking; and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster hyperscale clustering. Together, these technologies enable customers to build and run multi-trillion-parameter large language models (LLMs) faster, at massive scale, and at a lower cost than previous-generation NVIDIA GPUs on Amazon EC2.
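As a rough illustration of how a customer pulls these pieces together, the Python (boto3) sketch below launches a GPU-accelerated EC2 instance with an Elastic Fabric Adapter network interface attached. The AMI, subnet, security group, and instance type are placeholders chosen for illustration; instance type names for Blackwell-based hardware were not announced in this release.

import boto3

# Minimal sketch: launch one GPU instance with an EFA interface attached.
# All resource IDs and the instance type below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder: a Deep Learning AMI
    InstanceType="p5.48xlarge",           # placeholder GPU instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",       # request EFA for low-latency, high-throughput networking
            "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet
            "Groups": ["sg-0123456789abcdef0"],       # placeholder security group
        }
    ],
)
print("Launched:", response["Instances"][0]["InstanceId"])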
“The deep collaboration between our two organizations goes back more than 13 years, when together we launched the world’s first GPU cloud instance on AWS, and today we offer the widest range of NVIDIA GPU solutions for customers,” says Adam Selipsky, CEO at AWS. “NVIDIA’s next-generation Grace Blackwell processor marks a significant step forward in generative AI and GPU computing. When combined with AWS’s powerful Elastic Fabric Adapter Networking, Amazon EC2 UltraClusters’ hyper-scale clustering, and our unique Nitro system’s advanced virtualization and security capabilities, we make it possible for customers to build and run multi-trillion parameter large language models faster, at massive scale, and more securely than anywhere else.”
“AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries,” says Jensen Huang, founder and CEO of NVIDIA. “Our collaboration with AWS is accelerating new generative AI capabilities and providing customers with unprecedented computing power to push the boundaries of what's possible.”
Latest innovations from AWS and NVIDIA accelerate training of large language models that can reach beyond 1 trillion parameters
AWS will offer the NVIDIA Blackwell platform, featuring GB200 NVL72, with 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVIDIA NVLink. When connected with Amazon’s networking (EFA), and supported by advanced virtualization (AWS Nitro System) and hyperscale clustering (Amazon EC2 UltraClusters), customers can scale to thousands of GB200 Superchips. NVIDIA Blackwell on AWS delivers a leap forward in speeding up inference workloads for resource-intensive, multitrillion-parameter language models.
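For readers curious how tightly coupled GPU capacity is typically grouped on EC2, the boto3 sketch below (again with placeholder IDs and instance type) creates a cluster placement group and launches a small set of instances into it so nodes land on closely located hardware. This illustrates the general EC2 clustering pattern rather than the exact provisioning flow for GB200-based UltraClusters.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Cluster placement groups pack instances onto closely located hardware
# for low-latency, high-bandwidth node-to-node communication.
ec2.create_placement_group(GroupName="llm-training-pg", Strategy="cluster")

# Launch a small group of GPU instances into the placement group.
# AMI and instance type are placeholders for illustration only.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="p5.48xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "llm-training-pg"},
)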
AWS plans to offer EC2 instances featuring the new B100 GPUs deployed in EC2 UltraClusters to accelerate generative AI training and inference at massive scale. GB200s will also be available on NVIDIA DGX Cloud, an AI platform co-engineered on AWS that gives enterprise developers dedicated access to the infrastructure and software needed to build and deploy advanced generative AI models. The Blackwell-powered DGX Cloud instances on AWS will accelerate development of generative AI and LLMs that can reach beyond 1 trillion parameters.
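Because instance type names for the B100-based offerings were not disclosed here, one practical way to see which GPU-accelerated types a region currently offers is to query the EC2 API, as in the hedged boto3 sketch below.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# List every instance type in the region that reports attached GPUs,
# along with the GPU vendor, model, and count per instance.
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        gpu_info = itype.get("GpuInfo")
        if gpu_info:
            gpu = gpu_info["Gpus"][0]
            print(itype["InstanceType"], gpu["Manufacturer"], gpu["Name"], gpu["Count"])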
For more details on their collaboration, click here.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].