GTC 2024: NVIDIA Unveils New Blackwell Processor, Highlights Omniverse APIs and Microservices
GTC ushers in new AI era with HPC chip and on-demand services.
April 11, 2024
In March, the 17,000-seat SAP Center in San Jose, California, was filled to the brim, not for a hockey game or a concert, but for the NVIDIA GPU Technology Conference (GTC). Over the last decade, as NVIDIA expanded its focus beyond graphics processing to general-purpose GPU computing and then to AI, GTC itself has evolved into the premier AI conference.
This year, NVIDIA CEO Jensen Huang stepped up to the podium with another big reveal: the new NVIDIA Blackwell chip for AI workloads.
The Blackwell Era
It's misleading to call Blackwell a “chip,” because it's meant to function as the heart of a supercomputing system—“a platform,” in Huang's words. “People think we make GPUs, and we do, but GPUs don't look the way they used to,” he said. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”
The new Blackwell GPU comprises “208 billion transistors,” Huang proudly revealed. “Two dies are abutted together in such a way so they think of themselves as one chip. Ten Terabytes of data per second flows between them. There's no memory issue, no cache issue. They function as one giant chip.” NVLink switches further streamline the data flow within the Blackwell systems.
The Blackwell dies connect to an NVIDIA Grace CPU, built on the Arm architecture. NVIDIA first announced its plan to build this HPC-targeted CPU, called Grace, in 2021.
AWS is planning to build one of the first Blackwell systems, capable of processing 222 exaFLOPS, Huang revealed. Others looking to build Blackwell systems include Microsoft and Oracle, according to Huang.
In March, as NVIDIA got ready to release its Blackwell chip, Michael Dell, founder and CEO of Dell Technologies, said, “Dell Technologies and NVIDIA are working together to shape the future of technology. With the launch of Blackwell, we will continue to deliver the next-generation of accelerated products and services to our customers, providing them with the tools they need to drive innovation across industries.” Dell is collaborating with NVIDIA to offer a next-generation compute platform built around the Grace Blackwell Superchip, which pairs the Arm-based Grace CPU with Blackwell-architecture B200 GPUs.
Partnership with PLM, EDA and CAE Vendors
In his keynote, Huang celebrated a number of partnerships with leading software vendors: Cadence, Siemens, and Ansys, to name but a few. These vendors are looking to augment compute-intensive simulation tools based on finite element analysis (FEA) and computational fluid dynamics (CFD) with machine-learning surrogate models to accelerate their solutions. The approach speeds up simulation, but it demands GPU-powered machine learning to train the necessary surrogate models.
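To illustrate the general idea of a surrogate model (not any particular vendor's implementation), the following minimal PyTorch sketch trains a small neural network on stand-in solver data; the dataset, dimensions, and quantity of interest are all hypothetical.

```python
# Minimal sketch of a neural surrogate: train a small MLP to map design
# parameters to a scalar quantity of interest (e.g., peak stress).
# Purely illustrative; the "solver output" below is a synthetic stand-in.
import torch
import torch.nn as nn

X = torch.rand(1024, 8)                 # 1,024 samples, 8 design parameters each
y = (X ** 2).sum(dim=1, keepdim=True)   # stand-in for expensive FEA/CFD output

surrogate = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(surrogate(X), y)
    loss.backward()
    optimizer.step()

# Once trained, the surrogate evaluates a new design in microseconds
# instead of re-running the full solver.
new_design = torch.rand(1, 8)
print(surrogate(new_design))
```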
In March, Ansys announced it will use NVIDIA Blackwell GPUs, Hopper architecture-based GPUs, and GB200 Grace Blackwell Superchips to scale up and accelerate existing solutions. Ansys also joined the Alliance for OpenUSD (AOUSD), alongside NVIDIA and others, to promote the OpenUSD 3D data interchange format and framework. OpenUSD is the visual language of the NVIDIA Omniverse development platform.
NVIDIA is also bolstering Omniverse's natural language processing to lower technical barriers for potential users with NVIDIA ChatUSD and DeepSearch NIMs (NVIDIA Inference Microservices). “You can speak to [Omniverse] in English, and it would directly generate USD and talk back in USD,” Huang said.
NVIDIA announced it's disaggregating the Omniverse platform, making its core technologies available as cloud APIs. “With these APIs, you're going to have magical digital-twin capability,” Huang said. The ability to use natural language prompts to generate 3D objects could significantly improve the user experience in CAD programs and quicken adoption. CAD vendors are therefore expected to explore these APIs.
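To give a sense of what “generating USD” means in practice, here is a minimal sketch using the open-source OpenUSD (pxr) Python bindings to create the kind of scene description a prompt such as “add a one-meter cube at the origin” might translate into; the file path and prim names are hypothetical.

```python
# Minimal OpenUSD sketch: build a tiny scene description programmatically.
# The stage path and prim names are placeholders for illustration only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("prompt_result.usda")
UsdGeom.SetStageMetersPerUnit(stage, 1.0)   # work in meters

world = UsdGeom.Xform.Define(stage, "/World")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
cube.GetSizeAttr().Set(1.0)                 # one-meter cube at the origin

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()

# The resulting .usda file is plain text and can be opened by any
# OpenUSD-aware tool, including Omniverse.
```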
During GTC, Siemens announced, “In the next phase of our collaboration with NVIDIA, the company will release a new product later this year—powered by NVIDIA Omniverse Cloud APIs—for Teamcenter X, our industry-leading cloud-based product lifecycle management (PLM) software, part of the Siemens Xcelerator platform.”
NVIDIA NIM Inference Microservices
During GTC, NVIDIA launched NVIDIA NIM, a set of easy-to-use microservices for developers to create enterprise generative AI applications. The company describes them as “cloud endpoints for pretrained AI models optimized to run on hundreds of millions of CUDA-enabled GPUs across clouds, data centers, workstations, and PCs.” Adobe, Cadence, Getty Images, and SAP are listed among the first firms with access to these services, which are included in the NVIDIA AI Enterprise 5.0 portfolio.
“These NIMs are going to help you create new types of applications for the future—not ones that you write completely from scratch, but you're going to integrate them like teams to create these applications,” Huang said.
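Integrating a NIM into an application typically means calling a hosted inference endpoint over HTTPS. The sketch below assumes an OpenAI-style chat-completions interface; the endpoint URL, model identifier, and API-key variable are placeholders, not confirmed values.

```python
# Sketch of calling a hosted inference microservice from application code.
# Assumes an OpenAI-compatible chat-completions API; URL, model name, and
# environment variable are assumptions for illustration.
import os
import requests

ENDPOINT = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed
payload = {
    "model": "example/chat-model",   # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Summarize the Blackwell announcement."}
    ],
    "max_tokens": 200,
}
headers = {"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"}

response = requests.post(ENDPOINT, json=payload, headers=headers, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```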
Humanoid Robotics
During the keynote's closing minutes, Huang was flanked by a row of robots, ranging from diminutive models to life-size ones. Huang said, “In the future, everything that moves will be robotics. And these robotic systems, whether they are humanoid, AMRs (autonomous mobile robots), self-driving cars, forklifts, or manipulating arms, they will all need one thing—a digital twin platform. We call it Omniverse.”
NVIDIA offers Isaac Sim, a virtual robot training system. It uses NVIDIA Omniverse's immersive 3D environment to replicate a robot's real-world operations in simulation. Huang expects the rise of robotics to boost demand for Omniverse.
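For readers unfamiliar with virtual robot training, the pattern is to roll a policy out in a simulated environment and improve it from the collected experience before deploying to hardware. The sketch below uses the generic Gymnasium API as a stand-in for that loop; it is not the Isaac Sim API, and the environment and policy are placeholders.

```python
# Generic simulate-then-train loop, using Gymnasium as a stand-in environment.
# A real robotics workflow would swap in a physics-accurate simulation and a
# learned policy; everything here is illustrative only.
import gymnasium as gym

env = gym.make("CartPole-v1")           # placeholder for a simulated robot task
observation, info = env.reset(seed=0)

for step in range(1000):
    action = env.action_space.sample()  # a trained policy would act here
    observation, reward, terminated, truncated, info = env.step(action)
    # ...accumulate (observation, action, reward) and update the policy...
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```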