NVIDIA Releases New GPUs

NVIDIA launches GeForce RTX SUPER desktop GPUs for generative AI capabilities, new AI laptops, and new NVIDIA RTX-accelerated AI software and tools for developers.

NVIDIA announces new generative AI Tensor Core GPUs at CES 2024. Image courtesy of NVIDIA.


NVIDIA is now offering RTX GPU tools to enhance PC experiences with generative artificial intelligence: NVIDIA TensorRT acceleration of the Stable Diffusion XL model for text-to-image workflows, NVIDIA RTX Remix with generative AI texture tools, and NVIDIA ACE microservices.
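For readers who want a concrete point of reference, the sketch below shows a baseline text-to-image workflow with Stable Diffusion XL using the Hugging Face diffusers library on an RTX GPU. It is a minimal illustration only; the model ID and settings are assumptions, and the TensorRT-accelerated pipeline NVIDIA describes is a separate, optimized path not shown here.

```python
# Minimal SDXL text-to-image sketch (assumed model ID and settings).
# This is the generic diffusers path; NVIDIA's TensorRT acceleration of SDXL
# replaces parts of this pipeline with optimized engines and is not shown.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # assumed model repository
    torch_dtype=torch.float16,                   # half precision for RTX GPUs
).to("cuda")                                     # run on the local RTX GPU

image = pipe(
    prompt="a photorealistic render of a turbine blade on a workbench",
    num_inference_steps=30,
).images[0]

image.save("sdxl_output.png")
```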

AI Workbench, a unified, easy-to-use toolkit for AI developers, will be available in beta later this month. In addition, NVIDIA TensorRT-LLM (TRT-LLM), an open-source library that accelerates and optimizes inference performance of the latest large language models (LLMs), now supports more pre-optimized models for PCs. Accelerated by TRT-LLM, Chat with RTX, an NVIDIA tech demo also releasing this month, allows AI enthusiasts to interact with their notes, documents and other content.

“Generative AI is the single most significant platform transition in computing history and will transform every industry, including gaming,” says Jensen Huang, founder and CEO of NVIDIA. “With over 100 million RTX AI PCs and workstations, NVIDIA is a massive installed base for developers and gamers to enjoy the magic of generative AI.”

NVIDIA is delivering innovations across its full technology stack, building on the 500+ AI-enabled PC applications already accelerated by NVIDIA RTX technology.

RTX AI PCs and Workstations

NVIDIA RTX GPUs, capable of running a broad range of applications at the highest performance, unlock the full potential of generative AI on PCs, NVIDIA says. Tensor Cores in these GPUs accelerate AI performance across demanding professional applications.

The new GeForce RTX 40 SUPER Series graphics cards, also announced today at CES, include the GeForce RTX 4080 SUPER, 4070 Ti SUPER and 4070 SUPER for top AI performance. The GeForce RTX 4080 SUPER generates AI video 1.5x faster—and images 1.7x faster—than the GeForce RTX 3080 Ti GPU, according to NVIDIA. The Tensor Cores in SUPER GPUs deliver up to 836 trillion operations per second, the company adds.

Leading manufacturers—including Acer, ASUS, Dell, HP, Lenovo, MSI, Razer and Samsung—are releasing a new wave of RTX AI laptops, bringing a full set of generative AI capabilities to users. The new systems, which NVIDIA says deliver performance increases of 20x to 60x compared with using neural processing units, will start shipping this month.

Mobile workstations with RTX GPUs can run NVIDIA AI Enterprise software, including TensorRT and NVIDIA RAPIDS for simplified, secure generative AI and data science development. A three-year license for NVIDIA AI Enterprise is included with every NVIDIA A800 40GB Active GPU.

New PC Developer Tools for Building AI Models

To help developers quickly create, test and customize pretrained generative AI models and LLMs using PC-class performance and memory footprint, NVIDIA recently announced NVIDIA AI Workbench.

AI Workbench, which will be available in beta later this month, offers streamlined access to repositories like Hugging Face, GitHub and NVIDIA NGC, along with a simplified user interface that enables developers to easily reproduce, collaborate on and migrate projects.
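As a rough illustration of the kind of repository access AI Workbench streamlines, the hedged sketch below pulls a pretrained model from the Hugging Face Hub with the huggingface_hub library; the repository ID is an arbitrary example, and Workbench's own project tooling is not represented here.

```python
# Hedged sketch: fetching a pretrained model snapshot from the Hugging Face Hub.
# The repo_id is an arbitrary example; AI Workbench wraps this kind of step in
# its own project and environment management, which is not shown.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="microsoft/phi-2")  # downloads model files
print(f"Model files cached at: {local_dir}")
```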

Projects can be scaled out to virtually anywhere, whether a data center, a public cloud or NVIDIA DGX Cloud, and then brought back to local RTX systems on a PC or workstation for inference and light customization.

In collaboration with HP, NVIDIA is also simplifying AI model development by integrating NVIDIA AI Foundation Models and Endpoints, which include RTX-accelerated AI models and software development kits, into the HP AI Studio, a centralized platform for data science. This will allow users to easily search, import and deploy optimized models across PCs and the cloud.

After building AI models for PC use cases, developers can optimize them using NVIDIA TensorRT to take full advantage of RTX GPUs’ Tensor Cores.
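As a sketch of what that first optimization step can look like, the example below exports a placeholder PyTorch model to ONNX, a common intermediate format for building TensorRT engines; the model, shapes and file names are assumptions, and the exact TensorRT build flow varies by application.

```python
# Hedged sketch: export a placeholder PyTorch model to ONNX as a common first
# step toward building a TensorRT engine. Model, input shape and file name are
# illustrative assumptions, not NVIDIA's prescribed workflow.
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()  # placeholder model
dummy_input = torch.randn(1, 3, 224, 224)                 # example input shape

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
)
```

From there, a typical route is to build an engine with TensorRT's trtexec tool, for example `trtexec --onnx=model.onnx --saveEngine=model_fp16.engine --fp16`, so inference runs on the GPU's Tensor Cores in reduced precision.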

NVIDIA recently extended TensorRT to text-based applications with TensorRT-LLM for Windows, an open-source library for accelerating LLMs. The latest update to TensorRT-LLM, available now, adds Phi-2 to the growing list of pre-optimized models for PC.
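As a point of comparison, the hedged sketch below runs Phi-2 through the generic Hugging Face transformers path; this is the unoptimized baseline that a pre-built TensorRT-LLM engine is meant to speed up, and the prompt and settings are illustrative.

```python
# Hedged sketch: Phi-2 inference via the generic transformers path.
# This is the unoptimized baseline; TensorRT-LLM's pre-optimized Phi-2 build
# replaces it with a compiled engine, whose API is not shown here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit consumer RTX memory
).to("cuda")

inputs = tokenizer("Explain what Tensor Cores do:", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```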

Sources: Press materials received from the company and additional information gleaned from the company’s website.

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].
