April 11, 2018
On Tuesday, March 27, the NVIDIA GPU Technology Conference (GTC 18), the annual gathering of GPU computing champions and gurus, opened with “The Greatest Show,” the energetic, upbeat theme song from the film of the same name.
Sporting his trademark biker jacket with ribbed shoulders, NVIDIA CEO Jensen Huang strolled up to the main stage inside the San Jose Convention Center.
“From one frame in many hours to 60 frames per second—that fundamental difference was the gap we've been trying to close for literally four decades,” he said.
Real-time ray tracing is computationally intensive. It involves calculating every light ray's physical path to accurately render a virtual scene, so most users activate it only when such visuals are absolutely essential. Even minor changes to the scene, from a switch in perspective to a shifted light source, force the rays to be recalculated from scratch, putting a heavy burden on the system.
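To make that cost concrete, here is a minimal, illustrative ray tracer in Python (a toy sketch, not NVIDIA's implementation): even a one-sphere scene performs an intersection test and a shading calculation for every pixel, and none of that work survives a change to the camera or the light.

```python
import numpy as np

# A one-sphere, one-light scene: the smallest setup that shows the cost model.
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0
LIGHT_DIR = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # normalized direction

def trace(origin, direction):
    """Return a grayscale shade for one ray, or 0.0 on a miss."""
    # Ray-sphere intersection: solve |origin + t*direction - center|^2 = r^2 for t.
    oc = origin - SPHERE_CENTER
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return 0.0  # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0.0:
        return 0.0  # the hit is behind the camera
    hit = origin + t * direction
    normal = (hit - SPHERE_CENTER) / SPHERE_RADIUS
    return max(np.dot(normal, LIGHT_DIR), 0.0)  # simple Lambertian shading

def render(width=320, height=240):
    """Shoot one ray per pixel; every ray repeats the full intersection math."""
    image = np.zeros((height, width))
    origin = np.zeros(3)  # camera sits at the world origin
    for y in range(height):
        for x in range(width):
            # Map the pixel to a point on an image plane at z = -1.
            u = (2.0 * x / width - 1.0) * width / height
            v = 1.0 - 2.0 * y / height
            direction = np.array([u, v, -1.0])
            direction /= np.linalg.norm(direction)
            image[y, x] = trace(origin, direction)
    return image

if __name__ == "__main__":
    frame = render()
    print(f"traced {frame.size} rays for a single bounce-free frame")
```

At 320x240 that is already 76,800 rays for one frame with no reflections, shadows, or bounces; film-quality scenes trace many rays per pixel across millions of pixels, which is why real-time rates stayed out of reach for so long.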
The Elevator Pitch
A little over 10 minutes into the keynote, Huang paused, allowing what appeared to be a short Star Wars-themed movie clip to run on the oversized screen. Created with the help of Epic Games, it depicted the moment two stormtroopers had the misfortune to share an elevator ride with their high commander.
Except, it wasn't a pre-rendered movie clip but a real-time, ray-traced demo.
“What you just saw was completely rendered in real time. This demo is running completely in real time, and it's running on one DGX Station. Instead of a supercomputer, this is running on one computer with four NVIDIA Volta in real time,” explained Huang. “This is what we can do now—$68K vs. a supercomputer ... It's the first time ray tracing has been done at this level in real time.”
The demo represents an implementation of NVIDIA RTX technology in the Unreal game engine, according to Dr. Steven Parker, NVIDIA's VP of Professional Graphics, who was running the demo offstage.
“We're announcing the NVIDIA RTX technology,” Huang said.
GPUs, the graphics coprocessors that launched the company in 1993, still remain the bread and butter of NVIDIA's business, but the company is now clearly eyeing a bigger pie: the AI pie. In fact, the tagline for GTC 18 was “The Premier Conference on AI.”
NVIDIA Unleashes RTX
NVIDIA RTX gives you cinematic-quality rendering, powered by NVIDIA's Volta GPU architecture. In its explanation of the technology, NVIDIA writes: “While ray tracing has long been 'the future' or holy grail of computer rendering, we are now seeing the advent of consumer GPUs which have enough compute capability to do interesting ray tracing workloads in real-time. It is expected that many use cases will employ hybrid renderers which combine rasterization and ray tracing, so tight integration with an existing rendering API is very important.”
In RTX, ray tracing and deep learning work in tandem. “[In the Star Wars-themed demo,] you are also seeing deep learning in action,” said Huang. “Without deep learning, it would have been impossible to trace all these rays. We're using deep learning in predicting rays, to fill the spots.”
The RTX technology has been incorporated into the APIs of the NVIDIA OptiX ray tracing engine and Microsoft DirectX Raytracing (DXR).
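NVIDIA's explanation above anticipates hybrid renderers that split the work between rasterization and ray tracing. The Python sketch below illustrates that division of labor under toy assumptions (the scene, the camera, and the "G-buffer" stand-in are all hypothetical, not any engine's API): rasterization-style math resolves what each pixel sees, and rays are traced only for the one effect rasterization cannot express directly, a shadow test against an occluder.

```python
import numpy as np

# Hypothetical scene: a ground plane (y = 0) lit by one directional light,
# with a single sphere resting on it as the shadow caster.
SPHERE_CENTER = np.array([0.0, 1.0, -4.0])
SPHERE_RADIUS = 1.0
LIGHT_DIR = np.array([0.3, 1.0, 0.2])
LIGHT_DIR = LIGHT_DIR / np.linalg.norm(LIGHT_DIR)

def raster_pass(width, height):
    """Stand-in for the rasterizer: a 'G-buffer' of world-space positions
    where each camera ray meets the ground plane (NaN where it hits sky)."""
    eye = np.array([0.0, 1.5, 0.0])
    positions = np.full((height, width, 3), np.nan)
    for y in range(height):
        for x in range(width):
            u = 2.0 * x / width - 1.0
            v = 1.0 - 2.0 * y / height
            d = np.array([u, v - 0.5, -1.0])
            d /= np.linalg.norm(d)
            if d[1] < 0.0:                     # the ray heads down toward the plane
                t = -eye[1] / d[1]
                positions[y, x] = eye + t * d
    return positions

def shadow_ray_blocked(point):
    """Ray-traced pass: does a ray from `point` toward the light hit the sphere?"""
    oc = point - SPHERE_CENTER
    b = 2.0 * np.dot(oc, LIGHT_DIR)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    return disc >= 0.0 and (-b - np.sqrt(disc)) / 2.0 > 1e-4

def hybrid_render(width=160, height=120):
    gbuffer = raster_pass(width, height)       # cheap, full-frame visibility
    image = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            p = gbuffer[y, x]
            if np.isnan(p[0]):
                continue                       # sky pixel: nothing to shade
            lit = 0.0 if shadow_ray_blocked(p) else 1.0
            image[y, x] = 0.8 * lit            # plane albedo times visibility
    return image

if __name__ == "__main__":
    img = hybrid_render()
    print(f"mean luminance of the hybrid frame: {img.mean():.3f}")
```

The design point is budget: the expensive per-ray work is spent only on the pixels and effects that need it, which is how hybrid engines fit ray tracing into a real-time frame.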
The Quadro GV100
The first workstation-class GPU based on the AI-powered Volta architecture will be the NVIDIA Quadro GV100, Huang revealed. “The new Quadro GV100 packs 7.4 TFLOPS double-precision, 14.8 TFLOPS single-precision and 118.5 TFLOPS deep learning performance, and is equipped with 32GB of high-bandwidth memory capacity,” NVIDIA writes in its blog.
Rendering performance on the Quadro GV100 benefits from NVIDIA's AI-powered denoising, which speeds up pixel-level ray-traced rendering using machine learning.
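The idea behind that denoising step can be sketched in a few lines: render with far fewer rays than a clean image needs, then let a learned filter remove the residual noise. In the toy Python below, a fixed 3x3 convolution stands in for the trained neural network a real denoiser would use; the "scene" and the noise level are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_render(width=64, height=64, samples_per_pixel=4):
    """Stand-in for a low-sample Monte Carlo render: the true image is a
    smooth gradient, and each ray sample observes it with heavy noise."""
    truth = np.linspace(0.0, 1.0, width)[None, :].repeat(height, axis=0)
    noise = rng.normal(0.0, 0.5, size=(height, width, samples_per_pixel))
    return (truth[..., None] + noise).mean(axis=-1), truth

def denoise(noisy, weights):
    """Stand-in for the learned denoiser: one 3x3 convolution whose weights
    would, in the real system, come from a network trained on rendered images."""
    padded = np.pad(noisy, 1, mode="edge")
    out = np.zeros_like(noisy)
    for dy in range(3):
        for dx in range(3):
            out += weights[dy, dx] * padded[dy:dy + noisy.shape[0],
                                            dx:dx + noisy.shape[1]]
    return out

if __name__ == "__main__":
    noisy, truth = noisy_render()
    box = np.full((3, 3), 1.0 / 9.0)   # untrained placeholder weights
    cleaned = denoise(noisy, box)
    print(f"RMSE before denoising: {np.sqrt(((noisy - truth) ** 2).mean()):.3f}")
    print(f"RMSE after denoising:  {np.sqrt(((cleaned - truth) ** 2).mean()):.3f}")
```

Even this untrained placeholder filter cuts the error by roughly a factor of three; a trained network can do far better while preserving edges, which is what lets the renderer trace a fraction of the rays and still converge on a clean frame.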
DGX, Second Generation
Halfway into his keynote, Huang cheekily described the DGX-2 as “the world's largest GPU.”
The DGX is not a single GPU but a supercomputer built around multiple GPUs. Last year, NVIDIA unveiled the DGX-1, a specialized system aimed at AI researchers and tech pioneers.
The second generation, the DGX-2, boasts 16 NVIDIA Tesla V100s connected via NVLink technology for parallel workloads. It offers 300 GB per second of GPU-to-GPU communication bandwidth, according to Huang's presentation chart.
“What that means is, every single GPU can communicate with every other GPU at 20 times the speed of PCI Express [PCI-e data transfer bus],” said Huang. (A PCIe 3.0 x16 link tops out at roughly 16 GB per second, so 300 GB per second works out to roughly that twentyfold figure.)
The company describes the DGX-2 as “the first 2 petaFLOPS system ... powered by NVIDIA DGX software and a scalable architecture built on NVIDIA NVSwitch, so you can take on the world’s most complex AI challenges.”
Safety in Self-Driving
In the last couple of years, NVIDIA has been pursuing a stake in the autonomous vehicle market. The company develops and offers its AI-powered NVIDIA Drive platform, which it says “enables automakers, truck makers, tier 1 suppliers, and startups to accelerate production of automated and autonomous vehicles.” It also makes a series of boards and systems-on-a-chip aimed at the self-driving market, among them Drive Pegasus, Drive Xavier, and Drive PX.
As Huang prepared for the keynote, the case of the self-driving Uber that hit and killed a pedestrian just a week before must have been on his mind. In the Q&A with the press, Huang clarified that Uber didn't use NVIDIA's self-driving technology but its own. Uber did use NVIDIA's GPUs as general processors, according to The Verge. Nevertheless, NVIDIA also suspended its own self-driving car tests on public roads.
“Safety is the single most important thing,” Huang said in his keynote. “It’s the hardest computing problem. With the fatal accident, we’re reminded that this work is vitally important. We need to solve this problem step by step by step because so much is at stake. We have the opportunity to save so many lives if we do it right.”
NVIDIA's belief in the autonomous car, and its devotion to it, nevertheless remain strong. Huang predicted, “Everything that moves will become autonomous.”
The GPU giant's solution is DRIVE Constellation, a dual-server platform that lets you create virtual roads and infrastructure to simulate different driving scenarios. According to NVIDIA's press announcement, “The first server runs NVIDIA DRIVE Sim software to simulate a self-driving vehicle’s sensors, such as cameras, lidar and radar. The second contains a powerful NVIDIA DRIVE Pegasus AI car computer that runs the complete autonomous vehicle software stack and processes the simulated data as if it were coming from the sensors of a car driving on the road.”
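The pattern is classic hardware-in-the-loop testing, and its shape is easy to sketch. In the Python below, two threads stand in for the two servers (the message fields and the toy vehicle dynamics are hypothetical, not the DRIVE APIs): one side synthesizes sensor frames, the other consumes them as if they came from real sensors and sends driving commands back into the simulation.

```python
import queue
import threading

def simulator(sensor_q, control_q, frames=5):
    """Stand-in for server 1: synthesize sensor data, apply returned controls."""
    speed = 0.0
    for frame_id in range(frames):
        sensor_q.put({"frame": frame_id, "camera": f"<image {frame_id}>", "speed": speed})
        cmd = control_q.get()          # wait for the driving stack's decision
        speed += cmd["throttle"]       # toy vehicle dynamics
    sensor_q.put(None)                 # end of the scenario

def driving_stack(sensor_q, control_q):
    """Stand-in for server 2: consume frames as if from real sensors."""
    while (frame := sensor_q.get()) is not None:
        throttle = 1.0 if frame["speed"] < 3.0 else 0.0  # trivial 'planner'
        control_q.put({"throttle": throttle})

if __name__ == "__main__":
    sensors, controls = queue.Queue(), queue.Queue()
    sim = threading.Thread(target=simulator, args=(sensors, controls))
    stack = threading.Thread(target=driving_stack, args=(sensors, controls))
    sim.start()
    stack.start()
    sim.join()
    stack.join()
    print("scenario complete: the stack drove on simulated sensors only")
```

Because the stack cannot tell simulated frames from real ones, rare and dangerous scenarios can be replayed millions of times without putting a car, or a pedestrian, on the road.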
Making Inroads into a Skittish Market
Market research firm Gartner recently conducted a survey to gauge the AI market. “Despite huge levels of interest in AI technologies, current implementations remain at quite low levels,” said Whit Andrews, research vice president and distinguished analyst at Gartner. “However, there is potential for strong growth as CIOs begin piloting AI programs through a combination of buy, build and outsource efforts.”
As it turns out, only 4% of the Gartner survey respondents had deployed AI solutions. The figure suggests the market believes in AI's potential, but most CIOs are opting to wait and see; only a few are willing to take the risk of being pioneers.
In the Q&A with the press, Charlie Boyle, NVIDIA's senior director for DGX, said, “Our DGX unit sales wildly exceeded expectations. For an enterprise system priced over $100,000, people placing orders on the same day of the announcement was amazing. It shows there's a pent-up demand for it.”
The DGX-1, NVIDIA's AI-training system, is listed at $129,000-$149,000 on NVIDIA's order form. Huang announced the price of the DGX-2 as $399,000. As high as these sticker prices may seem, Huang argued that such systems could replace the $3 million worth of CPU servers you would otherwise need to run machine-learning workloads.