NVIDIA Launches Volta GPU and GPU Cloud Platform
Latest News
May 22, 2017
NVIDIA has launched the Volta GPU computing architecture. The company also announced its first Volta-based processor, the NVIDIA Tesla V100 data center GPU.
Volta, NVIDIA’s seventh-generation GPU architecture, is built with 21 billion transistors. According to the company, it provides a 5x improvement in peak teraflops over Pascal, the current-generation NVIDIA GPU architecture, and a 15x improvement over the Maxwell architecture, launched two years ago.
The Tesla V100 GPU incorporates several new technologies, including:
- Tensor Cores designed to speed AI workloads. Equipped with 640 Tensor Cores, the V100 delivers 120 teraflops of deep learning performance, equivalent to the performance of 100 CPUs (a back-of-the-envelope check on that figure follows this list).
- Next-generation NVLink high-speed interconnect, linking GPUs to each other and GPUs to CPUs with up to 2x the throughput of the prior generation of NVLink.
- 900 GB/sec HBM2 DRAM, developed in collaboration with Samsung.
- Volta-optimized software, including CUDA, cuDNN and TensorRT software.
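As a rough check on the 120-teraflops claim: each Volta Tensor Core performs a 4x4x4 matrix multiply-accumulate per clock, i.e. 64 fused multiply-adds or 128 floating-point operations. Assuming a boost clock of roughly 1.46 GHz (the announcement does not state the clock speed), the quoted figure follows:

\[
640 \ \text{Tensor Cores} \times 128 \ \tfrac{\text{FLOP}}{\text{clock}} \times 1.46 \times 10^{9} \ \tfrac{\text{clocks}}{\text{s}} \approx 1.2 \times 10^{14} \ \tfrac{\text{FLOP}}{\text{s}} = 120 \ \text{teraflops}.
\]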
NVIDIA also announced the NVIDIA GPU Cloud (NGC), which the company says will make it easier for developers to access the latest optimized deep learning frameworks and the newest GPU computing resources.
NVIDIA has combined the key software elements of the NVIDIA DGX-1 AI supercomputer into a containerized package. As part of NGC, this package, called the NGC Software Stack, will be more widely available and kept updated and optimized for maximum performance.
On the hardware side, NGC will give developers the flexibility to run the NGC Software Stack on a PC (equipped with a TITAN X or GeForce GTX 1080 Ti), on a DGX system, or in the cloud.
The NGC Software Stack will provide a wide range of software, including the Caffe, Caffe2, CNTK, MXNet, TensorFlow, Theano and Torch frameworks, as well as the NVIDIA DIGITS GPU training system, the NVIDIA Deep Learning SDK, nvidia-docker, GPU drivers and NVIDIA CUDA.
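Because the frameworks ship inside containers together with their GPU libraries, the same script should run unchanged whichever system hosts the container. A minimal TensorFlow sketch, assuming the TF 1.x-era API shipped at the time and a single visible GPU (the device name '/gpu:0' is an assumption, not taken from the announcement):

```python
import tensorflow as tf

# Minimal sketch: a matrix multiply pinned to the first GPU the container exposes.
# The same code runs on a TITAN X / GTX 1080 Ti PC, a DGX system, or a cloud instance.
with tf.device('/gpu:0'):
    a = tf.random_normal([1024, 1024])
    b = tf.random_normal([1024, 1024])
    c = tf.matmul(a, b)  # executed on the GPU via cuBLAS

# log_device_placement prints which device each op actually ran on
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c).shape)  # (1024, 1024)
```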
With just one NVIDIA account, NGC users will have a simple application that guides them through deep learning projects across all system types, whether a PC, a DGX system or the NGC cloud.
Users can start with a single GPU on a PC and add more compute resources on demand with a DGX system or through the cloud. They can import data, set up the job configuration, select a framework and hit run. The resulting trained model could then be loaded into TensorRT for inference.
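The announcement does not describe NGC's actual programming interface, so the following Python sketch is purely illustrative: `ngc_client` and every call on it are hypothetical names used only to mirror the steps described above (import data, configure the job, pick a framework, run, then hand the result to TensorRT).

```python
# Hypothetical sketch only: `ngc_client` and its methods are invented here to
# illustrate the workflow described in the announcement; they are not a real NGC API.
import ngc_client

session = ngc_client.login(account="my-nvidia-account")    # a single NVIDIA account

dataset = session.import_data("./training-data/")          # import data
job = session.create_job(
    framework="tensorflow",      # pick any framework from the NGC Software Stack
    dataset=dataset,
    config={"epochs": 90, "batch_size": 256},               # job configuration
    target="cloud",              # or "pc" / "dgx" to use local compute
)

job.run()                        # "hit run"; add more compute on demand
trained_model = job.fetch_output()

# The trained model could then be imported into TensorRT for inference
# (the TensorRT step is left schematic here).
```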
NGC is expected to enter public beta by the third quarter of 2017.
Sources: Press materials received from the company.