May 31, 2024
GPU maker NVIDIA calls its annual GPU Technology Conference (GTC) “The #1 AI Conference for Developers.” The tagline of the Intel Vision 2024 conference is “Bringing AI everywhere,” a not-so-subtle hint as to who the company is going after. In April, Intel unveiled its latest AI accelerator, Gaudi 3, signaling a ramp-up in its efforts to conquer the AI market.
In his keynote, Intel CEO Pat Gelsinger said, “Enterprises are also looking for cost-effective inferencing and AI training. With that, we see them turning toward Gaudi. It's the only benchmarked alternative to the NVIDIA H100 ... You'll see many more customers coming on board as we accelerate our Gaudi offerings in 2024 and 2025.”
Originally a graphics hardware supplier, NVIDIA has spent the last decade transforming itself into an AI technology firm. The company's release of the CUDA programming platform in 2006 enabled its GPUs to serve as general-purpose parallel processors, and its Turing architecture-based RTX GPUs marked another leap into AI. Can Intel catch up with a rival that has such a head start? We turned to industry insiders for answers.
Is Intel Gaudi a Threat to NVIDIA's AI Business?
Jon Peddie, President of Jon Peddie Research (JPR), a veteran analyst of the computer graphics sector, said, “Spec-wise, Intel Gaudi is a worthy rival to NVIDIA's AI-targeted GPUs,” adding, “but keep in mind that a single-function application-specific integrated circuit (ASIC) can almost always beat a programmable general-purpose processor (GPP).”
CPUs are general-purpose processors, designed to handle a wide range of computing tasks. GPUs originated as ASICs built to accelerate graphics, but in NVIDIA's hands they evolved into AI workload accelerators. The hardware also became more programmable, thanks to CUDA. As GPPs, CPUs may not be as efficient at tackling graphics or AI workloads as the purpose-built GPUs.
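For a concrete sense of that programmability, here is a minimal sketch of a general-purpose GPU kernel. It uses the Numba library's Python bindings for CUDA rather than NVIDIA's CUDA C++ toolkit, purely for brevity, and requires an NVIDIA GPU; the kernel simply adds two arrays in parallel, the kind of non-graphics workload that CUDA opened GPUs up to.

```python
# Minimal sketch of general-purpose GPU computing via CUDA.
# Uses Numba's CUDA bindings for brevity (not NVIDIA's CUDA C++ toolkit);
# requires an NVIDIA GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    # Each GPU thread handles one element of the arrays.
    i = cuda.grid(1)
    if i < out.size:
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

# Launch enough thread blocks to cover all n elements.
threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)

assert np.allclose(out, a + b)
```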
The question is whether Intel Gaudi is a GPP or an ASIC targeting AI workloads. Peddie said, “We don’t know for sure because Intel hasn’t disclosed that much about it.” According to an Intel spokesperson, Gaudi “could be considered an ASIC as it’s purpose-built for AI.”
According to Intel, “The Intel Gaudi 3 accelerator, architected for efficient large-scale AI compute, is manufactured on a 5 nanometer (nm) process and offers significant advancements over its predecessor. It is designed to allow activation of all engines in parallel—with the Matrix Multiplication Engine (MME), Tensor Processor Cores (TPCs) and Networking Interface Cards (NICs) - enabling the acceleration needed for fast, efficient deep learning computation and scale.”
Challengers in the Workstation Market
The once-ubiquitous “Intel Inside” label marked the company's dominance in the PC and workstation markets, but Intel now faces new challengers. Peddie noted, “Intel is still the current leader in market share for workstation CPUs, but AMD is hot on Intel’s tail.”
Another crucial chess piece in the processor market is UK-headquartered Arm. Arm does not produce processors of its own; instead, it licenses the instruction set (the recipe, if you will) for making processors. It's the go-to place for those who want to make their own processors, either to compete with Intel or to reduce their reliance on Intel. Peddie said, “Companies offering Arm processors for AI, such as Qualcomm, Apple, and NVIDIA, are definitely going to make a big impact. It will take some time to get all the software shimmed in, but the price-power-performance (what we like to call the Pmark) advantages of the Arm RISC (Reduced Instruction Set Computing) processor are truly compelling. Arm processors also offer a cooler operation capability, which is another consideration and one that can yield more real estate utilization for a data center operator.”
The CUDA-Dominated Landscape
For Intel Gaudi, one of the challenges is to counter the already widespread adoption of NVIDIA CUDA, especially in compute-intensive engineering simulation workloads. According to Wim Slagter, Director of Partner Programs at Ansys, “Leveraging Intel Gaudi in FEA and generative design software is feasible, though it may require more development efforts compared to using the established NVIDIA CUDA methods at this time. The ease of using Gaudi will largely depend on the adoption of Intel’s oneAPI by independent software vendors. Broader adoption of oneAPI would lead to continuous improvements and refinements in the tools and libraries, making the transition smoother and more efficient for our developers.”
Intel, on the other hand, is betting on the momentum of open-source development to sidestep the CUDA ecosystem. “Nearly 100% of LLM development is happening on industry standard frameworks like PyTorch and TensorFlow,” an Intel spokesperson pointed out.
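That framework-level argument is straightforward to illustrate. The sketch below shows device-agnostic PyTorch code in which only the target device changes; the “hpu” device string and the habana_frameworks.torch bridge are taken from Intel's Gaudi software documentation and should be read as assumptions here, not as code verified on Gaudi hardware.

```python
# Minimal sketch: the same PyTorch model code targeting different accelerators.
# The "hpu" device and habana_frameworks.torch bridge are assumptions based on
# Intel's Gaudi documentation; the script falls back to CUDA or CPU otherwise.
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    try:
        import habana_frameworks.torch.core  # noqa: F401  # Gaudi PyTorch bridge (assumed)
        return torch.device("hpu")
    except ImportError:
        pass
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024)).to(device)
x = torch.randn(8, 1024, device=device)
y = model(x)  # the framework-level code is identical regardless of the accelerator
print(y.shape, y.device)
```

The point Intel is making is that code written against the framework, rather than against CUDA directly, has far less vendor lock-in to overcome.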
In its announcement, Intel revealed its plan to “create an open platform for enterprise AI together with SAP, Red Hat, VMware, and other industry leaders to accelerate deployment of secure generative AI (GenAI) systems, enabled by retrieval-augmented generation (RAG).”
Intel Gaudi 3 will be available to original equipment manufacturers (OEMs) in the second quarter of 2024. Systems with Gaudi 3 will be offered by Dell, Hewlett Packard Enterprise, Lenovo, and Supermicro.
Thomas Jorgensen, Senior Director of Technology Enablement at Supermicro, said, “The Matrix Multiply Engines (MMEs) are critical for Intel Gaudi 3's performance. Each one can handle 64,000 parallel operations simultaneously, making Gaudi 3 ideal for complex matrix operations. These operations are essential for deep learning algorithms, a powerful type of AI used for tasks like image recognition and natural language processing. The Gaudi 3's unique design accelerates parallel AI tasks and allows it to work with various data formats, including FP8 and BF16. This flexibility helps ensure efficient use of the accelerator for a wide range of AI applications.”
Supermicro also added, “We will have those in our Universal GPU series, specifically the 8U Server, which can be air or liquid-cooled.”
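As a rough illustration of the reduced-precision matrix math Jorgensen describes, the sketch below runs a large matrix multiplication in BF16 using plain PyTorch and compares it with the FP32 result. It is not Gaudi-specific code, and FP8 is omitted because framework support for it still varies.

```python
# Illustrative sketch: BF16 matrix multiplication, the kind of workload
# a Matrix Multiplication Engine is built to accelerate. Plain PyTorch,
# not Gaudi-specific; runs on CPU or any supported accelerator.
import torch

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

# bfloat16 keeps FP32's exponent range but has only a 7-bit mantissa,
# roughly halving memory traffic for matmul-heavy deep learning workloads.
a_bf16, b_bf16 = a.bfloat16(), b.bfloat16()
c_bf16 = a_bf16 @ b_bf16

# Compare against full-precision FP32 to see the accuracy trade-off.
c_fp32 = a @ b
rel_err = (c_bf16.float() - c_fp32).norm() / c_fp32.norm()
print(f"Relative error vs. FP32: {rel_err:.2e}")
```

Lower-precision formats trade a small amount of accuracy for large gains in throughput and memory efficiency, which is exactly what dedicated matrix engines are built to exploit.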
About the Author
Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.