GPUs Drive HPC-Powered CAE and Machine Learning

JPR reports evolving CAE landscape

The Graphics Processing Unit (GPU) was once specialized hardware for boosting visualization, prized especially by gamers. But with the introduction of general-purpose GPU computing, championed by NVIDIA, its role expanded. Today it still makes in-game explosions and firefights far more realistic with real-time rendering, but it has also become a way to accelerate engineering simulation, AI and machine learning. Industry analyst Jon Peddie Research’s (JPR) report “Accelerating and Advancing Computer Aided Engineering Workflows” shines a spotlight on how the GPU is changing simulation-led engineering. The report, authored by JPR analysts Kathleen Maher and Jon Peddie and sponsored by NVIDIA, is available as a free download.

In Part I, the report focuses on how GPU computing changes workstation-based CAE practices and applications. JPR notes, “The users can run larger simulations faster on their desktop workstations, and as a result, they can optimize their designs with more iterations.”

In Part I, JPR concludes:

  • GPUs can achieve 5× or more throughput for the same cost as a CPU.
  • They can achieve lower cost and power consumption for the same throughput as a CPU.

Part II expands the coverage to include High Performance Computing (HPC) with private data center and private cloud-based resources, as well as public cloud-based resources for running large engineering simulations from vendors such as Microsoft Azure, Rescale and ISV-specific cloud services. The report draws on examples from software tools by Altair, Ansys, Cadence, Dassault Systèmes, and Siemens.

CAE at a Higher Scale

A growing number of simulation software programs have refined their code to take advantage of the GPU. Many have also added tools and options to draw on on-demand HPC, especially with the proliferation of servers outfitted with multiple high-performance data center GPUs. This combination allows simulation software users to consider models that were previously impractical to study and scenarios that were previously impossible to simulate. 

Some, such as Ansys Discovery, are written from the ground up to take advantage of the cloud and the GPU. “Ansys Fellow Dipankar Choudhury described Ansys’ development of Discovery as a multi-physics simulation product for design engineers. It was designed as a tool to enable engineers and designers to perform more upfront simulations on their local workstations,” JPR writes.

Image courtesy of Ansys.

Siemens’ Simcenter Cloud HPC and STAR-CCM+ have also joined the list of ISV-specific cloud HPC solutions leveraging GPUs. Last June, Daniele Obiso, Simcenter STAR-CCM+ Technical Product Manager for Siemens, wrote in a blog post, “The coupled solver in Simcenter STAR-CCM+ is a very robust and efficient density-based solver that has been for years the best practice for several industrial applications, amongst which: automotive vehicle external aerodynamics, aerospace aerodynamics, turbomachinery aero performance and Conjugate Heat Transfer (CHT) blade cooling … In Simcenter STAR-CCM+ 2306 all this will be available on GPU, providing you with a solution for faster turnaround time and lower costs per simulation. Moreover, we ensure CPU-equivalent flow solutions by maintaining a unified codebase, hence providing a seamless user experience and consistent results irrespective of the hardware technology used.”

CFD Benefits from GPU

Computational Fluid Dynamics (CFD) generally puts a heavy demand on hardware, but it has also been shown to benefit greatly from GPU acceleration. JPR writes, “Traditional CPU-based solvers require lengthy processing times, sometimes spanning days for just a few seconds of real-world activity. GPU-accelerated CFD solvers have been available but faced limitations in feature parity and model size constraints. Altair’s CFD Lattice Boltzmann Method (LBM) solver, ultraFluidX, has changed the landscape with its efficient GPU-based implementation, making it ideal for high-fidelity aerodynamic and aero-acoustic simulations.”
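The reason LBM solvers map so well to GPUs is structural: every lattice site runs the same local collide-and-stream update, so the whole grid can advance in parallel. The sketch below is a generic D2Q9 LBM time step in NumPy, offered purely to illustrate the method’s data-parallel shape; it is not drawn from ultraFluidX, and the grid size, relaxation time and resting initial state are arbitrary demo choices.

```python
import numpy as np

# Generic D2Q9 lattice Boltzmann step -- an illustration of the method's
# data-parallel structure, not Altair's ultraFluidX implementation.
NX, NY, Q = 128, 64, 9
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])   # lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights
tau = 0.6                                            # relaxation time (demo value)

f = np.ones((Q, NX, NY)) * w[:, None, None]          # fluid initially at rest

def lbm_step(f):
    rho = f.sum(axis=0)                              # macroscopic density
    u = np.einsum('qi,qxy->ixy', c, f) / rho         # macroscopic velocity
    cu = np.einsum('qi,ixy->qxy', c, u)
    usq = (u ** 2).sum(axis=0)
    # BGK equilibrium distribution
    feq = w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)
    f = f - (f - feq) / tau                          # collide
    for q in range(Q):                               # stream (periodic boundaries)
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))
    return f

f = lbm_step(f)
```

Every operation here is an elementwise or stencil update over the whole grid, which is exactly the pattern GPUs execute efficiently at production scale (for example, by swapping NumPy for a GPU array library such as CuPy).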

Image courtesy of Cadence. CharLES high-fidelity GPU solver. Hardware: AWS instances. CPU: c6a.32xlarge, AMD EPYC Gen2; GPU: p4d.24xlarge, 8 x NVIDIA A100 GPUs.

JPR singles out the introduction of NVIDIA’s H100 Tensor Core GPU as a watershed moment for CFD simulation. The report says, “[It] has brought about a revolution in demanding CFD workloads. With up to 18,432 FP32 CUDA cores and various configurations, it enables efficient production-scale CFD simulations with remarkable performance improvements.”

The NVIDIA RTX 6000 Ada GPU also has 18,176 CUDA cores, making workstations equipped with the GPU viable in this space as well.

AI Gold Mine

Over the last decade, NVIDIA has also managed to position its GPUs as the preferred processors for highly compute-intensive machine learning and AI algorithm development. In May 2023, The Wall Street Journal reported, “Nvidia Joins $1 Trillion Club, Fueled by AI’s Rise.” Since their introduction, NVIDIA RTX GPUs have included integrated Tensor Cores for accelerating AI and machine learning routines, so the same GPUs customers currently use for general compute acceleration can also provide a significant boost for emerging AI-backed features in engineering software.
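As a rough sketch of what engaging Tensor Cores looks like from application code, the snippet below uses PyTorch’s automatic mixed precision, the standard route by which a training loop hands its matrix math to Tensor Cores in FP16. The model and data are placeholders invented for illustration; nothing here is drawn from the JPR report.

```python
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
# Placeholder model and data -- stand-ins for a real engineering workload.
model = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 1)).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
x = torch.randn(1024, 256, device=device)   # dummy features
y = torch.randn(1024, 1, device=device)     # dummy targets

for _ in range(100):
    opt.zero_grad()
    # autocast runs eligible ops in FP16, where Tensor Cores apply
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()   # loss scaling avoids FP16 underflow
    scaler.step(opt)
    scaler.update()
```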

JPR sees AI as another catalyst to drive CAE. “Employing AI and machine learning in CAE not only enables process automation but also accelerates the development of simulation tools accessible to non-experts, enabling a new level of democratization in CAE. New business models are emerging to transform product development processes.”

Altair has integrated its geometric deep learning engine, Altair® physicsAI™, into user-native simulation environments like Altair® HyperWorks®. PhysicsAI leverages historical CAE and CAD data to predict outcomes for any physics up to 1000× faster than traditional solver simulation. This saves organizations time and cost, as engineers can test more design variations than ever before without the limits of parametric studies or the need to build new simulation models.

Image courtesy of Cadence.

According to benchmark data from Altair and NVIDIA, an NVIDIA RTX™ A4000 GPU provided an 8× speedup for training physicsAI models compared with an 8-core laptop CPU.
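That benchmark measures surrogate training. To make the underlying idea concrete, here is a deliberately toy sketch: a small regressor trained on past (design parameters → result) pairs from a synthetic stand-in for a solver. PhysicsAI itself applies geometric deep learning to mesh data; this example uses scalar parameters and scikit-learn purely to show why inference is orders of magnitude faster than re-running the solver.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                   # hypothetical design parameters
y = np.sin(X @ np.array([2.0, 1.0, 0.5, 3.0]))   # synthetic stand-in for solver output

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out designs:", surrogate.score(X_te, y_te))
# Once trained, predictions return in milliseconds, while the solver being
# approximated might take minutes or hours per design point.
```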

This January, Ansys launched Ansys SimAI, described as “a physics-agnostic, software as a service (SaaS) application that combines the predictive accuracy of Ansys simulation with the speed of generative AI.” In its announcement, the company writes, “Instead of relying on geometric parameters to define a design, Ansys SimAI uses the shape of a design itself as the input, facilitating broader design exploration even if the structure of the shape is inconsistent across the training data. The application can boost the prediction of model performance across all design phases by 10-100X for computation-heavy projects. Customers can train the AI using previously generated Ansys or non-Ansys data. Training and predictions are hosted on a state-of-the-art cloud infrastructure to ensure that user data is secure and kept private.”
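To illustrate what “the shape of a design itself as the input” can mean mechanically, the sketch below feeds a sampled surface point cloud through a generic PointNet-style encoder to predict a single scalar (say, a drag coefficient). This is a well-known architecture used here as a hypothetical stand-in; it is not Ansys’s actual SimAI model.

```python
import torch
from torch import nn

class ShapeRegressor(nn.Module):
    """Toy point-cloud regressor: shape in, one predicted quantity out."""
    def __init__(self):
        super().__init__()
        # Shared per-point MLP followed by a symmetric max-pool, so the
        # output is invariant to point ordering and count.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128), nn.ReLU())
        self.head = nn.Linear(128, 1)   # e.g., a hypothetical drag coefficient

    def forward(self, points):                # points: (batch, n_points, 3)
        feats = self.point_mlp(points)        # per-point features
        pooled = feats.max(dim=1).values      # global shape descriptor
        return self.head(pooled)

model = ShapeRegressor()
cloud = torch.rand(2, 2048, 3)   # two shapes, each sampled as 2,048 points
print(model(cloud).shape)        # torch.Size([2, 1])
```

Because the encoder consumes raw geometry rather than named parameters, shapes with differing structure can share one training set, which is the property the Ansys announcement highlights.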

The approach could catch on, prompting other CAE software vendors to introduce tools and features based on the same principle. JPR writes, “[It] is now possible to train AI on a large dataset of CAE simulations enabling evaluations beyond a component or a single design. For example, it’s now possible to evaluate how a variety of models may interact with a variety of environments. Complexity can increase exponentially.”
