The Future of Accelerated Engineering Computations
It’s important to understand the problem and define it correctly to get the right results in the right time, says a DE Design and Simulation Summit presenter.
November 3, 2023
In today’s age, there are “enormous changes” in what’s available in the hardware space and in how that affects engineering computing. That’s the 30,000-foot view from Ozen Engineering’s MingYao Ding, VP of engineering and principal, who presented during “The Future of Engineering Computing—From Workstations to the Cloud” track of the DE Design and Simulation Summit on October 26, 2023. Sunnyvale, CA-based Ozen Engineering provides engineering simulation software and training and serves as an Ansys simulation channel partner.
Kenneth Wong, senior editor of Digital Engineering, moderated the track.
Ding, a 20-plus year veteran of the simulation software and engineering analysis space, shared how his team uses different types of hardware to accelerate their engineering computations.
“Engineering simulation takes a CAD model, which you can design in all the major CAD tools. We can discretize the CAD object into meshes … then do a wide range of simulations. This type of simulation—engineering computing—can easily become computationally heavy. It’s important to solve these problems quickly.
“One of the key challenges of engineering computing: we can never have enough computing resources to model the real world exactly as it really is in all of its intricate detail,” Ding explains.
The attempt to speed up simulation involves a set of trade-offs, according to Ding. “We have to start by understanding the level of accuracy and amount of detail we want in our simulation results. From there we can choose different types of simulation systems and different methodologies.”
What Matters Most
Critical for the engineer, when challenged with a difficult problem, is to understand what that problem is and to define it in such a way that “you get the right results in the right time,” Ding says.
For example, he says, when facing a complex problem, one can choose to reduce it to a component-level problem instead of a system-level one.
Once it’s determined what problems to solve and what methodology to apply, Ding says it’s time to decide what systems to run it on.
Questions to Ask
Questions might include: How many CPUs? GPUs? What type of memory? SSD or HDD? “All of these things are important to run simulations quickly,” Ding says.
It boils down to three main ways to accelerate engineering computing, Ding says.
The first way has been done for the last three decades: distributed solve on multiple CPU cores. “Take one big problem and chunk it up into lots of smaller problems. Send each of these sections to a different computer. After you run the simulation, combine it all back together — and have a final result,” he says. This approach can solve huge problems quickly and is often used in aerospace, automotive and electronics.
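The chunk-solve-recombine pattern Ding describes can be sketched in a few lines. This is an illustrative sketch, not Ozen’s actual tooling: `solve_chunk` is a hypothetical stand-in kernel rather than a real FEA/CFD solver, and a thread pool stands in for the separate machines or MPI ranks used in practice.

```python
# Sketch of distributed solve: partition one big problem into sections,
# solve each section on a separate worker, then merge the partial results.
from concurrent.futures import ThreadPoolExecutor

def solve_chunk(chunk):
    # Stand-in for solving one mesh partition (hypothetical kernel).
    return [x * x for x in chunk]

def distributed_solve(problem, n_workers=4):
    # 1. Chunk the big problem into smaller sections.
    size = max(1, (len(problem) + n_workers - 1) // n_workers)
    chunks = [problem[i:i + size] for i in range(0, len(problem), size)]
    # 2. Send each section to a different worker (a thread here stands in
    #    for a separate machine or MPI rank).
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partial_results = list(pool.map(solve_chunk, chunks))
    # 3. Combine the partial results back into one final result.
    return [value for part in partial_results for value in part]

print(distributed_solve(list(range(8))))  # matches solving it all at once
```

Because `map` preserves chunk order, the recombined answer is identical to solving the whole problem in one pass; real solvers must additionally exchange boundary data between partitions.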
In the last 10 years, GPU acceleration has entered the picture, with the capacity to run fairly large models. “This allows us to solve much, much faster—a 10-100x speedup on the same types of problems,” Ding says.
The third way to accelerate engineering computing: distributing parametric changes across multiple computers, a scalable method.
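This third approach scales so well because each parametric run is independent and needs no communication with the others. A minimal sketch, assuming a hypothetical `run_simulation` stand-in and using threads where real deployments would use separate machines or cloud nodes:

```python
# Sketch of distributing parametric changes: the same model is run once per
# design variant; independent runs scale out linearly across workers.
from concurrent.futures import ThreadPoolExecutor

def run_simulation(params):
    # Hypothetical stand-in for one full simulation of a design variant.
    length, load = params
    return load / length  # illustrative response metric

design_points = [(1.0, 10.0), (2.0, 10.0), (1.0, 20.0), (2.0, 20.0)]

# Each design point could go to a separate cloud node; threads stand in here.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_simulation, design_points))
print(results)  # one result per design variant, in submission order
```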
The key factor when picking a GPU is the amount of memory available, Ding explains. “Memory is the key consideration for simulation size. All GPUs are extremely fast,” he notes.
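Since memory bounds simulation size, a rough sizing check is often the first step. The numbers below are illustrative assumptions, not figures from the talk: real per-cell memory cost varies widely with solver and physics.

```python
# Hypothetical back-of-envelope check of whether a model fits in GPU memory.
# The 1,000-bytes-per-cell figure and the 24 GB card are assumptions for
# illustration only.
def fits_in_gpu(n_cells, bytes_per_cell=1_000, gpu_mem_gb=24):
    required_gb = n_cells * bytes_per_cell / 1e9
    return required_gb <= gpu_mem_gb

print(fits_in_gpu(20_000_000))  # ~20 GB needed -> fits on a 24 GB card: True
print(fits_in_gpu(50_000_000))  # ~50 GB needed -> does not fit: False
```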
GPUs can be used in a range of engineering simulations: particle, optics, photonics and electromagnetic.
“GPU simulation really is on the front burner of most simulation development teams these days, because the memory available in these GPUs is now allowing us to do truly industrial-level problems all in a workstation GPU,” he shares.
“Literally, it takes less time for our team to run this CFD model than it takes to get a cup of coffee. The productivity improvements available at the workstation level for all engineers who do simulation are very impressive and exciting for all of us,” he shares.
Cloud Acceleration
On the cloud acceleration side, Ding notes big improvements in cloud-based computing: access to high-performance computing anywhere; a huge range of compute options; choice of cloud vendor; any software you want to run; and multiple licensing options.
He highlighted the advent of managed cloud solutions about a decade ago, such as Microsoft Azure and AWS, which enabled engineering computing to run on a private cloud. These are easy to set up and use, with dedicated support. Now there are also public cloud solutions that require some setup.
“Teams can now do amazing things just from the desktop workstation,” Ding says. “This allows integration of rapid simulation and validation as a part of the product design process.”
And for those times when you need to solve really big problems, maybe analyze a full system in detail. “That’s when our team now accesses cloud HPCs,” Ding says.
Cloud HPCs offer enormous computational resources on demand, many access options and simplified installation and setup, and they are well suited to occasional large simulations or heavy parametric optimization workloads.
“The combination of very powerful GPU-powered workstations with the cloud enables our engineers and the customers we work with to tackle any size problem in the most efficient manner,” Ding concluded.
The DE D&S Summit was sponsored by NVIDIA and Dell. The engineering computing track was sponsored by PSSC Labs and UberCloud.
About the Author
Stephanie Skernivitz is the Associate Editor of Digital Engineering.