September 14, 2018
Across industries, design complexity is on the rise: In aerospace, lattice structures are helping to lightweight parts, while breakthroughs in battery technology are fueling more environmentally friendly forms of transportation. The behind-the-scenes problem solver is advanced simulation, but the catch is that complex simulation models require considerable computing resources.
Larger, high-fidelity simulation models, coupled with more widespread use of CAE tools by a broader swath of users, have pushed simulation to the forefront of the design process. In addition, the rise of the digital twin, enabled by the Internet of Things (IoT) and connected products, is fueling more sophisticated simulations that capture insights on the behavioral characteristics of potential prototypes, allowing engineers to predict failures and identify flaws and inefficiencies well before building a costly physical prototype.
While simulation helps engineering organizations pinpoint potential glitches when it’s far cheaper to make changes, and encourages the experimentation necessary for optimized designs, the iterative process grinds to a screeching halt if the platform can’t handle sophisticated processing needs. It’s not uncommon for engineers with an outdated workstation to be bogged down for hours, if not days, as their systems churn through a backlog of advanced, resource-intensive simulations.
Without a platform tuned for high-performance modeling and analysis, engineering organizations are left with no choice but to rein in simulation efforts, either vastly scaling back design exploration or narrowing the scope of the problems they’re trying to solve—both scenarios that undermine the product development process and ultimately put competitive advantage at risk. According to a Digital Engineering survey, 68% of respondents are forced to limit model size and the amount of detail in their simulations at least half of the time.
“We live in a highly competitive landscape where customers are challenged to drive innovation and increase product quality while at the same time having to reduce development cycle times and shorten time to market,” notes Wim Slagter, director of the High Performance Computing & Cloud Alliance group at simulation software maker ANSYS. “Engineers are pressed to produce more and better designs faster than ever, and accelerating simulation workflows is critical not only for considering more design ideas, but to make more efficient product development decisions based on the understanding of performance tradeoffs.”
Simulation’s Need for Speed
Beyond upgrading to a state-of-the-art engineering workstation complete with multi-core processors, the latest dedicated graphics and systems memory, and SSD storage, there are other ways to amp up scalable compute power and accelerate simulation performance. Next-generation GPU appliances, software-as-a-service (SaaS) simulation solutions, cloud-based high-performance computing (HPC) resources, and new families of clusters are just some of what’s available today to push the pedal on simulation workflows.
“It doesn’t matter what technology is used—ultimately the idea is to increase engineers’ productivity, whether that’s accomplished through a dedicated GPU appliance or a cloud computing cluster,” Slagter says. “It’s all a matter of what fits best in an individual environment.”
One solution is to dedicate large jobs to servers, freeing up workstations for other tasks. The Dell Precision 7920, for example, is a 2U rack-mount workstation. Because servers are expected to run around the clock, the Precision 7920 offers plenty of redundancy: The front of the system provides eight hot-swappable hard drive bays, so if a drive fails, you can swap in a new one without first shutting down the system. The front panel also provides a USB 3.0 port, space for an optical drive, a USB management port, a pair of USB 2.0 ports and a 15-pin VGA port. The Precision 7920 can also be equipped with a second power supply, so if one fails, it too can be swapped out without interrupting work.
For graphics-intensive workflows, dedicated GPU appliances (typically rack-mount server systems that can be scaled out with professional-class GPUs like NVIDIA Quadro and Tesla) can accelerate simulation workflows and serve as effective tools to speed up rendering and visualization. These high-density, fully integrated clusters are typically appointed with dozens, if not hundreds, of CPU cores and GPUs to deliver teraflops of computing performance. Most of these systems also support Virtual Desktop Infrastructure (VDI) capabilities, allowing globally dispersed engineering teams to collaborate securely on modeling and design problems.
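Why throwing hundreds of cores at a solver doesn’t yield a proportional speedup comes down to Amdahl’s law, a general rule of thumb (not specific to any vendor mentioned here): the serial fraction of a simulation caps the benefit of adding cores. A minimal sketch, with an assumed 95% parallelizable workload:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Upper-bound speedup for a workload where `parallel_fraction`
    of the runtime benefits from parallel execution (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

if __name__ == "__main__":
    # A solver that is 95% parallelizable tops out well below the core count;
    # the 5% serial portion limits speedup to at most 20x no matter how
    # many cores the cluster provides.
    for n in (8, 64, 512):
        print(f"{n:4d} cores -> {amdahl_speedup(0.95, n):.1f}x speedup")
```

Real CFD and FEA codes scale differently depending on the solver and mesh, so the 95% figure is purely illustrative, but the shape of the curve explains why appliance vendors balance core counts against memory bandwidth and interconnect speed rather than simply adding processors.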
“Accelerating simulation is not just about reducing time to solution for calculations or simulation, but also about shortening processing time and the end-to-end workflow,” Slagter adds. “A VDI environment lets you leave the data where you computed it and do the pre- and post-processing remotely.” To that end, ANSYS release 18.2 offers support for a range of remote displays and virtual desktops.
Cloud-based HPC offerings are another way to wring faster performance out of simulations. Companies like Rescale and Penguin Computing offer HPC platforms as a service, preconfigured with leading simulation software. General-purpose cloud platforms like AWS or Google, by contrast, still require engineering organizations to do the work of configuring and managing HPC environments, a difficult task even in the cloud. Rescale’s ScaleX platform offers turnkey access to software from over 250 vendors, including simulation heavyweights like ANSYS, SIMULIA, COMSOL, Siemens PLM Software, Altair, MSC Software, and others. The full-stack HPC cloud solution gives engineering teams the ability to spin up CAE applications from anywhere with a Web browser in just a few clicks, gaining access to the latest technologies without requiring a dedicated staff of IT experts to configure, manage, and schedule jobs in a demanding HPC environment.
Cloud HPC resources are an especially good option for companies that need bursts of compute capacity to accommodate special projects or the occasional extra-large simulation. “When those special projects fall out of the sky that require large amounts of horsepower for a month or two to win new business and you can’t justify buying all the capability and bringing it on-premise, the cloud is a good option for burst capabilities,” says Rodney Mach, CEO and founder of TotalCAE, which offers hybrid, turnkey HPC simulation services combining public cloud and private clusters. “It enables you to quickly bring on capacity in hours instead of months.”
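The buy-versus-burst tradeoff Mach describes comes down to simple arithmetic: a cluster purchase must be amortized over its useful life, while cloud capacity is paid for only while the project runs. A back-of-the-envelope sketch, with every price and duration a hypothetical placeholder rather than a quote from any provider named in this article:

```python
def on_prem_cost(hardware_price: float, useful_life_months: int,
                 project_months: int) -> float:
    """Amortized share of a cluster purchase attributable to one project."""
    return hardware_price * project_months / useful_life_months

def cloud_burst_cost(hourly_rate: float, nodes: int,
                     hours_per_month: float, project_months: int) -> float:
    """Pay-per-use cost of renting comparable capacity only while needed."""
    return hourly_rate * nodes * hours_per_month * project_months

if __name__ == "__main__":
    # Hypothetical two-month special project on 16 nodes,
    # roughly 200 compute-hours per node-month:
    buy = on_prem_cost(hardware_price=250_000, useful_life_months=36,
                       project_months=2)
    rent = cloud_burst_cost(hourly_rate=3.0, nodes=16,
                            hours_per_month=200, project_months=2)
    print(f"amortized purchase share: ${buy:,.0f}  vs.  cloud burst: ${rent:,.0f}")
```

The amortized figure flatters the purchase, of course: the full hardware price is still due up front, which is exactly why a short-lived project rarely justifies buying capacity and bringing it on-premise.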
TotalCAE’s customers are typically engineering departments grappling with existing systems that are underpowered for sophisticated simulation workflows and that don’t have enough support from their IT departments, which are overwhelmed with enterprise IT responsibilities, Mach says. The company offers the choice of a managed HPC Cluster Appliance, essentially a private cloud for finite element analysis (FEA)/computational fluid dynamics (CFD) workloads, or a public cloud option, also managed by the firm and featuring its simple-to-use web portal, which lets engineers submit jobs without any special training.
“We offer turn-key simulation as a service,” he explains. “We are trying to minimize the amount of time engineers spend on setting up computing. We want them to push a button and get answers back so they can focus on engineering things—not IT technology.”
A hybrid computing approach, in which the latest workstations and graphics technologies are supported by on-demand HPC when it’s needed, suits many design engineering teams’ simulation workflows.