June 12, 2020
Engineers are working with increasingly larger and more complex models during the design process. Working with a model of a complete airplane, for example, has always been a challenge, but model size and complexity are an ongoing concern across industries.
Even models for much smaller products may incorporate multiple, complex sub-assemblies, as well as metadata, material properties, and other information. And as more smart and connected products are developed, model complexity is only going to increase.
Although these complex, high-fidelity 3D models are crucial to bringing sophisticated products to market faster, they can wreak havoc on workflows and productivity by grinding workstation performance to a halt. The growing embrace of systems modeling, along with pervasive simulation use spanning new modalities and a broader base of users, is also pushing model fidelity to a point where older workstations struggle to maintain effective performance.
Design and simulation software providers have responded by incorporating new features that make handling large models easier and faster. In most cases, these features also leverage the availability of powerful GPUs, as well as the higher core counts and larger amounts of memory available in modern engineering workstations.
“Customers are trying to add more value for their customers and many are doing that by moving to system-level design,” says Jon den Hartog, senior director of product management at Autodesk. “That means the scope of what they’re modeling increases as a result. At the same [time], the scope is increasing because they are trying to create a more accurate digital representation of what the design is before they build it.”
This means engineering teams newly empowered by multidisciplinary simulation and systems engineering workflows are taking a productivity hit unless they re-evaluate their optimal workstation configurations.
“Without enough horsepower, it affects their quality of life working with a massive model,” den Hartog says. “Every change will require a significant amount of time to calculate and propagate the math throughout the model.”
Faster Hardware
New solid-state storage options, CPUs with faster clock speeds and more powerful GPUs are just some of the hardware advances being integrated into next-generation engineering workstations to help with large-scale 3D model management and processing.
NVIDIA’s Quadro RTX family (powered by the NVIDIA Turing architecture) has set a new bar. The Quadro RTX line integrates RT Cores, accelerator units dedicated to performing ray tracing operations with high-level efficiency, along with high-end memory and artificial intelligence capabilities.
The architecture and core combination is designed to optimize performance of sophisticated applications like virtual reality, ray tracing, photorealistic rendering and simulation, all of which require massive compute horsepower and real-time performance.
The choice of RTX platform depends on the use case. The Quadro RTX 4000 hits the sweet spot for engineers immersed in photorealistic ray tracing applications, while the top-of-the-line Quadro RTX 8000 is equipped with 48GB of GPU memory and the ability to pair two GPUs to double available graphics memory and scale performance. For engineers on the go, there are mobile workstations that also provide GPU acceleration. The newest additions to the Dell Precision 7000 family, for example, can support up to the Quadro RTX 5000.
Although GPUs are a crucial tool for boosting large-scale model performance, they aren’t the only solution to large model bottlenecks—sometimes users can have trouble getting data into the cache for the first time, says Ken Versprille, executive consultant at CIMdata.
“Typically, the big complaint we hear from CAD users is that it takes so long to initiate the processes and activate the assembler and they struggle with that, even with newer GPUs,” he says.
Beyond GPUs, the right CPU for the application, along with solid-state storage and memory options, also plays a big role in optimizing performance and addressing large-model complexity.
In general, the system’s RAM (random access memory) helps overall CAD software performance, since it lets the software load large files into memory for immediate access. For some graphics-related operations, the GPU’s onboard memory makes a difference for the same reason.
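As a back-of-the-envelope illustration of that point, a short script can compare a model file's size against available RAM before attempting to load it whole. This sketch uses Python with the third-party psutil package; the headroom factor is an illustrative assumption, not part of any CAD product.

```python
# Rough check of whether a model file could be held entirely in RAM.
# Requires the third-party psutil package; the 0.8 headroom factor is
# an arbitrary illustrative margin, not a vendor recommendation.
import os
import psutil

def fits_in_ram(model_path: str, headroom: float = 0.8) -> bool:
    """Return True if the file could plausibly be loaded fully into free memory."""
    file_size = os.path.getsize(model_path)
    available = psutil.virtual_memory().available
    return file_size < available * headroom
```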
Configuring the right workstation for these workloads requires a holistic view of the entire system. Dell offers Dell Precision Optimizer, an AI-based tool that provides a real-time view of system performance. In addition to flagging potential system bottlenecks, Optimizer can tune hardware settings so the workstation runs the target application faster than it would with defaults. That means the workstation can automatically optimize itself for specific workloads in real time.
Large Model Software Optimization
CAD, simulation and design tool providers are also working to identify bottlenecks and re-architect their solutions so the software can fully leverage GPUs and other optimization advancements. Large model performance improvements have been top of mind for most of these vendors.
In addition to expanded GPU support, vendors are introducing data management features that allow engineering teams to holistically work on large models and render complete assemblies by limiting what must be loaded into memory or directing more processing work to the GPUs.
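Vendors implement these features differently, but the core idea of deferring heavy geometry until a sub-assembly is actually needed follows a familiar lazy-loading pattern. The sketch below is a generic Python illustration; the class and method names are hypothetical, not any vendor's API.

```python
# Generic lazy-loading sketch: sub-assemblies start as lightweight
# proxies, and full geometry is read from disk only on first access.
# All names here are hypothetical, not taken from any CAD product.
class SubAssemblyProxy:
    def __init__(self, path: str):
        self.path = path        # where the heavy geometry lives on disk
        self._geometry = None   # nothing loaded into memory yet

    @property
    def geometry(self):
        if self._geometry is None:
            # Deferred load: pay the memory and I/O cost only when the
            # user actually opens or edits this sub-assembly.
            self._geometry = self._load_from_disk()
        return self._geometry

    def _load_from_disk(self) -> bytes:
        with open(self.path, "rb") as f:
            return f.read()     # stand-in for real mesh/B-rep parsing
```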
Within Autodesk, for example, if an engineer makes a model change in Inventor, the system can split the calculations into multiple parts that are computed independently and brought back together at the end, den Hartog explains. Ray tracing operations are another use case where code parallelization can deliver performance increases, he says.
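Autodesk has not published the internals of that solver, but the split-compute-merge pattern den Hartog describes can be sketched in a few lines of Python using the standard library. The work function and model data below are hypothetical stand-ins, not Inventor APIs.

```python
# Minimal sketch of the split/compute/merge pattern: divide the affected
# parts into chunks, recompute each chunk on its own core, then combine
# the partial results. Hypothetical stand-ins, not Autodesk Inventor code.
from concurrent.futures import ProcessPoolExecutor

def recompute_chunk(chunk):
    # Placeholder for the expensive per-part recalculation a model
    # change triggers; here it just sums squared "dimensions."
    return sum(x * x for x in chunk)

def propagate_change(model_parts, workers=4):
    # Split the affected parts into independent chunks...
    chunks = [model_parts[i::workers] for i in range(workers)]
    # ...compute each chunk on its own core...
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(recompute_chunk, chunks)
    # ...and bring the independent results back together at the end.
    return sum(partials)

if __name__ == "__main__":
    print(propagate_change(list(range(100_000))))
```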
Autodesk is also investing in real-world model testing to understand workflows and bottlenecks, as well as in a feature called adaptive graphics. The latter detects when the refresh frame rate dips below a certain threshold while working with huge models; if so, it compensates by not rendering some of the smaller parts as the model rotates.
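Autodesk hasn't detailed the exact heuristic behind adaptive graphics, but the behavior described, shedding small parts when the frame rate dips during rotation, can be approximated in a short sketch. The Part class, threshold and keep fraction below are illustrative assumptions, not Autodesk's implementation.

```python
# Illustrative frame-rate-driven culling: when FPS drops below a
# threshold mid-rotation, skip drawing the smallest parts. All names
# and numbers here are assumptions, not Autodesk's implementation.
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    screen_area: float  # projected on-screen size; small parts matter least

def parts_to_draw(parts, current_fps, is_rotating,
                  fps_threshold=30.0, keep_fraction=0.5):
    """Return the subset of parts worth rendering this frame."""
    if not is_rotating or current_fps >= fps_threshold:
        return parts  # full fidelity when performance allows
    # Under load, keep only the visually dominant parts.
    ranked = sorted(parts, key=lambda p: p.screen_area, reverse=True)
    keep = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep]
```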
IronCAD also claims significantly better large assembly performance in IronCAD 2020, thanks to reduced load/save times; speed improvements in the IronCAD View Creation mode, especially with large assemblies; and new functionality that lets users selectively modify camera interaction for greater efficiency.
SOLIDWORKS also has a Large Assembly Mode, designed to accelerate assembly performance. When this mode is turned on, the software employs a number of strategies to reduce the graphics workload, forgoing non-critical display features to make the dataset lighter. For example, the software suspends high-quality transparency and automatic highlighting of selectable areas during certain operations. These tricks help the assembly respond without delay when you rotate, zoom or pan.
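Conceptually, the mode amounts to flipping a set of display options from fidelity toward speed. The sketch below is a generic illustration of that trade-off, not the SOLIDWORKS implementation or its API.

```python
# Generic sketch of a large-assembly display trade-off: suspend
# non-critical visual effects so rotate/zoom/pan stay responsive.
# Illustrative only; these are not SOLIDWORKS settings or API calls.
from dataclasses import dataclass

@dataclass
class DisplaySettings:
    high_quality_transparency: bool = True
    highlight_selectable_areas: bool = True

    def enable_large_assembly_mode(self):
        # Trade visual fidelity for interactive responsiveness.
        self.high_quality_transparency = False
        self.highlight_selectable_areas = False

    def disable_large_assembly_mode(self):
        # Restore full display fidelity.
        self.high_quality_transparency = True
        self.highlight_selectable_areas = True
```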
The latest version of SOLIDWORKS also includes an Enhanced Graphics feature (provided you have a certified graphics card) that leverages the GPU to improve responsiveness when working with large models.
Altair claims its OptiStruct structural analysis solver can achieve up to 10X speedups on an NVIDIA GPU-accelerated system architecture. GPU acceleration was first enabled for OptiStruct’s direct solver and its large-scale NVH solver, AMSES. More recently, GPU support has been extended to the PCG iterative solver, with similar performance improvements. The OptiStruct 2019 release added support for multiple GPUs so engineers can extend those gains to large-model scenarios.
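OptiStruct's GPU code path is proprietary, but PCG itself is a textbook method, and a plain NumPy sketch shows the kernel being accelerated: repeated matrix-vector products. Moving those same array operations onto a GPU array library such as CuPy is what makes the method a natural fit for graphics hardware. The example below is a generic sketch assuming a symmetric positive-definite matrix and a simple Jacobi preconditioner, not Altair's implementation.

```python
# Textbook Jacobi-preconditioned conjugate gradient (PCG) in NumPy,
# the class of iterative solver OptiStruct accelerates on GPUs. This
# is a generic sketch, not OptiStruct code; A must be symmetric
# positive definite. The A @ p matrix-vector product dominates the
# runtime and is the operation a GPU speeds up.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x                # initial residual
    M_inv = 1.0 / np.diag(A)     # Jacobi preconditioner: inverse diagonal
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p               # dominant, GPU-friendly kernel
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r            # re-apply the preconditioner
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```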
With software vendors finding new ways to leverage GPU horsepower, and hardware makers continuously pushing new advancements, there are definite signs that large-model performance will only continue to improve.
However, it’s up to engineers to know their software and fully leverage these features. That requires training, education, and adoption of good modeling practices. “Vendors are putting in a lot of bells and whistles to determine what you’re working on so that only the graphics data required is fully loaded, which ensures much faster performance,” says CIMdata’s Versprille. “Engineers need to explore their options in this area and try to understand them as best they can.”