Latest News
August 18, 2024
Astec Industries designs and manufactures heavy equipment for the construction and road-building industries. Its machines crush, move, and mix rock, and the company relies heavily on fluid, mechanical, and multi-body dynamics simulations, as well as discrete element method (DEM) simulation, to model the behavior of gravel, rocks, and other particles.
Andrew Hobbs, Director of Advanced Technologies at Astec Digital, spoke to us about how GPU acceleration has helped advance the company's DEM simulation via its Altair EDEM software, which is part of the Altair® HyperWorks® design and simulation platform. Using NVIDIA's flagship desktop GPU, the NVIDIA RTX™ A6000, Astec's testing showed a 90x speedup when simulating realistic particle shapes.
Hobbs began his career at Astec as a graduate in 2001, doing FEA and CFD simulations. Since then, the simulation team has grown from one employee to eight, and over more than 20 years the company has gradually expanded its use of simulation to the point that it is now an essential part of the design workflow.
You can learn more about Astec and its DEM efforts in these videos about simulation of rock crushing and asphalt mixing, as well as a video interview with Hobbs. You can also read more about NVIDIA RTX GPUs and Altair EDEM here and here.
What are some of the key simulation challenges you face at Astec?
We are a company that makes equipment for every part of the Rock to Road® process: getting rocks out of the ground, making them smaller, moving, sorting, and stockpiling them, through asphalt and concrete plants, all the way to pavers and reclaimers. For asphalt production, we design plants that heat and dry the rock, then mix and coat it with liquid bitumen. These are very complex multi-physics processes with high energy requirements, and simulation gives us a tool to analyze and optimize our designs for better performance and efficiency.
We started using EDEM, which is now part of Altair, in 2006, in the very early days of commercial DEM codes. Initially, the CPU compute power was not there to do everything we wanted, but the application was a good fit.
In much of our equipment, harsh operating conditions and high temperatures make direct observation of the internal process impossible. You can see what is going in and what is coming out, but changes to internal features have historically been very trial and error. DEM gives us visibility into what is happening inside the equipment, and the opportunity to quickly try different design changes and see how they affect performance. It has been a game changer for equipment design.
How has DEM evolved? How accurately can you predict the behavior of irregular objects like gravel? DEM has traditionally used spheres, correct?
Spheres are a computationally very efficient way to track contacts, which is why most DEM codes use spheres. By adding more complex physics to the contact models that determine particle behavior, it's possible to represent the bulk flow of non-spherical materials like rock with spheres or clumped spheres. For simulating the flowability of aggregates, we've found that a properly calibrated material model using clumped spheres is accurate and fast. You can go for hyperrealism in terms of shape, but that comes at a cost.
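To make the clumped-sphere idea concrete, here is a minimal Python sketch of a rigid multi-sphere particle and the cheap sphere-to-sphere contact test it enables. This is illustrative only: the class and function names are hypothetical, and this is not how Altair EDEM represents clumps internally.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Clump:
    """A rigid multi-sphere particle: sphere centers are fixed in the body frame."""
    offsets: np.ndarray   # (n, 3) sphere centers in the body frame
    radii: np.ndarray     # (n,) sphere radii

    def world_spheres(self, position: np.ndarray, rotation: np.ndarray) -> np.ndarray:
        """Sphere centers in the world frame for a given pose (3-vector, 3x3 matrix)."""
        return position + self.offsets @ rotation.T

def clumps_touch(a, pos_a, rot_a, b, pos_b, rot_b) -> bool:
    """Contact between two clumps reduces to cheap pairwise sphere-sphere checks."""
    ca, cb = a.world_spheres(pos_a, rot_a), b.world_spheres(pos_b, rot_b)
    dists = np.linalg.norm(ca[:, None, :] - cb[None, :, :], axis=-1)  # (na, nb)
    return bool(np.any(dists < a.radii[:, None] + b.radii[None, :]))

# A rough three-sphere "rock" (dimensions in meters, purely illustrative)
rock = Clump(offsets=np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0], [0.01, 0.015, 0.0]]),
             radii=np.array([0.02, 0.015, 0.012]))
print(clumps_touch(rock, np.zeros(3), np.eye(3),
                   rock, np.array([0.05, 0.0, 0.0]), np.eye(3)))
```

The appeal is that every contact check stays a sphere-distance comparison, no matter how irregular the composite shape looks, which is what keeps clumped spheres fast relative to true polyhedra.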
For applications where interlocking forces are important or there are orientation-specific apertures, shape is more of an issue, so you can use polyhedral particles, which Altair supports. There is a computational cost to that, and for irregularly shaped materials like rock you still need to decide on a representative shape or set of polyhedral shapes.
As with all simulation, methodology decisions, including how you approach particle shape, depend on the question you are trying to answer, the level of accuracy required, and how much time you have to answer it.
What types of software and hardware are you using for simulation, other than Altair?
Most of the team have Dell Precision 7920 workstations with NVIDIA RTX A6000 GPUs for day-to-day simulation projects. The technology has advanced significantly from both the solver and the hardware perspective, and with access to cloud compute, instead of being able to do just a handful of simulations to provide an engineering designer with an answer, we can now look at hundreds. We can run simulations in the cloud to build a reduced order model (ROM), or use machine learning to create an optimization routine that narrows a very wide initial design space and identifies an optimum we might not have found through an iterative design process. We do not use the cloud for every problem, but it has changed the landscape of what is possible with DEM.
We still use local compute for most of our work, but large design-of-experiments studies containing hundreds of variations can be drastically accelerated with cloud instances of multiple NVIDIA A100 Tensor Core GPUs.
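As a simplified illustration of that DOE-to-ROM pattern, the sketch below fits a surrogate model on a batch of simulation results and then optimizes over the surrogate instead of launching new DEM runs. The design variables, bounds, and objective are stand-ins, not Astec's actual HyperStudy workflow.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.optimize import minimize

# Stand-in DOE: each row is a design point (e.g., two geometry parameters),
# y is a performance metric that would come from the corresponding DEM run.
X = np.random.uniform([10.0, 5.0], [45.0, 15.0], size=(200, 2))
y = np.sin(np.radians(X[:, 0])) * X[:, 1]   # toy response surface

# Fit the surrogate ("reduced order model") on the simulation data
rom = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Optimize over the cheap surrogate rather than running more DEM simulations
res = minimize(lambda x: -rom.predict(x.reshape(1, -1))[0],
               x0=np.array([25.0, 10.0]), bounds=[(10, 45), (5, 15)])
print("Candidate optimum:", res.x)
```

The surrogate evaluates in microseconds, so a wide design space can be searched exhaustively and only the most promising candidates verified with full DEM runs.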
We’ve developed strong relationships with our CAE vendors over the years. For CFD, FEA, and some multi-body dynamics (MBD) simulation we predominantly use Ansys tools. If we are coupling DEM with MBD, we use Altair® MotionView® and Altair® MotionSolve®. For large-scale design of experiments (DOEs), we use Altair HyperStudy to optimize designs and train ROMs. If we are coupling MBD with FEA, or feeding inputs into Ansys Mechanical, we use Ansys Motion. Astec uses SOLIDWORKS for CAD.
How have the more powerful GPUs and workstations affected your DEM workflows?
GPUs and DEM are made for each other because of the nature of contact detection algorithms. GPU acceleration has transformed both the number of particles we can simulate and the computational speed. When we started out with DEM, we were looking at 25,000 particles in our drum on an 8-core workstation, which was as good as you could get back then, and it would take a day to solve 60 seconds of simulated time. Now, in the same amount of time, we can quite easily simulate 60 seconds with a million particles on the RTX A6000, and we can extend the physics models to include liquid coating and cohesion, heat transfer, and coupling to CFD and MBD.
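Why contact detection maps so well onto GPUs: the broad phase bins particles into a uniform grid so each particle only checks its own and neighboring cells, making every particle's work small, regular, and independent. The serial Python below is a toy sketch of that structure under assumed equal-radius spheres; production GPU codes implement it with parallel sorting and prefix sums, and all names here are illustrative.

```python
import numpy as np

def broad_phase_pairs(centers, radius):
    """Uniform-grid broad phase: only particles in the same or adjacent
    cells can touch, so per-particle work is bounded and independent --
    the property that makes this step embarrassingly parallel on GPUs."""
    cell = 2.0 * radius                                   # cell >= particle diameter
    keys = np.floor(centers / cell).astype(np.int64)      # grid cell per particle
    grid = {}
    for i, k in enumerate(map(tuple, keys)):
        grid.setdefault(k, []).append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
               for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    pairs = []
    for i, k in enumerate(map(tuple, keys)):
        for off in offsets:                               # scan the 27 nearby cells
            for j in grid.get((k[0] + off[0], k[1] + off[1], k[2] + off[2]), []):
                if j > i and np.linalg.norm(centers[i] - centers[j]) < 2 * radius:
                    pairs.append((i, j))
    return pairs

centers = np.random.rand(1000, 3)                         # toy particle cloud
print(len(broad_phase_pairs(centers, radius=0.02)), "candidate contacts")
```

Because each particle's neighbor search touches only a handful of cells, the cost grows roughly linearly with particle count, and a GPU can process every particle's search concurrently.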
Currently we are using a single GPU in the workstations, but the next step for us is probably going to be looking at multi-GPU workstations. As we look ahead towards real-time digital twins, using simulation to supply the data to train models will require us to run hundreds of simulations.
What are your plans for artificial intelligence and digital twins?
We are still in the process of understanding where AI can help us. There are great use cases for machine learning and optimization studies and using simulation to create synthetic data. We’ve already seen the benefits of this approach. Simulation gives us a completely controlled, virtual environment to try out lots of things.
Astec Digital’s cloud platform connecting Astec equipment will enable more data visibility and bring us a step closer to real-time digital twins. As more data becomes available from physical equipment, we can compare the physical system’s performance to the performance predicted by a simulation-produced ROM, and then close that loop to provide better inputs to our simulations from physical sensors. We are making good progress in this area, and the next few years will be really exciting. The technology is there, but we have some work to do to connect equipment data to simulation and machine learning models to deliver real value to our customers.
I don't believe AI will replace simulation anytime soon. Most of our applications are very niche and not easy to generalize. Any kind of AI model we use is something we will have to train ourselves. We are looking at that and exploring where it would make sense to do this.
We have been using NVIDIA Omniverse, initially as a post-processing tool. Simulation is great as a design tool, but it is also valuable for sales and marketing. Our equipment is a black box, and being able to show the advantages of our design with simulation results and attractive post-processing is helpful.
We hope NVIDIA Omniverse will be the 3D platform we use for digital twins, connecting CAD, simulation, IoT data, and machine learning. We are using NVIDIA Omniverse and NVIDIA Isaac Sim™ to produce synthetic data to train computer vision models. NVIDIA Omniverse excels at creating realistic virtual worlds, and combining this with the physics models in our simulations and IoT sensor data will be a very powerful tool for building intelligent and autonomous controls that improve the performance and sustainability of our equipment.
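The core idea behind synthetic training data is domain randomization: vary what the real world would vary, and record ground-truth labels for free from the scene description. The sketch below shows that generic pattern only; the parameters are invented for illustration and this is not Isaac Sim's actual replicator API.

```python
import json
import random

def sample_scene(seed):
    """Generate one randomized scene spec plus its ground-truth labels.
    All parameters here are hypothetical stand-ins, not an Isaac Sim API."""
    rng = random.Random(seed)
    scene = {
        "lighting_lux":    rng.uniform(200, 20000),     # overcast to full sun
        "camera_height_m": rng.uniform(1.5, 4.0),
        "dust_density":    rng.uniform(0.0, 0.3),
        "particles": [
            {"size_mm": rng.lognormvariate(3.0, 0.5),   # skewed rock size distribution
             "x": rng.uniform(0.0, 2.0), "y": rng.uniform(0.0, 2.0)}
            for _ in range(rng.randint(50, 400))
        ],
    }
    # Labels come straight from the spec -- no manual annotation needed
    labels = [{"size_class": "coarse" if p["size_mm"] > 40 else "fine",
               "x": p["x"], "y": p["y"]} for p in scene["particles"]]
    return scene, labels

scene, labels = sample_scene(seed=0)
print(json.dumps(labels[:2], indent=2))
```

A renderer then turns each scene spec into images, and the paired labels train the vision model without any hand annotation of real-world footage.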