October 1, 2017
During a Formula One (F1) race, with millions watching the live feed on TV and online, skilled drivers must balance their cars’ downforce and drag to carry them efficiently through tight corners and turns. But in the weeks leading up to the race, the engineering teams behind the F1 vehicles must maintain a different kind of balance.
The Fédération Internationale de l’Automobile (FIA), the governing body of F1 racing, has strict regulations on how much computational fluid dynamics (CFD) and wind tunnel testing a team can employ. The team’s combined use of CFD (measured in teraFLOPS) and wind tunnel testing (measured in wind-on time, the time the wind is actually blowing) must not exceed the permitted allocation.
“We have to figure out a way to get the best results using the least amount of teraFLOPS,” Craig Skinner, the Red Bull Racing F1 team’s deputy head of aerodynamics, explains. “So if we have no need for wind tunnel in a certain period, we increase our CFD usage. Or if we need to run more wind tunnel tests, we have to turn down our CFD. That way, by the end of the eight-week period, we would have used up every [wind tunnel or CFD] minute allowed in our budget.”
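To make the trade-off concrete, here is a minimal Python sketch of the kind of bookkeeping Skinner describes. The FIA’s actual formula and limits are not given in this article, so the budget figures and the linear exchange between wind-on hours and teraFLOPS below are purely hypothetical.

# Hypothetical illustration of the CFD vs. wind tunnel trade-off
# Skinner describes. The FIA's real formula and limits are not given
# in this article, so the budget figures and the linear exchange
# below are invented for illustration.

CFD_BUDGET_TFLOPS = 25.0      # hypothetical CFD allocation per period
WIND_ON_BUDGET_HOURS = 65.0   # hypothetical wind-on hours per period

def remaining_cfd_allowance(wind_on_hours_used: float) -> float:
    """Return the CFD teraFLOPS still usable, assuming both resources
    draw down a single combined allocation linearly."""
    fraction_spent_in_tunnel = wind_on_hours_used / WIND_ON_BUDGET_HOURS
    return CFD_BUDGET_TFLOPS * (1.0 - fraction_spent_in_tunnel)

# Burning 26 of 65 wind-on hours (40% of the tunnel budget)
# leaves 60% of the CFD allocation:
print(remaining_cfd_allowance(26.0))  # -> 15.0 teraFLOPS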
In the same way F1 teams find ways to speed up their simulation jobs with high-performance computing (HPC), they also speed up production and manufacturing by enlisting additive manufacturing (AM), known more widely as 3D printing. “From a prototyping and tooling standpoint, 3D printing is a much faster option,” says Jim Vurpillat, marketing director, Automotive and Aerospace Industries, Stratasys.
One characteristic distinguishes the F1 teams’ use of HPC and AM from that of their automotive engineering counterparts: speed.
Keeping a Close Watch on the Cluster
Because CFD is an integral part of their work, many F1 teams keep and manage their own proprietary clusters. Andy Wade, lead application engineer, ANSYS, has been working with F1 teams for more than a decade. His engagement with the racing teams began at Fluent, the CFD software firm that ANSYS acquired in 2006. “Almost all F1 teams have their own clusters. They usually have several thousand cores at their disposal,” Wade says.
Because the FIA has strict guidelines on how much CFD a team can use, engineers also find ways to keep a close watch on their CFD usage. Sahara Force India, the F1 team from India, uses Univa’s Grid Engine to manage its HPC usage.
“We like to make the most of our allocation. It’s seldom a problem in the first few weeks, but in the final phase, it’s important [to stay within the balance],” says Gaétan Didier, head of CFD, Sahara Force India. “So in the beginning of the week, we set the soft limit for CFD. Univa’s Grid Engine is useful in keeping track of the FLOPS we use. It would stop new simulations from launching if we’re getting too close to the limit.”
Univa Grid Engine software “manages workloads automatically, maximizes shared resources and accelerates deployment of any container, application or service in any technology environment, on-premise or in the cloud,” the company explains.
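The article does not detail how Grid Engine enforces the limit internally, but the gatekeeping behavior Didier describes can be sketched in a few lines of Python. This is illustrative logic only, not Univa’s implementation or API; the class name, units and thresholds are all hypothetical.

# A minimal sketch of the soft-limit gatekeeping Didier describes:
# stop launching new simulations once usage approaches the cap.
# Not Univa Grid Engine's actual implementation or API; all names,
# units and numbers here are hypothetical.

class CfdUsageTracker:
    def __init__(self, hard_limit_tflop_hours: float, soft_fraction: float = 0.9):
        self.hard_limit = hard_limit_tflop_hours
        self.soft_limit = hard_limit_tflop_hours * soft_fraction
        self.used = 0.0

    def can_launch(self, estimated_cost_tflop_hours: float) -> bool:
        """Refuse jobs that would push usage past the soft limit."""
        return self.used + estimated_cost_tflop_hours <= self.soft_limit

    def record(self, actual_cost_tflop_hours: float) -> None:
        self.used += actual_cost_tflop_hours

tracker = CfdUsageTracker(hard_limit_tflop_hours=1000.0)
tracker.record(880.0)             # usage logged so far this period
print(tracker.can_launch(15.0))   # True: 895 is under the 900 soft limit
print(tracker.can_launch(40.0))   # False: 920 would exceed it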
Memory-Filled Nodes
Steve M. Legensky, general manager of Intelligent Light, also works with F1 teams. The company’s post-processing software, FieldView, has been used by CFD engineers on the racing teams. “Most F1 teams that are doing well tend to have their own HPC resources,” Legensky observes. “But there are also teams that find ways to utilize off-premise HPC resources from outside vendors.”
Intelligent Light’s FieldView is designed to scale on HPC hardware, so it loads, displays and processes CFD result files significantly faster on a cluster than on a desktop workstation. Though the company is not a hardware provider, its experts often advise racing teams on the best way to configure their HPC systems.
“Just throwing cores against the problem doesn’t solve it. We usually work with F1 teams to advise them on how to configure their nodes so they have nodes that are suitable for solving and nodes suitable for post-processing,” explains Legensky. “F1 teams want to run multiple jobs at the same time. So we help them configure their systems with what we call Fat Nodes—systems with just a few sockets, about 16 processing cores, NVIDIA GPUs (graphics processing units) and half a terabyte of RAM. For post-processing, RAM makes a difference.”
RAM capacity ensures that the CFD result files—a combination of numeric outputs and interactive graphic depictions of airflow fields—can be loaded onto the system without any hiccups.
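The division of labor Legensky describes, between many-core solver nodes and RAM- and GPU-heavy “Fat Nodes” for post-processing, amounts to a simple matching problem. The following Python sketch illustrates the idea; the node specifications and selection rules are hypothetical, not a description of any team’s actual cluster.

# Hypothetical sketch of routing jobs to suitable node types:
# solver jobs go to many-core compute nodes, while memory-hungry
# post-processing jobs go to GPU-equipped "fat nodes" with large RAM.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cores: int
    ram_gb: int
    has_gpu: bool

NODES = [
    Node("solver-01", cores=64, ram_gb=128, has_gpu=False),  # solving
    Node("fat-01", cores=16, ram_gb=512, has_gpu=True),      # post-processing
]

def pick_node(job_kind: str, result_file_gb: float = 0.0) -> Node:
    """Route post-processing to a GPU node whose RAM can hold the
    CFD result files; route solver jobs to the most cores."""
    if job_kind == "post":
        fat = [n for n in NODES if n.has_gpu and n.ram_gb >= result_file_gb]
        if not fat:
            raise RuntimeError("no node has enough RAM for this result set")
        return max(fat, key=lambda n: n.ram_gb)
    return max(NODES, key=lambda n: n.cores)

print(pick_node("post", result_file_gb=300).name)  # fat-01
print(pick_node("solve").name)                     # solver-01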
On-Demand HPC
Recently, many on-demand HPC vendors have sprung up, from the ubiquitous Amazon Web Services (AWS) to specialty vendors like Rescale. Engineering firms that do not want to invest in on-premise servers and HPC clusters can therefore run their CFD jobs on outside vendors’ hardware on a pay-per-use basis. But for some F1 teams, this approach raises concerns about data security.
“If we use an outside vendor, we have to make sure the data created is accessible to us and by us only,” says Didier. “We have to make sure nothing residual remains when the analysis run is over. There’s also an issue with the retrieval of the data. Usually, when the simulation is running, we want to see it live, right away. Outsourced jobs can take some time. Interactive review of the data isn’t always as smooth as doing it with on-premise hardware.”
Didier isn’t completely ruling out the option of on-demand hardware. He also sees the benefits of additional HPC capacity. “Most of the time, we’re using [our on-premise cluster] to run simulation on the current model at full throttle, so we can’t run other simulations. I wish we had a second cluster to run unrestricted simulations,” he says.
The FIA counts CFD studies on the current vehicle model against the allotted balance; CFD studies on older vehicle models, however, may be run without restriction. Although studies on older models may not be as relevant as those done on the most up-to-date model, they can still yield valuable insights about the vehicle’s performance. A second cluster would therefore give a team the option to run additional “unrestricted” CFD jobs. Didier says his team is considering such a hybrid approach for the future: a proprietary cluster augmented with on-demand computing.
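A hybrid setup like the one Didier describes implies a simple routing rule: current-model jobs stay on the metered on-premise cluster and count against the FIA balance, while older-model jobs can overflow to unrestricted capacity. The Python sketch below illustrates that rule; the destination names and the model-year test are hypothetical.

# Hypothetical routing rule for a hybrid cluster setup: only runs on
# the current car count against the FIA balance; older-model runs
# can go to a second cluster or on-demand capacity unrestricted.

CURRENT_MODEL_YEAR = 2017  # hypothetical cutoff for "restricted" runs

def route_cfd_job(model_year: int) -> tuple:
    """Return (destination, counts_against_fia_balance)."""
    if model_year == CURRENT_MODEL_YEAR:
        return ("on-premise-metered", True)   # restricted: FIA-counted
    return ("on-demand-overflow", False)      # unrestricted: older model

print(route_cfd_job(2017))  # ('on-premise-metered', True)
print(route_cfd_job(2015))  # ('on-demand-overflow', False)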
About the Author
Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.