Create Computer Clusters for Small Engineering Teams
Can a small engineering house build (and maintain) an effective computer cluster?
June 1, 2012
By Peter Varhol
Most engineers are well aware of the growing performance advantages of computing clusters for many types of engineering work. Any computing problem that can be broken down into large numbers of small, but independent computations has the potential to be significantly accelerated by the many processor cores available on most clusters. This includes just about any type of simulation, as well as standard analyses such as computational fluid dynamics (CFD) and finite element analysis (FEA).
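The "independent computations" point can be made concrete with a small sketch. The Monte Carlo estimate below splits its samples into chunks that never communicate until a final sum, which is exactly the shape of problem that maps well onto many cores. The function names are illustrative, not taken from any vendor's toolkit, and the same pattern scales from one multicore workstation up to a cluster.

```python
# Minimal sketch of an "embarrassingly parallel" job: estimate pi by
# splitting random sampling into fully independent chunks.
import random
from multiprocessing import Pool

def count_hits(samples, seed):
    """Count random points landing inside the unit quarter-circle."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(total_samples, workers=4):
    """Each worker gets an independent chunk; results combine at the end."""
    chunk = total_samples // workers
    with Pool(workers) as pool:
        hits = pool.starmap(count_hits, [(chunk, s) for s in range(workers)])
    return 4.0 * sum(hits) / (chunk * workers)

if __name__ == "__main__":
    print(parallel_pi(400_000))
```

A CFD or FEA solver is far more complex, but the principle is the same: the less the chunks need to talk to each other, the better the cluster pays off.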
Computing clusters, such as this one from IBM, have revolutionized design engineering by speeding analysis and simulation.
In contrast, most design tasks are sequential in nature, and can make use of only one core at a time. It’s important to note that software has to be written specifically to make its computations independent. Several leading engineering analysis vendors, such as ANSYS, have adapted their software to take advantage of multicore systems.
Despite those limitations, clusters have become the state of the art for much of engineering computing. Virtually all of the Top 500 supercomputers employ some form of cluster technology. But the cost and complexity could still put the technology out of reach of small engineering groups.
More Complex Than It Sounds
Actually making use of a cluster’s significant benefits is more difficult than it seems at first glance. Built from scratch, clusters can be technically difficult to purchase, configure and administer. The combination of processors and processor cores, cache, system memory and interconnect are highly dependent upon one another—and on the actual type of work being done.
Further, administering a cluster involves loading and managing jobs, watching computational trends to properly allocate resources for specific types of jobs, and making sure that jobs are queued up appropriately to take their turn on the cluster. If you get it wrong, at best you’re not optimizing your use of the cluster. At worst, it means wasting much of the time and expense of obtaining the cluster in the first place.
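The queuing part of that job can be sketched in a few lines. The toy scheduler below runs jobs in priority order, first-in-first-out within a priority level; real cluster schedulers layer resource matching, preemption and fair-share policies on top, and the class and job names here are invented for illustration.

```python
import heapq
import itertools

class JobQueue:
    """Toy scheduler sketch: jobs dispatch in priority order, FIFO within
    a priority level. Lower number = higher priority, as in many batch
    queuing systems."""

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, name, priority=10):
        heapq.heappush(self._heap, (priority, next(self._order), name))

    def next_job(self):
        return heapq.heappop(self._heap)[2]
```

A dispatcher loop would call `next_job()` whenever cores free up; getting the priorities and resource estimates right for your workload is the part that takes experience.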
Engineering groups that approach the process as simply “buying multiple PCs” are likely to underestimate the amount of planning and computing skill needed to get a high level of performance from a cluster. Probably the biggest misconception surrounds the interconnect. Performing computations on processor cores is only part of the equation; because data has to move rapidly among systems in the cluster, transfer speed can make or break the process.
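A simplified back-of-envelope model (assumptions: compute divides evenly across workers, data transfer is a fixed per-job cost) shows why the interconnect matters so much:

```python
def effective_speedup(workers, compute_s, transfer_s):
    """Simplified model: parallel runtime is the compute time divided
    among workers plus a fixed data-transfer cost that does not shrink
    as nodes are added. Past some point, adding workers barely helps."""
    return compute_s / (compute_s / workers + transfer_s)
```

For an 800-second job with 25 seconds of data movement, eight workers yield a speedup of 6.4x rather than 8x, and 64 workers manage only about 21x. Faster interconnects shrink the transfer term, which is why they can dominate the economics of a cluster.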
The Roads to Cluster Computing
There are several distinctly different ways of getting to cluster computing without the time, money and skills needed to start from scratch. Possibly the simplest is that afforded by a combination of technologies from the likes of Intel, HP and virtualization vendor Parallels. Using Parallels’ Extreme Workstation, engineers can create a virtual machine that spans multiple workstations, opening up a potentially large number of cores and amounts of memory to devote to the cluster.
The key to clustering here is Parallels, which segregates processor cores, memory and disk space for use by the cluster job. For high-performance computing (HPC), Extreme Workstation implements Intel’s Virtualization Technology for Directed I/O (VT-d), which gives virtual machines fast, direct access to workstation I/O devices. Because the biggest bottleneck of cluster computing is typically moving code and data from one location to another within the cluster, improvements in I/O are an important key to a successful clustering operation.
Such a cluster is built with high-end, single-user workstations that divide computing resources between the interactive engineering user and the cluster jobs. It can be useful for analyzing individual parts of a larger project, or for running Monte Carlo simulations for sensitivity analysis. Its principal benefit is the rapid turnaround of these types of jobs. Engineers can run analyses while continuing design work, and immediately obtain the results to look at certain design characteristics. This feedback can be integrated back into the design without waiting for a traditional cluster or mainframe job to be scheduled.
For those who would like to custom-configure a cluster, Intel has devised a program called Intel Cluster Ready, where hardware and software vendors have already done much of the testing and engineering work. In effect, Intel has published a set of specifications on how system architectures, memory, data busses, interconnects and even software interact with one another.
A part of Intel Cluster Ready is Intel Cluster Checker, a cluster diagnostics tool that helps make HPC clusters practical for smaller shops that don’t have much experience managing clusters. Cluster Checker serves two main purposes. First, software vendors define representative workloads and use Intel Cluster Checker to confirm that their applications run successfully on an Intel Cluster Ready system.
Second, once the cluster is installed and configured, administrators can run it regularly to enhance system reliability and ensure optimal performance. It assesses firmware, kernel, storage and network settings, and conducts high-level tests of node and network performance on an ongoing basis. While the relationship between these benchmarks and individual job performance can vary widely, they provide a useful indication of overall cluster efficiency.
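The idea behind ongoing node checks can be sketched generically (this is not Cluster Checker's actual logic, just an illustration of the principle): run the same short benchmark on every node and flag outliers, because one slow node drags down any tightly coupled job scheduled onto it.

```python
import statistics

def flag_slow_nodes(benchmark_times, tolerance=0.15):
    """Flag nodes whose benchmark time exceeds the cluster median by
    more than `tolerance` (15% by default) -- a crude stand-in for the
    kind of ongoing node-performance test a cluster diagnostic runs."""
    median = statistics.median(benchmark_times.values())
    return sorted(n for n, t in benchmark_times.items()
                  if t > median * (1 + tolerance))
```

Running something like this after firmware updates or hardware swaps catches the misconfigured node before it quietly slows every job that touches it.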
If you don’t want to take the do-it-yourself approach, several hardware vendors are offering prepackaged cluster solutions, including hardware and management software, to ease the transition to cluster computing. These systems tend to be straightforward to set up, configure and begin using. In many cases, the vendor or systems integrator will walk the buyer through the initial setup, configuration and management processes.
Clusters from vendors such as BOXX can be relatively easy to set up while performing specialized tasks such as rendering.
The Appro Xtreme-X Supercomputer is one example of a cluster designed and tested by a single vendor, integrating the necessary components into a packaged solution. A high-end version of the Xtreme is high on the list of the Top 500 supercomputers. One significant value of the Appro Xtreme-X cluster is that it offers several different configurations, designed for different types of workload, including capacity computing, hybrid computing and data-intensive computing.
The Appro Cluster Engine Management Software suite can reduce the complexity of managing HPC clusters, while providing tools to run complex applications and workloads. It offers server, cluster, storage and network management features, combined with job scheduling, failover, load-balancing and revision control. Management software of this kind is key to understanding how your cluster actually handles HPC jobs.
BOXX Technologies has a specialized system designed for optimizing rendering performance for 3D graphics and animation workflows. It offers a rack-mounted cluster that consists of multiple systems, each configured with up to 12 cores and 192GB of memory, connected with Gigabit Ethernet.
Ciara Technologies manufactures what it calls Personal Clusters, including its NEXXUS C series. According to the company, the NEXXUS C is designed and optimized for advanced modeling and simulation. It promises to combine the capabilities of a data center cluster with the usability of a workstation. It can be outfitted with up to 20 Intel Xeon processors for a total of 120 cores, up to 16 GPGPUs, and almost 2TB of memory.
Your First Steps
If your engineering group doesn’t have any experience with clusters, and you don’t have dedicated and experienced IT support, it’s important to start small and get expert help if possible. Preconfigured clusters from the likes of Appro, BOXX and Ciara can provide an out-of-the-box solution for getting up and running quickly.
However, clusters should be configured and tuned carefully to make sure they are executing their workloads efficiently. You get this level of understanding primarily through experience.
A great way to learn about cluster computing from the ground up is the workstation cluster. You still have to use high-end workstations with at least Gigabit Ethernet connectivity, but those workstations can be used interactively while their resources are also applied to the cluster. In addition to being cost-effective, this approach provides an easy way to start understanding how clusters need to be managed.
Once the group gains experience in both cluster configuration and management with a small workstation cluster, it may be time to look at one of the higher-end approaches. A solution from a single vendor can make sense, although you’re paying for that vendor’s expertise in integrating and configuring it for your needs.
Building your own cluster is a more difficult challenge, and ideally you would like to have dedicated IT expertise to do so. Whether or not you do, Intel Cluster Ready represents a smart way to configure a cluster to meet specific needs. While there will possibly be some configuration issues, much of the integration work has already been done.
Even if you’re a team of only a few engineers, chances are you’ll benefit from some level of clustering. You can do a more detailed level of analysis, or deliver end products more quickly than you can today. But clusters aren’t like PC workstations. You need to understand the relationship between cluster configuration and your workloads, and you need software to queue up and monitor jobs. Still, if your team can make this leap, the cluster will ultimately reward you with better designs more quickly.
Contributing Editor Peter Varhol covers the HPC and IT beat for DE. His expertise is software development, math systems, and systems management. You can reach him at [email protected].
MORE INFO
ANSYS
Appro
BOXX Technologies
Ciara Technologies
HP
Intel
Parallels