Engineering Workstations Change Dramatically

Workstations were a passing fad, but you still have to pay attention to what you purchase.

By Peter Varhol

 
Workstations like this Nehalem-powered system from HP promise to deliver high performance at very low costs for engineering design, analysis, simulation, and rendering activities.

We don’t talk about engineering workstations as a computing platform very much today. That’s because many higher-end off-the-shelf desktop and even laptop computers have enough power to serve that role. For a few thousand dollars, you can buy a system that can satisfy most of your needs as a design engineer. You can pay still more, but the days of the $50,000-plus specialized workstation are long since past. And the recent demise of SGI, whose assets were acquired by Rackable Systems, didn’t help the reputation of pure engineering workstation systems.

  And that leaves many questions unanswered. What is the best processor for your type of work? How much storage do you need, and what type of storage is best? What about memory? Is a large, high-res monitor essential? Is there a systems architecture that works best for engineering? What operating system works best for you?

  That few thousand dollars can be spent in a variety of ways. The best way will depend on the type of work you do, the applications you use, and your own way of working. In any case, you should understand the tradeoffs so that the system you or your organization purchases does the best job for you, today and in the future.

Hardware Architecture Alternatives
The standard PC remains the most flexible architecture for a wide variety of engineering applications. It can be used effectively in analysis, design, simulation, data acquisition, and software coding. The combination of price, performance, and familiarity gives it a built-in advantage for engineering uses.

  Further, specific models from Dell and HP are designed to be used by engineering professionals, and usually have a lot of memory, fast disks and data transfers, and the most advanced processors available.

  Intel’s Nehalem architecture promises to make a significant difference for high-performance computing (HPC) applications. One significant innovation in Nehalem is its so-called Turbo Boost Technology, which delivers additional performance automatically when needed by taking advantage of the processor’s power and thermal headroom. It does so using a technique that we once called overclocking. It was well known to those of us who built our own PCs 15 or 20 years ago that Intel processors were capable of running faster than their rated clock speeds. Replacing the clock crystal with a faster one was an easy way of getting a computer to run faster with a slower and less expensive processor.

  Nehalem does it in a different way. It detects when a processor core is running at close to capacity, then overclocks itself one step at a time to handle the workload more easily. As the workload diminishes, it clocks back down to its normal speed.
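
  As a rough mental model of that step-up, step-down behavior (an illustration only, not Intel’s actual logic), the idea can be sketched in a few lines of C++. The utilization thresholds, clock values, and headroom flag below are hypothetical placeholders, not published specifications.

// Illustrative model of stepwise clock scaling; all numbers are placeholders.
#include <algorithm>

struct CoreState {
    double base_mhz    = 2666.0;  // rated clock
    double max_mhz     = 3066.0;  // ceiling allowed by power/thermal headroom
    double step_mhz    = 133.0;   // one hypothetical turbo "step"
    double current_mhz = 2666.0;  // current operating clock
};

// Called periodically with the core's recent utilization (0.0 to 1.0)
// and a flag indicating whether power/thermal headroom remains.
void adjust_clock(CoreState &core, double utilization, bool headroom_ok) {
    if (utilization > 0.90 && headroom_ok) {
        // Near capacity: step the clock up one increment, never past the ceiling.
        core.current_mhz = std::min(core.current_mhz + core.step_mhz, core.max_mhz);
    } else if (utilization < 0.50) {
        // Workload has eased: step back down toward the rated clock.
        core.current_mhz = std::max(core.current_mhz - core.step_mhz, core.base_mhz);
    }
}

int main() {
    CoreState core;
    adjust_clock(core, 0.95, true);  // heavy sustained load: clock steps up one increment
    adjust_clock(core, 0.30, true);  // load eases: clock steps back toward the rated speed
    return 0;
}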

  The processor also incorporates scalable shared memory, with memory distributed to each processor through integrated memory controllers and high-speed point-to-point interconnects. Specifically, it has the memory controller on-chip, rather than across the memory bus on a separate chip. This enables the controller to understand what is happening in the processing pipeline and make fetch decisions based on that tight coupling. This approach has the potential to improve performance by ensuring that data and instructions are ready to go, so that pipeline stalls become less common.

  Last spring Dell announced the Precision T3500, T5500, and T7500, a computer line that incorporates the Intel Nehalem architecture (now called the Xeon 5500 series) and high-end NVIDIA and ATI graphics processors. Perhaps most impressive, the top-of-the-line T7500 has a maximum memory capacity of an incredible 192GB.

  The T7500 can render complex designs in a minute or two, designs that took 10 minutes or more on earlier high-performance systems. The interior of the tower system is elegantly designed, fitting dual processors, 12 memory slots, and assorted other hardware in a standard case.

  HP uses the Nehalem primarily for servers, but is also offering the Z800 workstation for applications requiring significant horsepower. While the most powerful version is not yet available as of this writing, it will be comparable to the system from Dell in memory capacity and bus speeds.

 
The NVIDIA Tesla processor uses the strong floating-point performance characteristics of the company’s graphics chips to provide a high-performance coprocessor. Utilizing multiple processor cores, it can be very effective in highly parallel computing tasks.

Another architecture alternative is a system with a standard Intel processor, but with the ability to access and execute code on other types of processors for specific purposes. It might be thought of as a compute coprocessor, or set of coprocessors, for highly compute-intensive tasks.

  For example, last fall NVIDIA delivered a 960-core GPU system using its high-end graphics processors. This system, called the Tesla, is priced at just under $10,000. The system is rated at 36 TeraFLOPS, making it theoretically possible to solve all but the most computationally intensive problems.

  Granted, standard applications can’t run on systems like this—most commercial and custom applications are typically compiled to run on industry-standard Intel processors. But NVIDIA makes compilers available, so custom code can be compiled to run on this platform, and in doing so take advantage of the parallelism offered by the multiple cores.
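
  As a simple, hypothetical illustration of what such code looks like, the sketch below is a minimal CUDA kernel of the kind NVIDIA’s nvcc compiler builds for these GPUs. The kernel name, array size, and launch configuration are arbitrary choices for the example, not anything tied to the Tesla product itself.

// Minimal CUDA example: each GPU thread scales one element of a large array.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        data[i] *= factor;                          // one element per thread
}

int main() {
    const int n = 1 << 20;                          // 1M elements (arbitrary size)
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    int threads = 256;                              // threads per block
    int blocks = (n + threads - 1) / threads;       // enough blocks to cover n
    scale<<<blocks, threads>>>(d_data, 2.0f, n);    // work spreads across the available GPU cores
    cudaDeviceSynchronize();                        // wait for the kernel to finish

    cudaFree(d_data);
    printf("done\n");
    return 0;
}

  Compiled with nvcc, the same source file produces ordinary host code for the CPU and a kernel that the driver spreads across however many GPU cores are present.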

  You can still get a few systems with some of the Reduced Instruction Set Computing (RISC) processors from the likes of Sun and Digital (now a part of HP) that were in vogue in the 1990s for HPC, especially for heavy-duty floating-point computation. While both Sun and HP still manufacture such systems, they are a much smaller force in the market than they were a decade ago.

Operating Systems: Still a Choice?
There was a time when the engineering workstation was synonymous with the Unix operating system, a family with a common heritage but separate versions from Sun, IBM, HP, and others. Today, Unix of all flavors is pretty much in decline, giving way to Linux, its open source look-alike.

  The appeal of Unix in the 1980s and 1990s was in its technical sophistication. You could multitask different applications, use network resources transparently, run sophisticated scripts, and in general do more complex tasks than you could with traditional desktop operating systems.

  Today, however, Microsoft’s Windows has adequate sophistication and is used almost universally. Thanks to the increasing capability of Windows across generations, as well as the increasing power of its host Intel processor family, Windows is the OS of choice for many engineers and engineering design applications. While it lacks the maturity of Unix, it has a single owner that puts vast resources into making it better.

  That’s not to say that Unix has completely disappeared; far from it. Apple’s highly capable Mac uses a version of Unix, and Sun has open-sourced the Intel version of its Solaris Unix brand. However, application availability has suffered in recent years, and Unix remains a player primarily on the existing RISC systems.

 
Intel’s Xeon 5500 series, also known as the Nehalem architecture, provides the ability to automatically “overclock,” or increase its execution speed beyond the rated maximum, in response to high workloads.

Open source Linux is also a rising player in the workstation realm. Sixty-four-bit versions of Linux can be installed on Intel-based systems as well as a variety of others. There is little clear advantage to choosing Linux except perhaps for cost (and some would claim reliability), especially given limited application availability, but the option remains available.

Everything is a Tradeoff
Unless you have unlimited money to spend on your engineering workstation, you are going to face certain tradeoffs. Here is where your priorities should be:

Memory: It remains true that the more memory you have, the larger the working set your operating system can support. Your applications will, in general, run faster, because code and data are being accessed from fast memory. Get as much memory as your budget will allow. Most systems intended for engineering use and equipped with a 64-bit OS can handle tens or even hundreds of GB of memory, and you should err on the side of spending more here rather than less.

Video: First, we need a lot of real estate on our displays. Scrolling incessantly to see the entirety of our design takes time and energy. Second, we need high resolution. It’s important to see and work on large parts of the design at a very detailed level. Third, we need speed. We can’t wait seconds or even minutes for the screen to repaint. That means a high-performance video card with lots of video memory.

Processor: The processor counts, but not as much as it used to. Pure clock speed used to be the differentiating factor, but today multiple processor cores take over where clock speeds have topped out.

Specialized peripherals: Anything that will help your analysis, design, or engineering efforts is a good choice. This could be data acquisition hardware, an optical disk, solid-state storage, or an extra large and fast hard disk.

  Just because it doesn’t make sense to get an expensive, high-performance RISC workstation for your work today doesn’t mean that you can’t get the best system your limited dollars can buy. The tradeoffs you have to make are less obvious than they once were, but choose carefully and the system will likely serve you well for a long time.

More Info:
Apple
AMD
Dell
HP
IBM
Intel
Linux
Microsoft Corp.
NVIDIA Corp.
Sun Microsystems, Inc.
Unix


Contributing Editor Peter Varhol has been involved with software development and systems management for many years. Send comments about this column to [email protected].

About the Author

Peter Varhol

Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is software development, math systems, and systems management. You can reach him at [email protected].
