High-Performance Computing is Not Yet for All
November 2, 2015
Upon seeing the massive and tragic flooding of South Carolina in the news last month, my teenage daughter asked why all that excess water couldn’t be transported to drought-stricken California, where it’s needed. Why not just fill up a convoy of tanker trucks and head West? I remember having a similar thought when I was about her age and learned the majority of the Earth’s fresh water is frozen in the polar ice caps. Why not just tow an iceberg to Ethiopia?
I began to explain the complicated supply chain logistics involved in such a project, the significant engineering challenges and the slim chance of a return on investment ... “So it’s just too hard and costs too much?” my daughter asked, right to the point.
Just because enough of a resource exists doesn’t mean everyone can access it. That brings me to the current state of high-performance computing (HPC).
The Democratization of HPC
We’ve all heard of Jeopardy!-winning Watson, IBM’s cognitive computing effort that can read millions of unstructured documents in seconds and find patterns in them that can help make decisions. We see the lists of top supercomputers every year, many owned by government departments and universities. According to the TOP500 list of the world’s most powerful supercomputers, the top performer in June, Tianhe-2, hit 33.86 petaflop/s (quadrillions of calculations per second) on the organization’s benchmarks. Maybe you’ve seen the colorful images of Google’s data centers, with thousands of feet of color-coded pipes for cooling the racks (if you haven’t, see here). It’s clear that massive computing power is available out there. And yet, one of the top complaints we still hear from engineers is how long it takes to run simulations or even update large model assemblies.
Why aren’t more engineers tapping into the power of high-performance computing? The short answer for many: It’s just too hard and costs too much.
Cloud computing was hyped as a great equalizer for small businesses to compete against the big guys. For some it is. Small companies that don’t have dedicated IT staffing or hardware resources can turn to the cloud for everything from e-mail to online CAD to simulation runs. As we note in this issue’s focus on engineering computing, not all cloud-computing resources are created equal. Still, many third-party cloud service providers and software vendors are addressing the ease-of-use and licensing issues.
For large companies that have a hefty investment in computing infrastructure and staff, there’s good news as well: a plethora of options is available to them. Even supercomputers are not out of reach, as prices have fallen dramatically. For engineers in large companies who want to access more computing power, the biggest hurdles are often internal policies and red tape.
Finding the Missing Middle
Mid-sized companies have options as well, but for them the choice is not as clear. Pay-as-you-go models don’t always make sense for companies that use advanced simulation regularly. Investing in additional on-premise resources may not be attractive when it seems the whole world is moving to the cloud. And when you’re working with very large files, moving to the cloud isn’t as easy. There’s a significant investment in time just to get your files uploaded; so much so that it might not be worth the speed boost of using off-site resources to solve the simulation.
The issue of getting large amounts of data into the cloud is so common that Amazon launched AWS Import/Export Snowball last month. The company calls it “a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS (Amazon Web Services).” The appliance looks like a big, hard plastic desktop tower case. It has a Kindle on the outside and drives to hold up to 50TB of data on the inside, along with its own power supply and a 10 Gbps network port. Users plug the appliance into their local network, install its client and then transfer their files to it. The client encrypts the files as they are transferred. Once the transfer is complete, the appliance can be sent back to Amazon via FedEx. The Kindle on the side acts as an E Ink shipping label. While this may seem like a step backward to the days of Sneakernet, when it was faster to walk a file over to the next cubicle than it was to email it, Amazon points out that it can take months and cost thousands of dollars to transfer 100TB of data over a dedicated 100 Mbps connection.
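A rough back-of-the-envelope calculation shows why Amazon’s “months” claim is plausible. The sketch below uses only the figures mentioned above (100TB of data, a 100 Mbps link, 50TB per appliance); the per-appliance turnaround time is an illustrative assumption, not an AWS-published number.

```python
# Illustrative comparison: uploading 100 TB over a dedicated 100 Mbps line
# versus shipping it on 50 TB appliances. Turnaround time is an assumption.

DATA_TB = 100                 # total data to move, in terabytes
LINK_MBPS = 100               # dedicated network link, in megabits per second
APPLIANCE_CAPACITY_TB = 50    # usable capacity per appliance (per the article)
DAYS_PER_APPLIANCE = 7        # assumed round-trip shipping turnaround (hypothetical)

# Network transfer time, ignoring protocol overhead and congestion
bits_to_move = DATA_TB * 1e12 * 8
seconds = bits_to_move / (LINK_MBPS * 1e6)
print(f"Network upload: about {seconds / 86400:.0f} days")   # roughly 93 days

# Appliance-based transfer, shipping the units in parallel
appliances_needed = -(-DATA_TB // APPLIANCE_CAPACITY_TB)     # ceiling division
print(f"Appliances needed: {appliances_needed}, "
      f"roughly {DAYS_PER_APPLIANCE} days if shipped in parallel")
```

Even with generous assumptions about the network, the upload takes on the order of three months, which is why physically shipping drives can still win for bulk data.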
It’s easy to forget we have the power of yesterday’s supercomputers in today’s desktop workstations. That sets a high bar. We now expect the same ease of use when it comes to tapping into HPC resources. We’re not there yet, but we’ve begun addressing the logistics, engineering challenges and ROI needed to make computing resources accessible to all.
About the Author
Jamie Gooch is the former editorial director of Digital Engineering.