Supercomputing 2010 Wrap-Up
November 22, 2010
This year’s SC10 conference brought together thousands of attendees and speakers around a single focus: fast computing and its many uses.
The International Conference for High Performance Computing, Networking, Storage and Analysis, otherwise known as SC10, retains the size, excitement, and air of innovation that most conferences today lack. A combination of academic research and commercial potential gave it the atmosphere of an intellectual carnival, with fresh results and product ideas flowing like hot and cold running whiskey. With about five thousand attendees, hundreds of technical sessions, and over three hundred exhibitors, it brought together the latest advances in fast computing in a single place.
SC10 was held November 13-19 in the New Orleans Convention Center and occupied a large portion of the facility. The theme of the conference was “The Future of Discovery.” The title was something of a misnomer, only because the event wasn’t really about the future, but rather about computing possibilities today. While there were some technology vision statements, most of the groundwork has already been laid and is already in active use, often commercially as well as in academia.
The conference had three technology thrusts: climate simulation, heterogeneous computing, and data-intensive computing. For those not interested in climate simulation, take note that the math behind it is directly applicable to fluid dynamics, only more complex. Heterogeneous computing looks at how different processor types, such as CPUs and GPUs, can be combined into clusters that deliver on-demand supercomputing. Data-intensive computing deals with managing and processing large data sets, and is often associated with modeling and simulation. All of these thrusts were well represented in the hundreds of technical sessions and tutorials offered throughout the week.
The opening keynote on Tuesday, November 16 was given by Clayton Christensen, Harvard Business School professor and author of The Innovator’s Dilemma. Recovering from a recent stroke, he gave an inspirational talk on how disruptive technologies enable entrepreneurs to upend traditional industries and radically change how we perceive them.
Among the vendors, there were two main themes: how to get faster still, and how to run effectively in the cloud. Vendors such as Microsoft presented a vision of scheduling and managing compute-intensive jobs on supercomputing clusters in the cloud. NVIDIA and its many partners promoted an energetic and rapidly innovating GPU-based supercomputing industry. Even Intel, demonstrating its own CPU-centric approach, acknowledged that tomorrow’s computing platforms are likely to include GPUs as part of the cluster.
Why should engineers care about the Supercomputing Conference? Just about everything demonstrated and discussed at SC10, from GPUs to cloud clusters, is available for use today. These are the solutions to the time and performance obstacles we face daily in our work. Even if we cannot afford or justify them today, we know they are available when the time comes.
Most engineers I know enjoy the few occasions when they can satisfy their intellectual thirst. While our daily work can be intellectually stimulating, we sometimes miss the excitement of the new technologies and applications that drew us to engineering in the first place. The Supercomputing Conference is precisely the place to engage our intellect and learn what the best of our peers have to teach us.
About the Author
Contributing Editor Peter Varhol covers the HPC and IT beat for Digital Engineering. His expertise is in software development, math systems, and systems management. You can reach him at [email protected].