HPC Gets President’s Stamp of Approval
Latest News
August 11, 2015
High performance computing (HPC) has hit a new benchmark: It's landed on the radar screen of the country's highest office, becoming the subject of an executive order by President Obama.
Late last month, the president signed an executive order launching the National Strategic Computing Initiative (NSCI), a federally funded, coordinated research and development strategy spanning multiple agencies that is tasked with building exaflop supercomputers.
Over the next decade, the initiative's goal is to build supercomputers capable of one exaflop (10^18 operations per second) to handle the next-generation big data and large-scale computing problems that will have a revolutionary impact on both the commercial sector and scientific discovery. According to the fact sheet put out on NSCI:
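For a sense of scale, the back-of-the-envelope arithmetic below is an illustrative sketch only: the workload size is an assumed figure chosen for round numbers, not something from the article, and the petaflop baseline is a rough stand-in for 2015-era top systems.

```python
# Illustrative arithmetic only: an exaflop machine performs 10**18
# floating-point operations per second; petaflop-class systems (roughly
# the 2015 state of the art) perform 10**15.
EXAFLOP_OPS_PER_SEC = 10**18
PETAFLOP_OPS_PER_SEC = 10**15

# Hypothetical workload of 3.6 * 10**21 operations (an assumed figure):
workload_ops = 3.6e21

hours_at_exaflop = workload_ops / EXAFLOP_OPS_PER_SEC / 3600
hours_at_petaflop = workload_ops / PETAFLOP_OPS_PER_SEC / 3600

print(hours_at_exaflop)   # about 1 hour
print(hours_at_petaflop)  # about 1000 hours
```

In other words, a job that ties up a petaflop machine for weeks finishes on an exaflop machine in about an hour, which is what makes previously intractable simulations practical.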
HPC systems, through a combination of processing capability and storage capacity, can solve computational problems that are beyond the capability of small- to medium-scale systems. They are vital to the nation's interests in science, medicine, engineering, technology and industry. The NSCI will spur the creation and deployment of computing technology at the leading edge, helping to advance Administration priorities for economic competitiveness, scientific discovery, and national security.
The focus of the research won't just be on pushing the raw performance of HPC systems, but also on technology that can aid in the efficient manipulation of vast and rapidly expanding pools of both numerical and non-numerical data. Consider computational fluid dynamics (CFD), an instrumental tool for aircraft design that is inextricably tied to HPC. Current HPC technology can only handle simplified models of airflow around a wing under limited flight conditions, experts say. However, research by NASA found that HPC platforms capable of exaflop-level performance would allow organizations to incorporate full modeling of aircraft turbulence and more dynamic flight conditions into their simulations, thereby improving aircraft design.
New exaflop-level classes of HPC systems also have tremendous applicability for large-scale data analytics applications like processing and finding insights in Web pages, genome datasets and the output of scientific instruments, the press release stated.
The NSCI's mission has five strategic themes:
- Building systems that apply exaflops of computing power to exabytes of data. Beyond using HPC to simulate a plane or colliding automobiles, the NSCI vision is to combine the computing power and data capacity of two classes of systems to create a new HPC generation capable of generating deeper insights because it can mash up simulation with actual data.
- Keeping the United States at the forefront of HPC development. The United States has been the leader in this class of large-scale computing, and this new program is designed to make sure it stays that way. The Department of Energy has developed an initiative to deliver exascale systems capable of exaflop performance and it has also identified the core challenges to doing so, committing to research that will help break down the barriers.
- Making HPC application developers more productive. It's not easy to develop HPC applications: it requires a high level of expertise and effort to get the programming right and to tune everything for maximum performance on the targeted machine. As part of this effort, government agencies will support research into new approaches to programming HPC systems, in the hopes of lowering barriers to entry and fostering a flourishing ecosystem of third-party developers.
- Making HPC readily available. Current HPC technology is still expensive and often difficult to use, and many scientists and engineers lack training in how best to leverage the platforms for simulation and modeling. As part of NSCI's work, agencies will collaborate with computer makers and cloud providers to make HPC resources more readily available to both the public and private sectors. There will also be efforts to create educational materials to drive adoption of next-generation HPC.
- Fostering new hardware technology as a basis for future HPC systems. There is only so much scalability left in today's semiconductor technology. As a result, the NSCI will sponsor research to ensure continued improvements in HPC performance beyond the next decade.
One such possible booster technology is the GPU, according to officials at NVIDIA. While an exaflop supercomputer relying on CPUs alone would consume 2 gigawatts of electricity, a comparable system leveraging GPUs bends that curve, creating a foundation that can handle up to 10 times more operations per unit of energy, NVIDIA officials claimed in a blog post. To that end, NVIDIA is rethinking the way existing machines are built to accommodate GPU technology. The company is developing the high-speed NVLink interconnect to help CPUs and GPUs exchange data five to 12 times faster, and it recently introduced a new toolkit that dramatically simplifies programming for parallel processors, including GPUs.
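Taken at face value, the power figures quoted above imply the rough comparison sketched below. This is illustrative arithmetic only; the 10x operations-per-joule factor is NVIDIA's "up to" claim, not an independent measurement.

```python
# Rough comparison based on the figures quoted in the article.
cpu_only_power_watts = 2e9      # ~2 gigawatts for a CPU-only exaflop system
gpu_ops_per_joule_factor = 10   # "up to 10 times more operations per unit of energy"

# Delivering the same exaflop throughput at 10x the energy efficiency
# would need roughly 1/10 the power:
gpu_accelerated_power_watts = cpu_only_power_watts / gpu_ops_per_joule_factor
print(gpu_accelerated_power_watts / 1e6)  # roughly 200 megawatts
```

Even at that best-case reduction, hundreds of megawatts is still a large power plant's worth of draw, which is why energy efficiency dominates the exascale hardware conversation.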
To learn more about the road to exascale HPC, watch this video from a 2015 Stanford HPC conference, which features a panel of experts discussing the topic.
About the Author
Beth Stackpole is a contributing editor to Digital Engineering. Send e-mail about this article to [email protected].