Cray Introduces ClusterStor E1000 Storage to Fuel Converged Workloads of the Exascale Era

New HPC parallel storage system delivers scalability and performance to power data-driven workloads including AI, analytics, simulation and modeling, company says.

Cray, a Hewlett Packard Enterprise company, has unveiled its Cray ClusterStor E1000 system, a new parallel storage platform for the Exascale Era. ClusterStor E1000 addresses the growth of data from converged workloads and the need to access that data at great speed by offering a balance of storage performance, efficiency and scalability, the company says.

The next-generation global file storage system has already been selected by the U.S. Department of Energy (DOE) for use at the Argonne Leadership Computing Facility, Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, where the first three U.S. exascale supercomputers will be housed (respectively Aurora, Frontier and El Capitan).

With the introduction of the ClusterStor E1000 storage system, Cray has completed the re-architecture of its end-to-end infrastructure portfolio, which encompasses Cray Shasta supercomputers, Cray Slingshot interconnect and the Cray software platform. With Cray’s next-generation end-to-end supercomputing architecture, available for any datacenter environment, customers around the world can realize the full potential of their data.

“To handle the massive growth in data that corporations worldwide are dealing with in their digital transformations, a completely new approach to storage is required,” says Peter Ungaro, president and CEO of Cray, a Hewlett Packard Enterprise company. “Cray’s new storage platform is a comprehensive rethinking of what high performance storage means for the Exascale Era. The intelligent software and hardware design of ClusterStor E1000 orchestrates the data flow with the workflow—that’s something no other solution on the market can do.”

As the external high performance storage system for the first three U.S. exascale systems, Cray ClusterStor E1000 will total over 1.3 exabytes of storage across all three systems combined. The National Energy Research Scientific Computing Center (NERSC) also selected ClusterStor E1000 for an all-NVMe (non-volatile memory express) parallel file system at a scale of 30 petabytes of usable capacity.

Recognizing the data access challenges presented by the Exascale Era, Cray's ClusterStor E1000 enables organizations to achieve their research missions and business objectives faster by offering storage performance, performance efficiency and scalability.

ClusterStor E1000 systems can deliver up to 1.6 terabytes per second and up to 50 million I/O operations per second per rack. New purpose-engineered end-to-end PCIe 4.0 storage controllers deliver the maximum performance of the underlying storage media to the compute nodes. In addition, new intelligent Cray software, ClusterStor Data Services, allows customers to align the data flow with their specific workflow, placing application data on the right storage media (SSD pool or HDD pool) in the file system at the right time.
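Cray does not publish the internals of ClusterStor Data Services, but the tiering idea described above — routing frequently accessed, latency-sensitive data to a flash pool and colder bulk data to a disk pool — can be sketched generically. Everything below (class names, the access-rate threshold, the telemetry field) is an illustrative assumption, not Cray's implementation:

```python
# Illustrative sketch of a hot/cold tiering policy; NOT Cray's API.
# Assumption: per-file access telemetry is available and a simple
# access-rate threshold decides SSD vs. HDD placement.

from dataclasses import dataclass


@dataclass
class FileStats:
    path: str
    accesses_last_hour: int  # hypothetical telemetry counter
    size_bytes: int


def choose_pool(stats: FileStats, hot_threshold: int = 100) -> str:
    """Route frequently accessed files to flash, bulk data to disk."""
    if stats.accesses_last_hour >= hot_threshold:
        return "ssd_pool"
    return "hdd_pool"


checkpoint = FileStats("/scratch/checkpoint.h5", accesses_last_hour=500,
                       size_bytes=10**9)
archive = FileStats("/archive/run_2018.tar", accesses_last_hour=2,
                    size_bytes=10**12)
print(choose_pool(checkpoint))  # ssd_pool
print(choose_pool(archive))     # hdd_pool
```

Real parallel file systems weigh many more signals (file age, striping, quotas, scheduled job phases); the point here is only the placement decision the article alludes to.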

An entry-level system starts at 30 gigabytes per second and less than 60 terabytes of usable capacity. Customers can start at the size dictated by their current needs and scale as those needs grow, with maximum architectural headroom for future growth. The ClusterStor E1000 storage system can connect to any HPC compute system that supports high speed networks such as 200 Gbps Cray Slingshot, InfiniBand EDR/HDR and 100/200 Gbps Ethernet.

Sources: Press materials received from the company and additional information gleaned from the company’s website.

About the Author

DE Editors

DE’s editors contribute news and new product announcements to Digital Engineering. Press releases can be sent to them via [email protected].
