Microway Delivers NVIDIA Tesla S1070-based Number Smasher GPU Clusters
Preconfigured clusters use up to 20 Tesla S1070s in 24U to 44U cabinets.
Latest News
May 6, 2009
By DE Editors
Microway has announced its preconfigured NVIDIA Tesla S1070-based high-density Number Smasher GPU cluster. This configuration joins Microway's WhisperStation-PSC Tesla Personal Supercomputer among the Tesla-based solutions Microway provides to its HPC users.
Microway’s Tesla preconfigured clusters deliver supercomputing performance with lower power, lower cost, and fewer systems than standard CPU-only clusters, according to the company. The clusters are powered by the NVIDIA Tesla S1070, a 4 TFLOPS, 1U system built around Tesla T10 GPUs based on the CUDA massively parallel architecture. With 4GB of memory per GPU, support for IEEE 754 single- and double-precision floating point, and a 102GB/sec GDDR3 memory interface, the S1070 can speed the transition to parallel computing.
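As an illustration only, and not taken from Microway's or NVIDIA's materials, the following is a minimal CUDA sketch of the kind of data-parallel, double-precision work the T10 GPUs are built to run; the daxpy kernel and its array names are hypothetical.

```cuda
// Minimal, hypothetical CUDA sketch: a double-precision AXPY kernel.
// The T10 GPUs in the Tesla S1070 support IEEE 754 double precision,
// so a kernel like this can execute natively in double precision.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void daxpy(int n, double a, const double *x, double *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(double);

    // Host buffers
    double *hx = (double *)malloc(bytes);
    double *hy = (double *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0; hy[i] = 2.0; }

    // Device buffers
    double *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough blocks to cover all n elements
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    daxpy<<<blocks, threads>>>(n, 3.0, dx, dy);

    // cudaMemcpy back to the host implicitly waits for the kernel to finish
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expected 5.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```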
Microway offers several configurations of its Tesla preconfigured clusters with FasTree InfiniScale IV InfiniBand and up to 20 Tesla S1070s in 24U to 44U cabinets. Each Tesla S1070 delivers 4 TFLOPS of compute performance, so up to 80 TFLOPS is achievable in a 44U configuration. Any of these configurations can be resized to build a larger or smaller cluster.
The Microway Tesla preconfigured cluster comes preinstalled with:
- Linux distribution of choice, including Red Hat, SUSE, Fedora and Ubuntu.
- CUDA 2.2 Toolkit and software development kit (see the device-query sketch after this list).
- Microway Cluster Management Software (MCMS), which integrates with optional MPI Link-Checker Fabric Validation Suite and InfiniScope.
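As a hedged illustration of using the preinstalled CUDA Toolkit, and not part of Microway's management software, a short device-query program built on the standard CUDA runtime API can confirm the GPUs a Tesla S1070 exposes to its host node; on a node attached to one S1070, it would be expected to report four Tesla T10-series devices.

```cuda
// Illustrative device query using the standard CUDA runtime API.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices visible to this node: %d\n", count);

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("  [%d] %s, %.0f MB global memory, compute capability %d.%d\n",
               dev, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0),
               prop.major, prop.minor);
    }
    return 0;
}
```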
For more information, visit Microway.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].