QLogic TrueScale InfiniBand Integrates with NVIDIA Tesla GPUs
QLogic and NVIDIA team with NCSA to achieve No. 3 ranking on Green500 HPC computing list.
Latest News
March 14, 2011
By DE Editors
QLogic Corp. has announced that a cluster using NVIDIA Tesla graphics processing units (GPUs), QLogic InfiniBand switches and adapters, and operated by the National Center for Supercomputing Applications (NCSA) achieved a No. 3 ranking for MFlops/watt on the Green500 list of the world’s top supercomputers.
Launched in 2006, the Green500 ranks the most energy-efficient supercomputers in the world by performance per watt to encourage HPC vendors and users to deploy more cost-effective computing systems. The NCSA’s hybrid cluster combined 2.93GHz dual-core Intel Core i3 processors with NVIDIA Tesla C2050 GPUs and QLogic TrueScale InfiniBand solutions, producing a score of 933.06 MFlops/watt — nearly four times more efficient than the average supercomputer.
“We wanted to expand the frontiers of computational science, and the combination of NVIDIA Tesla GPUs with QLogic TrueScale InfiniBand fabrics is enabling this exploration,” says Professor Wen-mei Hwu of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. “In collaboration with NCSA’s Innovative Systems Laboratory, the scaling and power efficiency from this combination of technologies has helped to place us near the top of the Green500 list.”
QLogic’s newest TrueScale InfiniBand software release requires no Linux kernel patches or special InfiniBand drivers to integrate with NVIDIA Tesla GPUs, making it easier to install and maintain GPUs for HPC applications.
For more information, visit QLogic and NVIDIA.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].