Mellanox InfiniBand Accelerates 40Gbps Supercomputer
InfiniBand momentum on the TOP500 continues with 16 percent semi-annual growth, and the interconnect now links the majority of the top 100 supercomputers.
November 25, 2008
By DE Editors
Mellanox Technologies, Ltd. (Santa Clara, CA) announced at SC08 (Austin, TX) that its 40Gbps InfiniBand switches and adapters now power one of the world’s fastest supercomputers, and that InfiniBand continues to be the fastest-growing cluster interconnect represented on the industry’s TOP500 list.
Among the top 500 supercomputers in the world reported in the 32nd edition of the TOP500 list, 142 systems (28 percent) are connected with InfiniBand, an increase of 16 percent for InfiniBand-based systems compared to the previous TOP500 list published in June 2008.
Mellanox ConnectX 20Gbps and 40Gbps InfiniBand adapters, together with switch systems based on its InfiniScale III and IV switch silicon, provide a scalable, low-latency, and power-efficient interconnect for the world’s fastest supercomputer and the majority of the other top 100 systems.
At SC08, Mellanox also announced a multi-vendor 10 Gigabit Ethernet file system and storage infrastructure demonstration: a 10Gbps Low Latency Ethernet (LLE) cluster network running the Lustre file system for computational checkpointing and persistent visualization data storage. The demonstration includes the following vendor components: Mellanox ConnectX EN 10 Gigabit Ethernet adapters; Arista Networks 7124 10 Gigabit switches; Sun Microsystems’ Lustre file system and Sun Fire X4540 storage server; and System Fabric Works software integration and demonstration deployment.
For more information, please visit Mellanox.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
About the Author
DE Editors
DE’s editors contribute news and new product announcements to Digital Engineering.
Press releases may be sent to them via [email protected].