InfiniBand Updates Specs, Preparing for 10,000-Node Exascale Clusters
We've long been fans of InfiniBand, watching as new generations of enterprise-class scale-out clusters and storage solutions learn from the HPC world how to achieve really high-speed interconnects. InfiniBand itself may never win the popularity race against Ethernet, but newer generations of Ethernet are looking more and more like InfiniBand. And parts of the IB world, namely RDMA and RoCE, have swept into datacenters almost unnoticed (e.g. look under the hood of SMB 3.0).
The InfiniBand Trade Association today released an updated InfiniBand spec in the form of v1.3 of its Volume 1, covering requirements for switches, routers, and adapters, and management of the fabric. It's worth keeping up with IB, as it clearly shows where the broader networking market is capable of going. In this spec update, we note improvements to the way very large clusters can be set up and managed, with clearer views into large switch hierarchies. We also see new requirements for providing "intelligent" cabling information and deep network statistics within an ever-smarter management layer.
Maybe geeky stuff, but it allows IB to keep up with "exascales" of data and lead the way in how large-scale-out computer networking gets done. This is particularly important as the 1,000-node clusters of today grow towards the 10,000-node clusters of tomorrow. Stay tuned, as we hope to get a chance soon to report more deeply on RoCE in particular!