InfiniBand as Data Center Communication Virtualization
Recently we posted a new market assessment of InfiniBand and its growing role in enterprise data centers, so I've been thinking a lot about low-latency switched fabrics and what they imply for IT organizations. I'd like to add a more philosophical thought about the optimized design of InfiniBand and its role as data center communication virtualization.
From the start, InfiniBand's design goal was to provide a high-performing messaging service for applications, even when those applications live in entirely separate address spaces on different servers. By architecting from the "top down" rather than layering up from something like Ethernet's byte-stream transport, InfiniBand is able to deliver highly efficient and effective messaging between applications. In fact, the resulting messaging service can be thought of as "virtual channel IO" (I'm sure much to the delight of mainframers).
InfiniBand achieves its high bandwidth and low latency partly by letting applications bypass the operating system and access a virtual channel directly. There are two "semantics" available for channel communication: a message-passing style, in which the sender has no access to the receiver's incoming buffer, and a virtually shared buffer (Remote Direct Memory Access, or RDMA), in which the target lets the initiator read or write its buffer directly. In neither case is the host operating system involved in address translation or buffer copying.
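To make the contrast concrete, here is a minimal conceptual sketch in plain Python of the two semantics described above. This is not the real verbs API; the class and method names (`Channel`, `post_recv`, `RdmaRegion`, `rdma_write`, and so on) are illustrative assumptions, modeling only the visibility rules: in send/receive the target must post a buffer and the sender never sees it, while in RDMA the initiator touches the target's registered memory directly.

```python
class Channel:
    """Message-passing (send/recv) semantic: the sender has no access to
    the receiver's buffer; the receiver must post a buffer in advance."""
    def __init__(self):
        self._posted = []     # receive buffers posted by the target
        self._completed = []  # filled buffers, in completion order

    def post_recv(self, buf):
        self._posted.append(buf)

    def send(self, data):
        # Without a posted receive buffer, the message has nowhere to land.
        if not self._posted:
            raise RuntimeError("receiver has not posted a buffer")
        buf = self._posted.pop(0)
        buf[:len(data)] = data   # the adapter would DMA into the posted buffer
        self._completed.append(buf)


class RdmaRegion:
    """RDMA semantic: the target registers a buffer, and the initiator
    then reads or writes it directly, with no target-side CPU involvement."""
    def __init__(self, size):
        self.buf = bytearray(size)

    def rdma_write(self, offset, data):
        self.buf[offset:offset + len(data)] = data

    def rdma_read(self, offset, length):
        return bytes(self.buf[offset:offset + length])


# Send/recv: the receiver must participate by posting a buffer first.
ch = Channel()
ch.post_recv(bytearray(16))
ch.send(b"hello")

# RDMA: the initiator reads and writes the target's memory directly.
region = RdmaRegion(16)
region.rdma_write(0, b"world")
assert region.rdma_read(0, 5) == b"world"
```

The asymmetry is the point: `Channel.send` fails unless the target has done work (posting a receive), while `RdmaRegion` requires nothing from the target after the initial registration, which is what keeps the target's OS and CPU out of the data path.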
InfiniBand also serves as a converged network in dense data centers: a single Host Channel Adapter (HCA) can carry all the network and storage traffic that might otherwise require several NICs and/or Fibre Channel HBAs. In a real sense, InfiniBand provides optimized, virtualized communication between applications horizontally and between IT infrastructure resources vertically at the same time. It's no wonder that InfiniBand is marching on the data center!
InfiniBand, a high-performance switched network fabric protocol, is starting to move beyond its long-time HPC success and into the data center core, where densities have been increasing as clustering and scale-out architectures take over. Big Data, virtualization/clouds, web-scale apps, and high-performance storage are all driving data center interconnects toward low-latency switched fabrics like InfiniBand. Recently it was our pleasure to revisit and update our assessment of InfiniBand - follow this link to a summary post and the full paper "InfiniBand's March on the Data Center".