Proximal AutoCache: Overcoming Virtual Server Storage Issues
For the past five years, Taneja Group has been tracking storage challenges in a virtual server environment. It’s not a pretty picture. Traditional storage approaches create IO bottlenecks that slow performance and prevent customers from achieving many of the potential advantages of a virtual server infrastructure, including performance acceleration, increased server utilization, enhanced IT agility, and the Capex and Opex cost savings realized from consolidation.
One of the keys to achieving consolidation benefits is to maximize the number of virtual servers running across a cluster of physical servers. But as VM density increases in a traditional storage environment, the storage subsystem – with its fixed set of controllers, ports, host bus adapters, and spindles – must handle the growing IO load, and bottlenecks are inevitable. These bottlenecks degrade application performance, and often force administrators to reduce VM density.
The most common way that users have addressed this problem is to overprovision storage resources – i.e., to keep adding spindles, ports, controllers and adapters until the storage system can handle the virtual server workload. But this can be extremely expensive, and is only a temporary fix in a rapidly growing VM environment. Another popular fix is caching technology, which is a good start but comes with its own set of problems depending on the product architecture. Caching appliances work to a point, but add their own complexity and cost. Controller-based caching is more efficient but is rarely scalable, a big problem in fast-growing virtual environments.
So what is a virtual server administrator to do? Proximal Data suggests its new AutoCache product is the answer to solving virtual server storage IO issues. Proximal’s server-side caching solution is integrated within the virtualization platform itself, not in a cache appliance or storage controller. AutoCache serves as a write-through cache, capturing active IO on a PCIe-based flash card or SSD and serving priority IO back to VMs. Because a write-through cache holds no unique data – every write still lands on the array – AutoCache never requires cache flushing or separate data protection. It also saves on cache overhead by indexing cached blocks in host memory rather than consuming flash capacity for metadata.
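To make the write-through behavior concrete, here is a minimal Python sketch of the general pattern described above: reads are served from a fast tier when possible, every write goes to the backing store first so the cache never holds dirty data, and a small in-memory index tracks what is cached. All names and structure here are illustrative of the generic technique, not Proximal's actual implementation.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Illustrative write-through cache. Writes always hit the backing
    store, so evicted blocks never need a writeback (no flush on failure)."""

    def __init__(self, backing_store, capacity=4):
        self.backing = backing_store   # stands in for the shared storage array
        self.capacity = capacity       # simulated flash capacity, in blocks
        self.flash = {}                # cached block data (the "SSD" tier)
        self.index = OrderedDict()     # host-RAM index of cached blocks, LRU-ordered

    def read(self, block):
        if block in self.index:        # hit: serve from flash, refresh LRU position
            self.index.move_to_end(block)
            return self.flash[block]
        data = self.backing[block]     # miss: fetch from the array, then admit
        self._admit(block, data)
        return data

    def write(self, block, data):
        self.backing[block] = data     # write-through: the array is updated first
        if block in self.index:        # keep any cached copy coherent
            self.flash[block] = data
            self.index.move_to_end(block)

    def _admit(self, block, data):
        if len(self.index) >= self.capacity:
            victim, _ = self.index.popitem(last=False)  # evict least-recently used
            del self.flash[victim]     # safe to drop: cache holds only clean data
        self.index[block] = True
        self.flash[block] = data
```

Because eviction simply discards a clean copy, losing the cache (or the whole host) costs only warm-up time, never data – which is why features like live VM migration can keep working without extra coordination.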
Advanced virtualization capabilities like live VM migration and HA still work without additional overhead, and AutoCache is compatible with existing storage arrays, so that users can still take advantage of native array functionality, such as snapshots and replication. Proximal’s initial solution is for VMware vSphere, but we expect other hypervisor solutions to follow.
One of the strongest benefits we see from Proximal is performance acceleration, which is a hot market this year. (See my colleague Jeff Boles’ column on storage performance.) A lot of vendors are trumpeting the glories of their performance acceleration solutions. But Proximal claims some serious know-how when it comes to efficiently indexing and referencing data, which is an important element of fast data caching at scale. Time will tell if this turns out to be the case, and whether it might open the door for Proximal Data to take on other innovative data management tasks down the road. The battle is on for accelerating IO with flash in a way that doesn't break your current storage infrastructure.