Storage performance has long been the bane of enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to shape clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. In our opinion, the most innovative of these solutions are the ones that require no real alteration to the storage infrastructure, nor a change in data management and protection practices.
This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND flash or SAS/SATA SSDs installed in the server, alongside a hardware or software IO handler that mirrors commonly used data blocks onto the local high-speed solid-state storage. The IO handler then redirects server requests for those data blocks to the local copies, which are served with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is cached rather than moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, allowing a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure and address storage performance problems.
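The caching pattern described above can be sketched in a few lines. This is a minimal, illustrative model only – all class and variable names are our own, and a real server-side cache operates on block addresses in a kernel or driver IO path, not Python dictionaries – but it shows why the backend array stays authoritative and why snapshots and backups continue to work unchanged:

```python
# Illustrative sketch of a read-through, write-through server-side cache.
# Names are hypothetical; real products intercept block IO in the driver stack.

class ServerSideCache:
    def __init__(self, backend, capacity=4):
        self.backend = backend   # slow shared storage: dict of block -> data
        self.cache = {}          # fast local SSD copies of hot blocks
        self.capacity = capacity

    def read(self, block):
        # Serve hot blocks from the local copy; on a miss, fetch from the
        # backend and populate the cache for subsequent reads.
        if block in self.cache:
            return self.cache[block]
        data = self.backend[block]
        self._insert(block, data)
        return data

    def write(self, block, data):
        # Write-through: the backend remains the authoritative copy, so
        # array-side snapshots and backups see all current data.
        self.backend[block] = data
        if block in self.cache:
            self.cache[block] = data

    def _insert(self, block, data):
        if len(self.cache) >= self.capacity:
            # Naive eviction of the oldest entry; real caches use LRU or
            # frequency-based policies.
            self.cache.pop(next(iter(self.cache)))
        self.cache[block] = data
```

Because writes pass straight through to the backend, the cache can be discarded or lose power at any time without data loss; only read latency is affected.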
Storage challenges in the virtual infrastructure are tremendous. Virtualization consolidates more IO than ever before, and then obscures the sources of that IO so that end-to-end visibility and understanding become next to impossible. As the storage practitioner labors on with business as usual, deploying yet more storage and fighting fires to keep up with demand, the business loses the battle to do more with less.
The problem is that inserting the virtual infrastructure in the middle of the application-to-storage connection, and then massively expanding the virtual infrastructure, introduces a tremendous amount of complexity. A seemingly endless stream of storage vendors is circling this problem today with an apparent answer – storage systems that deliver more performance. But more “bang for the buck” is too often just an attempt to cover up the lack of an answer for complexity-induced management inefficiency – ranging across activities like provisioning, peering into utilization, troubleshooting performance problems, and planning for the future.
With an answer to this problem, one vendor has been sailing to widespread adoption, leaving a number of fundamentally changed enterprises in its wake. That vendor is Tintri, and they’ve focused on changing the way storage is integrated and used, instead of just tweaking storage performance. Tintri integrates more deeply with the virtual infrastructure than any other product we’ve seen, and creates distinct advantages in both storage capabilities and ongoing management.
Taneja Group recently had the opportunity to put Tintri’s VMstore array through a hands-on exercise, to see for ourselves whether there’s mileage to be had from a virtualization-specific storage solution. Without doubt, there is clear merit to Tintri’s approach. A virtualization-specific storage system can reinvent a broad range of storage management interactions – by being VM-aware – and fundamentally reduce the complexity of the virtual infrastructure. In our view, these changes stand to have massive impact on the TCO of virtualization initiatives (some of which are identified in the table of highlights below), but the story doesn’t end there. While fundamentally changing management, Tintri has also innovated around storage technology that enables VMstore to serve up storage beneath even the most extreme virtual infrastructures.
Solid-state storage technology – typically storage devices based on NAND flash – has opened up new horizons for storage systems over the past couple of years. The storage market has seemingly been flooded by new products incorporating solid-state storage somewhere within their product line while promising breakthrough levels of storage performance. But vendors have found there are real challenges in putting solid-state technology into a storage system, and many of the products entering the market have a few wrinkles beneath the surface.
Just recently, AMI introduced yet another SAN storage appliance. The StorTrends 3500i integrates a comprehensive solid-state storage layer into the StorTrends iTX architecture, bringing the ability to use Solid State Drives (SSDs) in multiple roles – as a full flash array or as a hybrid storage array. In the hybrid configuration, the SSDs can be utilized as cache, as a tier, or as a combination of the two. The 3500i couples these SSD caching and tiering features with a field-proven storage architecture validated by more than 1,100 global installs. The net result is a high-performance, cost-effective storage array. In fact, the 3500i looks poised to be one of the most comprehensively equipped storage system options for the mid-range enterprise customer looking for solid-state acceleration of their workloads.
With this most recent storage system launch, StorTrends once again caught our attention, and we approached AMI with the idea of a hands-on lab exercise that we call a Taneja Group Technology Validation. Our goal with this testing? To see whether StorTrends truly preserved all of its storage functionality with the integration of SSD into the 3500i storage system, and whether the 3500i was up to the task of harnessing the blazing-fast storage performance of SSD.
Storing digital data has long been a perilous task. Not only are stored digital bits subject to the catastrophic failure of the devices they rest upon, but the nature of shared digital bits subjects them to error and even intentional destruction. In the virtual infrastructure, the dangers and challenges subtly shift. Data is more highly consolidated, and more systems depend wholly on shared data repositories; this increases data risks. Many virtual machines connecting to shared storage pools mean that IO and storage performance have become incredibly precious resources; this complicates backup, and means that backup IO can cripple a busy infrastructure. Backup is a more important operation than ever before, but it is also fundamentally more challenging than ever before.
Fortunately, the industry rapidly learned this lesson in the early days of virtualization, and has aggressively innovated to bring tools and technologies to bear on the challenge of backup and recovery for virtualized environments. APIs have unlocked more direct access to data, and products have finally come to market that make protection easier to use, and more compatible with the dynamic, mobile workloads of the virtual data center. Nonetheless, differences abound between product offerings, often rooted in the subtleties of architecture – architectures that ultimately determine whether a backup product is best suited to SMB-sized needs, or whether it can scale to support the large enterprise.
Moreover, within the virtual data center, TCO centers on resource efficiency, and a backup strategy can be one of the most significant determinants of that efficiency. On one hand, traditional backup simply does not work and can cripple efficiency: there is too much IO contention and application complexity in trying to carry a legacy physical-infrastructure backup approach over to the virtual infrastructure. On the other hand, there are a number of specialized point solutions designed to tackle some of the challenges of virtual infrastructure backup. But too often, these products do not scale sufficiently, lack consolidated management, and stand to impose tremendous operational overhead as the customer’s environment and data grow. When taking a strategic look at the options, it often appears that backup approaches fly directly in the face of resource efficiency.
Virtual Storage Appliances (VSAs) have been around for a while – just over 5 years ago, the earliest vendors started to sample market interest in this technology. In theory, the market was interested, but perhaps more so on paper than in actual adoption during those early days. Regardless, that interest drove more vendors to release VSAs and today there are dozens of Virtual Storage Appliances on the market. Many of these are focused on capabilities such as backup, but at least a handful can serve as primary storage beneath the virtual infrastructure.
The primary storage VSAs on the market came about as product or marketing experiments: perhaps to let customers experience a storage system without making a full investment, to allow customers to ingest rogue virtual infrastructure storage back into their existing storage infrastructure, or to enable consistent storage management as customers deployed workloads with remote service providers.
For certain, many of these primary storage VSAs have never found their footing, and still languish as neglected technology in a dusty corner of a vendor’s product portfolio. But there have been exceptions. One is HP StoreVirtual. HP has been quite serious about delivering StoreVirtual as a real storage solution with hefty capabilities. StoreVirtual is one of HP’s several converged storage technologies that are blurring the boundaries between storage and compute, and helping customer infrastructures scale and adapt while maintaining maximum efficiency. The popular StoreVirtual product line comes in a variety of physical formats, from entry-level 1U 4-drive systems to extremely dense BladeSystem SANs. Approximately 5 years ago, the StoreVirtual software foundation was also released in Virtual Storage Appliance form. This StoreVirtual VSA is a full storage system that looks, acts, and functions just like its physical StoreVirtual brethren. The intent behind HP’s StoreVirtual VSA is increased ease of use, increased storage functionality in the virtual infrastructure, and greater adaptability, within a dense footprint that can make use of any available storage resources (direct-attached server storage or networked storage). HP claims that StoreVirtual VSA leads the market in ease of use, performance, efficiency, and storage capabilities – all of which makes it ideally positioned to service primary workloads in the data center.
In this Technology Validation, we set out to examine StoreVirtual VSA, and through comparison to another leading virtual storage appliance (VMware’s vSphere Storage Appliance – VMware VSA) evaluate the effectiveness of StoreVirtual VSA’s architecture in enabling superior, primary-workload-ready storage in the virtual infrastructure. With an eye on ease of use, efficiency, and flexibility, we put StoreVirtual VSA and VMware vSphere Storage Appliance through a detailed examination that included both a review of functionality and a hands-on lab examination of performance, scalability, resiliency, and ease of use.
Just a couple of years ago, as solid-state storage technologies began finding significant mainstream adoption, Taneja Group began closely following a vendor whose architectural roadmap seemed to destine them to be the pre-eminent architectural leader for scale-out, high performance, enterprise-ready, cost-effective solid-state arrays.
That vendor was Kaminario, who first entered the market with a highly resilient, scale-out architecture that promised extreme performance with more linear scalability as well as superior availability / serviceability versus other offerings we then saw on the market.
In the past couple of years, Kaminario has continued advancing their technology in both performance and features, systematically adding the mainstream features that the enterprise demands – and that are too often missing from high-performance storage systems: features like snapshots, utilization reporting, resiliency that tolerates full node failures, and more.
In turn, Kaminario recently drew the attention of Taneja Group Labs. Scale-out and enterprise-class storage management features are not easy to architect (especially not together), and we wanted to know whether Kaminario could deliver enterprise-class wrappings with all of their historic scale-out capabilities.