Includes virtual infrastructure technologies (server, desktop, I/O), virtual infrastructure management (monitoring, optimization and performance), and virtualized data center operations and strategies (automation and Cloud computing).
Virtualization is arguably the most disruptive technology shift in data center infrastructure and management in the last decade. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization. We also explore application virtualization and delivery strategies. In addition, Taneja is uniquely focused on the end-to-end impact of virtualization on IT management, from the desktop to the Cloud, including: virtual server lifecycle management; virtual infrastructure instrumentation, performance management, and optimization; data protection, backup, and HA/DR for virtual environments; data center and run-book automation; and virtual infrastructure security and compliance management.
Storage performance has long been the bane of the enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to shape clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. In our opinion, the most innovative of these solutions are the ones that require no real alteration of the storage infrastructure, nor a change in data management and protection practices.
This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND flash or SAS/SATA SSDs installed in the server, alongside a hardware or software IO handler component that mirrors commonly used data blocks onto the local high-speed solid-state storage. The IO handler then redirects server requests for those data blocks to the local copies, which are served up with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is cached rather than moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, allowing a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure to address storage performance problems.
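The read-through, write-through behavior of such an IO handler can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions – the class names and the dictionary-backed "flash" stand in for real devices and are not any vendor's actual API:

```python
class BackendStore:
    """Stand-in for the networked storage array (millisecond-class latency)."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.blocks[lba] = data


class CachingIOHandler:
    """Mirrors hot blocks onto local solid-state storage and serves reads
    from the local copy; the data of record stays on the backend array."""
    def __init__(self, backend, capacity=1024):
        self.backend = backend
        self.capacity = capacity
        self.cache = {}              # lba -> data; stands in for local SSD
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.cache:        # local hit: microsecond-class latency
            self.hits += 1
            return self.cache[lba]
        self.misses += 1
        data = self.backend.read(lba)        # fall through to the array
        if data is not None and len(self.cache) < self.capacity:
            self.cache[lba] = data           # promote the block for next time
        return data

    def write(self, lba, data):
        self.backend.write(lba, data)        # array remains authoritative,
        self.cache[lba] = data               # so snapshots/backup still work
```

Because writes always land on the backend, existing data protection practices continue to operate on a complete copy; the cache only absorbs read IO.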
Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are felt most acutely in the virtual infrastructure. They spring from the physicality of storage – while the virtual infrastructure has made IT more agile and adaptable than ever before, storage still depends on digital bits permanently stored on a physical device somewhere within the data center.
For practitioners who have experienced the pain caused by this – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the term also holds tremendous potential to change the way IT is done by tackling this one last vestige of the traditional, inflexible IT infrastructure.
The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that can make storage remarkably close to software-defined. These solutions allow administrators to easily and rapidly deploy storage controllers within the virtual infrastructure, and equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure runs (in the cloud or on premises). Such solutions can make comprehensive storage functionality available in places where it could never be had before, allow for higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.
The 2012-2013 calendar years have brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor – VMware – introduced a similar software-based, scale-out storage solution for the virtual infrastructure – VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, VSAN has an architecture very similar to the latest generation of HP’s own StoreVirtual VSA. Both of these products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement finally meant HP is no longer the sole major vendor (Fortune 500) with a primary storage VSA approach. This only adds validation for vendors that have long offered VSA-based solutions, such as FalconStor, Nexenta, and StorMagic.
We’ve undertaken a high-level assessment of five market leaders that today offer VSA or software-based storage in the virtual infrastructure. We’ve assessed these solutions here with an eye toward how they fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment, it is clear that each solution has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.
Storage challenges in the virtual infrastructure are tremendous. Virtualization consolidates more IO than ever before, and then obscures the sources of that IO so that end-to-end visibility and understanding become next to impossible. As the storage practitioner labors on with business as usual – deploying yet more storage and fighting fires to keep up with demand – the business is losing the battle to do more with less.
The problem is that inserting the virtual infrastructure in the middle of the application-to-storage connection, and then massively expanding the virtual infrastructure, introduces a tremendous amount of complexity. A seemingly endless stream of storage vendors is circling this problem today with an apparent answer – storage systems that deliver more performance. But more “bang for the buck” is too often just an attempt to cover up the lack of an answer for complexity-induced management inefficiency – ranging across activities like provisioning, peering into utilization, troubleshooting performance problems, and planning for the future.
With an answer to this problem, one vendor has been sailing to widespread adoption, leaving a number of fundamentally changed enterprises in its wake. That vendor is Tintri, which has focused on changing the way storage is integrated and used, instead of just tweaking storage performance. Tintri integrates more deeply with the virtual infrastructure than any other product we’ve seen, creating distinct advantages in both storage capabilities and ongoing management.
Taneja Group recently had the opportunity to put Tintri’s VMstore array through a hands-on exercise, to see for ourselves whether there’s mileage to be had from a virtualization-specific storage solution. Without doubt, there is clear merit to Tintri’s approach. A virtualization-specific storage system can reinvent a broad range of storage management interactions – by being VM-aware – and fundamentally reduce the complexity of the virtual infrastructure. In our view, these changes stand to have massive impact on the TCO of virtualization initiatives (some of which are identified in the table of highlights below), but the story doesn’t end there. While fundamentally changing management, Tintri has also innovated around storage technology that enables VMstore to serve up storage beneath even the most extreme virtual infrastructures.
Storage has long been the tail on the proverbial dog in virtualized environments. The random I/O streams generated by multiple consolidated VMs create an “I/O blender” effect, which overwhelms traditional array-based architectures and compromises application performance. As many customers have learned the hard way, doing storage right in the virtual infrastructure requires a fresh and innovative approach.
These sentiments were echoed in the findings of Taneja Group’s latest research study on storage acceleration and performance. More than half of the 280 buyers and practitioners we surveyed have an immediate need to accelerate one or more applications running in their virtual infrastructures. While three quarters of survey respondents are seriously considering deploying a storage acceleration solution, only a handful are willing to give up or compromise their existing storage capabilities in the process. Customers need better performance, but in most cases can neither afford nor stomach a wholesale upgrade or replacement of their storage infrastructure to achieve it.
Fortunately for performance-challenged, mid-sized and enterprise customers, there is a better alternative. QLogic’s FabricCache QLE10000 is a server-side SAN caching solution designed to accelerate multi-server virtualized and clustered applications. Based on QLogic’s innovative Mt. Rainier technology, the QLE10000 is the industry’s first caching SAN adapter that enables the cache in individual servers to be pooled and shared across multiple physical servers. This breakthrough functionality is delivered in the form of a combined Fibre Channel and caching host bus adapter (HBA), which plugs into existing HBA slots and is transparent to hypervisors, operating systems, and applications. QLogic’s FabricCache QLE10000 adapter cost-effectively boosts performance of critical applications, while enabling customers to preserve their existing storage investments.
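The pooling concept – a server that misses in its own cache can be served from a peer’s cache before falling back to the shared array – can be sketched in a few lines. The names below are purely illustrative and do not reflect QLogic’s actual implementation:

```python
class PooledCacheNode:
    """One server's cache in a shared pool. On a read miss, peers are
    consulted before the backend array (hypothetical logic, for illustration)."""
    def __init__(self, peers=None):
        self.local = {}              # lba -> data; this server's cache
        self.peers = peers or []     # other servers sharing the pool

    def read(self, lba, backend):
        if lba in self.local:        # fastest path: local cache hit
            return self.local[lba]
        for peer in self.peers:      # next: another server's cache
            if lba in peer.local:
                return peer.local[lba]
        data = backend.get(lba)      # last resort: the SAN array itself
        if data is not None:
            self.local[lba] = data   # promote into the local cache
        return data
```

The design point this illustrates is that a block cached anywhere in the pool spares the array a read, so the aggregate cache grows with the number of participating servers.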
In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.
When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.
At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.
Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, but one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to server, virtual server, storage or application layers. Customers can invest in ViSX before their performance pain points get too big, or before they've gone down the road of breaking their infrastructure with a tactical solution.
Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.
But the further challenge for IT investors now is to choose the best virtualization solution to get the “biggest bang for the buck”. Unfortunately, the traditional cost-per-infrastructure metrics (server CPU, storage GB, etc.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized and cloudy world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. Therefore a key comparison metric between virtualization solutions is virtual machine (VM) density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides a better value.
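The cost-per-application arithmetic can be made concrete with a toy calculation. All dollar figures and density numbers below are invented for illustration only:

```python
def cost_per_vm(total_infrastructure_cost, vm_density, hosts):
    """Cost per application = total investment / total VMs hosted,
    where total VMs = VM density per host * number of hosts."""
    return total_infrastructure_cost / (vm_density * hosts)

# Hypothetical: same $500,000 spend on 10 hosts, but solution B's storage
# sustains 30 VMs per host where solution A's sustains only 20.
a = cost_per_vm(500_000, vm_density=20, hosts=10)   # $2,500 per VM
b = cost_per_vm(500_000, vm_density=30, hosts=10)   # ~$1,667 per VM
```

All else being equal, the higher-density solution delivers each application for roughly a third less, which is why VM density is the metric to compare.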
As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and condenses IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. Therefore it’s critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.