Includes virtual infrastructure technologies (server, desktop, I/O), virtual infrastructure management (monitoring, optimization and performance), and virtualized data center operations and strategies (automation and Cloud computing).
Virtualization is arguably the most disruptive technology shift in data center infrastructure and management in the last decade. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization. We also explore application virtualization and delivery strategies. In addition, Taneja is uniquely focused on the end-to-end impact of virtualization on IT management, from the desktop to the Cloud, including: virtual server lifecycle management; virtual infrastructure instrumentation, performance management, and optimization; data protection, backup, and HA/DR for virtual environments; data center and run-book automation; and virtual infrastructure security and compliance management.
Storage has long been the tail on the proverbial dog in virtualized environments. The random I/O streams generated by multiple consolidated VMs create an “I/O blender” effect, which overwhelms traditional array-based architectures and compromises application performance. As many customers have learned the hard way, doing storage right in the virtual infrastructure requires a fresh and innovative approach.
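To see why the blended stream looks random to the array, consider a minimal illustrative sketch (hypothetical Python, not tied to any vendor's product): each VM issues a perfectly sequential stream of block addresses, but once the hypervisor interleaves those streams onto shared storage, the combined request stream jumps all over the address space.

```python
import random

# Each VM issues a sequential stream of block addresses within its own region.
def vm_stream(vm_id, blocks_per_vm=8):
    base = vm_id * 10_000  # hypothetical region for each VM's virtual disk
    return [base + i for i in range(blocks_per_vm)]

streams = [vm_stream(vm_id) for vm_id in range(4)]

# The hypervisor services whichever VM is ready next, so the streams interleave.
blended = []
while any(streams):
    s = random.choice([s for s in streams if s])
    blended.append(s.pop(0))

print(blended)
# Each VM saw sequential addresses, but the array sees jumps across regions,
# e.g. [0, 30000, 10000, 1, 20000, 30001, ...] -- effectively random I/O.
```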
These sentiments were echoed in the findings of Taneja Group’s latest research study on storage acceleration and performance. More than half of the 280 buyers and practitioners we surveyed have an immediate need to accelerate one or more applications running in their virtual infrastructures. While three quarters of survey respondents are seriously considering deploying a storage acceleration solution, only a handful are willing to give up or compromise their existing storage capabilities in the process. Customers need better performance, but in most cases can neither afford nor stomach a wholesale upgrade or replacement of their storage infrastructure to achieve it.
Fortunately for performance-challenged mid-sized and enterprise customers, there is a better alternative. QLogic’s FabricCache QLE10000 is a server-side SAN caching solution designed to accelerate multi-server virtualized and clustered applications. Based on QLogic’s innovative Mt. Rainier technology, the QLE10000 is the industry’s first caching SAN adapter that enables the cache from individual servers to be pooled and shared across multiple physical servers. This breakthrough functionality is delivered in the form of a combined Fibre Channel and caching host bus adapter (HBA), which plugs into existing HBA slots and is transparent to hypervisors, operating systems, and applications. QLogic’s FabricCache QLE10000 adapter cost-effectively boosts the performance of critical applications while enabling customers to preserve their existing storage investments.
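Conceptually, pooled server-side caching changes the read path: a block is served from the local server's cache if present, then from a peer server's cache across the fabric, and only then from the backing SAN. The sketch below (hypothetical Python; the class and method names are ours, not QLogic's API) illustrates the idea under those assumptions.

```python
class CachingAdapter:
    """Conceptual model of a server-side cache that can be shared across servers."""

    def __init__(self, name, peers=None):
        self.name = name
        self.cache = {}          # block -> data held in this server's flash cache
        self.peers = peers or [] # other servers' adapters in the same cache pool

    def read(self, block, san):
        if block in self.cache:                    # 1. local cache hit
            return self.cache[block]
        for peer in self.peers:                    # 2. remote hit in the pooled cache
            if block in peer.cache:
                data = peer.cache[block]
                self.cache[block] = data           # optionally promote locally
                return data
        data = san[block]                          # 3. fall back to the SAN array
        self.cache[block] = data
        return data

# Usage: two hosts share one cache pool in front of the same LUN.
san_lun = {42: "payload"}
host_a = CachingAdapter("a")
host_b = CachingAdapter("b", peers=[host_a])
host_a.read(42, san_lun)        # miss everywhere; fetched from the SAN and cached
print(host_b.read(42, san_lun)) # served from host_a's cache, not the array
```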
In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.
When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.
At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.
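A back-of-the-envelope calculation makes the point. Assuming 8 KB I/Os and ignoring protocol overhead (both assumptions ours), the iSCSI link itself imposes a hard IOPS ceiling well below what flash media can sustain:

```python
# Theoretical IOPS ceiling imposed by the iSCSI link, ignoring protocol overhead.
io_size_bytes = 8 * 1024  # assume 8 KB I/Os

for name, gbits in [("1 GbE", 1), ("10 GbE", 10)]:
    bytes_per_sec = gbits * 1e9 / 8
    max_iops = bytes_per_sec / io_size_bytes
    print(f"{name}: ~{max_iops:,.0f} IOPS ceiling at 8 KB per I/O")

# 1 GbE tops out around 15,000 IOPS and even 10 GbE near 150,000 --
# far less than an all-flash array can deliver, so the wire becomes the bottleneck.
```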
Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, yet one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to the server, virtual server, storage or application layers. Customers can invest in ViSX before their performance pain points grow too large, or before they’ve gone down the road of breaking their infrastructure with a tactical solution.
Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.
But the further challenge for IT investors now is to choose the best virtualization solution to get the “biggest bang for the buck.” Unfortunately, the traditional cost-per-infrastructure metrics (server CPU, storage GB, etc.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized, cloud-oriented world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. Therefore a key comparison metric between virtualization solutions is VM density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides a better value.
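A simple worked example with hypothetical figures shows how VM density drives cost per application:

```python
# Hypothetical comparison: same hypervisor and workloads, different infrastructure.
solutions = {
    "Solution A": {"investment": 500_000, "vms_supported": 400},
    "Solution B": {"investment": 500_000, "vms_supported": 550},
}

for name, s in solutions.items():
    cost_per_vm = s["investment"] / s["vms_supported"]
    print(f"{name}: ${cost_per_vm:,.0f} per VM (application)")

# For the same spend, the solution with higher VM density delivers a lower
# cost per application -- the metric that matters in a virtual environment.
```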
As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and condenses IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. Therefore it’s critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.
Server virtualization has deeply penetrated IT and now hosts well over half of all server instances, but storage virtualization has been slower to catch on. Yet the main constraint on further server virtualization adoption stems from poorly aligned storage. Perhaps the storage world simply moves more slowly because of the “weight” of data, but if so it will also gather more momentum once it moves. Here at Taneja Group we think that, driven by virtualization pressure and the desire for cloud (and now software defined data center) infrastructures, proven storage virtualization is next on everybody’s radar. This is good news for IBM and the IBM SAN Volume Controller (SVC). SVC, first launched in 2003, put a firm stake in the ground as to what block storage virtualization could be, and over the decade since it has continued to evolve into what we regard as the gold standard for block storage virtualization.
In the face of ever-growing data, new processing paradigms, and aggressively evolving applications, storage virtualization provides an ideally adaptive approach by creating optimal logical storage services out of otherwise disparate and inflexible physical storage arrays. Like server virtualization, storage virtualization helps tackle difficult IT challenges in guaranteeing performance at scale, optimizing capacity utilization, taming complexity, increasing availability, and assuring data protection and DR across the enterprise - all while earning significant cost and efficiency benefits.
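As a purely conceptual illustration (a simplified sketch, not any vendor's actual implementation), extent-based block virtualization can be pictured as a thin mapping layer that presents one logical volume whose extents are drawn from several dissimilar physical arrays:

```python
# Conceptual sketch of extent-based block virtualization (illustrative only).
EXTENT_SIZE = 256  # blocks per extent, arbitrary for the example

class VirtualVolume:
    def __init__(self, extent_map):
        # logical extent index -> (backend array name, physical extent number)
        self.extent_map = extent_map

    def resolve(self, logical_block):
        extent = logical_block // EXTENT_SIZE
        offset = logical_block % EXTENT_SIZE
        array, physical_extent = self.extent_map[extent]
        return array, physical_extent * EXTENT_SIZE + offset

# One logical volume spread across two otherwise unrelated arrays.
vol = VirtualVolume({0: ("array_legacy", 17), 1: ("array_new", 3)})
print(vol.resolve(10))    # ('array_legacy', 4362)
print(vol.resolve(300))   # ('array_new', 812)
```

Because hosts see only the logical volume, the mapping can be changed (for migration, tiering, or rebalancing) without the application noticing.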
In fact, robust storage virtualization is becoming as necessary as server virtualization. Dynamic architectures require virtualizing all resources – compute, network, and storage. Experience with cloud implementations shows that server and storage virtualization are both necessary and complementary, and together they lead towards the next generation data center built on end-to-end consistency, high automation, and “software defined” principles.
While many IT storage strategists have pursued storage consolidation and adopted tiering practices to tame some growth challenges, they should all now look to storage virtualization to achieve higher levels of flexibility, agility, and resilience. However, storage virtualization adoption has lagged behind server virtualization. This is where IBM brings to the table a tremendously proven solution that has succeeded in more than 10,000 shipments to customers of almost every size and storage mix, coupled with world-class support and services to guarantee success.
In this profile, we’ll briefly define storage virtualization and the key benefits it brings to the modern data center and consider what it means when IBM says it is “Making Storage Better”. In that light we’ll look in more depth at IBM SVC, its architecture and key product features that have helped establish it as the market leading block storage virtualization solution.
Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features – where to dedupe, how much capacity is saved, how fast backups run – but everyone knows how central dedupe is to backup success.
However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up data from multiple sources to distributed storage targets, with single-console management for increased control.
Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe, or Dedupe 1.0 as it is sometimes called, is suited to backup silos. Moving deduped data outside the system requires rehydrating it, which impacts performance and capacity across the data center, ROBO sites, DR sites and the cloud. Dedupe must expand its feature set in order to serve next generation backup platforms.
A few vendors have introduced new dedupe technologies but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage system-specific dedupe means that data must rehydrate whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.
Federating dedupe across systems goes a long way to solve that problem. HP StoreOnce extends consistent dedupe across the infrastructure. Only HP implements the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and source and target machines without the need to rehydrate.
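The value of one dedupe scheme end to end can be sketched in simplified form (hypothetical Python using fixed-size chunks and SHA-256 hashes; HP StoreOnce uses its own chunking and indexing). When source and target share the same scheme, the source ships only the chunks the target does not already hold, and nothing has to be rehydrated along the way:

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking for simplicity; real systems use variable chunks

def chunk_hashes(data):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def replicate(data, source_store, target_store):
    """Send only chunks the target does not already hold; both sides share one scheme."""
    sent = 0
    for digest, chunk in chunk_hashes(data):
        source_store.setdefault(digest, chunk)
        if digest not in target_store:   # target asks only for unknown digests
            target_store[digest] = chunk
            sent += 1
    return sent

backup_server, dr_site = {}, {}
data = b"A" * 8192 + b"B" * 4096
print(replicate(data, backup_server, dr_site))  # 2 unique chunks sent (duplicate A-chunk skipped)
print(replicate(data, backup_server, dr_site))  # 0 sent on the second pass
```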
This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, how HP is achieving its vision of federated dedupe with StoreOnce – and what HP’s StoreOnce VSA announcement and achievement mean for backup service providers, enterprise ROBO, and SMB customers.
Virtual Storage Appliances (VSAs) have been around for a while – just over 5 years ago, the earliest vendors started to sample market interest in this technology. In theory, the market was interested, but perhaps more so on paper than in actual adoption during those early days. Regardless, that interest drove more vendors to release VSAs and today there are dozens of Virtual Storage Appliances on the market. Many of these are focused on capabilities such as backup, but at least a handful can serve as primary storage beneath the virtual infrastructure.
The primary storage VSAs on the market came about as product or marketing experiments: perhaps to let customers experience a storage system without making a full investment, to allow customers to ingest rogue virtual infrastructure storage back into their existing storage infrastructure, or to enable consistent storage management as customers deployed workloads with remote service providers.
For certain, many of these primary storage VSAs have never found their footing, and still languish as neglected technology in a dusty corner of a vendor’s product portfolio. But there have been exceptions. One is HP StoreVirtual. HP has been quite serious about delivering StoreVirtual as a real storage solution with hefty capabilities. StoreVirtual is one of HP’s several converged storage technologies that are blurring the boundaries between storage and compute, and helping customer infrastructures scale and adapt while maintaining maximum efficiency. The popular StoreVirtual product line comes in a variety of physical formats, from entry-level 1U, 4-drive systems to extremely dense BladeSystem SANs. Approximately 5 years ago, the StoreVirtual software foundation was also released in Virtual Storage Appliance form. This StoreVirtual VSA is a full storage system that looks, acts, and functions just like its physical StoreVirtual brethren. The intent behind HP’s StoreVirtual VSA is increased ease of use, increased storage functionality in the virtual infrastructure, and greater adaptability, within a dense footprint that can make use of any available storage resources (direct-attached server storage or networked storage). HP claims that StoreVirtual VSA leads the market in ease of use, performance, efficiency, and storage capabilities – all of which makes it ideally positioned to service primary workloads in the data center.
In this Technology Validation, we set out to examine StoreVirtual VSA, and through comparison to another leading virtual storage appliance (VMware’s vSphere Storage Appliance – VMware VSA) evaluate the effectiveness of StoreVirtual VSA’s architecture in enabling superior, primary-workload-ready storage in the virtual infrastructure. With an eye on ease of use, efficiency, and flexibility, we put StoreVirtual VSA and VMware vSphere Storage Appliance through a detailed examination that included both a review of functionality and a hands-on lab examination of performance, scalability, resiliency, and ease of use.