This Field Report was created by Taneja Group for Nutanix. The Taneja Group analyzed the experiences of seven Nutanix Virtual Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.
As we talked in detail with these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because the VCE partners Cisco, EMC, and/or VMware were already embedded in their IT relationships and purchasing processes. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership. VCE customers typically did not research other converged infrastructure options before deploying the VCE Vblock solution.
In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’s advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.
Our conclusion, based on the time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but that Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.
This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments.
Fast-growing virtualized environments present a thorny storage challenge to IT. Whether running mission-critical applications with demanding SLAs, rolling out VDI with its boot storms, or deploying a private cloud for large dev and test environments, delivering virtualized environments and cloud deployments on traditional storage can stall or break a virtualization project.
Flash technology is certainly part of the solution to the performance challenges posed by virtualized workloads, but it can be prohibitively expensive to implement broadly across the environment. Although flash can be deployed in a number of targeted places in the infrastructure, the more it is tied to specific hosts and workloads, the less benefit it provides to the overall production environment – and the more management overhead it creates.
Recently Taneja Group ran Tintri VMstore storage through our hands-on validation lab and documented large factors of improvement over traditional storage. Those factors accrue through Tintri’s cost-effective acquisition; its simplicity and ease of deployment and data migration; its high performance and availability; and its smooth expansion over time.
This Field Report validates our lab findings with feedback from the field: six customers who have Tintri storage in production environments. While each customer has a unique story to tell, we found that every one of them documented a compelling value proposition based on TCO factors. Throughout our research we found that Tintri’s approach provides significantly lower TCO than traditional storage solutions.
The era of IT infrastructure convergence is upon us. Over the past few years Integrated Computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for those particular workloads. Workloads being integrated into such systems today include Cloud, Big Data, Virtualization, Database, and VDI – or even combinations of two or more.
In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their Integrated Computing systems, and now practically every vendor touts one integrated system or another focused on solving a particular workload problem. The business benefits promised by these new systems fall into these key areas:
· Implementation efficiency that accelerates time to realizing value from integrated systems
· Operational efficiency through optimized workload density and an ideally right sized set of infrastructure
· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
· Scale and agility efficiency unlocked through a repeatedly deployable building block approach
· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single vendor support approach for an entire set of infrastructure
In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem marked a new offering designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to herald an entirely new level of agility in speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP’s latest advancement of these types of systems.
The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.
Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions even enable storage provisioning and management at an object, database, or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with an automatic policy-based approach to management, enables virtual admins to deal with storage in the same mindset and in the same flow as other virtual admin tasks.
In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters and fabrics, and so on), here at Taneja Group what we find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the ongoing operational (or OPEX) costs of storage.
In this report, we will look at how Software-Defined Storage stands to transform the long term OPEX for storage by examining VMware’s Virtual SAN product. We’ll do this by examining a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab. We’ll examine the key data points recorded from our comparative hands-on examination, estimating the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.
Cloud computing has several clear business models. SaaS delivers software, upgrades, and maintenance as a service, saving customers money by eliminating costs of ownership that the cloud provider now bears. Several technology factors contribute to SaaS’s increasing popularity, including protocol standardization, the ubiquity of Web browsing, access to broadband networks, and rapid application development. It’s not perfect – people have legitimate concerns about data security, governance, vendor lock-in, and data portability – but based on its success, the advantages of SaaS seem to outweigh its challenges. And the market segment is growing fast.
Another cloud computing model is IaaS, where the customer outsources the compute infrastructure to a cloud provider. This model is gaining traction, especially for application development and testing. App developers can take the capital they would otherwise have to spend on computing gear and target it to specific development projects underway in Internet data centers. The problem with IaaS is that cloud software development doesn’t necessarily translate well into on-premises deployments, and many developers prefer to develop SaaS.
Storage in the cloud is yet another business model with different dynamics. While SaaS and IaaS are strongly oriented toward cloud deployments, there are strong pressures driving cloud storage toward on-premises deployments. While storing data in the cloud for SaaS and IaaS computing is certainly important, the vast amount of data still resides on-premises, where its growth is largely unchecked. If cloud storage is going to succeed, it needs to become relevant to the people managing data in corporate on-premises data centers.
Storage has long been the tail on the proverbial dog in virtualized environments. The random I/O streams generated by multiple consolidated VMs create an “I/O blender” effect, which overwhelms traditional array-based architectures and compromises application performance. As many customers have learned the hard way, doing storage right in the virtual infrastructure requires a fresh and innovative approach.
These sentiments were echoed in the findings of Taneja Group’s latest research study on storage acceleration and performance. More than half of the 280 buyers and practitioners we surveyed have an immediate need to accelerate one or more applications running in their virtual infrastructures. While three quarters of survey respondents are seriously considering deploying a storage acceleration solution, only a handful are willing to give up or compromise their existing storage capabilities in the process. Customers need better performance, but in most cases can neither afford nor stomach a wholesale upgrade or replacement of their storage infrastructure to achieve it.
Fortunately for performance-challenged mid-sized and enterprise customers, there is a better alternative. QLogic’s FabricCache QLE10000 is a server-side SAN caching solution designed to accelerate multi-server virtualized and clustered applications. Based on QLogic’s innovative Mt. Rainier technology, the QLE10000 is the industry’s first caching SAN adapter that enables the cache from individual servers to be pooled and shared across multiple physical servers. This breakthrough functionality is delivered as a combined Fibre Channel and caching host bus adapter (HBA), which plugs into existing HBA slots and is transparent to hypervisors, operating systems, and applications. QLogic’s FabricCache QLE10000 adapter cost-effectively boosts the performance of critical applications while enabling customers to preserve their existing storage investments.