Taneja Group | VM-centric
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: VM-centric

Profiles/Reports

Software-defined Storage and VMware's Virtual SAN Redefining Storage Operations

The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.

Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions even enable storage provisioning and management at the object, database or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with automatic policy-based management, lets virtual admins handle storage in the same mindset and workflow as their other administrative tasks.
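
To make the idea of per-VM, policy-based storage concrete, here is a minimal sketch in Python. The object and function names are hypothetical illustrations of the concept, not VMware's actual SPBM API; the policy attributes loosely mirror the kind of rules (failures to tolerate, stripe width) an admin might assign to a VM rather than to a LUN.

```python
# Illustrative sketch only: hypothetical objects showing the idea of
# policy-based, per-VM storage management (not a real VMware API).
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int   # number of host/disk failures the VM's objects must survive
    stripe_width: int           # number of capacity devices each object is striped across

@dataclass
class VirtualMachine:
    name: str
    policy: StoragePolicy

def provision(vm: VirtualMachine) -> None:
    # In a VM-centric model the admin assigns a policy to the VM;
    # the storage layer decides placement, replication and striping.
    print(f"Provisioning {vm.name}: tolerate {vm.policy.failures_to_tolerate} failure(s), "
          f"stripe across {vm.policy.stripe_width} device(s)")

gold = StoragePolicy("gold", failures_to_tolerate=1, stripe_width=2)
provision(VirtualMachine("sql-prod-01", gold))
```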

In this paper, we will look at VMware's Virtual SAN product and its impact on operations. Virtual SAN brings virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters and fabrics, and so on), what we at Taneja Group find more significant is the opportunity for solutions like VMware's Virtual SAN to fundamentally alter the ongoing operational (OPEX) costs of storage.

In this report, we will look at how Software-Defined Storage stands to transform long-term storage OPEX by examining VMware's Virtual SAN product. We do this by working through a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab, then reviewing the key data points recorded from that comparative hands-on exercise to estimate the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.

Publish date: 08/08/14
news

Prepare for VMware VVOLs and how they will change storage products

The benefits of VMware VVOLs are vast and unquestioned. But most IT shops are struggling to deal with how they are changing today's storage products.

  • Premiered: 04/03/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VVOL, VVOLs, VMware VVOLs, Storage, Virtualization, Virtual Machine, VM, LUN, Performance, Capacity, Acopia, Caching, Thin Provisioning, Snapshots, cloning, replication, Deduplication, Encryption, Hypervisor, Volumes, SAN, NAS, storage container, vSphere, DAS, software-defined, software-defined storage, SDS, Virtual SAN
news

Are vSphere Virtual Volumes necessary with an EVO:RAIL appliance?

Taneja Group's Tom Fenton explains how VVOLs work, and the storage differences between VVOLs and VSAN.

  • Premiered: 04/28/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Tom Fenton, VVOL, VVOLs, VMware VVOLs, Storage, VMWare, vSphere, EVO:RAIL, hyperconverged, VSAN, Virtual SAN, VM-centric, LUN, storage provisioning, SPBM, Storage Policy Based Management
news

What storage problems can vSphere Virtual Volumes solve?

With vSphere VVOLs, virtual administrators can self-provision a pool of storage as they see fit.

  • Premiered: 04/28/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Tom Fenton, vSphere, VMWare, VMware vSphere, VVOL, VVOLs, VMware VVOLs, Virtual Volumes, VMware Virtual Volumes, Storage, virtual administrator, Virtualization, Data Center, NFS, Fibre Channel, FC, iSCSI, Ethernet, LUN, SPBM, Storage Policy Based Management, Snapshots, cloning, replication, QoS, VM-centric
news

VMware VVOLs could lift external storage systems

VMware VVOLs could be the elixir traditional external storage systems need to gain ground on VM-centric storage and hyper-converged products.

  • Premiered: 08/03/15
  • Author: Jeff Kato
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VVOL, VVOLs, VMware VVOLs, VM-centric, Storage, hyper-converged, Cloud, Virtualization, Virtual Machine, VM, convergence, converged, Dell, EMC, HP, IBM, NetApp, vSphere, vSphere 6, Nutanix, SimpliVity, Tintri, QoS, Performance, high availability, HA, RAID, Capacity
news

Navigate today's hyper-converged market

The hyper-converged market is rife with products from competing vendors, making choosing a hyper-converged system a difficult proposition.

  • Premiered: 11/05/15
  • Author: Jeff Kato
  • Published: TechTarget: Search Virtual Storage
Topic(s): hyper convergence, hyper-converged, Storage, HCI, Dell, EMC, HP, VMWare, Nutanix, Scale Computing, SimpliVity, DataCore, Gridstore, Maxta, Nimboxx, Pivot3, modular, scalability, scalable, VDI, Virtual Desktop Infrastructure, Performance, Virtual Machine, VM, VM-centric, Virtualization, Microsoft, KVM, Hypervisor, Open Source
Profiles/Reports

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in this space, and customers must now sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was used. For many years, Pivot3's vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations. The first is their extremely efficient and reliable erasure coding technology, called Scalar Erasure Coding. Where many leading HCI implementations use replication-based redundancy techniques that are heavy on storage capacity utilization, Scalar Erasure Coding from Pivot3 can deliver significant capacity savings depending on the level of drive protection selected. The second innovation is Pivot3's Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node. The third innovation is a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. Because the HCI software runs on the same CPU complex as the business applications, this additional usage is referred to as the HCI overhead tax. Keeping the HCI overhead tax low matters because the licensing cost of many applications and infrastructure software is charged per CPU. Even with today's ever-increasing cores per CPU, there can still be significant cost savings from keeping the HCI overhead tax low.
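
The capacity argument for erasure coding over replication is easy to see with simple arithmetic. The sketch below is illustrative only, using assumed replication factors and an assumed N+M erasure-code geometry rather than Pivot3's published parameters.

```python
# Illustrative arithmetic only: raw capacity needed to store 100 TB of usable data
# under replication vs. an N+M erasure code. The parameters below are assumptions
# for illustration, not Pivot3's actual protection settings.

usable_tb = 100

def raw_for_replication(copies: int) -> float:
    return usable_tb * copies                      # every byte is stored 'copies' times

def raw_for_erasure(data_strips: int, parity_strips: int) -> float:
    overhead = (data_strips + parity_strips) / data_strips
    return usable_tb * overhead                    # parity adds only a fractional overhead

print(raw_for_replication(2))         # 2x replication   -> 200 TB raw
print(raw_for_replication(3))         # 3x replication   -> 300 TB raw
print(round(raw_for_erasure(9, 3)))   # 9+3 erasure code -> ~133 TB raw
```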

The Pivot3 family of HCI products, delivering high data efficiency with very low overhead, is an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example is a VDI implementation, where cost per seat determines success. Other examples are capacity-centric workloads such as big data or video surveillance, which could benefit from a Pivot3 HCI approach with leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures, using data extracted from the alternative HCI vendors' reference architectures for VDI implementations. Using these real-world examples, we demonstrate that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
Profiles/Reports

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is that of storage buyers (and operators) looking for more functionality – and at the same time increased simplicity – from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage we see IT folks wanting to eliminate excessive layers in their complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately onerous and unnecessarily complex task that enterprise storage folks have had to face is that of backup and recovery. As a key trusted vendor of both data protection and storage solutions, we note that HPE continues to invest in producing better solutions in this space.

HPE has diligently been working towards integrating data protection functionality natively within their enterprise storage solutions starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array now turns into a single autonomous unit, becoming a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of 3rd party products and the inefficient indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a type of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric array-efficient snapshots.

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment, converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware-assisted, agentless backup and recovery for virtual machines. With HPE's approach, which offloads VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines, virtualized organizations gain on many fronts, including greater backup efficiency, reduced OPEX, broader data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We'll also look at why HPE is in a unique position to offer this kind of "converging" market leadership, with a complete end-to-end solution stack spanning innovative research and development, sales, support, and professional services.
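
To illustrate the general flow of an array-offloaded, agentless VM backup, here is a simplified, hypothetical orchestration sketch. Every function name is a placeholder invented for illustration; none correspond to actual Data Protector or 3PAR StoreServ APIs, and the real products coordinate these steps internally.

```python
# Hypothetical orchestration flow for array-offloaded, agentless VM backup.
# All function names are illustrative placeholders, not real product APIs.
import uuid

def quiesce_via_hypervisor(vm: str) -> None:
    print(f"hypervisor: quiescing {vm} for an application-consistent point in time")

def create_array_snapshot(vm: str) -> str:
    snap_id = f"snap-{uuid.uuid4().hex[:8]}"
    print(f"array: taking space-efficient snapshot {snap_id} of {vm}'s volumes")
    return snap_id

def release_via_hypervisor(vm: str) -> None:
    print(f"hypervisor: {vm} resumes normal IO (stun window kept short)")

def catalog_snapshot(vm: str, snap_id: str) -> None:
    print(f"backup software: cataloging {snap_id} for granular recovery of {vm}")

def backup_vm(vm: str) -> str:
    quiesce_via_hypervisor(vm)
    snap_id = create_array_snapshot(vm)   # bulk data never flows through a backup server
    release_via_hypervisor(vm)
    catalog_snapshot(vm, snap_id)
    return snap_id

backup_vm("exchange-01")
```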

Publish date: 12/21/15
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best of breed three-tier architectures where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium sized companies and specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications?  In this paper we will take a closer look at Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V and their own Acropolis Hypervisor. Nutanix has also enabled app mobility between different hypervisors, a unique capability among converged systems and HCI alike. This Solution Profile focuses on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
news

Evaluating Data Protection for Hyperconverged Infrastructure

Hyperconvergence is a still-evolving trend, and the number of vendors in the space is making the evaluation of hyperconverged infrastructure complex. One criterion to consider in any infrastructure review is data protection.

  • Premiered: 02/02/16
  • Author: Jim Whalen
  • Published: InfoStor
Topic(s): Jim Whalen, Storage, hyperconverged, hyperconvergence, hyperconverged infrastructure, Data protection, DP, Backup, replication, DR, Disaster Recovery, SimpliVity, Deduplication, Compression, WAN, WANO, WAN Optimization, VM, Virtual Machine, VM-centric, VM-centricity, Compute, Networking, Hypervisor, Virtualization, scale-out, IOPS, Pivot3, Gridstore, converged
Profiles/Reports

Business Continuity Best Practices for SMB

Virtualization’s biggest driver is big savings: slashing expenditures on servers, licenses, management, and energy. Another major benefit is the increased ease of disaster recovery and business continuity (DR/BC) in virtualized environments.

Note that disaster recovery and business continuity are closely aligned but not identical. We define disaster recovery as the process of restoring lost data, applications and systems following a profound data loss event, such as a natural disaster, a deliberate data breach or employee negligence. Business continuity takes DR a step further. BC's goal is not only to recover the computing environment but also to recover it swiftly and with zero data loss. This is where recovery point objectives (RPO) and recovery time objectives (RTO) enter the picture, with IT assigning differing RPO and RTO strategies according to application priority.
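
As a quick worked example of tiering by RPO and RTO, the sketch below maps application tiers to targets and picks a protection approach that can meet them. The tier names, targets and method thresholds are assumptions for illustration, not prescriptive guidance.

```python
# Illustrative only: assigning RPO/RTO targets per application tier and picking a
# protection method that can meet them. Numbers are assumptions for illustration.

tiers = {
    # app tier: RPO = max tolerable data loss, RTO = max tolerable downtime
    "tier-1 (ERP, e-commerce)": {"rpo_minutes": 5,    "rto_minutes": 30},
    "tier-2 (file, email)":     {"rpo_minutes": 60,   "rto_minutes": 240},
    "tier-3 (test/dev)":        {"rpo_minutes": 1440, "rto_minutes": 2880},
}

def protection_for(rpo_minutes: int) -> str:
    if rpo_minutes <= 15:
        return "continuous or near-synchronous replication"
    if rpo_minutes <= 240:
        return "scheduled VM replication / frequent snapshots"
    return "nightly backup"

for tier, targets in tiers.items():
    print(f"{tier}: RPO {targets['rpo_minutes']} min -> {protection_for(targets['rpo_minutes'])}")
```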

DR/BC can be difficult to do well in data centers with traditional physical servers, particularly in SMBs with limited IT budgets and generalist IT staff. Many of these servers are siloed with direct-attached storage and individual data protection processes. Mirroring and replication have traditionally required one-to-one hardware correspondence and can be expensive, leading to a universal reliance on localized backup as data protection. In addition, small IT staffs do not always take the time to perfect their backup processes across disparate servers. Either they do not do it at all, rolling the dice and hoping there won't be a disaster, or they slap backups on tape or USB drives and stick them on a shelf.

Virtualization can transform this environment into a much more efficient and protected data center. Backing up VMs from a handful of host servers is faster and less resource-intensive than backing up tens or hundreds of physical servers. And with scheduled replication, companies achieve faster backup and much improved recovery objectives.

However, many SMBs avoid virtualization. They cite factors such as cost, unfamiliarity with hypervisors, and added complexity. And they are not wrong: virtualization can introduce complexity, it can be expensive, and it can require familiarity with hypervisors. Virtualization cuts down on physical servers but is resource-intensive, especially as the virtualized environment grows. This means capital costs for high-performance CPUs and storage. SMBs may also have to deal with VM licensing and management costs, administrative burdens, and the challenge of protecting and replicating virtualized data on a strict budget.

For all its complexity and learning curve, is virtualization worth it for SMBs? Definitely. Its benefits far outweigh its problems, particularly its advantages for DR/BC. But for many SMBs, traditional virtualization is often too expensive and complex to warrant the effort. We believe that the answer is HyperConverged Infrastructure: HCI. Of HCI providers, Scale Computing is exceptionally attractive to the SMB. This paper will explain why. 

Publish date: 09/30/15
news

Galactic Exchange Launches Into Big Data Space With 5 Minute Set-Up Spark/Hadoop Powered Clusters

Galactic Exchange, Inc. officially came out of stealth mode this week to announce initial beta availability of ClusterGX™, an open source clustering solution which provides unprecedented simplicity of deployment and management of Spark/Hadoop clusters.

  • Premiered: 03/25/16
  • Author: Taneja Group
  • Published: Inside Big Data
Topic(s): Galactic Exchange, ClusterGX, cluster, clusters, Open Source, Spark, Hadoop, Cloud, managed cloud, cloud cluster, Docker, Storage, cluster scaling, VM, Virtual Machine, Big Data, Virtualization, hyperconverged, hyperconvergence, VM-centric, Mike Matchett
Profiles/Reports

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANs

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions. In addition, specific Nutanix features for mission-critical production environments were often cited.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual assets. Hyperconvergence offers a key value proposition over 3-tier architecture:  instead of deploying, managing and integrating separate components – storage, servers, networking, data services, and hypervisors – these components are combined into a modular high performance system.

The customers we interviewed operate in very different industries. What they had in common was data centers undergoing fundamental changes, typically involving an opportunity to refresh some portion of their 3-tier infrastructure, which enabled them to evaluate hyperconvergence in supporting those changes. The customers found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be "Ease of Everything." Nutanix works, and works consistently, with small and large clusters, in single and multiple datacenters, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to the Nutanix platform's enterprise-capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus and is not an add-on to existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16
news

Hyper-converged vendors offer new use cases, products

Hyper-converged systems are showing they are ready to branch out beyond primary storage applications. In fact, it's happening now.

  • Premiered: 04/15/16
  • Author: Arun Taneja
  • Published: TechTarget: Search Converged IT
Topic(s): hyper-converged, hyperconverged, hyperconvergence, Primary Storage, Storage, Nutanix, SimpliVity, scale-out, scale-out architecture, converged, convergence, HPE, ConvergedSystem, NetApp, FlexPod, VCE, Compute, Hypervisor, Virtualization, Mobility, Cloud, cloud integration, web-scale, web-scale storage, secondary storage, VM, VM centricity, VM-centric, Virtual Machine, WAN Optimization
Profiles/Reports

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures that are built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments.  

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data-centers, just as these NetApp customers did.

Publish date: 03/31/16
Profiles/Reports

Optimizing VM Storage Performance & Capacity - Tintri Customers Leverage New Predictive Analytics

Today we are seeing big impacts on storage from the huge increase in the scale of an organization's important data (e.g., big data, Internet of Things) and the growing size of virtualization clusters (e.g., never-ending VMs, VDI, cloud-building). In addition, virtualization adoption tends to make IT admins more generalist. IT groups are focusing more on servicing users and applications and no longer want to just manage infrastructure for infrastructure's sake. Everything that IT does, including storage, is increasingly interpreted, analyzed, and managed in application and business terms in order to optimize the return on the total IT investment. To move forward, an organization's storage infrastructure not only needs to grow internally smarter, it also needs to become both VM- and application-aware.

While server virtualization made a lot of things better for the over-taxed IT shop, delivering quality storage services in hypervisor infrastructures with traditional storage created difficult challenges. In response, Tintri pioneered per-VM storage infrastructure. The Tintri VMstore has eliminated multiple points of storage friction and pain; in fact, it is now becoming a mandatory checkbox across the storage market for arrays to claim some kind of VM-centricity. Unfortunately, traditional arrays are mainly focused on checking off rudimentary support for external hypervisor APIs that only serve to re-package the same old storage. The best fit for today's (and tomorrow's) virtual storage requirements will only come from fully engineered VM-centric, application-aware approaches such as Tintri's.

However, it's not enough to simply drop in storage that automatically drives best-practice policies and handles today's needs. We all know change is constant, and key to preparing for both growth and change is having a detailed, properly focused view of today's large-scale environments, along with smart planning tools that help IT both optimize current resources and make the best IT investment decisions going forward. To meet those larger needs, Tintri has rolled out Tintri Analytics, a SaaS-based offering that applies big data analytical power to the large scale of their customers' VM-aware VMstore metrics.

In this report we will look briefly at Tintri’s overall “per-VM” storage approach and then take a deeper look at their new Tintri Analytics offering. The new Tintri Analytics management service further optimizes their app-aware VM storage with advanced VM-centric performance and capacity management. With this new service, Tintri is helping their customers receive greater visibility, insight and analysis over large, cloud-scale virtual operations. We’ll see how “big data” enhanced intelligence provides significant value and differentiation, and get a glimpse of the payback that a predictive approach provides both the virtual admin and application owners. 
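
To give a flavor of what predictive capacity planning means in practice, here is a minimal sketch of the general idea: fit a simple trend to historical per-VM-group capacity samples and project when a threshold will be crossed. This is not Tintri's actual analytics engine, and the sample data and threshold are assumptions for illustration.

```python
# Minimal sketch of trend-based capacity forecasting (illustrative only; not
# Tintri's actual analytics). Sample data and threshold are assumed values.

# weekly used-capacity samples for a VM group, in TB
weeks = [0, 1, 2, 3, 4, 5]
used_tb = [10.0, 10.6, 11.1, 11.9, 12.4, 13.1]

n = len(weeks)
mean_x = sum(weeks) / n
mean_y = sum(used_tb) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, used_tb)) / \
        sum((x - mean_x) ** 2 for x in weeks)          # TB of growth per week
intercept = mean_y - slope * mean_x

capacity_tb = 20.0                                     # assumed provisioned capacity
weeks_to_full = (capacity_tb - intercept) / slope
print(f"growth ~{slope:.2f} TB/week; projected full in ~{weeks_to_full:.0f} weeks")
```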

Publish date: 11/04/16
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever-faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now, we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding major problems with HyperConverged "single appliance" adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky "monoculture" that can result from vendor-specific HCI. Their resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimized tiered design approach as "Server Powered Storage" (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an "Open Convergence" approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible and more natural adoption path than a forklift HyperConvergence migration. In this report we will briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer "open convergence" SPS approach as pioneered by Datrium DVX. In particular, we'll review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance vs. capacity), VM-centric management, enterprise scalability and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


Data Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that becomes expensive, isolating resources and requiring a lot of configuration management (all reasons to virtualize servers), this at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates or flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services to serve demanding and increasingly larger scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is in a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, across either a Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn't fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies are far larger than most of the gains. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached or prioritized in flash: it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don't or won't benefit from the expense of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
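
The latency argument can be made concrete with some rough arithmetic. The figures below are assumed, order-of-magnitude numbers used only to illustrate why flash placed behind a SAN fabric returns less of its benefit than flash placed in the server.

```python
# Illustrative latency arithmetic with assumed, order-of-magnitude numbers (microseconds).
# Shows why array-side flash behind a network gives back less of its raw speed.

local_flash_us = 100             # local SSD read service time (assumption)
san_fabric_round_trip_us = 300   # fabric + HBA + array-controller overhead (assumption)
hdd_read_us = 5000               # spinning-disk read, for comparison (assumption)

array_flash_effective_us = local_flash_us + san_fabric_round_trip_us

print(f"local flash hit:       ~{local_flash_us} us")
print(f"array-side flash hit:  ~{array_flash_effective_us} us "
      f"({array_flash_effective_us / local_flash_us:.0f}x the local latency)")
print(f"hybrid array HDD miss: ~{hdd_read_us} us")
```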

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash, especially added to servers directly, came down in price, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be actually built on plain servers running custom storage operating systems. The resulting storage “software” often now is packaged as a more cost-effective “software-defined” solution that can be run or converged directly on servers (although we note most IT shops  prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers so that storage services can be hosted on the same servers as compute workloads (e.g., VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e., no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration as workloads move across a cluster (e.g., to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage all are pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least among them is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a “full stack” single vendor lock-in issue (and increased risk due to lowered infrastructure “diversity”).

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX, especially compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing “East-West” network traffic, and in some cases unacceptable data availability risks (e.g. when servers lock/crash/reboot for any reason, an HCI replication/rebuild can be a highly vulnerable window).
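
The licensing point is worth a quick worked example. The prices below are assumptions chosen only to show the shape of the math: in an HCI cluster, nodes added purely for capacity also carry per-CPU software licensing.

```python
# Illustrative arithmetic with assumed prices: adding HCI nodes only for storage
# capacity can also add per-CPU software license cost on those nodes.

nodes_needed_for_capacity = 2     # extra nodes added only to grow capacity
cpus_per_node = 2
per_cpu_license_cost = 7000       # e.g., hypervisor + database licensing per CPU (assumption)
per_node_hardware_cost = 20000    # assumption

hardware = nodes_needed_for_capacity * per_node_hardware_cost
licenses = nodes_needed_for_capacity * cpus_per_node * per_cpu_license_cost
print(f"hardware: ${hardware:,}  licenses: ${licenses:,}  total: ${hardware + licenses:,}")
# A design that scales capacity independently of compute avoids the license line entirely.
```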

Open Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO may still need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path, integrated with the server or hypervisor driver stack or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost and complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of performance-degrading issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall under a single converged storage solution, one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also satisfy IT's governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments, one that enables scalability in multiple directions at different times and delivers extremely low latency while still supporting a complex mix of diverse workloads?

This is where we'd look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back end for enterprise-quality data protection, yet all the compute-intensive, performance-impacting functionality is "pushed" up into each server to provide local, accelerated IO. As Datrium's design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a third-party cache layered over a remote SAN.

In the resulting Datrium "open converged" infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales with each server added, so storage performance naturally grows along with the applications.

Datrium DVX gets great leverage from a given flash investment by using any "bring-your-own" SSDs, which are far cheaper to add than array-side flash (and can be added to specific servers as needed or desired). In fact, most VMs and workloads won't ever read from the shared capacity on the network; it is write-optimized for persistent data protection and can be filled with cost-effective, high-capacity drives.

Taneja Group Opinion

As just one of IT’s major concerns, all data bits must be persisted and fully managed and protected somewhere at the end of the day. Traditional arrays, converged or not, just don’t perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers invokes some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (that localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that, before investing in expensive all-flash arrays or taking on a full migration to HCI, an Open Convergence option like Datrium DVX be considered as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16
news

HPE pays $650 million for SimpliVity hyper-convergence

The long-awaited HPE-SimpliVity deal cost HPE $650 million for the hyper-converged pioneer. The buy gives HPE an installed base, as well as data reduction and protection features.

  • Premiered: 01/18/17
  • Author: Taneja Group
  • Published: TechTarget: Search Converged Infrastructure
Topic(s): HPE, SimpliVity, hyperconverged, hyper-converged, hyperconvergence, Data reduction, Data protection, Nutanix, Data Deduplication, Deduplication, Compression, HCI, hyperconverged infrastructure, Arun Taneja, Storage, ProLiant, OmniStack, Virtual Machine, VM, VM-centric, VM-centricity, WAN Optimization, replication, Disaster Recovery, DR, Dell EMC, Cisco, Lenovo, Huawei, Dedupe