This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.
As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were already embedded in their IT relationships and sales channels. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see an opinion on the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.
In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix's advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.
Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.
This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments.
Hyperconvergence is one of the hottest IT trends going into 2016. In a recent Taneja Group survey of senior enterprise IT professionals, we found that over 25% of organizations are looking to adopt hyperconvergence as their primary data center architecture. Yet the centralized enterprise datacenter may just be the tip of the iceberg when it comes to the vast opportunity for hyperconverged solutions. Where there are remote or branch office (ROBO) requirements demanding localized computing, some form of hyperconvergence would seem the ideal way to address the scale, distribution, protection and remote management challenges involved in putting IT infrastructure “out there” remotely and in large numbers.
However, most of today’s popular hyperconverged appliances were designed as data center infrastructure, converging data center IT resources like servers, storage, virtualization and networking into Lego™-like IT building blocks. While these might at first seem ideal for ROBOs – the promise of dropping in “whole” modular appliances sidesteps any number of onsite integration and maintenance challenges – ROBOs have different and often more demanding requirements than a datacenter. A ROBO often does not come with trained IT staff or a protected datacenter environment. ROBOs are, by definition, located remotely across relatively unreliable networks. And they fan out to thousands (or tens of thousands) of locations.
Certainly any amount of convergence simplifies infrastructure, making it easier to deploy and maintain. But in general, popular hyperconverged appliances haven’t been designed to be remotely managed en masse, don’t address unreliable networks, and converge storage locally and directly within themselves. Persisting data in the ROBO is a recipe for a myriad of ROBO data protection issues. In ROBO scenarios, the datacenter form of hyperconvergence is not significantly better than simple converged infrastructure (e.g. a pre-configured rack or blades in a box).
We feel Riverbed’s SteelFusion has brought full hyperconvergence benefits to the ROBO edge of the organization. Riverbed has married its world-class WAN optimization (WANO) technologies, virtualization, and remote storage “projection” to create what we might call “Edge Hyperconvergence”. We see the edge hyperconverged SteelFusion as purposely designed for companies with any number of ROBOs that each require local IT processing.
Dell Storage SC Series achieved a five nines (5 9s) availability rating years ago. Now the SC Series is displaying 5 9s and greater with technologies that are moving availability even farther up the scale. This is a big achievement based on real, measurable field data: the only numbers that really count.
Not every piece of data requires 5 9s capability. However, critical Tier 1 applications do need it. Outage costs vary by industry but easily total millions of dollars per hour in highly regulated and data-intensive industries. Some of the organizations in these verticals are enterprises, but many more are mid-sized businesses with exceptionally mission-critical data stores.
Consider such applications as e-commerce systems. Online customers are notorious for abandoning shopping carts even when the application is running smoothly. Downing an e-commerce system can easily cost millions of dollars in lost sales over a few hours or days, not to mention a loss of reputation. Other mission-critical applications that must stay available include OLTP, CRM and even email systems.
Web applications present another HA challenge. SaaS providers with sales support or finance software can hardly afford downtime. Streaming sites with subscribers also lose large amounts of future revenue if they go down: many customers will ask for refunds or cancel their subscriptions and never return.
However, most highly available 5 9s systems have large purchase prices and high ongoing expenses. Many small enterprises and mid-sized businesses cannot afford these high-priced systems or the staff that goes with them. They know they need availability and try to save money and time by buying cheaper systems with 4 9s availability or lower. Their philosophy is that these systems are good enough. And they are good enough for general storage, but not for data whose unavailability quickly spirals up into the millions of dollars. Buying less than 5 9s in this type of environment is a false economy.
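To see why the gap between 4 9s and 5 9s matters, a quick back-of-the-envelope calculation (our illustration, not a figure from any vendor) converts each availability level into allowed downtime per year:

```python
# Allowed downtime per year at each availability level ("N nines").
# 99.9% = 3 nines, 99.99% = 4 nines, 99.999% = 5 nines.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for nines in (3, 4, 5):
    unavailability = 10 ** -nines               # fraction of the year down
    downtime_min = MINUTES_PER_YEAR * unavailability
    availability = 1 - unavailability
    print(f"{availability:.3%} availability -> {downtime_min:.2f} min/year downtime")
```

At 4 9s, an array may be down roughly 53 minutes a year; at 5 9s, only about 5 minutes. If an hour of outage costs millions of dollars, that difference of roughly 47 minutes per year is exactly where the "false economy" of cheaper systems shows up.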
Still, even the risk of sub-par availability doesn’t free up the budget a business needs for high-end availability systems. This is where the story gets very interesting. Dell Storage SC Series offers 5 9s and higher availability – and it does it at a mid-range cost. Dell does not sacrifice high availability architecture to achieve lower CAPEX and OPEX; it also provides dynamic scalability, management simplicity, redundant storage, space-saving snapshots and automatic tiering. Thanks to the architecture behind Dell Storage SC Series, Dell has achieved a unique position in the high availability stakes.
Backup and recovery, replication, recovery assurance: all are more crucial than ever in the light of massively growing data. But complexity has grown right alongside expanding data. Data centers and their managers strain under the burdens of legacy physical data protection, fast-growing virtual data requirements, backup decisions around local, remote and cloud sites, and the need for specialist IT to administer complex data protection processes.
In response, Unitrends has launched a compelling new version of Unitrends Enterprise Backup (UEB): Release 9.0. Its completely revamped user interface and experience significantly reduce management overhead and let even new users easily perform sophisticated functions from the redesigned dashboard. And its key capabilities are second to none for modern data protection in physical and virtual environments.
One of the differentiating strengths of UEB 9.0 (indeed, of the entire Unitrends product line) is that in today’s increasingly virtualized world, it still offers deep support for physical as well as virtual environments. This is more important than it might at first appear. There is a huge installed base of legacy equipment in existence, and much of it has still not been moved into a virtual environment; yet it all needs to be protected. Within this legacy base, there are many mission-critical applications still running on physical servers that remain high-priority protection targets. In these environments, many admins are forced to purchase specialized tools for protecting virtual environments separate from physical ones, or to use point backup products for specific applications. Both options carry extra costs: buying multiple applications that do essentially the same thing, and hiring multiple people trained to use them.
This is why no matter how virtualized an environment is, if there is even one critical application that is still physical, admins need to strongly consider a solution that protects both. This gives the data center maximum protection with lower operating costs, since they no longer need multiple data protection packages and trained staff to run them.
This is where Unitrends steps in. With its rich capabilities and intuitive interface, UEB 9.0 protects data throughout the data center, and does not require IT specialists. This Product in Depth assesses Unitrends Enterprise Backup 9.0, the latest version of Unitrends’ flagship data protection platform. We put the new user interface through its paces to see just how intuitive it is, what information it provides and how many clicks it takes to perform some basic operations. We also did a deep dive into the functionality provided by the backup engine itself, some of which carries over from earlier versions and some of which is new for 9.0.
All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services are run in virtual machines (VM), as over the last decade virtualization has become the de facto standard in the datacenter for the deployment of applications and services. Some applications and services are classified as business critical. These business critical applications require a higher level of resilience and protection to minimize the impact on a business’s operation if they become inoperable.
The ability to quickly recover from an application outage has become imperative in today’s datacenter. There are various methods that offer different levels of protection to maintain application uptime. These methods range from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, mechanisms were in place to protect physical systems, based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
With application and service availability in mind, companies such as Veritas have stepped in to provide availability and resiliency at the application and service level. Focusing on improving how VMware can deliver application availability, Veritas Technologies LLC has developed a set of solutions to meet the high availability and disaster recovery requirements of business critical applications. These solutions include Veritas ApplicationHA (developed in partnership with VMware) and Veritas InfoScale Availability (formerly Veritas Cluster Server). Both of these products have been enhanced to work in a VMware-based virtual infrastructure environment.
Data Protection Designed for Flash - Better Together: HP 3PAR StoreServ Storage and StoreOnce System
Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now triggering IT and business to rethink many practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it is sitting on flash as you did when HDDs ruled the day? How do you take into account that at raw cost/capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash technology is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?
These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.
In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HP 3PAR StoreServ Storage, HP StoreOnce System backup appliances, and HP StoreOnce Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.