Taneja Group | RPO
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: RPO

Profiles/Reports

BakBone Introduces NetVault: TrueCDP – Integrated, Continuous Data Protection

In this profile we review the challenge of fast and granular data recoverability, the role of CDP, and the advantages of BakBone’s NetVault: TrueCDP for fast and consistent file recovery in environments running the BakBone suite of data protection products.

Publish date: 09/01/07
Profiles/Reports

Quest NetVault FastRecover: Next Generation Best Practices for Data Protection

Quest NetVault FastRecover continuously captures data and synthetically reconstructs full data sets for every point in time, allowing individual file as well as full data set access. NetVault FastRecover leverages the ability of host agents to see into host system interactions. A centralized management interface configures host agents and allows administrators to control which server volumes are protected, and which NetVault FastRecover servers (either on the local network or remotely over a WAN) are used.
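The continuous-capture and synthetic-reconstruction approach described above can be sketched in a few lines. This is an illustrative Python sketch of the general technique only, not Quest's actual implementation; the `ChangeRecord` and `synthesize_full` names are hypothetical:

```python
# Illustrative sketch (not vendor code): a CDP engine can synthesize a
# "virtual full" for any point in time by replaying the last real full
# backup plus every captured change up to the chosen recovery point.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRecord:
    timestamp: float           # when the write was captured
    path: str                  # file affected
    data: Optional[bytes]      # new contents (None means the file was deleted)

def synthesize_full(base: dict, journal: list, point_in_time: float) -> dict:
    """Rebuild the full data set as it existed at point_in_time."""
    state = dict(base)  # start from the last real full backup
    for rec in sorted(journal, key=lambda r: r.timestamp):
        if rec.timestamp > point_in_time:
            break  # ignore changes captured after the recovery point
        if rec.data is None:
            state.pop(rec.path, None)   # replay a delete
        else:
            state[rec.path] = rec.data  # replay a create/overwrite
    return state

# Example: recover the data set as of t=15, before the t=20 overwrite.
base = {"a.txt": b"v1"}
journal = [
    ChangeRecord(10, "b.txt", b"new"),
    ChangeRecord(20, "a.txt", b"v2"),
]
full_at_15 = synthesize_full(base, journal, 15)
```

Because every point in time is reconstructed from the same journal, the same mechanism serves both individual-file recovery and full-data-set access.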

Publish date: 07/24/12
news

In the Cloud Era, The Era of Convergence Is Upon Us

The era of IT infrastructure convergence is upon us. Every major vendor has some type of offering under this category. Startups and smaller players are also "talking" convergence. But what exactly is convergence and why are all the vendors so interested in getting included in this category? We will explain below the history of convergence, what it is, what it is not, what benefits accrue from such systems, who the players are, and who is leading the pack in true convergence.

  • Premiered: 06/10/14
  • Author: Arun Taneja
  • Published: Virtualization Review
Topic(s): Arun Taneja, Virtualization Review, Virtualization, IT infrastructure, Converged Infrastructure, convergence, Networking, HDD, WANO, WAN Optimization, Data Deduplication, Deduplication, Hybrid Array, Hybrid, Cloud Computing, Cloud Storage, Storage, Cloud, Hadoop, Storage Virtualization, Compression, RTO, RPO, DR, Disaster Recovery, Remote Office, ROBO, Compute, hyperconvergence, Server Virtualization
news

In the Cloud Era, The Era of Convergence Is Upon Us

What exactly is convergence and what is making vendors scramble to get included in this category?

  • Premiered: 06/10/15
  • Author: Arun Taneja
  • Published: Virtualization Review
Topic(s): IT infrastructure, convergence, Storage, HDD, WANO, WAN Optimization, Storage Virtualization, Virtualization, Cloud, Data Deduplication, Deduplication, Compression, Hybrid Array, Hybrid, Backup, Hadoop, Cloud Storage, Hybrid Cloud Storage, Hybrid Cloud, Capacity, Disaster Recovery, DR, RTO, RPO, hyperconvergence, hyperconverged, VCE, VMWare, Cisco, EMC
news

Flat backup grows into viable tool for data protection

Flat backups reduce license fees and improve recovery point objectives and recovery time objectives, making them useful for data protection.

  • Premiered: 07/08/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Data Backup
Topic(s): Backup, Data protection, DP, Recovery, Snapshots, flat backup, NetApp, HP, EMC, 3PAR, StoreServ, StoreOnce, VMAX, Data Domain, RPO, RTO, COW, ROW, Disaster Recovery, DR, WAN, Microsoft, Oracle, SAP, SnapProtect, RMC, Recovery Manager Central, ProtectPoint, Virtualization, VMWare
Profiles/Reports

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst on the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash now is triggering IT and business to rethink a lot of practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it is sitting on flash as you did when HDDs ruled the day? How do you take into account that at raw cost/capacity levels, flash is still more expensive than HDDs?  Do data deduplication and compression technologies change how you work with flash? Does the fact that flash technology is injected most often to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 06/06/16
Profiles/Reports

Unitrends Enterprise Backup 9.0: Simple and Powerful Data Protection for the Whole Data Center

Backup and recovery, replication, recovery assurance: all are more crucial than ever in the light of massively growing data. But complexity has grown right alongside expanding data. Data centers and their managers strain under the burdens of legacy physical data protection, fast-growing virtual data requirements, backup decisions around local, remote and cloud sites, and the need for specialist IT to administer complex data protection processes.

In response, Unitrends has launched a compelling new version of Unitrends Enterprise Backup (UEB): Release 9.0. Its completely revamped user interface and experience significantly reduces management overhead and lets even new users easily perform sophisticated functions using the redesigned dashboard. And its key capabilities are second to none for modern data protection in physical and virtual environments.

One of the differentiating strengths of UEB 9.0 (and indeed the entire Unitrends product line) is that, in today’s increasingly virtualized world, it still offers deep support for physical as well as virtual environments. This is more important than it might at first appear. There is a huge installed base of legacy equipment in existence, and much of it has still not been moved into a virtual environment; yet it all needs to be protected. Within this legacy base, many mission-critical applications still run on physical servers and remain high-priority protection targets. In these environments, many admins are forced to purchase specialized tools for protecting virtual environments separate from physical ones, or to use point backup products for specific applications. Both options carry extra costs: buying multiple applications that do essentially the same thing, and hiring multiple people trained to use them.

This is why no matter how virtualized an environment is, if there is even one critical application that is still physical, admins need to strongly consider a solution that protects both. This gives the data center maximum protection with lower operating costs, since they no longer need multiple data protection packages and trained staff to run them.

This is where Unitrends steps in. With its rich capabilities and intuitive interface, UEB 9.0 protects data throughout the data center, and does not require IT specialists. This Product in Depth assesses Unitrends Enterprise Backup 9.0, the latest version of Unitrends flagship data protection platform. We put the new user interface through its paces to see just how intuitive it is, what information it provides and how many clicks it takes to perform some basic operations. We also did a deep dive into the functionality provided by the backup engine itself, some of which is a carryover from earlier versions and some which are new for 9.0.

Publish date: 09/17/15
news

Unitrends Release 9.0 Beta Sets New Industry Standard for the User Experience

Unitrends, a leader in enterprise-level cloud recovery, today announced Release 9.0 Beta, the data protection software powering its physical and virtual backup appliances as well as Unitrends Cloud.

  • Premiered: 08/21/15
  • Author: Taneja Group
  • Published: SYS-CON Media
Topic(s): Unitrends, Cloud, Data protection, Virtualization, Unitrends Cloud, Backup, VM, Virtual Machine, file recovery, Recovery, Disaster Recovery, DR, Jim Whalen, VMWare, VMware vSphere, vSphere, Microsoft, Hyper-V, RPO, RTO
Profiles/Reports

Free Report - Better Together: HP 3PAR StoreServ Storage and StoreOnce System Opinion

Flash technology has burst on the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash now is triggering IT and business to rethink a lot of practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it is sitting on flash as you did when HDDs ruled the day? How do you take into account that at raw cost/capacity levels, flash is still more expensive than HDDs?  Do data deduplication and compression technologies change how you work with flash? Does the fact that flash technology is injected most often to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HP 3PAR StoreServ Storage, HP StoreOnce System backup appliances, and HP StoreOnce Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 09/25/15
news

VMware vSphere 6 release good news for storage admins

VMware's vSphere 6 release shows that the vendor is aiming for a completely software-defined data center with a fully virtualized infrastructure.

  • Premiered: 10/05/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VMware vSphere, vSphere, vSphere 6, software-defined, Software-Defined Data Center, SDDC, Virtualization, virtualized infrastructure, VSAN, VVOLs, VMware VVOLs, Virtual Volumes, VMotion, high availability, Security, scalability, Data protection, replication, VMware PEX, Fault Tolerance, Virtual Machine, VM, Provisioning, Storage Management, SLA, 3D Flash, FT, vCPU, CPU
news

The New Era of Secondary Storage HyperConvergence

The rise of hyperconverged infrastructure platforms has driven tremendous change in the primary storage space, perhaps even greater than the move from direct attached to networked storage in decades past.

  • Premiered: 10/22/15
  • Author: Jim Whalen
  • Published: Enterprise Storage Forum
Topic(s): Storage, secondary storage, Primary Storage, hyperconverged, hyperconverged infrastructure, hyperconvergence, DR, Disaster Recovery, SATA, RTO, RPO, Data protection, DP, Virtualization, Snapshots, VM, Virtual Machine, Disaster Recovery as a Service, DRaaS, DevOps, Hadoop, cluster, Actifio, Zerto, replication, Data Domain, HP, 3PAR, StoreServ, StoreOnce
news

Hyperscale Cloud Vendors vs. Cloud-Enabled Data Protection Vendors

Cost alone should not be the deciding factor when deciding between a hyperscale cloud vendor or a cloud-enabled data protection provider.

  • Premiered: 11/06/15
  • Author: Jim Whalen
  • Published: Datamation
Topic(s): datamation, hyperscale, Cloud, Data protection, Amazon, Google, Microsoft, Cloud Storage, Storage, Data Storage, Data Management, DP, Compute, Software as a Service, SaaS, Infrastructure as a Service, IaaS, cloud data center, DP cloud, Barracuda, SMB, Backup, replication, VMWare, VM, Virtual Machine, Disaster Recovery, DR, Datto, WAN
Profiles/Reports

Full Database Protection Without the Full Backup Plan: Oracle's Cloud-Scaled Zero Data Loss Recovery

Today’s tidal wave of big data isn’t just made up of loose unstructured documents – huge data growth is happening everywhere, including in the high-value structured datasets kept in databases like Oracle Database 12c. This is any company’s most valuable core data, the data that powers most key business applications – and it’s growing fast! According to Oracle, most enterprises expect 50x data growth within 5 years (by 2020). As their scope and coverage grow, these key databases inherently become even more critical to our businesses. At the same time, the sheer number of database-driven applications and users is also multiplying – and they increasingly need to be online, globally, 24 x 7. All of which leads to the big burning question: how can we possibly protect all this critical data – data we depend on more and more even as it grows – all the time?

We just can’t keep taking more time out of the 24-hour day for longer and larger database backups. The traditional batch window backup approach is already often beyond practical limits and its problems are only getting worse with data growth – missed backup windows, increased performance degradation, unavailability, fragility, risk and cost. It’s now time for a new data protection approach that can do away with the idea of batch window backups, yet still provide immediate backup copies to recover from failures, corruption, and other disasters.

Oracle has stepped up in a big way and, marshaling expertise and technologies from across its engineered systems portfolio, has developed a new Zero Data Loss Recovery Appliance. Note the very intentional name, focused on total recoverability – the Recovery Appliance is definitely not just another backup target. This new appliance completely eliminates the pains and risks of the full database backup window approach through a highly engineered continuous data protection solution for Oracle databases. It is now possible to immediately recover any database to any point in time desired, as the Recovery Appliance provides “virtual” full backups on demand and can scale to protect thousands of databases and petabytes of capacity. In fact, it offloads backup processing from production database servers, which typically increases performance in Oracle environments by 25%. Adopting this new backup and recovery solution will actually give CPU cycles back to the business.

In this report, we’ll briefly review why conventional data protection approaches based on the backup window are fast becoming obsolete. Then we’ll look into how Oracle has designed the new Recovery Appliance to provide a unique approach to ensuring data protection in real-time, at scale, for thousands of databases and PBs of data. We’ll see how zero data loss, incremental forever backups, continuous validation, and other innovations have completely changed the game of database data protection. For the first time there is now a real and practical way to fully protect a global corporation’s databases—on-premise and in the cloud—even in the face of today’s tremendous big data growth.

Publish date: 12/22/15
Profiles/Reports

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations nor manage to be very simple or cost-effective to operate. Here is where we see HPE as a key vendor today with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready “solution” that marries both StoreOnce and Data Protector. For those working with either or both of those separately in the past in conjunction with other vendor’s products, it’s no surprise that they each compete favorably one-on-one with other products in the market, but together as an integrated joint solution they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions, it is undergoing a seismic shift in culture that revitalizes its total approach to market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience. One support call, one ticket, one project manager, addressing the customer’s needs regardless of what internal HPE business unit components are in the “box”. And significantly, this approach elevates HPE from just being a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company completely able to deliver a breadth of solutions spanning IT from top to bottom out of their own internal world-class product lines.

In this report, we’ll examine first why HPE StoreOnce and Data Protector products are truly game changing on their own rights. Then, we will look at why they get even “better together” as a complete BURA solution that can be more flexibly deployed to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into virtualized environments were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best of breed three-tier architectures where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium sized companies and specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications?  In this paper we will take a closer look at Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept it plans to make the entire virtualized infrastructure invisible to IT consumers, encompassing all three of the popular hypervisors: VMware, Hyper-V, and Nutanix’s own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability unique across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
Profiles/Reports

Business Continuity Best Practices for SMB

Virtualization’s biggest driver is big savings: slashing expenditures on servers, licenses, management, and energy. Another major benefit is the increased ease of disaster recovery and business continuity (DR/BC) in virtualized environments.

Note that disaster recovery and business continuity are closely aligned but not identical. We define disaster recovery as the process of restoring lost data, applications and systems following a profound data loss event, such as a natural disaster, a deliberate data breach or employee negligence. Business continuity takes DR a step further: BC’s goal is not only to recover the computing environment but to recover it swiftly and with zero data loss. This is where recovery point objectives (RPO) and recovery time objectives (RTO) enter the picture, with IT assigning differing RPO and RTO strategies according to application priority.
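The RPO/RTO distinction above can be made concrete with a small example. This is a generic illustration of the two metrics; the function names and timestamps are hypothetical and not tied to any vendor tool:

```python
# Illustrative sketch: RPO is bounded by the age of the newest recovery
# point at failure time (how much work is lost); RTO is the elapsed time
# from failure until service is restored (how long you are down).
from datetime import datetime, timedelta

def achieved_rpo(last_backup: datetime, failure: datetime) -> timedelta:
    """Data-loss window: work done after the last backup is lost."""
    return failure - last_backup

def achieved_rto(failure: datetime, restored: datetime) -> timedelta:
    """Downtime window: time until the service is usable again."""
    return restored - failure

last_backup = datetime(2016, 3, 1, 2, 0)    # nightly backup at 02:00
failure     = datetime(2016, 3, 1, 14, 30)  # outage at 14:30
restored    = datetime(2016, 3, 1, 18, 30)  # service back at 18:30

rpo = achieved_rpo(last_backup, failure)    # 12.5 hours of lost work
rto = achieved_rto(failure, restored)       # 4 hours of downtime
```

Tightening RPO means taking recovery points more often (e.g., replication or CDP instead of nightly backup); tightening RTO means restoring faster, which is why IT assigns different strategies per application priority.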

DR/BC can be difficult to do well in data centers with traditional physical servers, particularly in SMBs with limited IT budgets and generalist IT staff. Many of these servers are siloed, with direct-attached storage and individual data protection processes. Mirroring and replication have traditionally required one-to-one hardware correspondence and can be expensive, leading to a universal reliance on localized backup as data protection. In addition, small IT staffs do not always take the time to perfect their backup processes across disparate servers. Either they do not do it at all – rolling the dice and hoping there won’t be a disaster – or they slap backups on tape or USB drives and stick them on a shelf.

Virtualization can transform this environment into a much more efficient and protected data center. Backing up VMs from a handful of host servers is faster and less resource-intensive than backing up tens or hundreds of physical servers. And with scheduled replication, companies achieve faster backup and much improved recovery objectives.

However, many SMBs avoid virtualization. They cite factors such as cost, unfamiliarity with hypervisors, and added complexity. And they are not wrong: virtualization can introduce complexity, it can be expensive, and it can require familiarity with hypervisors. Virtualization cuts down on physical servers but is resource-intensive, especially as the virtualized environment grows. This means capital costs for high-performance CPUs and storage. SMBs may also have to deal with VM licensing and management costs, administrative burdens, and the challenge of protecting and replicating virtualized data on a strict budget.

For all its complexity and learning curve, is virtualization worth it for SMBs? Definitely. Its benefits far outweigh its problems, particularly its advantages for DR/BC. But for many SMBs, traditional virtualization is often too expensive and complex to warrant the effort. We believe that the answer is HyperConverged Infrastructure: HCI. Of HCI providers, Scale Computing is exceptionally attractive to the SMB. This paper will explain why. 

Publish date: 09/30/15
news

Compliance in the Cloud

Compliance in the cloud raises numerous questions: responsibility for protecting this data is shared among the business, the cloud backup vendor and the cloud owner, but it rests chiefly with the business.

  • Premiered: 03/15/16
  • Author: Jim Whalen
  • Published: Datamation
Topic(s): compliance, Cloud, HIPAA, Jim Whalen, SaaS, RTO, RPO, DR, Disaster Recovery, recovery assurance, data retention, DRaaS, VM, Virtual Machine, MSP, Security, data security, Data protection, SLA, VPN, Backup, Data Center, Cloud Storage, Unitrends
Profiles/Reports

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs, requiring high performance. Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By commoditizing, clustering and virtualizing storage, hyperconvergence has provided a highly available virtual platform to run applications on, allowing IT to shift its focus from managing individual hardware components to running business applications, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems that it didn’t address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage.  But, given the disparate use cases that are embedded in secondary storage and the massive amount of data that resides there, it’s an equally big problem to solve and it had to go further than just abstracting and scaling the underlying physical storage devices.  True Hyperconverged Secondary Storage also integrates the key secondary storage workflows - Data Protection, DR, Analytics and Test/Dev - as well as providing global deduplication for overall file storage efficiency, file indexing and searching services for more efficient storage management and hooks into the cloud for efficient archiving. 
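The global deduplication mentioned above can be sketched generically. This is an illustrative content-hash store, not Cohesity's implementation; fixed-size chunking and the `DedupStore` name are simplifying assumptions for brevity:

```python
# Illustrative sketch (not vendor code): global deduplication stores each
# unique chunk exactly once, keyed by a content hash, so identical data
# across backups, clones, and test/dev copies consumes space only once.
import hashlib

class DedupStore:
    def __init__(self, chunk_size: int = 4):
        self.chunk_size = chunk_size
        self.chunks = {}  # hash -> chunk bytes (stored once, globally)

    def write(self, data: bytes) -> list:
        """Split data into chunks; store only new chunks, return a recipe."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup: keep one copy
            recipe.append(digest)
        return recipe

    def read(self, recipe: list) -> bytes:
        """Reassemble the original data from its chunk hashes."""
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
r1 = store.write(b"ABCDABCD")  # two identical 4-byte chunks -> one stored
r2 = store.write(b"ABCDEFGH")  # first chunk already stored globally
# Only two unique chunks ("ABCD", "EFGH") occupy space in the store.
```

Real systems typically use variable-size (content-defined) chunking and persist the chunk index, but the space savings come from the same hash-and-share idea shown here.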

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16
Profiles/Reports

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANS

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions, and often cited specific Nutanix features for mission-critical production environments.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual assets. Hyperconvergence offers a key value proposition over 3-tier architecture:  instead of deploying, managing and integrating separate components – storage, servers, networking, data services, and hypervisors – these components are combined into a modular high performance system.

The customers we interviewed operate in very different industries. What they had in common were data centers undergoing fundamental changes, typically involving an opportunity to refresh some portion of their 3-tier infrastructure – which enabled them to evaluate hyperconvergence in supporting those changes. Customers interviewed found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be “Ease of Everything.” Nutanix works, and works consistently, with small and large clusters, in single and multiple datacenters, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to Nutanix platform’s enterprise capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus and is not an add-on for existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16
Profiles/Reports

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures that are built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments.  

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data-centers, just as these NetApp customers did.

Publish date: 03/31/16