Taneja Group | Hyperconverged Infrastructure

Items Tagged: hyperconverged infrastructure

news

The New Era of Secondary Storage HyperConvergence

The rise of hyperconverged infrastructure platforms has driven tremendous change in the primary storage space, perhaps even greater than the move from direct attached to networked storage in decades past.

  • Premiered: 10/22/15
  • Author: Jim Whalen
  • Published: Enterprise Storage Forum
Topic(s): Storage, secondary storage, Primary Storage, hyperconverged, hyperconverged infrastructure, hyperconvergence, DR, Disaster Recovery, SATA, RTO, RPO, Data protection, DP, Virtualization, Snapshots, VM, Virtual Machine, Disaster Recovery as a Service, DRaaS, DevOps, Hadoop, cluster, Actifio, Zerto, replication, Data Domain, HP, 3PAR, StoreServ, StoreOnce
news / Blog

Potential impact of Dell Buying EMC for VCE Customers – It may be a good time to try HyperConverged

Even with the improved ease of use of converged 3-tier products like EMC VCE, customers routinely cite complexity and cost as key reasons to switch to HCI.

Profiles/Reports

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in the space, and customers must now sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was coined. For many years, Pivot3’s vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations. The first is Pivot3’s extremely efficient and reliable erasure coding technology, called Scalar Erasure Coding. In contrast, many leading HCI implementations use replication-based redundancy techniques, which are heavy on storage capacity utilization; Scalar Erasure Coding can deliver significant capacity savings depending on the level of drive protection selected. The second innovation is Pivot3’s Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node. The third innovation is a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. The HCI software runs on the same CPU complex as the business applications, and this additional usage is referred to as the HCI overhead tax. The HCI overhead tax matters because many applications and infrastructure software products are licensed on a per-CPU basis. Even with today’s ever-increasing cores per CPU, keeping the HCI overhead tax low can still yield significant cost savings.
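To make the capacity argument concrete, here is a minimal sketch comparing the raw capacity required under replication-based protection versus a k+m erasure-coded layout. The protection levels shown are hypothetical examples for illustration only, not Pivot3's actual Scalar Erasure Coding parameters.

```python
# Illustrative capacity-overhead comparison: replication vs. erasure coding.
# The protection levels below are hypothetical examples, not vendor specs.

def raw_capacity_replication(usable_tb: float, copies: int) -> float:
    """Replication stores `copies` full copies of every block."""
    return usable_tb * copies

def raw_capacity_erasure(usable_tb: float, k: int, m: int) -> float:
    """k data fragments + m parity fragments: the overhead factor is (k + m) / k."""
    return usable_tb * (k + m) / k

if __name__ == "__main__":
    usable = 100.0  # TB of usable data to protect
    for copies in (2, 3):
        print(f"{copies}x replication: {raw_capacity_replication(usable, copies):.0f} TB raw")
    for k, m in ((5, 1), (10, 2), (12, 3)):
        print(f"erasure coding {k}+{m}: {raw_capacity_erasure(usable, k, m):.0f} TB raw "
              f"(survives {m} simultaneous drive failures)")
```

Under these assumptions, 2x replication needs 200 TB of raw capacity to protect 100 TB of usable data, while a 10+2 erasure code with two-drive protection needs only 120 TB, which illustrates why erasure coding is so much lighter on capacity.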

The Pivot3 family of HCI products, delivering high data efficiency with very low overhead, is an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example is a VDI implementation, where cost per seat determines success. Other examples are capacity-centric workloads such as big data or video surveillance, which could benefit from Pivot3's leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures, using data extracted from the alternative HCI vendors' reference architectures for VDI implementations. Using real-world examples, we demonstrate that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
news

Pivot3 Custom Plug-in Streamlines Control of vSTAC OS Through Popular VMware vCenter Server

Pivot3, a pioneer and innovator in the development of hyper-converged infrastructure (HCI), today announced the release of the vSTAC OS Management Client Integration Plug-In, designed to connect with VMware vCenter Server.

  • Premiered: 12/15/15
  • Author: Taneja Group
  • Published: Business Wire
Topic(s): Pivot3, VMware, VMware vCenter, vCenter, hyperconverged, HCI, hyperconverged infrastructure, HyperSAN, SAN, Storage, Virtualization, vSphere, scale-out, software-defined, Jeff Kato
Profiles/Reports

Edge HyperConvergence for ROBOs: Riverbed SteelFusion Brings IT All Together

Hyperconvergence is one of the hottest IT trends going into 2016. In a recent Taneja Group survey of senior enterprise IT folks, we found that over 25% of organizations are looking to adopt hyperconvergence as their primary data center architecture. Yet the centralized enterprise datacenter may just be the tip of the iceberg when it comes to the vast opportunity for hyperconverged solutions. Where there are remote or branch office (ROBO) requirements demanding localized computing, some form of hyperconvergence would seem the ideal way to address the scale, distribution, protection and remote management challenges involved in putting IT infrastructure “out there” remotely and in large numbers.

However, most of today’s popular hyperconverged appliances were designed as data center infrastructure, converging data center IT resources like servers, storage, virtualization and networking into Lego™-like IT building blocks. While these might at first seem ideal for ROBOs, since dropping in “whole” modular appliances precludes any number of onsite integration and maintenance challenges, ROBOs have different and often more challenging requirements than a datacenter. A ROBO does not often come with trained IT staff or a protected datacenter environment. ROBOs are, by definition, located remotely, across relatively unreliable networks. And they fan out to thousands (or tens of thousands) of locations.

Certainly any amount of convergence simplifies infrastructure, making it easier to deploy and maintain. But in general, popular hyperconvergence appliances haven’t been designed to be remotely managed en masse, don’t address unreliable networks, and converge storage locally and directly within themselves. Persisting data in the ROBO this way is a recipe for a myriad of ROBO data protection issues. In ROBO scenarios, the datacenter form of hyperconvergence is not significantly better than simple converged infrastructure (e.g. a pre-configured rack or blades in a box).

We feel Riverbed’s SteelFusion has brought full hyperconvergence benefits to the ROBO edge of the organization. Riverbed has married its world-class WAN optimization (WANO) technologies, virtualization, and remote storage “projection” to create what we might call “Edge Hyperconvergence.” We see the edge-hyperconverged SteelFusion as purposely designed for companies with any number of ROBOs that each require local IT processing.

Publish date: 12/17/15
Resources

Hyperconverged Infrastructure for Demanding Enterprise Workloads

Date: January 26, 2016 at 8:00 am PT / 11:00 am ET
Presenters: Jeff Kato, Taneja Group; Sachin Chheda, Nutanix

In 2015, IT organizations large and small found hyperconverged infrastructure built with the right technology foundation to be a capable replacement for traditional servers and standalone storage in the datacenter for demanding enterprise applications such as critical databases.

Join the experts at Taneja Group and Nutanix in this technical webinar covering the infrastructure requirements for today’s demanding enterprise applications, and learn how a virtualization-first methodology along with a software-based approach to storage can change the way enterprise applications are hosted and served.

The session will also cover the architectural options for hyperconverged infrastructure and walk through real-world implementations by IT organizations using the Nutanix Xtreme Computing Platform for demanding enterprise applications such as Oracle and SAP.

  • Premiered: 01/26/16
  • Location: OnDemand
  • Speaker(s): Jeff Kato, Taneja Group; Sachin Chheda, Nutanix
Topic(s): Nutanix, Jeff Kato, hyperconverged infrastructure, hyperconverged, hyperconvergence, Storage, Datacenter, Enterprise, Oracle, SAP
Profiles/Reports

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see an opinion on the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’s advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but that Nutanix hyperconvergence, especially with its web-scale architecture, is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments. 

Publish date: 01/14/16
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were considered the tier-1 apps. Examples of these include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that could handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, in which compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium sized companies and specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications?  In this paper we will take a closer look at Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V and their own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a unique capability across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
news / Blog

Taneja Group Predictions for 2016 – Arun Taneja, #3

We thought the bloodbath for traditional external storage array products would have subsided by now, but we believe it will continue unabated into 2016.

news

Evaluating Data Protection for Hyperconverged Infrastructure

Hyperconvergence is a still-evolving trend, and the number of vendors in the space is making the evaluation of hyperconverged infrastructure complex. One criterion to consider in any infrastructure review is data protection.

  • Premiered: 02/02/16
  • Author: Jim Whalen
  • Published: InfoStor
Topic(s): Jim Whalen, Storage, hyperconverged, hyperconvergence, hyperconverged infrastructure, Data protection, DP, Backup, replication, DR, Disaster Recovery, SimpliVity, Deduplication, Compression, WAN, WANO, WAN Optimization, VM, Virtual Machine, VM-centric, VM-centricity, Compute, Networking, Hypervisor, Virtualization, scale-out, IOPS, Pivot3, Gridstore, converged
Profiles/Reports

Scale Computing HC3: A Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware, and enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before: increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. 

For certain, much of this complexity exists between the individual physical infrastructures that IT must touch and the duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

Scale Computing, an early pioneer in HyperConverged solutions, has released multiple versions of its HC3 appliances, now running the 6th generation of Scale’s HyperCore operating system. Scale Computing continues to push the boundary of the simplicity, value and availability that SMB IT departments everywhere have come to rely on. HC3 integrates storage and virtualized compute within a scale-out building-block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources, any resource, becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service (our hands-on lab service) to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond the features found in traditional integrations of discrete compute and storage systems.

While we did touch upon whether Scale's architecture could scale performance as well as capacity, we focused our testing upon how the seamless integration of storage and compute within HC3 tackles key complexity challenges in the traditional virtual infrastructure.

As it turns out, HC3 is a far different system than the traditional compute and storage systems that we've looked at before. HC3's combination of compute and storage takes place within a scale-out paradigm, where adding more resources is simply a matter of adding additional nodes to a cluster. This immediately brings on more storage and compute resources, and makes adapting and growing the IT infrastructure a no-brainer exercise. On top of this adaptability, virtual machines (VMs) can run on any of the nodes, without any complex external networking. This delivers seamless utilization of all datacenter resources, in a dense and power efficient footprint, while significantly enhancing storage performance.

Meanwhile, within an HC3 cluster, these capabilities are all delivered on top of a uniquely robust system architecture that can tolerate any failure, from a disk to an entire cluster node, and guarantee a level of availability seldom seen by mid-sized customers. Moreover, that uniquely robust, clustered, scale-out architecture can also intermix different generations of nodes in a way that puts an end to painful upgrades by reducing them to simply decommissioning old nodes as new ones are introduced.
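To illustrate the scale-out building-block model in the abstract, here is a minimal sketch of the general concept; it is a hypothetical model, not Scale Computing's actual HC3 or HyperCore code. A cluster is a pool whose aggregate compute and storage grow with each node added, and old-generation nodes retire by simple decommissioning.

```python
# Hypothetical model of a scale-out hyperconverged cluster: each node
# contributes both compute and storage, and capacity grows by adding nodes.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_cores: int
    storage_tb: float

@dataclass
class Cluster:
    nodes: list[Node] = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        """Scaling out: a single operation adds compute *and* storage."""
        self.nodes.append(node)

    def decommission(self, name: str) -> None:
        """Rolling upgrades: retire old-generation nodes one at a time."""
        self.nodes = [n for n in self.nodes if n.name != name]

    @property
    def total_cores(self) -> int:
        return sum(n.cpu_cores for n in self.nodes)

    @property
    def total_storage_tb(self) -> float:
        return sum(n.storage_tb for n in self.nodes)

cluster = Cluster()
for i in range(3):
    cluster.add_node(Node(f"gen5-node{i}", cpu_cores=16, storage_tb=12.0))
cluster.add_node(Node("gen6-node0", cpu_cores=24, storage_tb=24.0))  # mixed generations
cluster.decommission("gen5-node0")  # upgrade by attrition, no forklift migration
print(cluster.total_cores, cluster.total_storage_tb)  # 56 cores, 48.0 TB
```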

HC3’s flexibility, ease of deployment and robustness stand out, and its management interface is the simplest and easiest to use that we have seen. This makes HC3 a disruptive game changer for SMB and SME businesses. HC3 stands to banish complex IT infrastructure deployment, permanently alter on-going operational costs, and take application availability to a new level. With those capabilities in focus, single bottom-line observations don’t do HC3 justice. In our assessment, HC3 may take as little as 1/10th the effort to set up and install as traditional infrastructure, 1/4th the effort to configure and deploy a virtual machine (VM) versus traditional infrastructure, and can banish the planning, performance troubleshooting, and reconfiguration exercises that can consume as much as 25-50% of an IT administrator’s time. HC3 is about delivering on all of these promises simultaneously, and, with the additional features we'll discuss, transforming the way SMB/SME IT is done.

Publish date: 09/30/15
Profiles/Reports

Business Continuity Best Practices for SMB

Virtualization’s biggest driver is big savings: slashing expenditures on servers, licenses, management, and energy. Another major benefit is the increased ease of disaster recovery and business continuity (DR/BC) in virtualized environments.

Note that disaster recovery and business continuity are closely aligned but not identical. We define disaster recovery as the process of restoring lost data, applications and systems following a profound data loss event, such as a natural disaster, a deliberate data breach or employee negligence. Business continuity takes DR a step further: BC’s goal is not only to recover the computing environment but to recover it swiftly and with zero data loss. This is where recovery point objectives (RPO) and recovery time objectives (RTO) enter the picture, with IT assigning differing RPO and RTO strategies according to application priority.
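To show how these objectives translate into concrete protection schedules, here is a minimal sketch; the application tiers and targets below are invented examples for illustration, not recommendations from this paper.

```python
# Hypothetical tiering of applications by RPO/RTO targets.
# The RPO bounds how much data you can afford to lose, so the interval
# between protection copies (replicas, snapshots, backups) must not exceed it.
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    app: str
    rpo_minutes: int  # maximum tolerable data loss
    rto_minutes: int  # maximum tolerable downtime

    def max_protection_interval(self) -> int:
        """Worst-case data loss equals the time since the last copy,
        so copies must be taken at least every rpo_minutes."""
        return self.rpo_minutes

policies = [
    ProtectionPolicy("ERP database", rpo_minutes=5, rto_minutes=15),      # near-sync replication
    ProtectionPolicy("File shares", rpo_minutes=60, rto_minutes=240),     # hourly snapshots
    ProtectionPolicy("Test/dev VMs", rpo_minutes=1440, rto_minutes=2880), # nightly backup
]

for p in policies:
    print(f"{p.app}: copy at least every {p.max_protection_interval()} min, "
          f"restore within {p.rto_minutes} min")
```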

DR/BC can be difficult to do well in data centers with traditional physical servers, particularly in SMBs with limited IT budgets and generalist IT staff. Many of these servers are siloed, with direct-attached storage and individual data protection processes. Mirroring and replication used to require one-to-one hardware correspondence and can be expensive, leading to a universal reliance on localized backup for data protection. In addition, small IT staffs do not always take the time to perfect their backup processes across disparate servers. Either they do not do it at all, rolling the dice and hoping there won’t be a disaster, or they slap backups on tape or USB drives and stick them on a shelf.

Virtualization can transform this environment into a much more efficient and protected data center. Backing up VMs from a handful of host servers is faster and less resource-intensive than backing up tens or hundreds of physical servers. And with scheduled replication, companies achieve faster backup and much improved recovery objectives.

However, many SMBs avoid virtualization. They cite factors such as cost, unfamiliarity with hypervisors, and added complexity. And they are not wrong: virtualization can introduce complexity, it can be expensive, and it does require familiarity with hypervisors. Virtualization cuts down on physical servers but is resource-intensive, especially as the virtualized environment grows. This means capital costs for high-performance CPUs and storage. SMBs may also have to deal with VM licensing and management costs, administrative burdens, and the challenge of protecting and replicating virtualized data on a strict budget.

For all its complexity and learning curve, is virtualization worth it for SMBs? Definitely. Its benefits far outweigh its problems, particularly its advantages for DR/BC. But for many SMBs, traditional virtualization is often too expensive and complex to warrant the effort. We believe that the answer is HyperConverged Infrastructure: HCI. Of HCI providers, Scale Computing is exceptionally attractive to the SMB. This paper will explain why. 

Publish date: 09/30/15
Profiles/Reports

Transforming the Data Center: SimpliVity Delivers Hyperconverged Platform with Native DP

Hyperconvergence has come a long way in the past five years. Growth rates are astronomical, and customers are replacing traditional three-layer configurations with hyperconverged solutions in record numbers. But not all hyperconverged solutions in the market are alike, and as the market matures, this fact is coming to light. Of course, all hyperconverged solutions tightly integrate compute and storage (that is par for the course), but beyond that the similarities end quickly.

One of the striking differences between SimpliVity’s hyperconverged infrastructure architecture and others is the tight integration of data protection functionality. The DNA for that is built in from the very start: SimpliVity hyperconverged infrastructure systems perform inline deduplication and compression of data at the time of data creation. Thereafter, data is kept in the “reduced” state throughout its lifecycle. This has serious positive implications for latency, performance, and bandwidth, but equally importantly, it transforms data protection and other secondary uses of data.
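As a rough illustration of how inline reduction works in general, here is a minimal sketch of content-hash deduplication plus compression at write time; it is a conceptual example, not SimpliVity's actual implementation or data layout. Each incoming block is hashed, stored compressed only if its content is new, and referenced by hash thereafter:

```python
# Minimal sketch of inline deduplication + compression at write time.
# Blocks are identified by content hash; duplicates add only a reference,
# and unique blocks stay compressed for the rest of their lifecycle.
import hashlib
import zlib

class InlineReducingStore:
    def __init__(self) -> None:
        self.blocks: dict[str, bytes] = {}   # content hash -> compressed payload
        self.refcount: dict[str, int] = {}

    def write(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.blocks:         # first time this content is seen
            self.blocks[digest] = zlib.compress(data)
        self.refcount[digest] = self.refcount.get(digest, 0) + 1
        return digest                         # callers keep the hash as a pointer

    def read(self, digest: str) -> bytes:
        return zlib.decompress(self.blocks[digest])

store = InlineReducingStore()
a = store.write(b"identical VM image block" * 256)
b = store.write(b"identical VM image block" * 256)  # dedup hit: no new storage used
assert a == b and len(store.blocks) == 1
```

Because downstream operations such as replication and backup can then move only these already-reduced blocks, secondary uses of data inherit the same savings, which is the transformation described above.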

At Taneja Group, we have been very aware of this differentiating feature of SimpliVity’s solution. So when we were asked to interview five SimpliVity customers to determine if they were getting tangible benefits (or not), we jumped at the opportunity.

This Field Report is about their experiences. We must state at the beginning that we focused primarily on their data protection experiences in this report. Hyperconvergence is all about simplicity and cost reduction. But SimpliVity’s hyperconverged infrastructure also eliminated another big headache: data protection. These customers may not have bought SimpliVity for data protection purposes, but the fact that they were essentially able to get rid of all their other data protection products was a very pleasant surprise for them. That was a big plus for these customers. To be sure, data protection is not simply backup and restore but also includes a number of other functions such as replication, DR, WAN optimization, and more. 

For a broader understanding of SimpliVity’s product capabilities, other Taneja Group write-ups are available. This one focuses on data protection. Read on for these five customers’ experiences.

Publish date: 02/01/16
news

Assimilate converged IT infrastructure into the data center

With the hype around converged and hyper-converged IT infrastructure, resistance seems futile. While your IT organization will likely be a mix of converged, dedicated and cloud-based resources, convergence is moving into new areas.

  • Premiered: 03/16/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): convergence, Converged Infrastructure, Datacenter, Storage, converged, IT convergence, converged IT, Cloud, Public Cloud, OPEX, VCE, vBlock, Virtualization, software-defined, hyperconverged infrastructure, HCI, Nutanix, SimpliVity, Data protection, DP, Microsoft, StorSimple, Hybrid Cloud, Archive, Backup, BC, Business Continuity, Disaster Recovery, DR, data lake
Profiles/Reports

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs, requiring high performance. Secondary storage is where everything else typically ends up, and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered and virtualized storage, hyperconvergence has provided a highly available virtual platform to run applications on, which has allowed IT to shift its focus from managing individual hardware components to running business applications, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems that it didn’t address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage. But given the disparate use cases embedded in secondary storage and the massive amount of data that resides there, it is an equally big problem to solve, and the solution had to go further than just abstracting and scaling the underlying physical storage devices. True Hyperconverged Secondary Storage also integrates the key secondary storage workflows (Data Protection, DR, Analytics and Test/Dev), as well as providing global deduplication for overall file storage efficiency, file indexing and searching services for more efficient storage management, and hooks into the cloud for efficient archiving.
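As a toy illustration of one of those services, the file indexing and search mentioned above, a platform can keep an inverted index over file metadata so that management queries span all secondary data at once. This is a hypothetical sketch of the concept, not any vendor's actual indexing engine.

```python
# Hypothetical inverted index over file metadata for secondary storage search.
# Real platforms index at vastly larger scale; this shows only the concept.
from collections import defaultdict

class FileIndex:
    def __init__(self) -> None:
        self.by_token: dict[str, set[str]] = defaultdict(set)

    def add(self, path: str, owner: str, workload: str) -> None:
        # Index path components and metadata fields as search tokens.
        for token in path.lower().split("/") + [owner.lower(), workload.lower()]:
            if token:
                self.by_token[token].add(path)

    def search(self, *tokens: str) -> set[str]:
        """Return files matching ALL of the given tokens."""
        postings = [self.by_token[t.lower()] for t in tokens]
        return set.intersection(*postings) if postings else set()

idx = FileIndex()
idx.add("/backups/sql/orders.bak", owner="dba", workload="protection")
idx.add("/testdev/sql/orders_clone.vmdk", owner="dev", workload="testdev")
print(idx.search("sql", "protection"))  # {'/backups/sql/orders.bak'}
```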

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16
Profiles/Reports

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANs

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions. In addition, specific Nutanix features for mission-critical production environments were often cited.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual assets. Hyperconvergence offers a key value proposition over 3-tier architecture: instead of deploying, managing and integrating separate components (storage, servers, networking, data services, and hypervisors), these components are combined into a modular, high-performance system.

The customers we interviewed operate in very different industries. What they had in common: they all maintained data centers undergoing fundamental changes, typically involving an opportunity to refresh some portion of their 3-tier infrastructure. This enabled an evaluation of hyperconvergence in supporting those changes. The customers interviewed found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be “Ease of Everything.” Nutanix works, and works consistently, with small and large clusters, in single and multiple datacenters, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to the Nutanix platform’s enterprise-capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus; it is not an add-on for existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16
news

5 Key Elements of Bimodal IT

These five technology trends make it possible for bimodal IT to bring new levels of speed, flexibility and innovation to launching IT services.

  • Premiered: 04/07/16
  • Author: Taneja Group
  • Published: BizTech Magazine
Topic(s): Big Data, mobile computing, IT infrastructure, Data Center, Datacenter, webscale, hyperconverged, hyperconverged infrastructure, Cloud, Cloud Computing, Flash, SSD, flash storage, flexibility, Amazon, Facebook, Google, Microsoft, Deduplication, Nutanix, EMC, SDDC, Software-Defined Data Center, software-defined, Optimization