
Research Areas

Data Center Systems

Includes HyperConverged, Converged, Disaggregated, and Legacy Infrastructure.

This category focuses on modern, on-premises infrastructure-based architectural approaches at the data center level. All aspects of the necessary infrastructure are included, such as network, compute, and storage. Taneja Group treats these systems as a complete solution for a particular workload, whether it be general-purpose IaaS or vertical solutions targeted at specific use cases such as workload consolidation or applications such as SAP. We regularly compare and contrast the various architectural approaches that IT buyers are considering, evaluate their strengths and weaknesses, and discuss which approaches are likely to work best for specific workloads and use cases. We are always looking for shifts in industry thinking or technology adoption that might lead to an evolution of existing data center architectures, and we engage with startups and large vendors alike to understand and characterize newly emerging approaches. Where possible, our reports and opinions are backed by primary research, including direct conversations with different classes of IT decision makers and influencers.

Report

Transforming the Data Center: SimpliVity Delivers Hyperconverged Platform with Native DP

Hyperconvergence has come a long way in the past five years. Growth rates are astronomical, and customers are replacing traditional three-layer configurations with hyperconverged solutions in record numbers. But not all hyperconverged solutions in the market are alike, a fact that is coming to light as the market matures. Of course, all hyperconverged solutions tightly integrate compute and storage (that is par for the course), but beyond that the similarities end quickly.

One of the striking differences between SimpliVity’s hyperconverged infrastructure architecture and others is the tight integration of data protection functionality. The DNA for that is built in from the very start: SimpliVity hyperconverged infrastructure systems perform inline deduplication and compression of data at the time of data creation. Thereafter, data is kept in that “reduced” state throughout its lifecycle. This has serious positive implications for latency, performance, and bandwidth, but equally importantly, it transforms data protection and other secondary uses of data.
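
To illustrate the general mechanism, here is a minimal sketch of inline deduplication and compression at write time, in a generic content-addressed style. The hash function, compression codec, and in-memory block store are illustrative assumptions on our part, not SimpliVity's actual implementation.

```python
# A minimal sketch of inline dedupe + compression at write time (illustrative
# only; not SimpliVity's actual design). Each block is fingerprinted as it is
# created; only previously unseen data is compressed and stored, and data
# stays in its "reduced" state until read.
import hashlib
import zlib

block_store: dict[str, bytes] = {}  # fingerprint -> compressed block

def write_block(data: bytes) -> str:
    """Dedupe and compress a block as it is written; return its fingerprint."""
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in block_store:      # only new data consumes capacity
        block_store[fingerprint] = zlib.compress(data)
    return fingerprint                      # metadata references the hash

def read_block(fingerprint: str) -> bytes:
    """Blocks stay reduced at rest; expand only on read."""
    return zlib.decompress(block_store[fingerprint])
```

Because copies of a block reduce to a single stored fingerprint, secondary operations such as backup and replication can move references and compressed data rather than full copies.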

At Taneja Group, we have long been aware of this differentiating feature of SimpliVity’s solution. So when we were asked to interview five SimpliVity customers to determine whether they were getting tangible benefits, we jumped at the opportunity.

This Field Report is about their experiences. We should state up front that it focuses primarily on their data protection experiences. Hyperconvergence is all about simplicity and cost reduction, but SimpliVity’s hyperconverged infrastructure also eliminated another big headache: data protection. These customers may not have bought SimpliVity for data protection purposes, but being able to retire essentially all of their other data protection products was a very pleasant surprise and a big plus for them. To be sure, data protection is not simply backup and restore; it also includes functions such as replication, DR, and WAN optimization.

For a broader understanding of SimpliVity’s product capabilities, other Taneja Group write-ups are available. This one focuses on data protection. Read on for these five customers’ experiences.

Publish date: 02/01/16
Report

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail with these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were already embedded in their IT relationships and sales channels. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see our opinion of the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’s advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence, especially with its web-scale architecture, is a big improvement over VCE.

This Field Report compares customer experiences with Nutanix hyperconverged, web-scale infrastructure and with VCE Vblock in real-world environments.

Publish date: 01/14/16
Report

Edge HyperConvergence for ROBOs: Riverbed SteelFusion Brings IT All Together

Hyperconvergence is one of the hottest IT trends going into 2016. In a recent Taneja Group survey of senior enterprise IT professionals, we found that over 25% of organizations are looking to adopt hyperconvergence as their primary data center architecture. Yet the centralized enterprise data center may be just the tip of the iceberg when it comes to the vast opportunity for hyperconverged solutions. Where remote or branch office (ROBO) requirements demand localized computing, some form of hyperconvergence would seem the ideal way to address the scale, distribution, protection, and remote management challenges involved in putting IT infrastructure “out there” remotely and in large numbers.

However, most of today’s popular hyperconverged appliances were designed as data center infrastructure, converging data center IT resources like servers, storage, virtualization, and networking into Lego™-like IT building blocks. While these might at first seem ideal for ROBOs (the promise of dropping in “whole” modular appliances avoids any number of onsite integration and maintenance challenges), ROBOs have different and often more challenging requirements than a data center. A ROBO often comes with neither trained IT staff nor a protected data center environment. ROBOs are, by definition, located remotely across relatively unreliable networks. And they fan out to thousands (or tens of thousands) of locations.

Certainly any amount of convergence simplifies infrastructure, making it easier to deploy and maintain. But in general, popular hyperconvergence appliances haven’t been designed to be remotely managed en masse, don’t address unreliable networks, and converge storage locally and directly within themselves. Persisting data in the ROBO is a recipe for a myriad of data protection issues. In ROBO scenarios, the data center form of hyperconvergence is not significantly better than simple converged infrastructure (e.g., a pre-configured rack or blades in a box).

We feel Riverbed’s SteelFusion has brought full hyperconvergence benefits to the ROBO edge of the organization. Riverbed has married its world-class WAN optimization (WANO) technologies, virtualization, and remote storage “projection” to create what we might call “Edge Hyperconvergence.” We see the edge hyperconverged SteelFusion as purposely designed for companies with any number of ROBOs that each require local IT processing.

Publish date: 12/17/15
Profile

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in this space, and customers must now sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was in use. For many years, Pivot3’s vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations.

The first is Pivot3’s extremely efficient and reliable erasure coding technology, called Scalar Erasure Coding. By contrast, many leading HCI implementations use replication-based redundancy techniques, which are heavy on storage capacity utilization. Scalar Erasure Coding can deliver significant capacity savings depending on the level of drive protection selected.

The second innovation is Pivot3’s Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node.

The third innovation is a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. The HCI software runs on the same CPU complex as business applications, and this additional usage is referred to as the HCI overhead tax. The overhead tax matters because many application and infrastructure software licenses are priced per CPU; even with today’s ever-increasing core counts per CPU, keeping the HCI overhead tax low can still yield significant cost savings.
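
To make the capacity argument concrete, the sketch below compares the usable fraction of raw capacity under N-way replication and a generic k+m erasure code. The specific geometries shown (2x, 3x, 9+3) are illustrative assumptions, not Pivot3’s published Scalar Erasure Coding parameters.

```python
# Illustrative raw-capacity math: replication vs. erasure coding.
# Parameters are generic examples, not Pivot3's actual geometry.

def replication_efficiency(copies: int) -> float:
    """Usable fraction of raw capacity with N-way replication."""
    return 1.0 / copies

def erasure_coding_efficiency(data_strips: int, parity_strips: int) -> float:
    """Usable fraction of raw capacity with a k+m erasure code."""
    return data_strips / (data_strips + parity_strips)

if __name__ == "__main__":
    # Two-way replication (a common HCI default): 50% of raw capacity usable.
    print(f"2x replication: {replication_efficiency(2):.0%} usable")
    # Three-way replication: only 33% usable.
    print(f"3x replication: {replication_efficiency(3):.0%} usable")
    # A hypothetical 9+3 erasure code tolerates three drive failures
    # (like 4x replication would) yet leaves 75% of raw capacity usable.
    print(f"9+3 erasure code: {erasure_coding_efficiency(9, 3):.0%} usable")
```

The design trade-off is that erasure coding spends CPU on parity computation to buy back capacity, which is why a low HCI overhead tax matters alongside it.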

The Pivot3 family of HCI products, delivering high data efficiency with very low overhead, is an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example is a VDI implementation, where cost per seat determines success. Other examples are capacity-centric workloads, such as big data or video surveillance, that could benefit from a Pivot3 HCI approach with leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures, using data extracted from the alternative HCI vendors’ reference architectures for VDI implementations. Using real-world examples, we demonstrate that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
Free Reports

Multiplying the Value of All Existing IT Solutions

Decades of constantly advancing computing solutions have changed the world in tremendous ways, but interestingly, the IT folks running the show have long been stuck with only piecemeal solutions for managing and optimizing all that blazing computing power. Sometimes it seems like IT is a pit crew servicing a modern racing car with nothing but axes and hammers – highly skilled but hampered by their legacy tools.

While that may be a slight exaggeration, there is a serious lack of interoperability, or of opportunity to create joint insight, between the highly varied perspectives that individual IT tools produce (even if each is useful for its own purpose). There has simply never been a widely adopted standard for creating, storing, or sharing system management data, much less a cross-vendor way to holistically merge heterogeneously collected management data, even for the benefit of harried and often frustrated IT owners who might run dozens or more differently sourced system management solutions. That is, until now.

OpsDataStore has brought the IT management game to a new level with an easy-to-deploy, centralized, intelligent, and big-data-enabled management data “service.” It readily ingests the lowest-level, fastest-streaming management data from a plethora of tools (several supported at GA, and easily extended to any data source), automatically and intelligently relates data from disparate sources into a single unified “agile” model, directly provides fundamental visualization and analysis, and can then serve that unified, related data back out to enlightened and newly comprehensive downstream management workflows. OpsDataStore drops in and serves as the new systems management “nexus” between formerly disparate vendor and domain management solutions.
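
As a rough illustration of what such a unified model can look like, the sketch below normalizes samples from two hypothetical tools into one common record shape so they can be related and queried together. The field names and mapping functions are our own assumptions, not OpsDataStore’s actual schema or API.

```python
# A minimal sketch (not OpsDataStore's actual schema) of normalizing
# management data from heterogeneous tools into one unified record shape.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any

@dataclass
class UnifiedMetric:
    source: str         # which tool emitted the sample
    entity: str         # managed object, e.g. "vm-042" or "lun-7"
    metric: str         # normalized metric name, e.g. "cpu.utilization"
    value: float
    timestamp: datetime

def from_hypervisor_sample(raw: dict[str, Any]) -> UnifiedMetric:
    """Map one hypothetical hypervisor-tool sample into the unified model."""
    return UnifiedMetric(
        source="hypervisor",
        entity=raw["vm"],
        metric="cpu.utilization",
        value=raw["cpuUsagePct"],
        timestamp=datetime.fromtimestamp(raw["epoch"], tz=timezone.utc),
    )

def from_storage_sample(raw: dict[str, Any]) -> UnifiedMetric:
    """Map one hypothetical storage-tool sample into the same model."""
    return UnifiedMetric(
        source="array",
        entity=raw["lun"],
        metric="io.latency_ms",
        value=raw["latMs"],
        timestamp=datetime.fromtimestamp(raw["ts"], tz=timezone.utc),
    )
```

Once everything lands in one shape, cross-tool correlation (say, VM CPU spikes against array latency) becomes a simple query rather than a scripting project.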

If you have ever worked in IT, you’ve no doubt written scripts, fiddled with logfiles, created massive spreadsheets, or otherwise attempted to stitch together some larger coherent picture by marrying and merging data from two (or 18) different management data sources. The more sources you have, the more the problem (or opportunity) grows non-linearly. OpsDataStore promises to fill this gap completely, enabling IT to automatically multiply the value of its existing management solutions.

Publish date: 12/03/15
Profile

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in virtualized environments. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The applications that were last to be deployed into virtualized environments were considered the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that could handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, where compute, storage, and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturing, are these products ready to tackle enterprise tier-1 applications? In this paper we take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up against tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept, the company plans to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V, and Nutanix’s own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability unique across converged systems and HCI alike. This Solution Profile focuses on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15