Trusted Business Advisors, Expert Technology Analysts

Research Areas

Software Defined/Virtualized Infrastructure

Includes Software-Defined Infrastructure (compute, storage and networking), Virtual Infrastructure technologies (server virtualization, desktop virtualization, I/O virtualization), and the interplay between these technologies and traditional storage. Covers different types of Software-Defined Storage, such as Scale-out NAS, in depth.

Taneja Group has been at the forefront of assessing and characterizing virtualization and software-defined infrastructure technologies since they began to emerge in the early 2000s. Virtualization has caused one of the most disruptive technology shifts in data center infrastructure in the last 15 years. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” We focus on the interplay of server and client virtualization technologies with storage, and study the impact on performance, security and management of the IT infrastructure. Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization.

Free Reports

Taneja Group Emerging Market Report on Multi-Cloud Primary Storage (Abstract)

Infrastructure spending on public and private cloud is growing at double-digit rates, while spending on traditional, non-cloud, IT infrastructure continues to decline. Organizations of all shapes and sizes are adopting cloud storage for a wide variety of use cases, taking advantage of the scalability and agility that the cloud has to offer.

But cloud storage adoption can have some drawbacks. Recently, we have been hearing increased grumbling from customers that are concerned about losing the option to change cloud providers, especially as their monthly outlays for cloud services continue to rise.

In response, IT professionals are beginning to consider multi-cloud approaches to primary storage, to gain the scalability and agility benefits of the cloud without the penalty of lock-in. This is a fresh, emerging and innovative space, which promises to open up cloud storage to a range of new customers and use cases.

To gather data and develop insights regarding plans for public and multi-public cloud use, Taneja Group conducted two primary research studies in the summer of 2017. In each case, we surveyed 350+ IT decision makers and practitioners around the globe, representing a wide range of industries and business sizes, to understand their current and planned use cases and deployments of applications to the public cloud. We also surveyed a set of innovative storage vendors that are offering public and multi-cloud storage solutions.

The result is a Taneja Group report on an emerging set of storage products we call multi-cloud primary storage, which provide their data services across more than one cloud simultaneously. In this report, you’ll learn what end users are looking for in multi-cloud primary storage solutions, and how a diverse group of five cloud storage vendors are addressing the market need for a multi-cloud approach.
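
To make the idea of "data services across more than one cloud simultaneously" concrete, here is a minimal sketch of a volume that fans each write out to multiple cloud back ends behind a single namespace. The class and method names are hypothetical stand-ins for illustration only, not any vendor's actual API; real products replicate asynchronously and handle consistency and failure far more carefully.

```python
from abc import ABC, abstractmethod

class CloudBackend(ABC):
    """Stand-in for a single cloud provider's storage endpoint (hypothetical)."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBackend(CloudBackend):
    """Toy backend; a real deployment would wrap a provider SDK here."""
    def __init__(self, name: str):
        self.name = name
        self._objects: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data
    def get(self, key: str) -> bytes:
        return self._objects[key]

class MultiCloudVolume:
    """One logical namespace whose writes land on every configured cloud,
    so data and workloads can move between providers without a bulk re-copy."""
    def __init__(self, backends: list[CloudBackend]):
        self.backends = backends
    def write(self, key: str, data: bytes) -> None:
        for b in self.backends:      # synchronous fan-out for clarity;
            b.put(key, data)         # real products replicate asynchronously
    def read(self, key: str) -> bytes:
        for b in self.backends:      # serve the read from the first available copy
            try:
                return b.get(key)
            except KeyError:
                continue
        raise KeyError(key)

volume = MultiCloudVolume([InMemoryBackend("cloud-a"), InMemoryBackend("cloud-b")])
volume.write("vm-disk-001", b"block data")
assert volume.read("vm-disk-001") == b"block data"
```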

Publish date: 11/03/17
Report

Qumulo File Fabric extends high-performance file services to the cloud

The timing for Qumulo to extend its software-defined scalable file services to the cloud could not be better as public cloud utilization continues to grow at a phenomenal rate. Infrastructure spending on the public and private cloud is growing at double-digit rates while spending on traditional, non-cloud, IT infrastructure continues to decline and within a few short years will represent less than 50% of the entire infrastructure market. This trend is not surprising and has been widely predicted for several years. The surprising element now is how strong the momentum has become toward public cloud adoption, and the question is where the long-term equilibrium point will be between public clouds and on-premises infrastructure.

AWS was a pioneer in public cloud storage services when it introduced S3 (Simple Storage Service) over ten years ago. The approach of public cloud vendors has been to offer storage services at cut-rate pricing in what we call the “Hotel California” strategy – once they have your data, it can never leave. Recently, we have been hearing increased grumbling from customers that they are very concerned about losing the option to change infrastructure vendors and the resulting reduction in competition. In response, Taneja Group initiated multiple public and hybrid cloud research studies to gain insight into what storage services are needed across heterogeneous cloud infrastructures. What we found is that IT practitioners are not only concerned about data security in the cloud; they are concerned about vendor lock-in created by the lack of data mobility between on-premises and public cloud infrastructures. Another surprising finding is that IT practitioners predominantly want file services across clouds and that object storage such as AWS S3 cannot meet their future cloud storage needs. This is actually not that surprising, as our research showed that many applications that businesses want to move to the cloud (to benefit from a highly dynamic compute environment) still rely on high-performance file access.
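
The distinction between file and object access described above can be illustrated with a small sketch: applications written against POSIX files expect byte-range seeks and in-place updates, while an S3-style object store exposes whole-object GET and PUT, so a small modification generally means rewriting the entire object. The file path and the toy in-memory "object store" below are illustrative only.

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "simulation.dat")

# File semantics: an application can seek into the middle of a large file and
# update a few bytes in place -- the access pattern many workloads still rely on.
with open(path, "wb") as f:
    f.write(b"\x00" * 1024 * 1024)        # 1 MiB working file
with open(path, "r+b") as f:
    f.seek(512 * 1024)                    # jump to a byte offset
    f.write(b"updated-region")            # rewrite just that region in place

# Object semantics: an S3-style store deals in whole objects. To change a few
# bytes, you typically GET the whole object, modify it, and PUT it back
# (shown with a toy dict rather than a real SDK call):
object_store: dict[str, bytes] = {}
object_store["simulation.dat"] = b"\x00" * 1024 * 1024     # PUT whole object
blob = bytearray(object_store["simulation.dat"])           # GET whole object
blob[512 * 1024:512 * 1024 + 14] = b"updated-region"
object_store["simulation.dat"] = bytes(blob)               # PUT whole object again
```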

Enter Qumulo File Fabric (QF2). QF2 is a modern, highly scalable file storage system that runs in the data center and now in the public cloud. Unlike legacy scale-out NAS products, QF2 provides capacity for billions of files, closely matching the scale that previously could only be achieved with object storage solutions, but with the benefit of supporting file access protocols. Qumulo’s modern SDS, flash-first approach allows it to provide a very high-performance file storage system that can cover a wide variety of workloads. Its built-in, real-time analytics let administrators easily manage data no matter how large the footprint or where it is globally located. Continuous replication enables data to move where and when it’s required, depending on business need. Qumulo refers to this unmatched file scalability and performance as universal-scale file storage.

Qumulo, founded in 2012, is rapidly growing its market presence, and we recently validated its very high customer satisfaction and product capability through an extensive interview process with several customers. Qumulo recently extended its go-to-market ecosystem through a partnership with Hewlett Packard Enterprise (HPE). Now, with the launch of QF2 and support for AWS, we expect Qumulo to continue its rapid rise as a leading provider of file services with universal scale. The company is also well positioned to capture a significant share of the emerging multi-cloud storage market. We found that many companies still prefer file access, and there are plenty of reasons why scalable file storage will continue to grow and compete effectively against object-storage-centric architectures.

Publish date: 09/22/17
Report

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People input emails, word processing documents and spreadsheets. They created databases and recorded business transactions. Data was stored on tape, on workstation hard drives, and on networked storage over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes got larger and larger. Video and audio, digital imaging, and websites streaming entertainment content to millions of users meant no end to data growth. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people, but machines. Storing and processing machine data offers tremendous opportunities: seismic and weather sensors that may lead to meaningful disaster warnings; social network diagnostics that display hard evidence of terrorist activity; connected cars that could slash automotive fatalities; and research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store raw machine data and process it is not for the faint of heart. The best solution today is massively scale-out, general purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, linearly scales performance and capacity, and offers data-awareness and real-time analytics using extended metadata.
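
One way a scale-out file system can deliver "data-awareness and real-time analytics using extended metadata" at this scale is to keep aggregate statistics rolled up at every directory, so a capacity or file-count query reads a handful of ancestors instead of walking billions of files. The sketch below illustrates that general idea only; it is not a description of any vendor's internals.

```python
from dataclasses import dataclass, field

@dataclass
class DirNode:
    """Directory that keeps rolled-up stats so queries cost O(depth), not O(files)."""
    name: str
    children: dict = field(default_factory=dict)   # child name -> DirNode
    file_count: int = 0                            # aggregate over the whole subtree
    total_bytes: int = 0

class AggregatingTree:
    def __init__(self):
        self.root = DirNode("/")

    def add_file(self, path: str, size: int) -> None:
        """Insert a file and update aggregates on every ancestor directory."""
        parts = [p for p in path.split("/") if p]
        node = self.root
        ancestors = [node]
        for part in parts[:-1]:
            node = node.children.setdefault(part, DirNode(part))
            ancestors.append(node)
        for ancestor in ancestors:          # constant work per ancestor on ingest
            ancestor.file_count += 1
            ancestor.total_bytes += size

    def usage(self, path: str) -> tuple[int, int]:
        """Answer 'how many files / bytes under this directory?' without a scan."""
        node = self.root
        for part in (p for p in path.split("/") if p):
            node = node.children[part]
        return node.file_count, node.total_bytes

tree = AggregatingTree()
tree.add_file("/genomics/run42/sample_001.bam", 2_000_000_000)
tree.add_file("/genomics/run42/sample_002.bam", 1_500_000_000)
print(tree.usage("/genomics"))   # (2, 3500000000) with no per-file traversal
```

A production file system maintains aggregates like these transactionally and persistently, but the principle of answering analytics queries from pre-rolled-up metadata is the same.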

There are very few vendors in the world today who offer this solution. One of them is Qumulo. Qumulo’s mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from life sciences, media and entertainment, telco/cable/satellite, higher education and the automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits to Qumulo: massive scalability, high performance, data-awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
Free Reports

For Lowest TCO and Maximum Agility Choose the VMware Cloud Foundation Hybrid SDDC Platform

The race is on at full speed. What race? The race to bring public cloud agility and economics to a data center near you. Ever since the first integrated systems came onto the scene in 2010, vendors have been furiously engineering solutions to make on-premises infrastructure as cost-effective and as easy to use as the public cloud, while also providing the security, availability, and control that enterprises demand. Fundamentally, two main architectures have emerged in this race to modernize data centers and create a foundation for fully private and hybrid clouds. The first approach uses traditional compute, storage, and networking infrastructure components (traditional 3-tier) overlaid with varying degrees of virtualization and management software. The second, more recent approach is to build a fully virtualized data center using industry-standard servers and networking and then layer on top of that a full suite of software-based compute, network, and storage virtualization with management software. This approach is often termed a Software-Defined Data Center (SDDC).

The goal of an SDDC is to extend virtualization techniques across the entire data center to enable the abstraction, pooling, and automation of all data center resources. This would allow a business to dynamically reallocate any part of the infrastructure for various workload requirements without forklifting hardware or rewiring. VMware has taken SDDC to a new level with VMware Cloud Foundation.  VMware Cloud Foundation is the only unified SDDC platform for the hybrid cloud, which brings together VMware’s compute, storage, and network virtualization into a natively integrated stack that can be deployed on-premises or run as a service from the public cloud. It establishes a common cloud infrastructure foundation that gives customers a unified and consistent operational model across the private and public cloud.

VMware Cloud Foundation delivers an industry-leading SDDC cloud infrastructure by combining VMware’s highly scalable hyper-converged software (vSphere and VSAN) with the industry-leading network virtualization platform, NSX. VMware Cloud Foundation comes with unique lifecycle management capabilities (SDDC Manager) that eliminate the overhead of system operations of the cloud infrastructure stack by automating day 0 to day 2 processes such as bring-up, configuration, workload provisioning, and patching/upgrades. As a result, customers can significantly shorten application time to market, boost cloud admin productivity, reduce risk, and lower TCO. Customers consume VMware Cloud Foundation software in three ways: factory pre-loaded on integrated systems (VxRack 1000 SDDC); deployed on top of qualified Ready Nodes from HPE, QCT, Fujitsu, and others in the future, with qualified networking; and run as a service from the public cloud through IBM, vCAN partners, vCloud Air, and more to come.
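
The "day 0 to day 2" processes named above amount to an ordered, repeatable lifecycle pipeline: bring-up, configuration, workload provisioning, and patching/upgrades. The generic sketch below shows that pipeline shape in code; the stage names come from the paragraph above, but the functions are illustrative placeholders and are not the SDDC Manager API.

```python
from typing import Callable

LifecycleStep = Callable[[dict], None]

def bring_up(state: dict) -> None:
    state["racks_imaged"] = True                  # day 0: hardware imaged and validated

def configure(state: dict) -> None:
    state["network"] = "configured"               # host, storage and network settings applied

def provision_workload_domain(state: dict) -> None:
    state.setdefault("workload_domains", []).append("wd-01")   # hypothetical domain name

def patch_and_upgrade(state: dict) -> None:
    state["stack_version"] = state.get("stack_version", 0) + 1

# Ordered lifecycle stages as described in the text.
PIPELINE: list[tuple[str, LifecycleStep]] = [
    ("day 0: bring-up", bring_up),
    ("day 1: configuration", configure),
    ("day 1: workload provisioning", provision_workload_domain),
    ("day 2: patch/upgrade", patch_and_upgrade),
]

def run_lifecycle(state: dict) -> dict:
    """Run every stage in order; automation replaces manual runbooks for each stage."""
    for label, step in PIPELINE:
        step(state)
        print(f"completed {label}: {state}")
    return state

run_lifecycle({})
```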

In this comparative study, Taneja Group performed an in-depth analysis of VMware Cloud Foundation deployed on qualified Ready Nodes and qualified networking versus several traditional 3-tier converged infrastructure (CI) integrated systems and traditional 3-tier do-it-yourself (DIY) systems. We analyzed the capabilities and contrasted key functional differences driven by the various architectural approaches. In addition, we evaluated the key CapEx and OpEx TCO cost components.  Taneja Group configured each traditional 3-tier system's hardware capacity to be as close as possible to the VMware Cloud Foundation qualified hardware capacity.  Further, since none of the 3-tier systems had a fully integrated SDDC software stack, Taneja Group added the missing SDDC software, making it as close as possible to the VMware Cloud Foundation software stack.  The quantitative comparative results from the traditional 3-tier DIY and CI systems were averaged together into one scenario because the hardware and software components are very similar. 

Our analysis concluded that both types of solutions are more than capable of handling a variety of virtualized workload requirements. However, VMware Cloud Foundation has demonstrated a new level of ease-of-use due to its modular scale-out architecture, native integration, and automatic lifecycle management, giving it a strong value proposition when building out modern next generation data centers.  The following are the five key attributes that stood out during the analysis:

  • Native integration of the SDDC: VMware Cloud Foundation natively integrates vSphere, Virtual SAN (VSAN), and NSX network virtualization.
  • Simplest operational experience: VMware SDDC Manager automates the lifecycle of the SDDC stack, including bring-up, configuration, workload provisioning, and patches/upgrades.
  • Isolated workload domains: VMware Cloud Foundation provides unique administrator tools to flexibly provision subsets of the infrastructure for multi-tenant isolation and security.
  • Modular linear scalability: VMware Cloud Foundation employs an architecture in which capacity can be scaled by the HCI node, by the rack, or by multiple racks.
  • Seamless hybrid cloud: Deploy VMware Cloud Foundation for the private cloud and consume it as a service on public clouds to create a seamless hybrid cloud with a consistent operational experience.

Taneja Group’s in-depth analysis indicates that VMware Cloud Foundation will enable enterprises to achieve significant cost savings. Hyper-converged infrastructure of the kind used by many web-scale service providers, combined with natively integrated SDDC software, significantly reduced server, storage, and networking costs. This hardware cost saving more than offset the incremental SDDC software costs needed to deliver the storage and networking capability that is typically provided in hardware by best-of-breed traditional 3-tier components. In this study, we measured the upfront CapEx and 3 years of support costs for the hardware and software components needed to build out a VMware Cloud Foundation private cloud on qualified Ready Nodes. In addition, Taneja Group validated a model that demonstrates the labor and time OpEx savings that can be achieved through the use of integrated end-to-end automatic lifecycle management in the VMware SDDC Manager software.
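
The cost model described here, upfront CapEx plus three years of support plus the operational labor affected by lifecycle automation, reduces to simple arithmetic. The inputs below are placeholder figures chosen only to show the structure of the comparison; the study's actual configurations and prices are in the full report.

```python
def three_year_tco(capex: float, annual_support: float, annual_ops_labor: float) -> float:
    """Upfront hardware/software cost + 3 years of support + 3 years of admin labor."""
    return capex + 3 * annual_support + 3 * annual_ops_labor

# Placeholder inputs purely to illustrate the shape of the calculation.
traditional_3tier = three_year_tco(capex=1_000_000, annual_support=150_000, annual_ops_labor=200_000)
sddc_platform     = three_year_tco(capex=  600_000, annual_support=100_000, annual_ops_labor= 75_000)

savings = 1 - sddc_platform / traditional_3tier
print(f"3-year TCO: 3-tier=${traditional_3tier:,.0f}  SDDC=${sddc_platform:,.0f}  "
      f"savings={savings:.0%}")
```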


By investing in VMware Cloud Foundation, businesses can be assured that their data center infrastructure can be easily consumed, scaled, managed, upgraded and enhanced to provide the best private cloud at the lowest cost. Using a pre-engineered modular, scale-out approach to building at web-scale means infrastructure is added in hours, not days, and businesses can be assured that adding infrastructure scales linearly without complexity.  VMware Cloud Foundation is the only platform that provides a natively integrated unified SDDC platform for the hybrid cloud with end-to-end management and with the flexibility to provision a wide variety of workloads at the push of a button.

In summary, VMware Cloud Foundation enables at least five unparalleled capabilities, generates a 45% lower 3-year TCO than the alternative traditional 3-tier approaches, and delivers a tremendous value proposition when building out a modern hybrid SDDC platform. Before blindly going down the traditional infrastructure approach, companies should take a close look at VMware Cloud Foundation, a unified SDDC platform for the hybrid cloud.

Publish date: 10/17/16
Report

The Modern Data Center: Why Nutanix Customers Are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures that are built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments.  

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix’s benefits across the board, these directors can confidently adopt Nutanix to transform their data centers, just as these NetApp customers did.

Publish date: 03/31/16
Report

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANs

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions. In addition, specific Nutanix features for mission-critical production environments were often cited.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments. Hyperconvergence offers a key value proposition over 3-tier architecture: instead of deploying, managing and integrating separate components – storage, servers, networking, data services, and hypervisors – these components are combined into a modular, high-performance system.
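
Because each hyperconverged node contributes compute and storage together, growing the cluster is a matter of adding nodes rather than re-architecting a separate SAN. The sketch below uses hypothetical node sizing only to show why capacity and performance scale linearly with node count.

```python
from dataclasses import dataclass

@dataclass
class HCINode:
    """A single hyperconverged node: compute and storage arrive together
    (hypothetical sizing for illustration)."""
    cpu_cores: int = 32
    usable_tb: float = 20.0

def cluster_capacity(node_count: int, node: HCINode = HCINode()) -> tuple[int, float]:
    """Capacity and compute grow linearly with node count."""
    return node_count * node.cpu_cores, node_count * node.usable_tb

for nodes in (4, 8, 16):
    cores, tb = cluster_capacity(nodes)
    print(f"{nodes:>2} nodes -> {cores} cores, {tb:.0f} TB usable")
```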

The customers we interviewed operate in very different industries. What they had in common was data centers undergoing fundamental changes, typically involving an opportunity to refresh some portion of their 3-tier infrastructure. This enabled them to evaluate hyperconvergence in supporting those changes. The customers found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be “Ease of Everything.” Nutanix works, and works consistently, with small and large clusters, in single and multiple datacenters, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to the Nutanix platform’s enterprise-capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus and is not an add-on to existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16