
Research Areas


Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.

All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.

Free Reports

For Lowest TCO and Maximum Agility Choose the VMware Cloud Foundation Hybrid SDDC Platform

The race is on at full speed. What race? The race to bring public cloud agility and economics to a data center near you. Ever since the first integrated systems came onto the scene in 2010, vendors have been furiously engineering solutions to make on-premises infrastructure as cost effective and as easy to use as the public cloud, while also providing the security, availability, and control that enterprises demand. Fundamentally, two main architectures have evolved in the race to modernize data centers and create a foundation for fully private and hybrid clouds. The first approach overlays traditional compute, storage, and networking infrastructure components (traditional 3-tier) with varying degrees of virtualization and management software. The second, more recent approach builds a fully virtualized data center on industry-standard servers and networking, then layers on a full suite of software-based compute, network, and storage virtualization with management software. This approach is often termed a Software-Defined Data Center (SDDC).

The goal of an SDDC is to extend virtualization techniques across the entire data center to enable the abstraction, pooling, and automation of all data center resources. This would allow a business to dynamically reallocate any part of the infrastructure for various workload requirements without forklifting hardware or rewiring. VMware has taken SDDC to a new level with VMware Cloud Foundation.  VMware Cloud Foundation is the only unified SDDC platform for the hybrid cloud, which brings together VMware’s compute, storage, and network virtualization into a natively integrated stack that can be deployed on-premises or run as a service from the public cloud. It establishes a common cloud infrastructure foundation that gives customers a unified and consistent operational model across the private and public cloud.

VMware Cloud Foundation delivers an industry-leading SDDC cloud infrastructure by combining VMware’s highly scalable hyper-converged software (vSphere and VSAN) with the industry-leading network virtualization platform, NSX. VMware Cloud Foundation comes with unique lifecycle management capabilities (SDDC Manager) that eliminate the system-operations overhead of the cloud infrastructure stack by automating day 0 to day 2 processes such as bring-up, configuration, workload provisioning, and patching/upgrades. As a result, customers can significantly shorten application time to market, boost cloud admin productivity, reduce risk, and lower TCO. Customers consume VMware Cloud Foundation software in three ways: factory pre-loaded on integrated systems (VxRack 1000 SDDC); deployed on top of qualified Ready Nodes from HPE, QCT, Fujitsu, and others in the future, with qualified networking; and run as a service from the public cloud through IBM, vCAN partners, vCloud Air, and more to come.

In this comparative study, Taneja Group performed an in-depth analysis of VMware Cloud Foundation deployed on qualified Ready Nodes and qualified networking versus several traditional 3-tier converged infrastructure (CI) integrated systems and traditional 3-tier do-it-yourself (DIY) systems. We analyzed the capabilities and contrasted key functional differences driven by the various architectural approaches. In addition, we evaluated the key CapEx and OpEx TCO cost components.  Taneja Group configured each traditional 3-tier system's hardware capacity to be as close as possible to the VMware Cloud Foundation qualified hardware capacity.  Further, since none of the 3-tier systems had a fully integrated SDDC software stack, Taneja Group added the missing SDDC software, making it as close as possible to the VMware Cloud Foundation software stack.  The quantitative comparative results from the traditional 3-tier DIY and CI systems were averaged together into one scenario because the hardware and software components are very similar. 

Our analysis concluded that both types of solutions are more than capable of handling a variety of virtualized workload requirements. However, VMware Cloud Foundation has demonstrated a new level of ease-of-use due to its modular scale-out architecture, native integration, and automatic lifecycle management, giving it a strong value proposition when building out modern next generation data centers.  The following are the five key attributes that stood out during the analysis:

  • Native Integration of the SDDC:  VMware Cloud Foundation natively integrates vSphere, Virtual SAN (VSAN), and NSX network virtualization.
  • Simplest operational experience: VMware SDDC Manager automates the life-cycle of the SDDC stack including bring up, configuration, workload provisioning, and patches/upgrades.
  • Isolated workload domains: VMware Cloud Foundation provides unique administrator tools to flexibly provision subsets of the infrastructure for multi-tenant isolation and security.
  • Modular linear scalability: VMware Cloud Foundation employs an architecture in which capacity can be scaled by the HCI node, by the rack, or by multiple racks. 
  • Seamless Hybrid Cloud: Deploy VMware Cloud Foundation for private cloud and consume on public clouds to create a seamless hybrid cloud with a consistent operational experience.

Taneja Group’s in-depth analysis indicates that VMware Cloud Foundation will enable enterprises to achieve significant cost savings. Hyper-converged infrastructure, used by many web-scale service providers, with natively integrated SDDC software significantly reduced server, storage, and networking costs. This hardware cost saving more than offset the incremental SDDC software costs needed to deliver the storage and networking capability that traditional best-of-breed 3-tier components typically provide in hardware. In this study, we measured the upfront CapEx and 3 years of support costs for the hardware and software components needed to build out a VMware Cloud Foundation private cloud on qualified Ready Nodes. In addition, Taneja Group validated a model that demonstrates the labor and time OpEx savings that can be achieved through the use of integrated end-to-end automatic lifecycle management in the VMware SDDC Manager software.
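
As a rough illustration of how a 3-year TCO comparison like this is framed, the sketch below sums upfront hardware and software CapEx with three years of support and operational labor for each architecture. All dollar figures are hypothetical placeholders chosen for the example, not inputs or outputs from the Taneja Group study.

```python
# Hypothetical 3-year TCO comparison sketch; all dollar figures are
# illustrative placeholders, not values from the study itself.

def three_year_tco(hw_capex, sw_capex, annual_support, annual_ops_labor, years=3):
    """Upfront CapEx plus recurring support and operational labor over the term."""
    return hw_capex + sw_capex + years * (annual_support + annual_ops_labor)

# Traditional 3-tier build (servers + SAN storage + networking + added SDDC software)
tco_3tier = three_year_tco(hw_capex=1_500_000, sw_capex=600_000,
                           annual_support=250_000, annual_ops_labor=300_000)

# Hyper-converged build on qualified Ready Nodes with integrated SDDC software
tco_hci = three_year_tco(hw_capex=900_000, sw_capex=750_000,
                         annual_support=180_000, annual_ops_labor=150_000)

savings = 1 - tco_hci / tco_3tier
print(f"3-tier:  ${tco_3tier:,}")
print(f"HCI:     ${tco_hci:,}")
print(f"Savings: {savings:.0%}")  # placeholder inputs show the mechanics only;
                                  # the study reports ~45% with its measured inputs
```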


By investing in VMware Cloud Foundation, businesses can be assured that their data center infrastructure can be easily consumed, scaled, managed, upgraded and enhanced to provide the best private cloud at the lowest cost. Using a pre-engineered modular, scale-out approach to building at web-scale means infrastructure is added in hours, not days, and businesses can be assured that adding infrastructure scales linearly without complexity.  VMware Cloud Foundation is the only platform that provides a natively integrated unified SDDC platform for the hybrid cloud with end-to-end management and with the flexibility to provision a wide variety of workloads at the push of a button.

In summary, VMware Cloud Foundation enables at least five unparalleled capabilities, generates a 45% lower 3-year TCO than the alternative traditional 3-tier approaches, and delivers a tremendous value proposition when building out a modern hybrid SDDC platform. Before defaulting to the traditional infrastructure approach, companies should take a close look at VMware Cloud Foundation, a unified SDDC platform for the hybrid cloud.

Publish date: 10/17/16

5 9’s Availability in a Lower Cost Dell SC4020 Product? Yes, Really!

Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.

Even though the SC4020 is a lower cost member of the SC family, it meets the 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability, but through space-saving design, such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000's 6U), the SC4020 still achieves 5 9s using the same strict test measurement criteria.
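
For context, "five 9s" (99.999%) availability allows only about five minutes of unplanned downtime per year, and fleet availability is typically derived from aggregate uptime and downtime across all deployed systems. The sketch below shows that arithmetic; the fleet figures are hypothetical and do not represent Dell's field data or exact methodology.

```python
# What "five 9s" permits, and how availability is derived from field data.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def allowed_downtime_minutes(availability):
    """Unplanned downtime budget per year for a given availability target."""
    return (1 - availability) * MINUTES_PER_YEAR

print(f"99.999% allows {allowed_downtime_minutes(0.99999):.2f} min/year of downtime")
# -> roughly 5.26 minutes per year

# Fleet-wide availability from aggregated field data (illustrative figures only).
total_system_hours = 10_000 * 8_766   # e.g. 10,000 arrays each running a full year
total_downtime_hours = 800            # hypothetical aggregate unplanned downtime
availability = 1 - total_downtime_hours / total_system_hours
print(f"Measured fleet availability: {availability:.6f}")  # ~0.999991, i.e. five 9s
```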

Frankly, many vendors choose not to subject their lower cost models to 5 9s testing. They may not have put many development dollars into the lower cost product, in an effort to hold down cost and maintain profitability at a lower price point.

Dell didn’t do it this way with the SC4020. Instead of watering it down by stripping features, they architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable, yet retains the SC Series enterprise features: high availability and reliability, performance, and centralized management not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.

Publish date: 08/31/16

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydrating, which impacts performance and capacity between the data center, ROBO, DR sites and the cloud. Dedupe must expand its feature set in order to serve next generation backup platforms.

A few vendors have introduced new dedupe technologies but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage system-specific dedupe means that data must rehydrate whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way to solve that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE provides customers deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and source and target machines without the need to rehydrate.
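
To make the rehydration point concrete, the sketch below shows generic content-hash deduplication (an illustrative model, not the StoreOnce algorithm): when two nodes share the same chunking and hashing scheme, a backup can move between them as a recipe of chunk references plus only the chunks the target is missing, rather than being rehydrated into a full data stream.

```python
# Minimal, generic content-hash dedupe sketch (illustrative only; not the
# StoreOnce implementation). Fixed-size chunks keyed by SHA-256.

import hashlib

CHUNK_SIZE = 4096

class DedupeStore:
    def __init__(self):
        self.chunks = {}  # hash -> raw chunk bytes, stored once per unique chunk

    def ingest(self, data: bytes) -> list[str]:
        """Store unique chunks and return the recipe (ordered list of chunk hashes)."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)
            recipe.append(digest)
        return recipe

    def missing(self, recipe: list[str]) -> set[str]:
        """Which chunks in a recipe does this node not yet hold?"""
        return {h for h in recipe if h not in self.chunks}

    def restore(self, recipe: list[str]) -> bytes:
        return b"".join(self.chunks[h] for h in recipe)

# Replicating between two nodes that share the same dedupe scheme:
source, target = DedupeStore(), DedupeStore()
backup = b"example data " * 10_000
recipe = source.ingest(backup)
for h in target.missing(recipe):          # only unseen chunks cross the wire
    target.chunks[h] = source.chunks[h]
assert target.restore(recipe) == backup   # no rehydration of the full stream
```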

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16

High Capacity SSDs are Driving the Shift to the All Flash Data Center

All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they’re expected to approach 50% of total revenue by the end of 2016, roughly a 60% CAGR. This isn’t surprising, really. Even though they’ve historically cost more on a $/GB basis (a gap that is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.

The most obvious advantage that SSDs have over HDDs is performance. With no moving parts to slow them down, they can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has allowed storage vendors to implement storage capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.
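
One way to see why the cost gap narrows faster than raw flash pricing suggests: inline data reduction lowers the effective cost per usable gigabyte. The prices and reduction ratios below are hypothetical, purely to show the arithmetic.

```python
# Effective $/GB after data reduction (hypothetical figures for illustration).

def effective_cost_per_gb(raw_cost_per_gb, data_reduction_ratio):
    """Cost per usable GB once inline dedupe/compression is applied."""
    return raw_cost_per_gb / data_reduction_ratio

ssd_raw, hdd_raw = 0.50, 0.10  # hypothetical raw $/GB for flash vs. disk
ssd_effective = effective_cost_per_gb(ssd_raw, data_reduction_ratio=4.0)
hdd_effective = effective_cost_per_gb(hdd_raw, data_reduction_ratio=1.0)  # little or no inline reduction

print(f"SSD effective: ${ssd_effective:.3f}/GB")  # $0.125/GB
print(f"HDD effective: ${hdd_effective:.3f}/GB")  # $0.100/GB
```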

While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.

When looking at the total cost envelope, and factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now a new generation of high capacity SSDs is coming, and it’s poised to accelerate the AFA takeover. We believe the Flash revolution in storage that started in 2011 will outpace even the most optimistic forecasts in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let’s take a look at how and why.

Publish date: 06/10/16

Hybrid Storage Accelerates IT Cloud Transformation: Customers find Microsoft Azure StorSimple

After conducting a number of in-depth field interviews with real world Microsoft Azure StorSimple users, we’ve discovered that the real StorSimple story is all about helping people transition smoothly from on-premises storage to an on-premises/cloud hybrid model. From there, it helps both IT and the business accelerate broader adoption of cloud-centric hybrid IT architecture. StorSimple not only simplifies on-premises storage challenges with fully integrated automated cloud-tiering and data protection (providing elastic capacity and cloud burstability), but also optimizes distributed file sharing and application storage (with cloud-based DR, centralized management, and extensibility).

However, it’s easy to talk about features: what a product does and how it does it. These are important things to know, and we’ll highlight several key capabilities in this report. But the real proof of the pudding is this: what do actual customers say? What are their challenges, their hopes, their needs? And how did their storage decisions serve those needs?

To answer these questions, we took an in-depth look at StorSimple through a customer lens. Real-life enterprise customers told us about their original journeys to StorSimple, and how Microsoft is helping them move on more fully to the cloud. Ultimately, we noted five highly valued advantages of StorSimple: native data protection, disaster recovery, deployment and management simplicity across multiple locations, a high return on investment, and a dynamic storage environment that unifies files and applications across the enterprise.

Publish date: 05/09/16

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on nodes that scale seamlessly across massive virtual environments.

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors in every industry to invest in scalable hyperconvergence. Given Nutanix’s benefits across the board, these directors can confidently adopt Nutanix to transform their datacenters, just as these NetApp customers did.

Publish date: 03/31/16