Trusted Business Advisors, Expert Technology Analysts

Research Areas

Systems

Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBA, CNA, Routers, Components, Semiconductors, Server Blades.

Taneja Group analysts cover all manner of storage arrays: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have deep backgrounds in the file systems area in particular. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.

Page 1 of 37 pages  1 2 3 >  Last ›
Report

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features – where to dedupe, how much capacity it saves, how fast it backs up – but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydration, which hurts performance and consumes capacity between the data center, ROBO, DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.

A few vendors have introduced new dedupe technologies but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage system-specific dedupe means that data must rehydrate whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source, and virtual machine. This enables data to move freely between physical and virtual platforms, and between source and target machines, without the need to rehydrate.
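The rehydration problem described above can be illustrated with a toy content-addressed dedupe store: as long as source and target share the same chunk index, a backup can be replicated by shipping only the chunks the peer lacks; without that shared index, the object must first be reassembled (rehydrated) in full. This is a minimal sketch of the general technique, not HPE's implementation – the class names, fixed-size chunking, and chunk size are all illustrative.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative; real products often use variable-size chunking

class DedupeStore:
    """Toy content-addressed store: each unique chunk is kept once, keyed by its hash."""
    def __init__(self):
        self.chunks = {}   # chunk hash -> chunk bytes (stored once)
        self.recipes = {}  # object name -> ordered list of chunk hashes

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)  # a duplicate chunk costs no extra space
            hashes.append(h)
        self.recipes[name] = hashes

    def get(self, name):
        """Rehydrate: reassemble the full object from its chunk recipe."""
        return b"".join(self.chunks[h] for h in self.recipes[name])

    def replicate_to(self, other, name):
        """Federated-style transfer: ship only chunks the peer lacks, never the full object."""
        sent = 0
        for h in self.recipes[name]:
            if h not in other.chunks:
                other.chunks[h] = self.chunks[h]
                sent += len(self.chunks[h])
        other.recipes[name] = list(self.recipes[name])
        return sent  # bytes actually moved across the wire
```

Replicating a highly redundant 400 KB backup between two such stores moves only the unique chunks (here, one 4 KB chunk), while a store without a shared index would have to transfer the rehydrated 400 KB.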

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16
Report

High Capacity SSDs are Driving the Shift to the All Flash Data Center

All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they're expected to approach 50% of total revenue by the end of 2016 – roughly a 60% CAGR. This isn't surprising, really. Even though they've historically cost more on a $/GB basis (a gap that is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.
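That growth figure can be sanity-checked with the standard compound annual growth rate formula, assuming revenue share rises from about 5% in 2011 to about 50% at the end of 2016 (five compounding years) against a roughly flat total array market:

```python
# CAGR = (end / start) ** (1 / years) - 1
start_share, end_share, years = 0.05, 0.50, 5
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"{cagr:.1%}")  # about 58.5%, consistent with the ~60% figure cited
```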

The most obvious advantage that SSDs have over HDDs is performance. With no moving parts to slow them down, they can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has also allowed storage vendors to implement storage capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.

While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.

When looking at the total cost envelope, factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now a new generation of high-capacity SSDs is coming, and it's poised to accelerate the AFA takeover. We believe the Flash revolution in storage that started in 2011 will outpace even the most optimistic forecasts in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let's take a look at how and why.

Publish date: 06/10/16
Report

Hybrid Storage Accelerates IT Cloud Transformation: Customers find Microsoft Azure StorSimple

After conducting a number of in-depth field interviews with real world Microsoft Azure StorSimple users, we’ve discovered that the real StorSimple story is all about helping people transition smoothly from on-premises storage to an on-premises/cloud hybrid model. From there, it helps both IT and the business accelerate broader adoption of cloud-centric hybrid IT architecture. StorSimple not only simplifies on-premises storage challenges with fully integrated automated cloud-tiering and data protection (providing elastic capacity and cloud burstability), but also optimizes distributed file sharing and application storage (with cloud-based DR, centralized management, and extensibility).

However, it’s easy to talk about features – what a product does and how it does it. These are important things to know and we’ll highlight several key capabilities in this report. But the real proof of the successful product pudding is this: what do actual customers say? What are their challenges, their hopes, their needs? And how did their storage decisions serve those needs?

To answer these questions, we took an in-depth look at StorSimple through a customer lens. Real-life enterprise customers told us about their original journeys to StorSimple, and how Microsoft is helping them move on more fully to the cloud. Ultimately, we noted five highly valued advantages of StorSimple: native data protection, disaster recovery, deployment and management simplicity across multiple locations, a high return on investment, and a dynamic storage environment that unifies files and applications across the enterprise.

Publish date: 05/09/16
Report

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure in which hypervisor, compute, storage, networking, and data services run on nodes that scale seamlessly across massive virtual environments.

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The data center is not static. Huge data growth and increasing complexity are motivating IT directors in every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data centers, just as these NetApp customers did.

Publish date: 03/31/16
Report

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANs

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions. In addition, specific Nutanix features for mission-critical production environments were often cited.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual assets. Hyperconvergence offers a key value proposition over 3-tier architecture:  instead of deploying, managing and integrating separate components – storage, servers, networking, data services, and hypervisors – these components are combined into a modular high performance system.

The customers we interviewed operate in very different industries. What they had in common were data centers undergoing fundamental changes, typically involving an opportunity to refresh some portion of their 3-tier infrastructure. This enabled them to evaluate hyperconvergence in support of those changes. The customers found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be "Ease of Everything." Nutanix works, and works consistently, with small and large clusters, in single and multiple data centers, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to Nutanix platform’s enterprise capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus and is not an add-on for existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16
Profile

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs and high performance requirements. Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered, and virtualized storage, hyperconvergence has provided a highly available virtual platform to run applications on, allowing IT to stop managing individual hardware components and concentrate on running business applications, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems that it didn't address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication, and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there's still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage.  But, given the disparate use cases that are embedded in secondary storage and the massive amount of data that resides there, it’s an equally big problem to solve and it had to go further than just abstracting and scaling the underlying physical storage devices.  True Hyperconverged Secondary Storage also integrates the key secondary storage workflows - Data Protection, DR, Analytics and Test/Dev - as well as providing global deduplication for overall file storage efficiency, file indexing and searching services for more efficient storage management and hooks into the cloud for efficient archiving. 

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16