Trusted Business Advisors, Expert Technology Analysts

Research Areas

Systems

Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBA, CNA, Routers, Components, Semiconductors, Server Blades.

Taneja Group analysts cover storage arrays of every form and manner: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, whether FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have deep backgrounds in the file systems area in particular. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.

Profile

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today, including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster-flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations nor manage to be very simple or cost-effective to operate. Here is where we see HPE as a key vendor today with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce and Data Protector. For those who have worked with either product separately in conjunction with other vendors’ offerings, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience. One support call, one ticket, one project manager, addressing the customer’s needs regardless of what internal HPE business unit components are in the “box”. And significantly, this approach elevates HPE from just being a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.

In this report, we’ll first examine why the HPE StoreOnce and Data Protector products are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16
Report

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see an opinion of the DELL/EMC merger at end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’ advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments. 

Publish date: 01/14/16
Profile

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is that of storage buyers (and operators) looking for more functionality – and at the same time increased simplicity – from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage we see IT folks wanting to eliminate excessive layers in their complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately onerous and unnecessarily complex task that enterprise storage folks have had to face is that of backup and recovery. As a key trusted vendor of both data protection and storage solutions, we note that HPE continues to invest in producing better solutions in this space.

HPE has diligently been working towards integrating data protection functionality natively within their enterprise storage solutions starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array now turns into a single autonomous unit, becoming a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of third-party products and the inefficient indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a type of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric array-efficient snapshots.

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment – converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware assisted agentless backup and recovery for virtual machines. With HPE’s approach, offloading VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines, virtualized organizations gain on many fronts including greater backup efficiency, reduced OPEX, greater data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We’ll also look at why HPE is in a unique position to offer this kind of “converging” market leadership, with a complete end-to-end solution stack including innovative research and development, sales, support, and professional services.

Publish date: 12/21/15
Profile

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized web 2.0 companies are taking great advantage of foundational big data solutions that they themselves created (e.g. Hadoop), most traditional enterprise IT shops are still thinking about how to practically deploy their first business-impacting big data applications – or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may still be just out of reach for most organizations – especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to optimally manage in production – especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if most organizations did not face a real gap in high-end skills and deep expertise, there would still be no practical way for additional experts to tweak and tune mixed Hadoop workload environments to meet production performance SLAs.

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to more interactive user queries and “real time” business process applications. Live performance matters as much now in big data as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.
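The multi-tenancy problem described above can be illustrated with a toy model. The sketch below is not Pepperdata’s mechanism or a Hadoop API – it is a hypothetical weighted-share allocator showing, in miniature, what workload-level QoS buys in a shared cluster: a latency-sensitive tier can be protected from a heavy batch job instead of competing on equal terms.

```python
# Toy model of multi-tenant cluster contention. Illustrative only:
# workload names, demands, and weights are hypothetical examples.

def allocate(capacity: float, demands: dict, weights: dict) -> dict:
    """Split cluster capacity among workloads in proportion to weights,
    capping each workload at its demand and redistributing any surplus."""
    alloc = {w: 0.0 for w in demands}
    active = set(demands)
    remaining = capacity
    while active and remaining > 1e-9:
        total_w = sum(weights[w] for w in active)
        leftover = 0.0
        for w in list(active):
            share = remaining * weights[w] / total_w
            take = min(share, demands[w] - alloc[w])
            alloc[w] += take
            leftover += share - take
            if alloc[w] >= demands[w] - 1e-9:
                active.discard(w)  # demand satisfied; free its share
        remaining = leftover
    return alloc

# A heavy ETL job and a latency-sensitive query tier share 100 units.
demands = {"batch_etl": 90.0, "interactive": 30.0}
weights = {"interactive": 3.0, "batch_etl": 1.0}  # protect interactive
print(allocate(100.0, demands, weights))
```

The weights only bite when aggregate demand exceeds capacity, which is exactly the mixed-workload situation the report describes; without some such enforcement layer, there is no per-workload performance guarantee, only best-effort sharing.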

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what and when, for troubleshooting, chargeback, and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.


Publish date: 12/17/15
Report

Dell Storage Center Achieves Greater than Five Nines Availability at Mid-Range Cost

Dell Storage SC Series achieved a five nines (5 9s) availability rating years ago. Now the SC Series is demonstrating 5 9s and greater, with technologies that are pushing availability even further up the scale. This is a big achievement based on real, measurable field data: the only numbers that really count.

Not every piece of data requires 5 9s capability. However, critical Tier 1 applications do need it. Outage costs vary by industry but easily total millions of dollars per hour in highly regulated and data-intensive industries. Some of the organizations in these verticals are enterprises, but many more are mid-sized businesses with exceptionally mission-critical data stores.

Consider such applications as e-commerce systems. Online customers are notorious for abandoning shopping carts even when the application is running smoothly. Downing an e-commerce system can easily cost millions of dollars in lost sales over a few days or hours, not to mention a loss of reputation. Other mission-critical applications that must be available include OLTP, CRM or even email systems.

Web applications present another high-availability challenge. SaaS providers with sales support or finance software can hardly afford downtime. Streaming sites with subscribers also lose large amounts of future revenue if they go down. Many customers will ask for refunds or cancel their subscriptions and never return.

However, most highly available 5 9s systems have large purchase prices and high ongoing expenses. Many small enterprises and mid-sized businesses cannot afford these high-priced systems or the staff that goes with them. They know they need availability and try to save money and time by buying cheaper systems with 4 9s availability or lower. Their philosophy is that these systems are good enough. And they are good enough for general storage, but not for data whose unavailability quickly spirals up into the millions of dollars. Buying less than 5 9s in this type of environment is a false economy.
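The arithmetic behind the nines makes that false economy concrete. A minimal sketch (the $1M/hour outage cost is a hypothetical figure for illustration, not a number from this report):

```python
# Annual downtime implied by common availability levels.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Expected minutes of downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for label, a in [("three nines", 0.999),
                 ("four nines", 0.9999),
                 ("five nines", 0.99999)]:
    print(f"{label}: ~{downtime_minutes_per_year(a):,.1f} min/year")

# At a hypothetical outage cost of $1M/hour (illustrative only), the
# roughly 47-minute annual gap between four nines and five nines is:
gap = downtime_minutes_per_year(0.9999) - downtime_minutes_per_year(0.99999)
print(f"~${gap * (1_000_000 / 60):,.0f}/year of avoided outage cost")
```

Three nines allows roughly 8.8 hours of downtime a year, four nines about 53 minutes, and five nines just over 5 minutes – a gap that, for data worth millions of dollars per hour, dwarfs the price difference between the systems.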

Still, even the risk of sub-par availability doesn’t free up the budget a business needs for high-end availability systems. This is where the story gets very interesting. Dell Storage SC Series offers 5 9s and higher availability – and it does so at a mid-range cost. Dell does not sacrifice high availability architecture for lower CAPEX and OPEX, and it also provides dynamic scalability, management simplicity, redundant storage, space-saving snapshots, and automatic tiering. Thanks to the architecture behind Dell Storage SC Series, Dell has achieved a unique position in the high availability stakes.

Publish date: 10/19/15
Free Reports

Abstract: Taneja Group Multi-Client Study on Storage Acceleration and Performance Technologies

Storage performance technology – solid state or high-scale storage designed for high performance – has long been a tricky and fragmented market. While the market for flash-based storage has been growing steadily over the past few years, it still represents well under 10% of total installed capacity in the enterprise. A variety of storage acceleration solutions—based in the array, server and network—are now available, and yet many enterprise buyers are still poorly educated about these options and how best to address the performance needs of their business-critical apps.

Taneja Group’s latest multi-client sponsored research study addresses the relatively young and rapidly evolving market for storage acceleration and performance solutions in the enterprise. This study provides vendor sponsors with key insights into the current uptake and usage of storage acceleration and performance technologies, along with user-perceived value of key features and capabilities. The study findings will help vendors understand how to overcome sales and deployment barriers, improve and sharpen the positioning of their products/solutions, and determine where they should invest going forward, based on the technologies and use cases that will be most important to enterprise buyers over the next 2-3 years.

The 70-page research report features results from 694 completed online surveys, plus in-depth discussions with 9 selected enterprise participants. The study respondents – primarily senior IT and infrastructure managers – come from a broad range of enterprise-level organizations and industries, providing a highly representative sample of customers in the sweet spot for storage acceleration solutions.

The report begins with a description of the market landscape, which provides our perspectives on how the storage performance market has developed and where it is headed. This leads into an in-depth analysis and discussion of survey findings, including a profile of the respondents themselves. We then identify and explore several key customer populations that rose to the surface in our analysis. Understanding these different types of buyers and users is more important than ever, as we find that the market is quite fragmented, with a number of contrasting populations looking at performance from distinctly different perspectives. By studying these populations and what makes them tick, vendors will be able to assess and optimize product and marketing strategies for different classes of customers, while honing their competitive differentiation.

This Taneja Group research report was provided to our primary research sponsors in early September 2015, and is now generally available for purchase by other vendors. If you have an interest in learning more about the market and how you can make your acceleration offerings stand out, please contact Jeff Byrne (jeff.byrne@tanejagroup.com) or Mike Matchett (mike.matchett@tanejagroup.com) at Taneja Group to put the insights in this report to work for you.

Publish date: 10/01/15