
Research Areas


Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBA, CNA, Routers, Components, Semiconductors, Server Blades.

Taneja Group analysts cover all manner of storage arrays: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, whether FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have particularly deep backgrounds in the file systems area. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.

Free Reports

Abstract: Taneja Group Multi-Client Study on Storage Acceleration and Performance Technologies

Storage performance technology – solid state or high-scale storage designed for high performance – has long been a tricky and fragmented market. While the market for flash-based storage has been growing steadily over the past few years, it still represents well under 10% of total installed capacity in the enterprise. A variety of storage acceleration solutions—based in the array, server and network—are now available, and yet many enterprise buyers are still poorly educated about these options and how best to address the performance needs of their business-critical apps.

Taneja Group’s latest multi-client sponsored research study addresses the relatively young and rapidly evolving market for storage acceleration and performance solutions in the enterprise. This study provides vendor sponsors with key insights into the current uptake and usage of storage acceleration and performance technologies, along with user-perceived value of key features and capabilities. The study findings will help vendors understand how to overcome sales and deployment barriers, improve and sharpen the positioning of their products/solutions, and determine where they should invest going forward, based on the technologies and use cases that will be most important to enterprise buyers over the next 2-3 years.

The 70-page research report features results from 694 completed online surveys, plus in-depth discussions with 9 selected enterprise participants. The study respondents – primarily senior IT and infrastructure managers – come from a broad range of enterprise-level organizations and industries, providing a highly representative sample of customers in the sweet spot for storage acceleration solutions.

The report begins with a description of the market landscape, which provides our perspectives on how the storage performance market has developed and where it is headed. This leads into an in-depth analysis and discussion of survey findings, including a profile of the respondents themselves. We then identify and explore several key customer populations that rose to the surface in our analysis. Understanding these different types of buyers and users is more important than ever, as we find that the market is quite fragmented, with a number of contrasting populations looking at performance from distinctly different perspectives. By studying these populations and what makes them tick, vendors will be able to assess and optimize product and marketing strategies for different classes of customers, while honing their competitive differentiation.

This Taneja Group research report was provided to our primary research sponsors in early September 2015, and is now generally available for purchase by other vendors. If you have an interest in learning more about the market and how you can make your acceleration offerings stand out, please contact Jeff Byrne or Mike Matchett at Taneja Group to put the insights in this report to work for you.

Publish date: 10/01/15

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of reliability, at scale. That's a goal of roughly three seconds of anticipated unavailability per year – better availability than most data centers themselves achieve.
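The arithmetic behind that figure is worth spelling out. Here is a minimal sketch (our illustration, not drawn from the report) of the yearly downtime implied by a given number of nines of availability:

```python
# Downtime per year implied by an availability level of n "nines".
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

def downtime_seconds_per_year(nines: int) -> float:
    """Expected unavailability per year at 1 - 10**(-nines) availability."""
    return 10 ** (-nines) * SECONDS_PER_YEAR

for nines in (3, 5, 7):
    print(f"{nines}-9's: {downtime_seconds_per_year(nines):>12,.2f} seconds/year")
# 3-9's:    31,557.60 seconds/year (~8.8 hours)
# 5-9's:       315.58 seconds/year (~5.3 minutes)
# 7-9's:         3.16 seconds/year
```

Five nines, the oft-touted standard, still allows more than five minutes of downtime a year; seven nines shrinks that to about three seconds.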

Data availability is the key attribute enterprises need most to maximize their enterprise storage value, especially as data volumes grow to ever-larger scales. Yet traditional enterprise storage solutions aren’t keeping pace with the growing need for greater than the oft-touted 5-9’s of storage reliability, instead deferring to layered-on methods like additional replication copies, which can drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to break down: ever-longer disk rebuild times create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or at best, default) array designs, software issues, and well-intentioned but sometimes fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.
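To see the rebuild problem concretely, consider a rough back-of-the-envelope estimate (hypothetical numbers, purely illustrative): at a fixed sustained rebuild rate, rebuild time grows linearly with drive capacity, so each jump in drive size widens the window in which a second failure means unrecoverable data loss.

```python
# Rough, illustrative estimate of RAID disk rebuild time (not vendor data).
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float = 100.0) -> float:
    """Hours to rewrite a failed drive at a sustained rebuild rate."""
    capacity_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return capacity_mb / rebuild_mb_per_s / 3600

print(f"{rebuild_hours(1):.1f} hours")  # ~2.8 hours for a 1 TB drive
print(f"{rebuild_hours(8):.1f} hours")  # ~22.2 hours for an 8 TB drive
```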

In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.

Publish date: 09/29/15

Unitrends Enterprise Backup 9.0: Simple and Powerful Data Protection for the Whole Data Center

Backup and recovery, replication, recovery assurance: all are more crucial than ever in the light of massively growing data. But complexity has grown right alongside expanding data. Data centers and their managers strain under the burdens of legacy physical data protection, fast-growing virtual data requirements, backup decisions around local, remote and cloud sites, and the need for specialist IT to administer complex data protection processes.

In response, Unitrends has launched a compelling new version of Unitrends Enterprise Backup (UEB): Release 9.0. Its completely revamped user interface and experience significantly reduces management overhead and lets even new users easily perform sophisticated functions using the redesigned dashboard. And its key capabilities are second to none for modern data protection in physical and virtual environments.

One of UEB 9.0’s differentiating strengths (indeed, a strength of the entire Unitrends product line) is that, in today’s increasingly virtualized world, it still offers deep support for physical as well as virtual environments. This is more important than it might at first appear. There is a huge installed base of legacy equipment in existence, much of it still not moved into a virtual environment; yet it all needs to be protected. Within this legacy base, there are many mission-critical applications still running on physical servers that remain high-priority protection targets. In these environments, many admins are forced to purchase specialized tools for protecting virtual environments separate from physical ones, or to use point backup products for specific applications. Both options carry extra costs: buying multiple applications that do essentially the same thing, and hiring multiple people trained to use them.

This is why, no matter how virtualized an environment is, if there is even one critical application that is still physical, admins need to strongly consider a solution that protects both. This gives the data center maximum protection with lower operating costs, since IT no longer needs multiple data protection packages and the trained staff to run them.

This is where Unitrends steps in. With its rich capabilities and intuitive interface, UEB 9.0 protects data throughout the data center, and does not require IT specialists. This Product in Depth assesses Unitrends Enterprise Backup 9.0, the latest version of Unitrends’ flagship data protection platform. We put the new user interface through its paces to see just how intuitive it is, what information it provides, and how many clicks it takes to perform some basic operations. We also did a deep dive into the functionality provided by the backup engine itself, some of which carries over from earlier versions and some of which is new for 9.0.

Publish date: 09/17/15

Converged IT Infrastructure’s Place in the Internet of Things

All of the trends leading towards the worldwide Internet of Things (IoT) – ubiquitous embedded computing, mobile and organically distributed nodes, and far-flung networks tying them together – are also coming in full force into the IT data center, in the form of converged and hyperconverged modules of IT infrastructure. Organizations adopting such solutions gain a simpler building-block way to architect and deploy IT, and forward-thinking vendors now have a unique opportunity to profit from subscription services that, while delivering superior customer insight and support, also help build a trusted-advisor relationship that promises an ongoing “win-win” scenario for both the client and the vendor.

There are many direct (e.g., revenue-impacting) and indirect (e.g., customer satisfaction) benefits we mention in this report, but the key enabler of this opportunity is establishing an IoT-scale data analysis capability. Specifically, by approaching converged and hyperconverged solutions as an IoT “appliance”, and harvesting low-level component data on utilization, health, configuration, performance, availability, faults, and other end-point metrics across the full worldwide customer base of deployed appliances, an IoT vendor can then analyze the resulting stream of data with great profit for both the vendor and each individual client. Top-notch analytics can feed support, drive product management, assure sales/account control, inform marketing, and even provide a direct revenue opportunity (e.g., offering a gold level of service to the end customer).
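As an illustration of what such harvested end-point data might look like, here is a minimal, hypothetical sketch of a per-appliance telemetry record (field names are our own invention, not Glassbeam’s actual schema):

```python
# Hypothetical per-appliance telemetry record for an IoT-style analytics
# pipeline; fields mirror the metric categories named above.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class ApplianceTelemetry:
    appliance_id: str                 # unique ID across the installed base
    timestamp: datetime
    cpu_utilization_pct: float        # utilization
    capacity_used_pct: float          # capacity/health
    latency_ms: float                 # performance
    fault_codes: List[str] = field(default_factory=list)  # faults
    config_version: str = ""          # configuration drift tracking

record = ApplianceTelemetry(
    appliance_id="hci-node-0042",
    timestamp=datetime.now(timezone.utc),
    cpu_utilization_pct=71.5,
    capacity_used_pct=83.2,
    latency_ms=1.9,
    fault_codes=["FAN_2_DEGRADED"],
    config_version="9.0.1",
)
```

Streamed across a worldwide installed base, records like this one are the raw material for the support, product-management, and upsell analytics described above.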

An IoT data stream from a large pool of appliances is almost literally the definition of “big data” – non-stop machine data at large scale with tremendous variety (even within a single converged solution stack) – and operating and maintaining such a big data solution requires a significant amount of data wrangling, data science, and ongoing maintenance to stay current. Unfortunately, this means IT vendors looking to position IoT-oriented solutions may have to invest a large amount of cash, staff, and resources into building out and supporting such analytics. For many vendors, especially those with a varied or complex convergence solution portfolio, or those established as channel partners building solutions from third-party reference architectures, these big data costs can be prohibitive. Yet failing to provide these services may create significant friction in selling and supporting converged solutions to clients who now expect to manage IT infrastructure as appliances.

In this report, we’ll look at the convergence and hyperconvergence appliance trend, and the increasing customer expectations for such solutions. In particular, we’ll see how IT appliances need to be treated as complete, commoditized products – as ubiquitous, and subject to the same end-user expectations, as emerging household IoT solutions. In this context, we’ll look at Glassbeam’s unique B2B SaaS platform, SCALAR, which converged and hyperconverged IT appliance vendors can immediately adopt to provide an IoT machine data analytics solution. We’ll see how Glassbeam can help vendors differentiate amongst competing solutions, build trusted client relationships, better manage and support clients, and even realize additional direct revenue opportunities.

Publish date: 08/18/15

Redefining the Economics of Enterprise Storage (2015 Update)

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But enterprise storage has always come at an exceptional price, putting it out of reach for many use cases and customers. Most recently, Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that leverages proven Dell Compellent technology on Intel hardware in an all-new form factor. The SC4020 is also the densest Compellent product ever: an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units of space. While the Intel-powered SC4020 has more modest scalability than current Compellent products, this array marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open up Dell Compellent storage technology to an entire market of smaller customers, as well as to large-customer use cases where enterprise storage was previously too expensive.

Publish date: 06/30/15

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure, and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC) – where services can be instantiated as needed, changed as workloads require, and retired when the need is gone – is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment, with no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15