Trusted Business Advisors, Expert Technology Analysts

Research Areas

Technology

Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.

All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.

Report

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see our opinion of the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix's advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments. 

Publish date: 01/14/16
Profile

Full Database Protection Without the Full Backup Plan: Oracle’s Cloud-Scaled Zero Data Loss Recovery

Today’s tidal wave of big data isn’t just made up of loose unstructured documents – huge data growth is happening everywhere, including in high-value structured datasets kept in databases like Oracle Database 12c. This is any company’s most valuable core data, the data that powers its key business applications – and it’s growing fast. According to Oracle, most enterprises expect 50x data growth within 5 years (by 2020). As their scope and coverage grow, these key databases inherently become even more critical to the business. At the same time, the sheer number of database-driven applications and users is also multiplying – and they increasingly need to be online, globally, 24x7. All of which leads to the big burning question: how can we possibly protect all this critical data – data we depend on more and more even as it grows – all the time?

We just can’t keep taking more time out of the 24-hour day for longer and larger database backups. The traditional batch window backup approach is already often beyond practical limits and its problems are only getting worse with data growth – missed backup windows, increased performance degradation, unavailability, fragility, risk and cost. It’s now time for a new data protection approach that can do away with the idea of batch window backups, yet still provide immediate backup copies to recover from failures, corruption, and other disasters.

Oracle has stepped up in a big way: marshaling expertise and technologies from across its engineered systems portfolio, it has developed the new Zero Data Loss Recovery Appliance. Note the very intentional name, focused on total recoverability – the Recovery Appliance is definitely not just another backup target. This new appliance completely eliminates the pains and risks of the full database backup window approach through a highly engineered continuous data protection solution for Oracle databases. It is now possible to immediately recover any database to any desired point in time, as the Recovery Appliance provides “virtual” full backups on demand and can scale to protect thousands of databases and petabytes of capacity. In fact, it offloads backup processing from production database servers, which typically increases performance in Oracle environments by 25%. Adopting this new backup and recovery solution will actually give CPU cycles back to the business.
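To make the “incremental forever / virtual full” idea concrete, here is a minimal conceptual sketch in Python. It is not Oracle’s implementation – all names (BlockStore, apply_incremental, virtual_full) are hypothetical – but it shows how a point-in-time “full” image can be synthesized on demand from one base copy plus a stream of changed blocks, with no further full backups.

    from bisect import bisect_right

    class BlockStore:
        """Toy model of incremental-forever protection (illustrative only)."""

        def __init__(self, base_blocks):
            # base_blocks: {block_id: data} captured once, up front
            self.base = dict(base_blocks)
            # per-block history of (timestamp, data), appended as changes arrive
            self.deltas = {}

        def apply_incremental(self, timestamp, changed_blocks):
            # Record only the blocks that changed since the previous increment.
            for block_id, data in changed_blocks.items():
                self.deltas.setdefault(block_id, []).append((timestamp, data))

        def virtual_full(self, as_of):
            # Synthesize a point-in-time "full" image without taking a full
            # backup: start from the base copy and overlay, per block, the
            # latest change at or before `as_of`.
            image = dict(self.base)
            for block_id, history in self.deltas.items():
                times = [t for t, _ in history]
                idx = bisect_right(times, as_of)
                if idx:
                    image[block_id] = history[idx - 1][1]
            return image

    # Example: two incrementals, then a virtual full as of time 15.
    store = BlockStore({0: "A0", 1: "B0"})
    store.apply_incremental(10, {1: "B1"})
    store.apply_incremental(20, {0: "A1"})
    assert store.virtual_full(15) == {0: "A0", 1: "B1"}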

In this report, we’ll briefly review why conventional data protection approaches based on the backup window are fast becoming obsolete. Then we’ll look into how Oracle has designed the new Recovery Appliance to provide a unique approach to ensuring data protection in real-time, at scale, for thousands of databases and PBs of data. We’ll see how zero data loss, incremental forever backups, continuous validation, and other innovations have completely changed the game of database data protection. For the first time there is now a real and practical way to fully protect a global corporation’s databases—on-premise and in the cloud—even in the face of today’s tremendous big data growth.

Publish date: 12/22/15
Profile

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized web 2.0 companies are taking great advantage of foundational big data solutions that they themselves created (e.g. Hadoop), most traditional enterprise IT shops are still thinking about how to practically deploy their first business-impacting big data applications – or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may yet be just out of reach for most organizations – especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to optimally manage in production – especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if there weren’t a real high-end skills and deep expertise gap for most, there still isn’t any practical way that additional experts could tweak and tune mixed Hadoop workload environments to meet production performance SLAs.

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to more interactive user queries and “real time” business process applications. Live performance matters as much in big data now as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what, and when, for troubleshooting, chargeback and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.


Publish date: 12/17/15
Report

Dell Storage Center Achieves Greater than Five Nines Availability at Mid-Range Cost

Dell Storage SC Series achieved a five nines (5 9s) availability rating years ago. Now the SC Series is delivering 5 9s and greater, with technologies that are pushing availability even farther up the scale. This is a big achievement based on real, measurable field data: the only numbers that really count.

Not every piece of data requires 5 9s capability. However, critical Tier 1 applications do need it. Outage costs vary by industry but easily total millions of dollars per hour in highly regulated and data-intensive industries. Some of the organizations in these verticals are enterprises, but many more are mid-sized businesses with exceptionally mission-critical data stores.

Consider such applications as e-commerce systems. Online customers are notorious for abandoning shopping carts even when the application is running smoothly. Downing an e-commerce system can easily cost millions of dollars in lost sales over a few days or hours, not to mention a loss of reputation. Other mission-critical applications that must be available include OLTP, CRM or even email systems.

Web applications present another HA challenge. SaaS providers offering sales support or finance software can hardly afford downtime. Streaming sites with subscribers also lose large amounts of future revenue if they go down. Many customers will ask for refunds or cancel their subscriptions and never return.

However, most highly available 5 9s systems have large purchase prices and high ongoing expenses. Many small enterprises and mid-sized businesses cannot afford these high-priced systems or the staff that goes with them. They know they need availability and try to save money and time by buying cheaper systems with 4 9s availability or lower. Their philosophy is that these systems are good enough. And they are good enough for general storage, but not for data whose unavailability quickly spirals up into the millions of dollars. Buying less than 5 9s in this type of environment is a false economy.
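To put the 4 9s versus 5 9s trade-off in concrete terms, annual downtime follows directly from the availability percentage. The short sketch below is generic arithmetic, not Dell field data:

    # Back-of-the-envelope only: converts an availability level ("number of
    # nines") into expected downtime per year. Generic arithmetic, not Dell
    # field data.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def downtime_seconds_per_year(nines):
        unavailability = 10 ** -nines          # e.g. 5 nines -> 0.00001
        return unavailability * SECONDS_PER_YEAR

    for nines in (4, 5):
        secs = downtime_seconds_per_year(nines)
        print(f"{nines} nines: ~{secs / 60:.1f} minutes of downtime per year")

    # 4 nines: ~52.6 minutes of downtime per year
    # 5 nines: ~5.3 minutes of downtime per year

At millions of dollars per hour of outage cost, an extra ~47 minutes of expected downtime per year is anything but a rounding error.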

Still, even the risk of sub-par availability doesn’t free up the budget a business needs for high-end availability systems. This is where the story gets very interesting. Dell Storage SC Series offers 5 9s and higher availability – and it does so at a mid-range cost. Dell does not sacrifice high availability architecture to achieve lower CAPEX and OPEX, and it also provides dynamic scalability, management simplicity, redundant storage, space-saving snapshots and automatic tiering. Thanks to the architecture behind Dell Storage SC Series, Dell has achieved a unique position in the high availability stakes.

Publish date: 10/19/15
Free Reports

Abstract: Taneja Group Multi-Client Study on Storage Acceleration and Performance Technologies

Storage performance technology – solid state or high-scale storage designed for high performance – has long been a tricky and fragmented market. While the market for flash-based storage has been growing steadily over the past few years, it still represents well under 10% of total installed capacity in the enterprise. A variety of storage acceleration solutions—based in the array, server and network—are now available, and yet many enterprise buyers are still poorly educated about these options and how best to address the performance needs of their business-critical apps.

Taneja Group’s latest multi-client sponsored research study addresses the relatively young and rapidly evolving market for storage acceleration and performance solutions in the enterprise. This study provides vendor sponsors with key insights into the current uptake and usage of storage acceleration and performance technologies, along with user-perceived value of key features and capabilities. The study findings will help vendors understand how to overcome sales and deployment barriers, improve and sharpen the positioning of their products/solutions, and determine where they should invest going forward, based on the technologies and use cases that will be most important to enterprise buyers over the next 2-3 years.

The 70-page research report features results from 694 completed online surveys, plus in-depth discussions with 9 selected enterprise participants. The study respondents – primarily senior IT and infrastructure managers – come from a broad range of enterprise-level organizations and industries, providing a highly representative sample of customers in the sweet spot for storage acceleration solutions.

The report begins with a description of the market landscape, which provides our perspectives on how the storage performance market has developed and where it is headed. This leads into an in-depth analysis and discussion of survey findings, including a profile of the respondents themselves. We then identify and explore several key customer populations that rose to the surface in our analysis. Understanding these different types of buyers and users is more important than ever, as we find that the market is quite fragmented, with a number of contrasting populations looking at performance from distinctly different perspectives. By studying these populations and what makes them tick, vendors will be able to assess and optimize product and marketing strategies for different classes of customers, while honing their competitive differentiation.

This Taneja Group research report was provided to our primary research sponsors in early September 2015, and is now generally available for purchase by other vendors. If you have an interest in learning more about the market and how you can make your acceleration offerings stand out, please contact Jeff Byrne (jeff.byrne@tanejagroup.com) or Mike Matchett (mike.matchett@tanejagroup.com) at Taneja Group to put the insights in this report to work for you.

Publish date: 10/01/15
Profile

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers need today is enterprise storage that affordably delivers at least seven nines (7 9s) of reliability, at scale. That's a goal of less than three seconds of anticipated unavailability per year – less downtime than most data centers themselves achieve.

Data availability is the key attribute enterprises need most to maximize the value of their enterprise storage, especially as data volumes grow to ever-greater scale. Yet traditional enterprise storage solutions aren’t keeping pace with the growing need for more than the oft-touted 5 9s of storage reliability, instead deferring to layered-on methods like additional replication copies, which drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to fall down due to longer and longer disk rebuild times that create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or at best, default) array designs, software issues, and well-intentioned but sometimes fatal human management and administration errors. Any new storage solution has to address all of these potential vulnerabilities.
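The rebuild-time problem is simple arithmetic: rebuild windows stretch with drive capacity at a roughly fixed rebuild rate. The figures below are illustrative assumptions (a 100 MB/s effective rebuild rate), not Infinidat measurements:

    # Rough illustration only: rebuild time scales with drive capacity at a
    # roughly fixed rebuild rate, so bigger disks mean longer windows of
    # vulnerability. The 100 MB/s effective rate is an assumption.
    def rebuild_hours(capacity_tb, rebuild_mb_per_s=100):
        capacity_mb = capacity_tb * 1_000_000
        return capacity_mb / rebuild_mb_per_s / 3600

    for tb in (2, 6, 10):
        print(f"{tb} TB drive: ~{rebuild_hours(tb):.1f} hours to rebuild")

    # 2 TB drive:  ~5.6 hours to rebuild
    # 6 TB drive:  ~16.7 hours to rebuild
    # 10 TB drive: ~27.8 hours to rebuild

During each of those windows the array is exposed to a second failure, which is why rebuild time, and not just raw capacity, drives the risk of unrecoverable data loss.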

In this report we will look at what we mean by 7 9s exactly, and what’s really needed to provide 7 9s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.

Publish date: 09/29/15