
Profiles/Reports

Report

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People typed emails, word-processing documents, and spreadsheets; they created databases and recorded business transactions. Data was stored on tape, on workstation hard drives, and over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes grew larger and larger. Video and audio, digital imaging, and websites streaming entertainment content to millions of users drove seemingly endless data growth. Storage capacity grew to encompass the larger volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change: the major content producers are no longer people but machines. Storing and processing machine data offers tremendous opportunities: seismic and weather sensors that may enable meaningful disaster warnings, social network diagnostics that surface hard evidence of terrorist activity, connected cars that could slash automotive fatalities, and research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store and process raw machine data is not for the faint of heart. The best solution today is massive scale-out, general-purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, scales performance and capacity linearly, and offers data awareness and real-time analytics built on extended metadata.
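To make that last point concrete, here is a minimal sketch, our own illustration rather than Qumulo’s implementation, of how a file system can keep per-directory aggregates current so that a capacity or analytics query over a subtree answers instantly instead of walking billions of files.

```python
# Minimal sketch (illustration only, not Qumulo's implementation): keep
# aggregates in every directory node so a subtree query answers from
# precomputed totals instead of scanning billions of files.

class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.total_bytes = 0   # aggregate size of the entire subtree
        self.file_count = 0    # aggregate file count of the entire subtree
        self.children = {}

    def add_file(self, size):
        # Record a new file here and roll the delta up to the root,
        # so every ancestor's aggregates stay current.
        node = self
        while node is not None:
            node.total_bytes += size
            node.file_count += 1
            node = node.parent

root = DirNode("/")
captures = root.children["captures"] = DirNode("captures", parent=root)
captures.add_file(4 * 1024**3)   # a 4 GiB sensor capture
captures.add_file(512 * 1024)    # a 512 KiB log file

# "How much data lives under /?" answered without any tree walk.
print(root.total_bytes, root.file_count)   # 4295491584 2
```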

Very few vendors in the world today offer this kind of solution. One of them is Qumulo, whose mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from the life sciences, media and entertainment, telco/cable/satellite, higher education, and automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits of Qumulo: massive scalability, high performance, data awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
Report

5 9’s Availability in a Lower Cost Dell SC4020 Product? Yes, Really!

Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.
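As a reminder of what is being claimed, 5 9s translates into a very small annual downtime budget. The quick arithmetic below is generic availability math, not Dell’s audit methodology.

```python
# Generic availability arithmetic: each extra 9 cuts the allowable
# annual downtime by a factor of ten.

minutes_per_year = 365.25 * 24 * 60            # 525,960 minutes
for nines in (3, 4, 5):
    availability = 1 - 10 ** -nines            # 99.9%, 99.99%, 99.999%
    downtime = minutes_per_year * (1 - availability)
    print(f"{nines} nines: {availability:.3%} -> {downtime:.2f} min/yr down")

# 5 nines allows roughly 5.3 minutes of downtime per year.
```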

Even though the SC4020 is a lower-cost member of the SC family, it meets the 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability but through space-saving design, such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000’s 6U), the SC4020 still achieves 5 9s under the same strict measurement criteria.

Frankly, many vendors choose not to subject their lower-cost models to 5 9s testing at all, often because they have invested fewer development dollars in those models in order to hold down cost and maintain profitability at a lower price point.

Dell didn’t take that path with the SC4020. Instead of watering it down by stripping features, Dell architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable yet retains the SC Series enterprise features: high availability and reliability, performance, and centralized management, not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.

Publish date: 08/31/16
Report

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos: moving deduped data outside the system requires rehydration, which costs performance and capacity as data travels between the data center, ROBO locations, DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
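A toy example shows why rehydration is expensive. The fixed-block deduper below is our own sketch of the general technique (StoreOnce itself uses more sophisticated variable-size chunking): unique chunks are stored once, a backup is just a recipe of chunk hashes, and rebuilding the full byte stream, that is, rehydrating, is required whenever data leaves the chunk store.

```python
# Toy fixed-block deduper (illustration only; StoreOnce uses
# variable-size chunking). Duplicate chunks cost no extra capacity.

import hashlib

CHUNK = 4096
store = {}                               # hash -> chunk bytes (chunk store)

def dedupe(data):
    # Split into fixed chunks; store each unique chunk exactly once.
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)
        recipe.append(h)
    return recipe

def rehydrate(recipe):
    # Rebuild the full stream; required whenever data leaves the store.
    return b"".join(store[h] for h in recipe)

backup = b"A" * 8192 + b"B" * 4096       # two identical "A" chunks + one "B"
recipe = dedupe(backup)
print(len(recipe), "chunk refs,", len(store), "unique chunks")   # 3 ... 2
assert rehydrate(recipe) == backup
```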

A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. There is nothing wrong with leveraging hardware and software to increase sales, but storage-system-specific dedupe means that data must rehydrate whenever it moves beyond the system, leaving the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source, and virtual machine. This enables data to move freely between physical and virtual platforms, and between source and target machines, without the need to rehydrate.

This paper describes the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16
Report

High Capacity SSDs are Driving the Shift to the All Flash Data Center

All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they’re expected to approach 50% of total revenue by the end of 2016, roughly a 60% CAGR. This isn’t surprising, really. Even though they’ve historically cost more on a $/GB basis (the gap is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.
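That CAGR figure is easy to sanity-check; the arithmetic below simply plugs in the ~5% and ~50% share figures cited above.

```python
# Sanity check on the growth claim: ~5% of array revenue in 2011
# rising to ~50% by the end of 2016 is a 10x gain over five years.

start_share, end_share, years = 0.05, 0.50, 5
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # Implied CAGR: 58.5% (roughly 60%)
```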

The most obvious advantage that SSDs have over HDDs is performance. With no moving parts to slow them down, SSDs can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has also allowed storage vendors to implement capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.

While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.

When looking at the total cost envelope, and factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now a new generation of high-capacity SSDs is coming, and it’s poised to accelerate the AFA takeover. We believe the flash revolution in storage that started in 2011 will outpace even the most optimistic forecasts in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let’s take a look at how and why.

Publish date: 06/10/16
Report

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene in the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now prompting IT and the business to rethink practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you take into account that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often deployed to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays, but even more so with all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels, data protection recovery time objectives (RTOs), and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer to the latter question is yes, on both counts.
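To ground those two recovery objectives, here is some illustrative arithmetic. The 15-minute snapshot interval, 2 TB dataset, and 1 GB/s restore rate are assumptions chosen for the example, not figures from the report.

```python
# Illustrative RPO/RTO arithmetic; all numbers are example assumptions.

snapshot_interval_min = 15
worst_case_rpo_min = snapshot_interval_min   # writes since the last
                                             # snapshot can be lost

dataset_gb = 2000                            # 2 TB to bring back
restore_rate_gb_per_s = 1.0                  # sustained restore throughput
rto_min = dataset_gb / restore_rate_gb_per_s / 60

print(f"RPO: {worst_case_rpo_min} min, bulk-restore RTO: {rto_min:.0f} min")
# Recovering in place from snapshots (no bulk copy) can cut RTO to minutes.
```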

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 06/06/16
Report

Hybrid Storage Accelerates IT Cloud Transformation: Customers Find Microsoft Azure StorSimple

After conducting a number of in-depth field interviews with real world Microsoft Azure StorSimple users, we’ve discovered that the real StorSimple story is all about helping people transition smoothly from on-premises storage to an on-premises/cloud hybrid model. From there, it helps both IT and the business accelerate broader adoption of cloud-centric hybrid IT architecture. StorSimple not only simplifies on-premises storage challenges with fully integrated automated cloud-tiering and data protection (providing elastic capacity and cloud burstability), but also optimizes distributed file sharing and application storage (with cloud-based DR, centralized management, and extensibility).
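Cloud tiering of this kind is usually driven by access recency. The sketch below is a deliberately simplified policy of our own, not StorSimple’s actual heuristics: once the local tier passes its watermark, the least recently used blocks are pushed to the cloud tier, and any block is recalled on access.

```python
# Toy recency-based tiering policy (illustration only, not StorSimple's
# actual heuristics).

from itertools import count

LOCAL_LIMIT = 4                       # blocks that fit on premises
_clock = count()                      # deterministic access counter
local = {}                            # block_id -> last access tick
cloud = set()                         # block_ids tiered to the cloud

def touch(block_id):
    # Access a block, recalling it from the cloud tier if necessary.
    cloud.discard(block_id)
    local[block_id] = next(_clock)
    while len(local) > LOCAL_LIMIT:   # over the watermark: tier out cold data
        coldest = min(local, key=local.get)
        del local[coldest]
        cloud.add(coldest)

for b in ["a", "b", "c", "d", "a", "e"]:
    touch(b)
print(sorted(local), sorted(cloud))   # ['a', 'c', 'd', 'e'] ['b']
```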

However, it’s easy to talk about features: what a product does and how it does it. These are important things to know, and we’ll highlight several key capabilities in this report. But the real proof of the pudding is this: what do actual customers say? What are their challenges, their hopes, their needs? And how did their storage decisions serve those needs?

To answer these questions, we took an in-depth look at StorSimple through a customer lens. Real-life enterprise customers told us about their original journeys to StorSimple and how Microsoft is helping them move on more fully to the cloud. Ultimately, we noted five highly valued critical advantages of StorSimple: native data protection, disaster recovery, deployment and management simplicity across multiple locations, a high return on investment, and a dynamic storage environment that unifies files and applications across the enterprise.

Publish date: 05/09/16