Trusted Business Advisors, Expert Technology Analysts

Research Areas

Data Protection/Management

Includes Backup, Recovery, Replication, Archiving, Copy Data Management and Information Governance.

The goal of modern data protection/management is to help companies protect, understand and use their data. Functionality comes in many forms, ranging from array-based snapshots and standalone backup, replication and deduplication products to comprehensive data management platforms. Since the amount of data is growing much faster than IT budgets, companies are focused on decreasing storage costs and simplifying data protection processes. As a result, vendors are moving to scale-out platforms that use commodity hardware and offer better storage efficiency and policy-based automation of administrative functions such as system upgrades, data recovery and data migration for long-term retention. Depending on operational needs, objectives beyond data protection may include orchestrating the lifecycle of on-demand test/dev environments, using secondary storage for file services, and applying search and analytics to ensure data compliance.
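
To illustrate what "policy-based automation" means in practice, here is a minimal sketch of retention and archive rules applied to recovery points. All names and figures below are hypothetical examples, not any vendor's API or defaults.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical illustration of policy-based retention: each policy says how
# long recovery points are kept and when data migrates to long-term storage.
@dataclass
class RetentionPolicy:
    name: str
    keep_days: int           # how long recovery points are retained
    archive_after_days: int  # when data moves to cheaper long-term storage

POLICIES = {
    "mission_critical": RetentionPolicy("mission_critical", keep_days=90, archive_after_days=30),
    "general":          RetentionPolicy("general",          keep_days=30, archive_after_days=14),
}

def expired(created: datetime, policy: RetentionPolicy, now: datetime) -> bool:
    """Return True if a recovery point created at `created` has aged out."""
    return now - created > timedelta(days=policy.keep_days)

def should_archive(created: datetime, policy: RetentionPolicy, now: datetime) -> bool:
    """Return True once data should migrate to long-term retention storage."""
    return now - created > timedelta(days=policy.archive_after_days)

# Example: a 45-day-old recovery point expires under the "general" policy
# but is still retained under "mission_critical".
point = datetime.now() - timedelta(days=45)
print(expired(point, POLICIES["general"], datetime.now()))           # True
print(expired(point, POLICIES["mission_critical"], datetime.now()))  # False
```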

Profile

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, controller-gated RAID devices and dedicated storage appliances, simply aren't designed for today's enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. The shortfall stems largely from the inefficiency, limited data protection, limited scalability and lack of flexibility of these traditional storage solutions.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high-priced proprietary storage hardware and, worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge when IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage and scale-out object storage technology that addresses the shortcomings of traditional backup storage.

Software-defined scale-out storage is attractive for large-scale data backup because these solutions offer linear performance and hardware independence, two core capabilities that drive tremendous scalability and enable cost-effective storage. Add to this the high fault tolerance of object storage platforms, and it's easy to see why software-defined object storage is rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores these benefits: IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%) and the ability to purchase hardware separately from software (32%).

Going a step further, the Scality backup storage solution, built on the Scality RING platform, offers the rare combination of scalability, durability and affordability plus the flexibility to handle mixed workloads at petabyte scale. Scality achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.
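
Because the platform presents object as well as file interfaces, a backup application can write directly to an object bucket over an S3-compatible endpoint. The sketch below shows that pattern with the generic boto3 client; the endpoint URL, bucket name, file path and credentials are placeholders for illustration, not actual Scality RING values or a Scality-specific API.

```python
import boto3

# Minimal sketch: send a backup image to an S3-compatible object endpoint.
# Endpoint, bucket, key and credentials below are hypothetical placeholders.
s3 = boto3.client(
    "s3",
    endpoint_url="https://object-store.example.internal",  # placeholder endpoint
    aws_access_key_id="BACKUP_KEY",                         # placeholder credentials
    aws_secret_access_key="BACKUP_SECRET",
)

with open("/backups/vm-image-2016-10-18.vbk", "rb") as image:
    s3.upload_fileobj(image, "nightly-backups", "vm-image-2016-10-18.vbk")
```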

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16
Profile

Towards the Ultimate Goal of IT Resilience: A Look at the Zerto Cloud Continuity Platform

We live in a digital world where online services, applications and data must always be available. Yet the modern data center remains very susceptible to interruptions. These opposing realities are challenging traditional backup applications and disaster recovery solutions and causing companies to rethink what is needed to ensure 100% uptime of their IT environments.

The need for availability goes well beyond recovering from disasters. Companies must be able to rapidly recover from many real-world disruptions such as ransomware, device failures and power outages as well as natural disasters. Add to this the dynamic nature of virtualization and cloud computing, and it's not hard to see the difficulty of providing continuous availability while managing a highly variable IT environment that is susceptible to trouble.

Some companies feel their backup devices will give them adequate data protection and others believe their disaster recovery solutions will help them restore normal business operations if an incident occurs. Regrettably, far too often these solutions fall short of meeting user expectations because they don’t provide the rapid recovery and agility needed for full business continuance.

Fortunately, there is a way to ensure a consistent experience in an inconsistent world. It’s called IT resilience. IT resilience is the ability to ensure business services are always on, applications are available and data is accessible no matter what human errors, events, failures or disasters occur. And true IT resilience goes a step further to provide continuous data protection (CDP), end-to-end recovery automation irrespective of the makeup of a company’s IT environment and the flexibility to evolve IT strategies and incorporate new technology.
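
The distinction between CDP and periodic backup is easiest to see in code. The sketch below journals every write with a timestamp so a replica can be rolled to any point in time; it is a conceptual illustration of the CDP idea, not Zerto's actual replication engine.

```python
from datetime import datetime
from typing import Dict, List, Tuple

# Conceptual CDP sketch: every write is journaled, so recovery can target any
# point in time rather than the last periodic backup.
Journal = List[Tuple[datetime, str, bytes]]  # (timestamp, block_id, data)

def record_write(journal: Journal, block_id: str, data: bytes) -> None:
    journal.append((datetime.now(), block_id, data))

def recover_to(journal: Journal, point_in_time: datetime) -> Dict[str, bytes]:
    """Rebuild block state as of `point_in_time` by replaying the journal."""
    state: Dict[str, bytes] = {}
    for ts, block_id, data in journal:
        if ts <= point_in_time:
            state[block_id] = data
    return state

journal: Journal = []
record_write(journal, "block-7", b"good data")
checkpoint = datetime.now()
record_write(journal, "block-7", b"data corrupted by ransomware")
# Recover the state as it existed just before the bad write landed.
print(recover_to(journal, checkpoint))   # {'block-7': b'good data'}
```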

Intrigued by the promise of IT resilience, companies are seeking data protection solutions that can withstand any disaster to enable a reliable online experience and excellent business performance. In a recent Taneja Group survey, nearly half the companies selected “high availability and resilient infrastructure” as one of their top two IT priorities. In the same survey, 67% of respondents also indicated that unplanned application downtime compromised their ability to satisfy customer needs, meet partner and supplier commitments and close new business.

This strong customer interest in IT resilience has many data protection vendors talking about “resilience.” Unfortunately, many backup and disaster recovery solutions don’t provide continuous data protection plus hardware independence, strong virtualization support and tight cloud integration. This is a tough combination and presents a big challenge for data protection vendors striving to provide enterprise-grade IT resilience.

There is however one data protection vendor that has replication and disaster recovery technologies designed from the ground up for IT resilience. The Zerto Cloud Continuity Platform built on Zerto Virtual Replication offers CDP, failover (for higher availability), end-to-end process automation, heterogeneous hypervisor support and native cloud integration. As a result, IT resilience with continuous availability, rapid recovery and agility is a core strength of the Zerto Cloud Continuity Platform.

This paper will explore the functionality needed to tackle modern data protection requirements. We will also discuss the challenges of traditional backup and disaster recovery solutions, outline the key aspects of IT resilience and provide an overview of the Zerto Cloud Continuity Platform as well as the hypervisor-based replication that Zerto pioneered.

Publish date: 09/30/16
Report

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up data from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydration, which hurts performance and capacity whenever data moves between the data center, ROBO locations, DR sites and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
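
The mechanics behind this are simple: a dedupe store keeps one copy of each unique chunk plus a recipe of references, and any data leaving that store must be rebuilt (rehydrated) from those chunks. The sketch below illustrates the general idea with fixed-size chunks and SHA-256 fingerprints; it is a generic illustration, not HPE's algorithm.

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking for simplicity; real products use smarter schemes

def dedupe(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk once, return a recipe of fingerprints."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are stored only once
        recipe.append(digest)
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from the chunk store; this is the cost paid
    whenever deduped data has to leave the system that owns the store."""
    return b"".join(store[digest] for digest in recipe)

store = {}
payload = b"A" * CHUNK_SIZE * 20              # 20 identical chunks
recipe = dedupe(payload, store)
assert rehydrate(recipe, store) == payload
print(len(payload), "logical bytes held as", sum(len(c) for c in store.values()), "stored bytes")
```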

A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. Of course, there is nothing wrong with leveraging hardware and software to increase sales, but storage-system-specific dedupe means that data must rehydrate whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms, and between source and target machines, without the need to rehydrate.

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16
Report

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now prompting IT and the business to rethink practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you take into account that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.
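
To make the RTO/RPO discussion concrete, a quick back-of-the-envelope calculation shows why the protection interval and the restore path dominate both numbers. The interval, dataset size and throughput figures below are illustrative assumptions only, not measurements of any HPE product.

```python
# Illustrative arithmetic only; all figures are assumptions, not measured values.
snapshot_interval_min = 15          # how often application-consistent recovery points are taken
data_to_restore_tb = 10             # size of the dataset to recover
restore_throughput_gbps = 1.0       # effective restore rate in GB/s

worst_case_rpo_min = snapshot_interval_min
restore_hours = (data_to_restore_tb * 1024) / restore_throughput_gbps / 3600

print(f"Worst-case RPO: {worst_case_rpo_min} minutes")
print(f"Approximate RTO from a full restore: {restore_hours:.1f} hours")
```

With these assumed numbers, the worst-case RPO is 15 minutes and a full 10 TB restore takes roughly 2.8 hours, which is why mission-critical flash environments lean on frequent array-level recovery points and fast restore paths rather than streaming full backups.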

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 06/06/16
Profile

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs, requiring high performance. Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered and virtualized storage, hyperconvergence has provided a highly available virtual platform for running applications, allowing IT to concentrate on business applications rather than hardware management, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems it didn't address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there's still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage. But, given the disparate use cases embedded in secondary storage and the massive amount of data that resides there, it is an equally big problem to solve, and the solution has to go further than simply abstracting and scaling the underlying physical storage devices. True Hyperconverged Secondary Storage also integrates the key secondary storage workflows (Data Protection, DR, Analytics and Test/Dev) while providing global deduplication for overall file storage efficiency, file indexing and search services for more efficient storage management, and hooks into the cloud for efficient archiving.
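
File indexing and search are what turn a passive data dump into manageable secondary storage: instead of walking every copy, an index over file metadata answers questions like "where are all the VM disk images?" immediately. The sketch below shows that general idea over a hypothetical backup mount point; it is a conceptual illustration, not Cohesity's implementation.

```python
import os
from collections import defaultdict

# Conceptual sketch: build an inverted index over file metadata so protected
# data can be searched without scanning every copy.
def build_index(root: str) -> dict:
    index = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower()
            index[ext].append(path)          # index by file extension
            for token in name.lower().split("."):
                index[token].append(path)    # and by filename tokens
    return index

# Example: find every VMDK captured under a hypothetical secondary-storage mount.
index = build_index("/mnt/secondary")
print(index.get(".vmdk", []))
```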

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16
Report

Transforming the Data Center: SimpliVity Delivers Hyperconverged Platform with Native DP

Hyperconvergence has come a long way in the past five years. Growth rates are astronomical, and customers are replacing traditional three-layer configurations with hyperconverged solutions in record numbers. But not all hyperconverged solutions in the market are alike. As the market matures, this fact is coming to light. Of course, all hyperconverged solutions tightly integrate compute and storage (that is par for the course), but beyond that the similarities end quickly.

One of the striking differences between SimpliVity's hyperconverged infrastructure architecture and others is the tight integration of data protection functionality. The DNA for that is built in from the very start: SimpliVity hyperconverged infrastructure systems perform inline deduplication and compression of data at the time of data creation. Thereafter, data is kept in the "reduced" state throughout its lifecycle. This has serious positive implications for latency, performance and bandwidth but, equally importantly, it transforms data protection and other secondary uses of data.
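
The order of operations is the point: if data is reduced once, inline, at creation time, then every downstream copy (backup, replication, WAN transfer) moves the already-reduced form. The sketch below illustrates that write path generically, fingerprinting each block and compressing only new blocks; it is a conceptual illustration, not SimpliVity's actual implementation.

```python
import hashlib
import zlib

# Conceptual inline write path: fingerprint each block at creation time, store
# a compressed copy only if the block is new, and keep references otherwise.
store = {}      # fingerprint -> compressed block
refs = []       # logical layout of the object being written

def inline_write(block: bytes) -> None:
    digest = hashlib.sha256(block).hexdigest()
    if digest not in store:                      # deduplicate at ingest
        store[digest] = zlib.compress(block)     # compress before landing on media
    refs.append(digest)

for block in (b"A" * 8192, b"B" * 8192, b"A" * 8192):   # third block is a duplicate
    inline_write(block)

logical = 3 * 8192
physical = sum(len(c) for c in store.values())
print(f"Logical {logical} bytes stored as {physical} physical bytes")
```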

At Taneja Group, we have been very aware of this differentiating feature of SimpliVity’s solution. So when we were asked to interview five SimpliVity customers to determine if they were getting tangible benefits (or not), we jumped at the opportunity.

This Field Report is about their experiences. We must state at the beginning that we focused primarily on their data protection experiences in this report. Hyperconvergence is all about simplicity and cost reduction. But SimpliVity’s hyperconverged infrastructure also eliminated another big headache: data protection. These customers may not have bought SimpliVity for data protection purposes, but the fact that they were essentially able to get rid of all their other data protection products was a very pleasant surprise for them. That was a big plus for these customers. To be sure, data protection is not simply backup and restore but also includes a number of other functions such as replication, DR, WAN optimization, and more. 

For a broader understanding of SimpliVity’s product capabilities, other Taneja Group write-ups are available. This one focuses on data protection. Read on for these five customers’ experiences.

Publish date: 02/01/16