
Research Areas

Data Protection/Management

Includes Backup, Recovery, Replication, Archiving, Copy Data Management and Information Governance.

The goal of modern data protection/management is to help companies protect, understand and use their data. Functionality comes in many forms, ranging from array-based snapshots and standalone backup, replication and deduplication products to comprehensive data management platforms. Since the amount of data is increasing much faster than IT budgets, companies are focused on decreasing storage costs and simplifying data protection processes. As a result, vendors are moving to scale-out platforms that use commodity hardware and offer better storage efficiency and policy-based automation of administrative functions, such as system upgrades, data recovery and data migration for long-term data retention. Depending on operational needs, objectives outside of data protection may include orchestrating the lifecycle of on-demand test/dev environments, using secondary storage for file services and search/analytics to ensure data compliance.
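
To make policy-based automation a little more concrete, the sketch below shows how a retention and tiering policy might be expressed and evaluated in a few lines of Python. The policy fields, thresholds and action names are illustrative assumptions, not any particular vendor's schema.

    from dataclasses import dataclass
    from datetime import timedelta

    @dataclass
    class ProtectionPolicy:
        # Illustrative policy fields; real platforms expose richer schemas.
        snapshot_interval: timedelta   # how often to take a recovery point
        retention: timedelta           # how long recovery points are kept
        archive_after: timedelta       # when data migrates to a long-term retention tier

    def next_actions(last_snapshot_age: timedelta, oldest_recovery_point_age: timedelta,
                     data_age: timedelta, policy: ProtectionPolicy) -> list[str]:
        """Decide which administrative actions the platform should run next."""
        actions = []
        if last_snapshot_age >= policy.snapshot_interval:
            actions.append("take_snapshot")
        if oldest_recovery_point_age >= policy.retention:
            actions.append("expire_oldest_recovery_point")
        if data_age >= policy.archive_after:
            actions.append("migrate_to_archive_tier")
        return actions

    # Hourly snapshots, 30-day retention, archive after 90 days.
    policy = ProtectionPolicy(timedelta(hours=1), timedelta(days=30), timedelta(days=90))
    print(next_actions(timedelta(hours=2), timedelta(days=45), timedelta(days=120), policy))
    # -> ['take_snapshot', 'expire_oldest_recovery_point', 'migrate_to_archive_tier']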

Report

Cloud Object Storage for the Healthcare Data Blues

The healthcare industry continues to face tremendous cost challenges. The U.S. government estimates that national health expenditures accounted for $3.2 trillion last year – nearly 18% of the country’s total GDP. Many factors drive up the cost of healthcare, such as the cost of new drug development and hospital readmissions. In addition, there are compelling studies showing that medical organizations will need to evolve their IT environments to curb healthcare costs and improve patient care in new ways, such as cloud-based healthcare models aimed at research community collaboration, coordinated care and remote healthcare delivery.

For example, Goldman Sachs recently predicted that the digital revolution could save $300 billion in healthcare spending by powering new patient options, such as home-based patient monitoring and patient self-management. Moreover, the most significant progress may come from medical organizations transforming their healthcare data infrastructure. Here’s why:

  • Advancements in digital medical imaging have resulted in an explosion of data that sits in picture archiving and communication systems (PACS) and vendor neutral archives (VNAs).
  • Patient care initiatives such as personalized medicine and genomics require storing, sharing and analyzing massive amounts of unstructured data.
  • Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require organizations to have policies for long-term image retention and business continuity.

Unfortunately, traditional file storage approaches aren’t well-suited to manage vast amounts of unstructured data and present several barriers to modernizing healthcare infrastructure. A recent Taneja Group survey found the top three challenges to be:

  • Lack of flexibility: Traditional file storage appliances require dedicated hardware and don’t offer tight integration with collaborative cloud storage environments.
  • Poor utilization: Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale: Traditional storage solutions such as RAID-based arrays are gated by controllers and simply aren’t designed to easily expand to petabyte storage levels.

As a result, healthcare organizations are moving to object storage solutions that offer an architecture inherently designed for web-scale storage environments. Specifically, object storage offers healthcare organizations the following advantages (see the sketch after the list):

  • Simplified management, hardware independence and a choice of deployment options – private, public or hybrid cloud – lower operational and hardware storage costs
  • A web-scale storage platform provides scale as needed and enables a pay-as-you-go model
  • Efficient fault tolerance protects against site failures, node failures and multiple disk failures
  • Built-in security protects against digital and physical breaches
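
As a brief illustration of the points above, the hedged sketch below writes a medical image into an S3-compatible object store with retention metadata and server-side encryption. The endpoint, bucket and metadata keys are hypothetical; boto3 works against any S3-compatible service, including on-premises and hybrid-cloud object stores.

    import boto3

    # Hypothetical S3-compatible endpoint and bucket names; substitute your own.
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.org")

    with open("study-12345.dcm", "rb") as image:
        s3.put_object(
            Bucket="pacs-archive",
            Key="radiology/2017/study-12345.dcm",
            Body=image,
            Metadata={                        # custom metadata travels with the object
                "modality": "CT",
                "retention-years": "7",       # can drive a lifecycle/retention policy
            },
            ServerSideEncryption="AES256",    # encryption at rest supports HIPAA controls
        )
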
Publish date: 03/22/17
Report

IBM Cloud Object Storage Provides the Scale and Integration Needed for Modern Genomics Infrastructure

For hospitals and medical research institutes, the ability to interpret genomics data and identify relevant therapies is key to providing better patient care through personalized medicine. Many such organizations are racing forward, using artificial intelligence (AI) to analyze patients’ genomic profiles and match them to clinically actionable treatments.

These rapid advancements in genomic research and personalized medicine are very exciting, but they are creating enormous data challenges for healthcare and life sciences organizations. High-throughput DNA sequencing machines can now process a human genome in a matter of hours at a cost approaching one thousand dollars. This is a huge drop from a cost of ten million dollars ten years ago and means the decline in genome sequencing cost has outpaced Moore’s Law. The result is an explosion in genomic data – driving the need for solutions that can affordably and securely store, access, share, analyze and archive enormous amounts of data in a timely manner.
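
The arithmetic behind that comparison is worth spelling out. A rough sketch, assuming Moore's Law means a doubling roughly every 18 to 24 months: ten years of Moore's Law yields about a 30x to 100x improvement, while the cost figures above imply roughly a 10,000x decline in sequencing cost over the same period.

    # Sequencing cost decline vs. Moore's Law over ten years (figures from the text above).
    cost_then, cost_now = 10_000_000, 1_000        # dollars per genome, ten years ago vs. today
    sequencing_gain = cost_then / cost_now          # ~10,000x improvement

    years = 10
    moore_fast = 2 ** (years / 1.5)                 # doubling every 18 months: ~100x
    moore_slow = 2 ** (years / 2.0)                 # doubling every 24 months: ~32x

    print(f"Sequencing cost improvement: {sequencing_gain:,.0f}x")
    print(f"Moore's Law over the same decade: {moore_slow:.0f}x to {moore_fast:.0f}x")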

One key challenge is moving large volumes of genomic data from cost-effective archival storage to low-latency storage quickly enough to shorten analysis times; today, a comprehensive DNA sequence analysis takes days.
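
A common pattern for shrinking that window is to stage only the objects a job needs from archival object storage onto low-latency local scratch storage just before the compute step runs. The sketch below illustrates the idea with boto3 against a hypothetical S3-compatible endpoint; the bucket, prefix and paths are assumptions, not a specific product's layout.

    import os
    import boto3

    # Hypothetical S3-compatible archive endpoint; credentials come from the environment.
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.org")

    def stage_sample(bucket: str, prefix: str, scratch_dir: str) -> None:
        """Copy one sample's sequence files from archival object storage to local scratch."""
        os.makedirs(scratch_dir, exist_ok=True)
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                local_path = os.path.join(scratch_dir, os.path.basename(obj["Key"]))
                s3.download_file(bucket, obj["Key"], local_path)

    # Stage a sample's FASTQ/BAM files before launching alignment and variant calling.
    stage_sample("genomics-archive", "samples/NA12878/", "/scratch/NA12878")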

Sharing and interpreting vast amounts of unstructured data to find relationships between a patient’s genetic characteristics and potential therapies adds another layer of complexity. Determining connections requires evaluating data across numerous unstructured data sources, such as genomic sequencing data, medical articles, drug information and clinical trial data from multiple sources.

Unfortunately, the traditional file storage within most medical organizations doesn’t meet the needs of modern genomics. These systems can’t accommodate massive amounts of unstructured data and they don’t support both data archival and high-performance compute. They also don’t facilitate broad collaboration. Today, organizations require a new approach to genomics storage, one that enables:

  • Scalable and convenient cloud storage to accommodate rapid unstructured data growth
  • Seamless integration between affordable unstructured data storage, low-latency storage, high-performance compute, big data analytics and a cognitive healthcare platform to quickly analyze and find relationships among complex life science data types
  • A multi-tenant hybrid cloud to share and collaborate on sensitive patient data and findings
  • Privacy and protection to support regulatory compliance
Publish date: 03/22/17
Free Reports

Is Object Storage Right For Your Organization?

Is object storage right for your organization? Many companies are asking this question as they seek out storage solutions that support vast unstructured data growth throughout their organizations. Object storage is ideal for large-scale unstructured data storage because it easily scales to several petabytes and beyond by simply adding storage nodes. Object storage also provides high fault tolerance, simplified storage management and hardware independence – core capabilities that are essential to cost-effectively manage large-scale storage environments. Add to this built-in support for geographically distributed environments and it’s easy to see why object storage solutions are the preferred storage approach for use cases such as cloud-native applications, highly scalable file backup, secure enterprise collaboration, active archiving, content repositories and, increasingly, cognitive computing workloads such as big data analytics.

To help you decide if object storage is right for your company and to help you understand how to apply various storage technologies, we have created a table below that positions object storage relative to block storage and file storage.

As the table shows, there are several factors that differentiate block, file and object storage. An easy way to think about the differences is the following: block storage is necessary for critical applications where storage performance is the key consideration; file storage is well suited for highly scalable shared file systems; and object storage is ideal when cloud-scale capacity, convenience, reliability and geographically distributed access are the major storage requirements.

Publish date: 12/30/16
Profile

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions – including tape, controller-gated RAID devices and dedicated storage appliances – simply aren’t designed for today’s enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. This shortfall stems in large part from the inefficiency, limited data protection, limited scalability and lack of flexibility of these traditional storage solutions.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high-priced proprietary storage hardware and, worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge if IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage solutions and scale-out object storage technology that address the shortcomings of traditional backup storage.

Software-defined scale-out storage is attractive for large-scale data backup because these storage solutions offer linear performance and hardware independence – two core capabilities that drive tremendous scalability and enable cost-effective storage solutions. Add to this the high fault tolerance of object storage platforms, and it’s easy to see why software-defined object storage solutions are rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores the benefits of software-defined scale-out storage: IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%) and the ability to purchase hardware separately from software (32%).
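
The capacity math behind that fault-tolerance point is simple to sketch. Using generic, illustrative parameters rather than any particular product's defaults: three-way replication tolerates two lost copies at 200% capacity overhead, while a 9+3 erasure-coded layout tolerates three lost fragments at roughly 33% overhead.

    # Capacity overhead vs. failures tolerated: replication compared with erasure coding.
    # The 3-copy and 9+3 parameters below are illustrative, not any vendor's defaults.

    def replication(copies: int) -> tuple[float, int]:
        """Return (capacity overhead in %, simultaneous failures tolerated)."""
        return (copies - 1) * 100.0, copies - 1

    def erasure_coding(data_frags: int, parity_frags: int) -> tuple[float, int]:
        """Return (capacity overhead in %, simultaneous fragment losses tolerated)."""
        return parity_frags / data_frags * 100.0, parity_frags

    for name, (overhead, failures) in {
        "3-way replication": replication(3),
        "9+3 erasure coding": erasure_coding(9, 3),
    }.items():
        print(f"{name}: {overhead:.0f}% overhead, survives {failures} simultaneous failures")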

Going a step further, the Scality backup storage solution built upon the Scality RING platform offers the rare combination of scalability, durability and affordability plus the flexibility to handle mixed workloads at petabyte scale. Scality backup storage achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16
Profile

Towards the Ultimate Goal of IT Resilience: A Look at the Zerto Cloud Continuity Platform

We live in a digital world where online services, applications and data must always be available. Yet the modern data center remains very susceptible to interruptions. These opposing realities are challenging traditional backup applications and disaster recovery solutions and causing companies to rethink what is needed to ensure 100% uptime of their IT environments.

The need for availability goes well beyond recovering from disasters. Companies must be able to rapidly recover from many real world disruptions such as ransomware, device failures and power outages as well as natural disasters. Add to this the dynamic nature of virtualization and cloud computing, and it’s not hard to see the difficulty of providing continuous availability while managing a highly variable IT environment that is susceptible to trouble.

Some companies feel their backup devices will give them adequate data protection, and others believe their disaster recovery solutions will help them restore normal business operations if an incident occurs. Regrettably, far too often these solutions fall short of user expectations because they don’t provide the rapid recovery and agility needed for full business continuity.

Fortunately, there is a way to ensure a consistent experience in an inconsistent world. It’s called IT resilience. IT resilience is the ability to ensure business services are always on, applications are available and data is accessible no matter what human errors, events, failures or disasters occur. And true IT resilience goes a step further to provide continuous data protection (CDP), end-to-end recovery automation irrespective of the makeup of a company’s IT environment and the flexibility to evolve IT strategies and incorporate new technology.
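
To see what separates continuous data protection from periodic backup, consider the minimal sketch below: every write is appended to a journal as it happens, so recovery can replay to any chosen point in time rather than to the last scheduled backup. The journal structure is a simplification for illustration and is not intended to represent Zerto's implementation.

    from dataclasses import dataclass, field

    @dataclass
    class JournalEntry:
        timestamp: float   # when the write occurred
        key: str           # which block or object was written
        value: bytes       # the data that was written

    @dataclass
    class CdpJournal:
        entries: list[JournalEntry] = field(default_factory=list)

        def record(self, timestamp: float, key: str, value: bytes) -> None:
            """Capture every write as it happens (continuous, not on a schedule)."""
            self.entries.append(JournalEntry(timestamp, key, value))

        def recover_to(self, point_in_time: float) -> dict[str, bytes]:
            """Replay the journal up to any moment, e.g. just before ransomware struck."""
            state: dict[str, bytes] = {}
            for entry in self.entries:
                if entry.timestamp <= point_in_time:
                    state[entry.key] = entry.value
            return state

    journal = CdpJournal()
    journal.record(100.0, "invoice-1", b"v1")
    journal.record(200.0, "invoice-1", b"ENCRYPTED-BY-RANSOMWARE")
    print(journal.recover_to(150.0))   # {'invoice-1': b'v1'} -- the clean state before the incident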

Intrigued by the promise of IT resilience, companies are seeking data protection solutions that can withstand any disaster to enable a reliable online experience and excellent business performance. In a recent Taneja Group survey, nearly half the companies selected “high availability and resilient infrastructure” as one of their top two IT priorities. In the same survey, 67% of respondents also indicated that unplanned application downtime compromised their ability to satisfy customer needs, meet partner and supplier commitments and close new business.

This strong customer interest in IT resilience has many data protection vendors talking about “resilience.” Unfortunately, many backup and disaster recovery solutions don’t provide continuous data protection plus hardware independence, strong virtualization support and tight cloud integration. This is a tough combination and presents a big challenge for data protection vendors striving to provide enterprise-grade IT resilience.

There is however one data protection vendor that has replication and disaster recovery technologies designed from the ground up for IT resilience. The Zerto Cloud Continuity Platform built on Zerto Virtual Replication offers CDP, failover (for higher availability), end-to-end process automation, heterogeneous hypervisor support and native cloud integration. As a result, IT resilience with continuous availability, rapid recovery and agility is a core strength of the Zerto Cloud Continuity Platform.

This paper will explore the functionality needed to tackle modern data protection requirements. We will also discuss the challenges of traditional backup and disaster recovery solutions, outline the key aspects of IT resilience and provide an overview of the Zerto Cloud Continuity Platform as well as the hypervisor-based replication that Zerto pioneered.

Publish date: 09/30/16
Report

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features – where to dedupe, how much capacity is saved, how fast backups run – but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up data from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydrating it, which impacts performance and capacity between the data center, remote and branch offices (ROBO), DR sites and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
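
The mechanics are easy to see in miniature. The sketch below dedupes a byte stream into fixed-size chunks keyed by SHA-256 hashes and records a recipe for each backup; reading the data back out (rehydration) means fetching and reassembling every referenced chunk, which is exactly the cost incurred whenever deduped data has to leave the system that holds the chunk store. Real products typically use variable-size chunking and far more sophisticated indexing; this is only a conceptual sketch.

    import hashlib

    CHUNK = 4096  # fixed-size chunking for simplicity; real products often chunk on content boundaries

    def dedupe(data: bytes, store: dict[str, bytes]) -> list[str]:
        """Split data into chunks, store each unique chunk once, return the recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # identical chunks are stored only once
            recipe.append(digest)
        return recipe

    def rehydrate(recipe: list[str], store: dict[str, bytes]) -> bytes:
        """Reassemble the original stream; every referenced chunk must be fetched."""
        return b"".join(store[digest] for digest in recipe)

    store: dict[str, bytes] = {}
    backup = b"A" * 8192 + b"B" * 4096 + b"A" * 4096   # repeated content dedupes well
    recipe = dedupe(backup, store)
    stored = sum(len(chunk) for chunk in store.values())
    print(f"logical {len(backup)} bytes -> stored {stored} bytes ({len(backup) / stored:.1f}x reduction)")
    assert rehydrate(recipe, store) == backup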

A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. Of course, there is nothing wrong with leveraging hardware and software together to increase sales, but storage-system-specific dedupe means that data must be rehydrated whenever it moves beyond that system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way to solve that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE provides customers deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and source and target machines without the need to rehydrate.

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16