Taneja Group | replication

Items Tagged: replication

Profiles/Reports

Kashya Goes the Distance - Product Profile

Data protection technologies have remained essentially unchanged for the last twenty years. For a variety of reasons, including what appears to be an unstoppable appetite for collecting more data, a 24×7×365 work environment, and new uncertainties in the current geopolitical climate, data protection has taken on greater significance than ever before.

 

Publish date: 10/01/03
Profiles/Reports

Topio - Product Profile

Businesses entering the 21st century are facing renewed challenges to protect their data in the event of an infrastructure failure or an entire site outage. Driven by internal business requirements and external regulations, enterprises must now design and implement bulletproof disaster recovery plans.

 

Publish date: 01/01/04
Profiles/Reports

Kashya KBX 5000 Data Protection Appliance

We have seen several key advances in the past year in the technology category known as Continuous Data Technologies (CDT). Defined at the highest level, all CDT solutions enable very granular data capture and the subsequent presentation of application data across Any Point in Time (APIT). This new approach stands in sharp contrast to the historical Point in Time (PIT) based copy-creating technologies that have dominated the data center, including traditional snapshots.
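To make the APIT idea concrete, the sketch below shows a hypothetical write journal that can rebuild a volume image at any requested instant, in contrast to a PIT snapshot that exists only at fixed moments. It illustrates the general journaling technique under assumed names and structures; it is not a description of any vendor's implementation.

    import bisect
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class JournalEntry:
        timestamp: float                    # when the write arrived
        offset: int = field(compare=False)  # byte offset written on the volume
        data: bytes = field(compare=False)  # payload of the write

    class WriteJournal:
        """Keeps a baseline image plus every subsequent write, ordered by time."""

        def __init__(self, baseline: bytes):
            self.baseline = bytearray(baseline)    # one full copy, taken up front
            self.entries: list = []                # writes appended in arrival order

        def record(self, timestamp: float, offset: int, data: bytes) -> None:
            self.entries.append(JournalEntry(timestamp, offset, data))

        def image_at(self, timestamp: float) -> bytes:
            """Replay all writes up to and including the requested point in time."""
            image = bytearray(self.baseline)
            cutoff = bisect.bisect_right(self.entries, JournalEntry(timestamp, 0, b""))
            for entry in self.entries[:cutoff]:
                image[entry.offset:entry.offset + len(entry.data)] = entry.data
            return bytes(image)

    # Usage: recover the image from just before a corrupting write at t=105.0.
    journal = WriteJournal(baseline=b"\x00" * 16)
    journal.record(100.0, 0, b"GOOD")
    journal.record(105.0, 0, b"BAD!")
    print(journal.image_at(104.9))   # the pre-corruption state, b'GOOD' plus zero padding

Because every write is journaled rather than discarded between snapshots, any timestamp can serve as a recovery point; a PIT approach would only offer the images captured at scheduled snapshot times.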

Publish date: 06/01/06
Profiles/Reports

Compellent Remote Instant Play

More than ever, businesses are facing massive challenges to both protect and move their data across multiple sites. Most critically, this challenge presents itself in the event of an infrastructure failure or an entire site outage. However, with the addition of multiple data center sites, expanding WAN investments, and increasingly heterogeneous and distributed computing infrastructures, moving production data between geographies has also become a hot-button issue.

Publish date: 07/01/06
Profiles/Reports

EMC Replication Manager

Without any fanfare, data replicas have quietly become the lifeblood of enterprise storage infrastructures. In any given data center, myriad data replicas exist for every mission-critical process imaginable. For example, data protection, business continuity, testing, and environment upgrades now all depend entirely on sophisticated data replication technologies.

Publish date: 08/01/06
news / Blog

3PAR Simplifies Multi-Site Replication

Normally, the unveiling of a new storage console would not get my full attention. But 3PAR's announcement today of its latest InForm Management Console (IMC) definitely did.

  • Premiered: 08/04/10
  • Author: Jeff Byrne
Topic(s): 3PAR, Disaster Recovery, DR, replication
Profiles/Reports

Dell Compellent: Fluid Storage for a Virtualized World

The enterprise datacenter was a very different place just a few years ago. Over the last decade, several macro trends have converged: rapid server consolidation enabled by virtualization, dramatic data proliferation and the rise of “big data,” solid-state drive technology advances, and an increasingly mobile and demanding workforce. In short, IT continues to consolidate, while business becomes more distributed. This tension drives the search for greater efficiency now at the heart of every IT decision. And nowhere is this pressure felt more acutely than in the storage layer. Virtualized and consolidated workloads create new types of storage I/O contention, which are costly to troubleshoot and repair. Storage costs continue to rise because capacity planning is harder in today’s dynamic business environment. Over time, performance limitations, wasted capacity, and complex operations eat into the bottom line and increase lifetime storage TCO. These realities drive the need for more intelligence in the storage layer. In this technology brief, we explore the ways in which Dell Compellent’s Storage Center is delivering such intelligence today.

Publish date: 12/08/11
Profiles/Reports

IBM ProtecTIER for Outstanding Data Protection

Modern backup and recovery needs far more robust business continuity (BC) and disaster recovery (DR) than garden-variety approaches can provide. This is why IBM ProtecTIER with advanced replication is a vital piece of backup and recovery.

Publish date: 10/31/11
Profiles/Reports

Sepaton S2100 and the Database Backup Challenge

SEPATON purpose-built the S2100 platform as the only backup appliance dedicated to managing and optimizing large database backup and recovery in enterprise environments. The SEPATON S2100 appliance serves large enterprise storage environments with high backup and recovery performance, replication, deduplication and scalable capacity. This enables enterprise IT to cost-effectively meet critical database SLAs with a single unified storage system.

Publish date: 11/22/11
Resources

The role of remote replication in enterprise data storage

Remote replication used to be just a one-to-one volume replication technology for offsite disaster recovery. But the technology has matured and can now be used throughout the data center for myriad tasks, including continuous data protection (CDP) and host failover support. In this podcast, Jeff Boles, senior analyst and director, validation services at Hopkinton, Mass.-based Taneja Group, describes remote replication's role in enterprise data storage, the many uses of the technology and available replication products.
 

  • Premiered: 09/30/09
  • Location: OnDemand
  • Speaker(s): Jeff Boles
  • Sponsor(s): TechTarget
Topic(s): replication, remote replication, enterprise data
news / Blog

SEPATON is the First Vendor to Deliver Dedupe for Multi-streamed and Multiplexed DBs

SEPATON Cuts Cost and Complexity of Backing up Oracle & SQL Data by Becoming the First Vendor to Deliver Deduplication for Multi-streamed, Multiplexed Databases

news / Blog

AMI StorTrends Optimizes WANs for Primary Data

AMI StorTrends already optimizes iSCSI primary storage for high performance over the LAN. Its iTX Data Storage Software optimizes data for fast transport over the WAN as well.

  • Premiered: 07/06/12
  • Author: Taneja Group
Topic(s): AMI, StorTrends, LAN, WAN accelerator, Optimization, Storage, Primary, replication
news / Blog

Cost-Effective Disaster Recovery and StorTrends

StorTrends yields DR savings through several features, including bandwidth optimization, commodity purchase pricing, and SAS/SATA drive combinations.

  • Premiered: 07/17/12
  • Author: Taneja Group
Topic(s): StorTrends, AMI, Data protection, replication, Disaster Recovery, DR
news

Astute Expands ViSX Family of All Flash Performance Storage Appliances

Astute Networks™, Inc., the leading provider of performance storage appliances, today announced the expansion of the ViSX family of Performance Storage Appliances with enterprise-class data protection and a new MLC flash offering that will be particularly attractive to the small-to-medium enterprise (SME) and small-to-medium business (SMB) markets.

  • Premiered: 04/22/13
  • Author: Taneja Group
  • Published: MarketWire.com
Topic(s): Astute Networks, SSD, replication, SME, SMB, Deduplication, flash storage
Profiles/Reports

HP StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features – where to dedupe, how much capacity it saves, how fast its backups run – but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe, sometimes referred to as Dedupe 1.0, is suited to backup silos. Moving deduped data outside the system requires rehydration, which costs performance and capacity between the data center, ROBO sites, DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
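As a rough illustration of the dedupe-and-rehydrate mechanics described above, here is a minimal sketch of content-hash deduplication. The fixed 4 KB chunking, SHA-256 hashing, and in-memory chunk store are assumptions chosen for brevity, not how any particular product (HP StoreOnce included) works.

    import hashlib

    CHUNK_SIZE = 4096  # fixed-size chunking for simplicity; real products vary chunk sizes

    def dedupe(data: bytes, store: dict) -> list:
        """Split the stream into chunks, store each unique chunk once, return the recipe."""
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # duplicate chunks add nothing to the store
            recipe.append(digest)
        return recipe

    def rehydrate(recipe: list, store: dict) -> bytes:
        """Rebuild the original stream; only possible where the chunk store is reachable."""
        return b"".join(store[digest] for digest in recipe)

    # A highly repetitive backup stream shrinks to a handful of stored chunks...
    store = {}
    backup = (b"A" * CHUNK_SIZE) * 100 + (b"B" * CHUNK_SIZE) * 100
    recipe = dedupe(backup, store)
    print(len(backup), "bytes in;", sum(len(c) for c in store.values()), "bytes stored")

    # ...but moving the data to a site that cannot read this store and recipe format
    # means shipping the full, rehydrated stream again.
    assert rehydrate(recipe, store) == backup

Federating dedupe, as described below, amounts to making the chunk store and recipe format consistent across sources and targets so that the rehydration step can be skipped when data moves.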

A few vendors have introduced new dedupe technologies but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage system-specific dedupe means that data must rehydrate whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

 Federating dedupe across systems goes a long way to solve that problem. HP StoreOnce extends consistent dedupe across the infrastructure. Only HP implements the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and source and target machines without the need to rehydrate.

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, how HP is achieving its vision of federated dedupe with StoreOnce – and what HP’s StoreOnce VSA announcement means to backup service providers, enterprise ROBO, and SMB customers.

Publish date: 06/20/13
Profiles/Reports

Dell AppAssure 5: Unified Platform for Business Resiliency

Backup applications with large user bases have been vendor cash cows because their customers are reluctant to change such deeply embedded products. As long as the backup worked, it was out of sight and out of mind. But the field is rapidly changing.

The push to virtualize applications saw traditional backup foundering. Traditional backup in the virtual arena suffered from heavy operational overhead at the server, application host, network, and storage levels. The growing number of VMs and the volume of virtualized data had a serious impact on storage resources. For example, each VMDK file represented an entire VM file system image, typically at least 2GB in size. These file sizes created issues for bandwidth, monitoring, and storage resources.

In response, some vendors developed innovative virtual backup products. They made virtual backup much more resource-efficient and easily manageable. Increased performance shrank backup window requirements, provided effective RPO and RTO, simplified the backup process and improved recovery integrity.  These tools changed the virtual data protection landscape for the better.

However, many of these startups offered limited solutions that only supported a single type of hypervisor and several physical machines. This left virtual and physical networks essentially siloed – not to mention the problem of multiple point products creating even more silos within both environments. Managing cross-domain data protection using a variety of point products became inefficient and costly for IT.

Traditional backup makers also scrambled to add virtualization backup support and succeeded to a point, but only a point. Their backup code base was written well before the mass appearance of the cloud and virtualization, and retrofitting existing applications only went so far to provide scalability and integration. There was also the inability to solve a problem that has plagued IT since the early days of backup tape – restore assurance. It has always been risky to find out after the fact that the backup you depended on is not usable for recovery. With data sets doubling every 18 months, the risk of data loss has significantly risen.

More modern backup solves some of these problems but causes new ones. Modern backup offers automated scheduling, manual operations, policy setting, multiple types of backup targets, replication schemes, application optimization, and more. These are useful features, but they are also costly and resource-hungry: roughly 30% of storage costs go to IT operations alone. Another problem with these new features is their complexity. It is difficult to optimize and monitor the data protection environment, leading to backup or recovery job failure rates conservatively estimated at about 20%.

In addition, most data protection products offer average-to-poor awareness and integration into their backup tape and disk targets. This results in difficulty in setting and testing Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for business applications. The last thing that IT wants is to cripple application recovery, but it is challenging to set meaningful RTO and RPO settings across multiple environments and applications, and extremely difficult to test them.
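To put those objectives in concrete terms: a 15-minute RPO means the most recent usable replica can never be more than 15 minutes old, and a one-hour RTO means a full restore plus application restart must complete within 60 minutes. The sketch below shows the kind of simple check a team might script against measured values; the application names, thresholds, and measurements are hypothetical and not drawn from any product discussed here.

    from dataclasses import dataclass

    @dataclass
    class ProtectionTarget:
        app: str
        rpo_minutes: float   # maximum tolerable data loss, expressed as replica age
        rto_minutes: float   # maximum tolerable downtime during a restore

    def check(target: ProtectionTarget,
              replica_age_minutes: float,
              restore_drill_minutes: float) -> list:
        """Compare declared objectives against what was actually measured in a drill."""
        findings = []
        if replica_age_minutes > target.rpo_minutes:
            findings.append(f"{target.app}: RPO missed ({replica_age_minutes:.0f} min of "
                            f"exposure vs {target.rpo_minutes:.0f} min allowed)")
        if restore_drill_minutes > target.rto_minutes:
            findings.append(f"{target.app}: RTO missed ({restore_drill_minutes:.0f} min to "
                            f"recover vs {target.rto_minutes:.0f} min allowed)")
        return findings

    # Example: a 15-minute RPO / 60-minute RTO target checked against one restore drill.
    erp = ProtectionTarget("ERP", rpo_minutes=15, rto_minutes=60)
    for finding in check(erp, replica_age_minutes=45, restore_drill_minutes=50):
        print(finding)

The hard part in practice is not the arithmetic but gathering trustworthy measurements per application across mixed environments, which is exactly the testing burden described above.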
Even newer VM backup products are inadequate for modern enterprise data centers with physical and virtual layers running critical applications. Combine this with complex and mixed IT environments and it presents a very serious challenge for IT professionals charged with protecting data and application productivity.

What we are seeing now is next-generation data protection that protects both virtual and physical environments in one flexible platform. Dell AppAssure is a leading pioneer in this promising field. AppAssure is rewriting the data protection playbook, moving from limited point products to a highly agile data center protection platform with continual backup, instantaneous restore, backup assurance, and a host of additional benefits.
 

Publish date: 06/27/13
news

Goodbye LUN technology, you served us well

The era of LUNs and volumes, as we have known them for decades in the data storage industry, is quietly coming to an end. And if you ask me, it's for all the right reasons, even if storage administrators may feel threatened by the change.

  • Premiered: 10/25/13
  • Author: Arun Taneja
  • Published: Tech Target: Search Storage
Topic(s): LUN, Storage, Volumes, server, replication, Compression, VM, Virtualization, VMWare, Hyper-V, Gridstore, Nimble Storage, Nutanix, Scale Computing, SimpliVity, Tintri, HP, StoreVirtual VSA, Virtual storage, VSA, Virsto, QoS
news

External storage might make sense for Hadoop

Using Hadoop to drive big data analytics doesn't necessarily mean building clusters of distributed storage -- good old external storage might be a better choice.

  • Premiered: 02/28/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Storage
Topic(s): Hadoop, Big Data, analytics, SAN, NAS, scale-out, HDFS, MapReduce, DAS, RAID, replication, Sentry, Accumulo, scalability
Profiles/Reports

Data Defined Storage: Building on the Benefits of Software Defined Storage

At its core, Software Defined Storage decouples storage management from the physical storage system. In practice, Software Defined Storage vendors implement the solution using a variety of technologies: orchestration layers, virtual appliances, and server-side products are all in the market now. They are valuable for storage administrators who struggle to manage multiple storage systems in the data center as well as remote data repositories.

What Software Defined Storage does not do is yield more value for the data under its control, or address global information governance requirements. To that end, Data Defined Storage yields the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we will explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits and also drives reduced risk and added business value for distributed unstructured data with Data Defined Storage. 

Publish date: 03/17/14
Profiles/Reports

Redefining the Economics of Enterprise Storage

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But enterprise storage has always come at an exceptional price, which has made it unobtainable for many use cases and customers.

Most recently, Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that delivers proven Dell Compellent technology on an Intel-based platform in an all-new form factor. The SC4020 is also the densest Compellent product ever: an all-in-one storage array that includes 24 drive bays and dual controllers in only 2 rack units of space. While the Intel-powered SC4020 has more modest scalability than current Compellent products, this array marks a radical shift in the pricing of Dell’s enterprise technology and aims to open up Dell Compellent storage technology to an entire market of smaller customers, as well as large-customer use cases where enterprise storage was previously too expensive.

Publish date: 05/05/14