Taneja Group | cluster

Items Tagged: cluster

news

Scale Computing weighs in with converged storage, compute, VM cluster

SAN FRANCISCO -- Scale Computing today unveiled the HC3, a scale-out converged storage system combining compute, server virtualization and capacity in one box.

  • Premiered: 08/27/12
  • Author: Taneja Group
  • Published: TechTarget: SearchVirtualStorage.com
Topic(s): Scale Computing, VM, cluster, converged storage, Scale
Profiles/Reports

Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don’t come around very often. Over the last two decades we have seen very few show up; ZFS was introduced in 2004, Isilon’s OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason behind this: the creation of a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years, we have seen seismic changes in the data center and storage industry. Today’s data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today’s data center runs virtual machines, and it is the exception when there is a one-to-one correlation between a server and storage. Databases have grown beyond the capacity of the largest single disk drives. Huge amounts of data are ingested by big data applications and social media. Data must be retained to meet government and corporate policy requirements. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASIC chips in power and performance, and new software development and delivery methodologies such as “agile” have become mainstream. In the past, we were concerned with how to manage the underlying storage; now we are concerned with how to manage the huge amount of data we have stored.

What could be accomplished if a new file system were created from the ground up to take advantage of the latest advances in technology and, more importantly, was built by an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with the Qumulo Core data-aware scale-out NAS software, powered by its new file system, QSFS (Qumulo Scalable File System). Qumulo’s three co-founders – Peter Godman, Neal Fachan, and Aaron Passey – were the primary inventors of OneFS, and they assembled some of the brightest minds in the storage industry to create a modern file system designed to support the requirements of today’s data center, not the data center of decades ago.

From day one, Qumulo embraced an agile software development and release model, which allows it to push out fully tested and certified releases every two weeks. New features and bug fixes can be introduced seamlessly as soon as they are ready – not on an arbitrary 6-, 12- or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed to work with HDD devices that could produce 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Now flash is prevalent in the industry, and commodity flash devices can produce up to 250,000 IOPS. Traditional file systems were optimized for slower HDDs – not to take advantage of the lower latency and higher performance of today’s solid-state drives. Many traditional file systems and storage arrays have devised ways to “bolt on” SSDs to boost performance. However, their underlying architecture was designed around the capabilities of yesterday’s HDDs, not the capabilities of today’s flash technology.
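
To put those numbers in perspective, here is a quick back-of-the-envelope calculation in Python, using only the figures cited above (illustrative only):

    # Rough spindle math from the figures above (illustrative only).
    hdd_iops = 150            # a typical hard disk drive
    short_stroked_iops = 300  # roughly double, at the cost of usable capacity
    ssd_iops = 250_000        # a commodity flash device

    print(f"HDDs needed to match one SSD: {ssd_iops / hdd_iops:,.0f}")            # ~1,667
    print(f"Short-stroked HDDs needed:    {ssd_iops / short_stroked_iops:,.0f}")  # ~833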

An explosion in scale-out, large-capacity file systems has empowered enterprises to do very interesting things, but it has also created some very interesting problems. Even one of the most basic questions – how much space the files on a file system are consuming – is very complicated to answer on first-generation file systems. Other questions that are difficult to answer without awareness of the data itself include who is consuming the most space, and which clients, files or applications are consuming the most bandwidth. Second-generation file systems need to be designed to be data-aware, not just storage-aware.
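
To see why such questions are expensive on a first-generation file system, consider the brute-force alternative: with no built-in aggregates, answering “how much space does this tree consume?” means re-walking every file. The Python sketch below shows the general idea of per-directory aggregation that a data-aware file system maintains continuously – a minimal illustration of the concept only, not Qumulo's actual implementation:

    import os

    def build_aggregates(root):
        """Walk a tree once, computing per-directory byte totals bottom-up.

        A data-aware file system keeps aggregates like these current as data
        changes, so space-accounting queries are answered instantly rather
        than by an expensive full tree walk.
        """
        totals = {}
        # topdown=False visits children before parents, so every directory
        # can simply add in its subdirectories' already-computed totals.
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            size = sum(
                os.path.getsize(os.path.join(dirpath, f))
                for f in filenames
                if os.path.isfile(os.path.join(dirpath, f))
            )
            size += sum(totals.get(os.path.join(dirpath, d), 0) for d in dirnames)
            totals[dirpath] = size
        return totals

    totals = build_aggregates(".")
    print(f"Total bytes under '.': {totals['.']:,}")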

To reach performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. ASICs can speed up some storage-related operations; however, that benefit comes at a heavy price – both in dollars and in flexibility. It can take years and millions of dollars to embed new features in an ASIC. With powerful and relatively inexpensive x86 processors, new features can be introduced quickly via software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel Xeon E5-2600 v3 has up to 18) and more advanced features.

When Qumulo approached us to take a look at the world’s first data-aware, scale-out enterprise-grade storage system, we welcomed the opportunity. Qumulo’s new storage system is not based on an academic project or designed around an existing storage system; it has been designed and built on entirely new code that the principals at Qumulo developed based on what they learned in interviews with more than 600 storage professionals. What they came up with after these conversations was a new data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would work in today’s data center.

Publish date: 03/17/15
news

New scale-out NexentaEdge storage supports object, block

Nexenta's new scale-out object storage product supports object and block services and inline deduplication across a petabyte-scale cluster.

  • Premiered: 05/18/15
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Nexenta, NexentaEdge, scale-out, Storage, cluster, object, Block, Inline, Deduplication, OpenStack, Amazon S3, Amazon, EMC, IBM, NetApp, Cleversafe, Caringo, Exablox, Scality, Ceph, Red Hat, ZFS, API, iSCSI, object storage, Jeff Byrne
news

Big data analytics applications impact storage systems

Analytics applications for big data have placed extensive demands on storage systems – demands that Mike Matchett says often require new or modified storage structures.

  • Premiered: 09/03/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Mike Matchett, Big Data, analytics, Storage, Primary Storage, scalability, Business Intelligence, BI, AWS, Amazon AWS, S3, HPC, High Performance Computing, High Performance, ETL, HP Haven, HP, Hadoop, Vertica, convergence, converged, IOPS, Capacity, latency, scale-out, software-defined, software-defined storage, SDS, YARN, Spark
news

VMware vSphere 6 release good news for storage admins

VMware's vSphere 6 release shows that the vendor is aiming for a completely software-defined data center with a fully virtualized infrastructure.

  • Premiered: 10/05/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMware, VMware vSphere, vSphere, vSphere 6, software-defined, Software-Defined Data Center, SDDC, Virtualization, virtualized infrastructure, VSAN, VVOLs, VMware VVOLs, Virtual Volumes, vMotion, high availability, Security, scalability, Data protection, replication, VMware PEX, Fault Tolerance, Virtual Machine, VM, Provisioning, Storage Management, SLA, 3D Flash, FT, vCPU, CPU
news

The New Era of Secondary Storage HyperConvergence

The rise of hyperconverged infrastructure platforms has driven tremendous change in the primary storage space – perhaps even greater than the move from direct-attached to networked storage in decades past.

  • Premiered: 10/22/15
  • Author: Jim Whalen
  • Published: Enterprise Storage Forum
Topic(s): Storage, secondary storage, Primary Storage, hyperconverged, hyperconverged infrastructure, hyperconvergence, DR, Disaster Recovery, SATA, RTO, RPO, Data protection, DP, Virtualization, Snapshots, VM, Virtual Machine, Disaster Recovery as a Service, DRaaS, DevOps, Hadoop, cluster, Actifio, Zerto, replication, Data Domain, HP, 3PAR, StoreServ, StoreOnce
Resources

Optimizing Big Data Clusters in Production - Performance, Capability, and Cost

Come join us as we learn how to tackle and manage big data application performance. First, Taneja Group Sr. Analyst Mike Matchett will present his take on how enterprise IT is now being challenged to support big data applications in real production environments. He'll discuss why too many enterprises haven't been as successful as they should be in taking advantage of their big data opportunities – in many cases losing out to competitors. He'll explore what agile IT/DevOps really needs to do not only to host big data effectively, but to deliver top-notch, consistent performance at the lowest infrastructure cost.
Then Sean Suchter, co-founder and CEO of Pepperdata, will present their compelling approach to solving big data cluster performance challenges. He'll demonstrate how Pepperdata's dynamic run-time optimizations can guarantee consistent performance SLAs in a shared multi-tenant Hadoop cluster. Because Pepperdata delivers detailed visibility into Hadoop cluster activity, the software is also invaluable for cluster troubleshooting, reporting/chargeback, capacity planning, and other management and optimization requirements. With Pepperdata, IT can now effectively, efficiently, and reliably support all the business-empowering big data applications of an organization. This webcast will be 45 minutes with time reserved for Q&A.

Speakers:
Mike Matchett - Senior Analyst & Consultant; Taneja Group (host)
Sean Suchter - Co-founder and CEO, Pepperdata

  • Premiered: 11/18/15
  • Location: OnDemand
  • Speaker(s): Mike Matchett, Taneja Group; Sean Suchter, Pepperdata
Topic(s): Mike Matchett, Pepperdata, Big Data, Hadoop, cluster, Optimization
Profiles/Reports

Multiplying the Value of All Existing IT Solutions

Decades of constantly advancing computing solutions have changed the world in tremendous ways, but interestingly, the IT folks running the show have long been stuck with only piecemeal solutions for managing and optimizing all that blazing computing power. Sometimes it seems like IT is a pit crew servicing a modern racing car with nothing but axes and hammers – highly skilled but hampered by their legacy tools.

While that may be a slight exaggeration, there is a serious lack of interoperability – or opportunity to create joint insight – between the highly varied perspectives that individual IT tools produce (even if each is useful for its own purpose). There has simply never been a widely adopted standard for creating, storing or sharing system management data, much less a cross-vendor way to holistically merge heterogeneously collected or produced management data together – even for the benefit of harried and often frustrated IT owners who might own dozens or more differently sourced system management solutions. That is, until now.

OpsDataStore has brought the IT management game to a new level with an easy-to-deploy, centralized, intelligent – and big data enabled – management data “service.” It readily ingests the lowest-level, fastest-streaming management data from a plethora of tools (several ready to go at GA, but easily extended to any data source), automatically and intelligently relates data from disparate sources into a single unified “agile” model, directly provides fundamental visualization and analysis, and can then serve that unified, related data back out to newly comprehensive downstream management workflows. OpsDataStore drops in and serves as the new systems management “nexus” between formerly disparate vendor and domain management solutions.

If you have ever worked in IT, you've no doubt written scripts, fiddled with logfiles, created massive spreadsheets, or otherwise attempted to stitch together some larger coherent picture by marrying and merging data from two (or 18) different management data sources. The more sources you have, the more the problem (or opportunity) grows – non-linearly. OpsDataStore promises to fill this gap completely, enabling IT to automatically multiply the value of its existing management solutions.
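
To appreciate what that stitching has historically involved, here is a minimal Python sketch of hand-normalizing records from two differently shaped management data sources into one common shape – exactly the kind of glue code a unified management data service is meant to make unnecessary. All source formats and field names here are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Metric:
        # One unified record shape for all management data sources.
        source: str
        entity: str       # e.g. a VM, host, or volume name
        name: str         # e.g. "cpu_used_pct"
        value: float
        timestamp: float

    # Hypothetical per-tool record shapes, as different tools might emit them.
    def from_hypervisor(row):
        return Metric("hypervisor", row["vm"], row["counter"], row["val"], row["ts"])

    def from_array(row):
        return Metric("array", row["volume"], row["stat"], float(row["reading"]), row["epoch"])

    unified = [
        from_hypervisor({"vm": "web01", "counter": "cpu_used_pct", "val": 87.5, "ts": 1449187200.0}),
        from_array({"volume": "lun7", "stat": "read_latency_ms", "reading": "4.2", "epoch": 1449187200.0}),
    ]
    for m in unified:
        print(m)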

Publish date: 12/03/15
news

Can your cluster management tools pass muster?

The right designs and cluster management tools ensure your clusters don't become a cluster, er, failure.

  • Premiered: 11/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): cluster, Cluster Management, Cluster Server, Storage, Cloud, Public Cloud, Private Cloud, Virtual Infrastructure, Virtualization, hyperconvergence, hyper-convergence, software-defined, software-defined storage, SDS, Big Data, scale-up, CAPEX, IT infrastructure, OPEX, Hypervisor, Migration, QoS, Virtual Machine, VM, VMware, VMware VVOLs, VVOLs, Virtual Volumes, cloud infrastructure, OpenStack
Profiles/Reports

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized Web 2.0 companies are taking great advantage of foundational big data solutions they themselves created (e.g., Hadoop), most traditional enterprise IT shops are still thinking about how to practically deploy their first business-impacting big data applications – or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may yet be just out of reach for most organizations – especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to manage optimally in production – especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if there weren’t a real high-end skills and expertise gap for most, there still isn’t any practical way for additional experts to tweak and tune mixed Hadoop workload environments to meet production performance SLAs.

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to more interactive user queries and “real-time” business process applications. Live performance matters as much in big data now as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.
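
Pepperdata's actual mechanics are proprietary, but the general class of dynamic run-time optimization it performs can be sketched as a feedback loop: measure a high-priority workload against its target, then squeeze or relax lower-priority tenants accordingly. The toy Python below is purely illustrative – every name and number in it is hypothetical, and it is not Pepperdata's implementation:

    import random

    TARGET_MBPS = 100.0  # throughput the priority job must sustain (its "SLA")

    def measure_priority_throughput():
        # Stand-in for real cluster telemetry.
        return random.uniform(60.0, 140.0)

    def cap_background_share(share):
        # Stand-in for an actuator that limits low-priority tenants' resources.
        print(f"background tenants capped at {share:.0%} of cluster resources")

    share = 0.5
    for _ in range(5):
        tput = measure_priority_throughput()
        if tput < TARGET_MBPS:              # SLA miss: squeeze background work
            share = max(0.1, share - 0.1)
        else:                               # headroom: relax the cap
            share = min(0.9, share + 0.1)
        print(f"priority job at {tput:.0f} MB/s;", end=" ")
        cap_background_share(share)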

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what/when for troubleshooting, chargeback and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.

Publish date: 12/17/15
news

Hyperconvergence for ROBOs and the Datacenter

Remote/branch office management is more important, and complicated, than ever. Here are some survival tips.

  • Premiered: 12/17/15
  • Author: Mike Matchett
  • Published: Virtualization Review
Topic(s): ROBO, hyperconvergence, hyperconverged, converged, convergence, Virtualization, Data Center, Datacenter, Compute, cluster, clusters, SAN, Storage, WANO, WAN Optimization, Cloud, cloud gateway, Backup, Scale, VCE, Dell, HP, IBM, SimpliVity, Nutanix, Pivot3, Scale Computing, CAPEX, OPEX, TCO
news

NetApp acquisition of SolidFire expands all-flash portfolio

NetApp spent $870 million to buy SolidFire and scrapped work on FlashRay, its storage system designed exclusively for flash. FlashRay never made it to general availability.

  • Premiered: 12/23/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): NetApp, SolidFire, FlashRay, Flash, SSD, Storage, all-flash, AFA, all flash array, FAS, web-scale, cluster, clustered data, ONTAP, NoSQL, Hadoop, DevOps, Cloud, EMC, XtremeIO, scale-out
news

Concurrent app management tools work on Hadoop and Spark

If Hadoop and Spark are to sneak into the enterprise, they will need to be manageable. With Driven, Concurrent Inc. takes a stab at the problem.

  • Premiered: 12/09/15
  • Author: Taneja Group
  • Published: TechTarget: Search Data Management
Topic(s): Hadoop, Spark, Driven, Concurrent, manageability, Big Data, Performance, Performance Management, Mike Matchett, Hive, MapReduce, SLA, service level agreement, software, high-fidelity, HiFi, cluster, Pepperdata, Oracle, IBM, CA
news

Mobile gaming company plays new Hadoop cluster management card

Chartboost, which operates a platform for mobile games, turned to new cluster management software in an effort to overcome problems in controlling the use of its Hadoop processing resources.

  • Premiered: 01/05/16
  • Author: Taneja Group
  • Published: TechTarget: Search Data Management
Topic(s): Chartboost, mobile, cluster, Cluster Management, Hadoop, processing, data processing, analytics, Big Data, MapReduce, Hive, Spark, Optimization, Cloudera, AWS, Amazon, Cloud, YARN, Pepperdata, Memory, CPU, Application, Concurrent, SLA, service-level agreement, HBase, application performance, application performance management, Mike Matchett
news

Evaluating Data Protection for Hyperconverged Infrastructure

Hyperconvergence is a still-evolving trend, and the number of vendors in the space is making the evaluation of hyperconverged infrastructure complex. One criterion to consider in any infrastructure review is data protection.

  • Premiered: 02/02/16
  • Author: Jim Whalen
  • Published: InfoStor
Topic(s): Jim Whalen, Storage, hyperconverged, hyperconvergence, hyperconverged infrastructure, Data protection, DP, Backup, replication, DR, Disaster Recovery, SimpliVity, Deduplication, Compression, WAN, WANO, WAN Optimization, VM, Virtual Machine, VM-centric, VM-centricity, Compute, Networking, Hypervisor, Virtualization, scale-out, IOPS, Pivot3, Gridstore, converged
news

Plexistor debuts with software to converge memory and storage

Plexistor claims its 'software-defined memory' platform lets in-memory databases and traditional enterprise workloads run without dedicated compute-storage clusters.

  • Premiered: 01/28/16
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Plexistor, software-defined, Software-Defined Memory, SDM, In-Memory, cluster, Compute, Storage, nonvolatile memory, Primary Storage, high capacity, SATA, NVMe, Flash, scalability, scalable, Low latency, Performance, MongoDB, Kafka, Cassandra, NVDIMM, Arun Taneja, Data Center, datacenter management, DCM, tiering, Virtualization
news

Datrium DVX storage takes novel approach for VMware, flash

The Datrium DVX storage system for VMware virtual machines is generally available and drawing interest with its server-powered architecture, flash-boosted performance and low cost.

  • Premiered: 02/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Datrium, DVX, Storage, VMware, Flash, flash storage, SSD, VM, Performance, flash performance, VMware vSphere, vSphere, SAN, Virtual Machine, LUN, Hypervisor, Deduplication, Compression, RAM, SAS, Capacity, virtual desktop, Virtual Desktop Infrastructure, VDI, vCenter, CPU, ESX, all-flash, all flash array, AFA
news

Caringo Swarm upgraded with search engine for data analysis

Caringo Swarm now has a built-in search engine for cloud object storage data analysis. Caringo also added file-level protocols and a new management portal to the software.

  • Premiered: 02/16/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Caringo, Caringo Swarm, Cloud, cloud object storage, Storage, data analytics, object versioning, Metadata, cluster, indexing, Active Directory, Security, Amazon S3, Amazon, Simple Storage Service, NFS, Arun Taneja, object-scale, FileFly, Windows, NTFS, object storage
news

Scale-out architecture and new data protection capabilities in 2016

What are the next big things for the data center in 2016? Applications will pilot the course to better data protection and demand more resources from scale-out architecture.

  • Premiered: 02/17/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): scale-out, Data protection, DP, scale-out architecture, analysis, Data Center, data lake, Hadoop, hadoop cluster, cluster, Backup, Talena, HPE, 3PAR, flat backup, Snapshot, Snapshots, StoreOnce, Oracle, Oracle ZDLRA, ZDLRA, Zero Data Loss Recovery Appliance, converged, convergence, Cloud, backup server, Virtualization, Storage, Big Data, Lustre