Taneja Group | StoreServ
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: StoreServ

Profiles/Reports

HP 3PAR StoreServ Double Density: Delivering twice the VM density versus traditional arrays

Why does storage performance matter? Few storage administrators have not witnessed the unhappy moment when a lack of storage performance brought an application to its knees. But quantifying how much storage performance matters, beyond serving as insurance against an application-crippling moment, is a tricky proposition.

One way is in terms of how storage performance determines the efficiency of the infrastructure. When all else is equal and well tuned – hypervisors, servers, networks – then storage becomes a precious commodity that can determine just how many workloads or applications a given infrastructure can support, and therefore how efficiently it can operate at scale.

When it comes to virtualization, this workload density has serious impacts, and fortunately it is even easier to assess. A number of years ago, Taneja Group started out testing workload density among competing hypervisors, and we labeled it VM density – a measure of how many VMs similar systems can run without compromising performance or usability. Our findings clearly indicated how big an impact a small difference in VM density can have. When poor VM density makes it necessary to add more servers and hypervisors, the server hardware is costly enough, but licensing for tools like the vSphere suites can add many thousands of dollars and dwarf the hardware costs. Meanwhile, more servers and hypervisors bring more complexity and management overhead, along with operational data center costs (power, cooling, floor space). The consequences can easily run to tens of thousands of dollars per server. At large scale, superior VM density can add up to hundreds of thousands or even millions of dollars for a business.
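
The cost arithmetic above can be made concrete with a small sketch. All figures below (VM counts, per-server costs) are hypothetical illustrations, not numbers from our testing:

```python
import math

def infrastructure_cost(total_vms, vms_per_server, cost_per_server):
    """Servers needed (rounded up) and total cost to host a VM population."""
    servers = math.ceil(total_vms / vms_per_server)
    return servers, servers * cost_per_server

# Hypothetical: 1,000 VMs at $25,000 per server (hardware, hypervisor
# licensing, and operational costs such as power, cooling, floor space).
servers_low, cost_low = infrastructure_cost(1000, vms_per_server=20,
                                            cost_per_server=25_000)
servers_high, cost_high = infrastructure_cost(1000, vms_per_server=40,
                                              cost_per_server=25_000)

print(servers_low, cost_low)    # 50 servers, $1,250,000
print(servers_high, cost_high)  # 25 servers, $625,000
print(cost_low - cost_high)     # $625,000 saved by doubling VM density
```

Even a modest density difference shifts the server count, and each additional server carries its own licensing and operational burden, which is how the totals reach the magnitudes described above.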

We’ve long suggested that storage performance is one of the biggest contributors to VM density, but few vendors have been brave enough to step forward and declare that their own storage system can take VM density farther than anyone else’s. Well, times have changed.

Approximately six months ago, HP made a bold promise with their 3PAR storage systems – guaranteeing that customers moving to HP 3PAR storage from traditional legacy storage will double the VM density of their existing server infrastructure. Better yet, HP’s promise wasn’t about more spindles, but rather the efficiency and power of their 3PAR StoreServ storage controller architecture – they made this promise even when the customer’s old storage system contained the same disk spindles. Our minds were immediately beset with questions: could a storage system replacement really double the workload density of most virtual infrastructures? We knew from experience that most customers are indeed IO constrained – we’ve had several hundred conversations with customers fighting virtual infrastructure performance problems where, at the end of the day, the issue was storage performance. But a promise to double the density of these infrastructures is aggressive indeed.

It goes without saying that at Taneja Group we were more than eager to put this claim to the test. We approached HP in mid-2012 with a proposal to do just that by standing up a 3PAR StoreServ storage array against a similar-class, traditional-architecture storage array. The test proved all the more interesting when HP provided us with only the smallest model in their 3PAR StoreServ product line at the time (an F200) against a competitive system made up of the biggest controllers available in another popular but traditional-architecture mid-range array. Moreover, that traditional system, while first entering the market in 2007/2008, could still be purchased at the time of testing, and is only on the edge of representing the more dated systems envisioned by HP’s Double Density promise.

Our Findings: After several months in 2012 with this equipment available for our exclusive use, we reached a clear and undeniable conclusion: HP 3PAR StoreServ storage is definitely capable of doubling VM density versus typical storage systems.

Publish date: 12/03/12
Profiles/Reports

HP 3PAR StoreServ 7000 - Enterprise for the Mid-range

Over the past couple of years the mid-range storage market has become a hotbed of ongoing innovation. We’ve watched products enter the market from all sides – from the largest vendors to the newest, hottest startups. Yet many of these apparently pioneering announcements leave the mid-market customer facing ongoing challenges. A storage system is a foundational element of the IT infrastructure on top of which the rest of a business’s IT is deployed, and for many customers even the cutting-edge solutions just coming to market fall short in one or more critical dimensions. Typically those shortcomings arise around critical elements of scalability or resiliency, forcing the organization into complex workarounds to compensate. Customers too often settle for capacity at the cost of resiliency, and in turn weave together multiple systems and parts – replication, management, even heterogeneous virtualization tools – to try to offset the shortcomings. Other times the mid-range customer is undone simply by capabilities that bring with them far more complexity than the mid-range customer can take on. Even ensuring that a system can perform sufficiently and scale with demand often requires complex underlying configuration semantics and significant expertise in managing disk groupings, extents, RAID levels, and more.

In reality, the mid-market demands enterprise capability – performance, resiliency, and features – but with an innovative level of simplification and a unique level of adaptability. Moreover, the mid-market is tremendously large, with a wider diversity of customers and needs than the large enterprise. Building the biggest possible storage system has long satisfied the sweet spot of enterprise customers, but in the mid-range one customer may need something smaller than a vendor can profitably build, while the next needs something bigger. More importantly, each of those customers may grow and change quickly, and they need a storage system that can do likewise. Offering ten different products that each must be periodically replaced is not a sufficient answer.

When we survey mid-range storage systems, we look at five key criteria (Table 1) that represent this intersection of challenges. Together they make a complex recipe for established and newly arriving vendors alike. But by far the bigger part of the challenge rests in carefully balancing that recipe without compromising any particular dimension, and then delivering the right price point to suit the mid-market of SMB and SME customers. An enterprise-class mix really requires an enterprise-class architecture, but such an architecture can rarely be delivered at the right price points.

In an unusual twist, HP recently announced a new mid-market storage system built on 3PAR technology, preserving the same foundational enterprise-class architecture found beneath the other HP 3PAR storage arrays that have long supported some of the largest IT infrastructures around. By preserving this architecture, advancing their reputation for ease of use even further, and then packaging features and capabilities for the mid-market, HP has set their sights on delivering just what the mid-market demands: enterprise class, but packaged for the mid-market customer. In this product in-depth, we’ll take a look at the underpinnings of the HP 3PAR StoreServ 7000 family, and how and whether it comes together to deliver against these fundamental mid-market criteria.

Publish date: 12/03/12
Profiles/Reports

Ensuring Business Continuity of SAN Storage With the HP 3PAR StoreServ 7000 Family

In this paper, we look at what business continuity means in a storage system, and then focus on the HP 3PAR StoreServ 7000 array, in which business continuity has been architected into the system from the ground up. We look at the features and capabilities in this system that collectively ensure a high level of business continuity.

Publish date: 12/11/12
Profiles/Reports

Why VM Density Matters: HP Innovation Delivers Validated "2x" Advantage

Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.

But the further challenge for IT investors now is to choose the virtualization solution that delivers the biggest bang for the buck. Unfortunately, the traditional cost-per-infrastructure metrics (server CPU, storage GB, et al.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized and cloudy world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. A key comparison metric between virtualization solutions is therefore virtual machine (VM) density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides better value.
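
The cost-per-application metric described above reduces to simple division. A minimal sketch, with all dollar figures and densities hypothetical:

```python
def cost_per_application(total_investment, hosts, vms_per_host):
    """Cost per application ~ total infrastructure investment / total VMs hosted."""
    return total_investment / (hosts * vms_per_host)

# Two solutions at the same $500,000 investment across 10 hosts;
# solution B sustains twice the VM density of solution A.
a = cost_per_application(500_000, hosts=10, vms_per_host=15)
b = cost_per_application(500_000, hosts=10, vms_per_host=30)

print(round(a, 2))  # 3333.33 per VM
print(round(b, 2))  # 1666.67 per VM
```

Holding the investment constant, doubling VM density halves the cost per application – which is why density, rather than raw hardware cost, is the comparison metric that matters here.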

As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and condenses IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. It is therefore critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.

Publish date: 07/31/13
news

HP says its all-flash array now matches fast disks on price

On Monday at its Discover conference in Las Vegas, HP announced updates to the all-flash HP 3Par StoreServ 7450 Storage array that can push its cost below $2 per usable gigabyte, according to the company.

  • Premiered: 06/10/14
  • Author: Taneja Group
  • Published: TechWorld
Topic(s): HP, SSD, StoreServ, 3PAR, Flash, Skyera, Kaminario, EMC, VNX, Deduplication, Arun Taneja
Profiles/Reports

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years integrated computing systems – the integration of compute, networking, and storage – have burst onto the scene and been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being integrated today include cloud, big data, virtualization, database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor touts one integrated system or another focused on solving a particular workload problem. The business benefits promised by these new systems fall into these key areas:

·         Implementation efficiency that accelerates time to realizing value from integrated systems

·         Operational efficiency through optimized workload density and an ideally right-sized set of infrastructure

·         Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together

·         Scale and agility efficiency unlocked through a repeatedly deployable building block approach

·         Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem marked a new offering designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to herald an entirely new level of agility around speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP’s latest advancement of these types of systems.

Publish date: 09/02/14
news

Figuring out the real price of flash technology

Sometimes comparing the costs of flash arrays is an apples-to-oranges affair -- interesting, but not very helpful.

  • Premiered: 11/04/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Solid State Storage
Topic(s): Mike Matchett, Flash, SSD, Storage, hybrid flash, Hybrid Array, TCO, latency, bandwidth, Protection, Data protection, Load DynamiX, Violin Memory, Pure Storage, NAND, HP, VDI, Virtual Desktop Infrastructure, Deduplication, Compression, EMC, XtremeIO, StoreServ, IBM, IBM FlashSystem, FlashSystem, Kaminario, Nimble Storage, Tegile
news / Blog

HP is making flash arrays mainstream for the enterprise

HP announced enhancements to their flagship 3PAR StoreServ 7000 and 10000 models that further improve the value proposition of turning to flash for all tier-1 workloads in almost any enterprise.

  • Premiered: 12/01/14
  • Author: Jeff Kato
Topic(s): HP Flash Storage Hybrid Array Hybrid Performance StoreServ
Profiles/Reports

Unified Storage Array Efficiency: HP 3PAR StoreServ 7400c versus EMC VNX 5600 (TVS)

The IT industry is in the middle of a massive transition toward simplification and efficiency in managing on-premises infrastructure at today’s enterprise data centers. In the past few years there has been a rampant onset of technology clearly focused on simplifying and radically changing the economics of traditional enterprise infrastructure. These technologies include public/private clouds, converged infrastructure, and integrated systems, to name a few. All of these technologies are geared to provide more efficient use of resources and take less time to administer, all at a reduced TCO. However, they all rely on the efficiency and simplicity of the underlying compute, network, and storage technologies. Oftentimes the overall solution is only as good as the weakest link in the chain, and the storage tier of the traditional infrastructure stack is often considered the most complex to manage.

This technology validation focuses on measuring efficiency and management simplicity by comparing two industry-leading mid-range external storage arrays configured for the unified storage use case. Unified storage has been a popular approach to storage subsystems that consolidates both file access and block access within a single external array, sharing the same precious drive capacity across both protocols simultaneously. Businesses value the ability to send server workloads down a high-performance, low-latency block protocol while still taking advantage of the simplicity and ease of sharing of file protocols to various clients. In the past, businesses would either set up a separate file server in front of their block array or buy completely separate NAS devices, possibly overbuying storage resources and adding complexity. Unified storage takes care of this by providing one storage device to manage for all business workload needs. In this study we compared the attributes of storage efficiency and ease of managing and monitoring an EMC VNX unified array versus an HP 3PAR StoreServ unified array. Our approach was to set up the two arrays side by side and record the actual complexity of managing each array for file and block access, per the documents and guides provided for each product. We also went through the exercise of sizing various arrays via publicly available configuration guides to see what the expected storage density efficiency would be for some typically configured systems.

Our conclusion was nothing short of astonishing. In the case of the EMC VNX2 technology, the approach to unification more closely resembles a hardware-packaging and management veneer than what would be expected of a second-generation unified storage system. HP 3PAR StoreServ, on the other hand, in its second generation of unified storage, has transitioned file protocol services from external controllers to completely converged block and file services within the common array controllers. In addition, all the data-path and control plumbing is completely internal, with no need to wire loopback cables between controllers. HP has also invested in a totally new management paradigm based on the HP OneView management architecture, which radically simplifies the administrative approach to managing infrastructure. After performing this technology validation we can state with confidence that the HP 3PAR StoreServ 7400c is 2X easier to provision, 2X easier to monitor, and up to 2X more data-density efficient than a similarly configured EMC VNX 5600.

Publish date: 12/03/14
news

A First Look at the HP StoreServ Management Console

Recently, I had a chance to spend some time working with the new HP StoreServ Management Console (SSMC) that was unveiled at the HP Discover EMEA show on Dec. 2, 2014. SSMC is the interface to HP’s unified (file and block) storage solution.

  • Premiered: 12/15/14
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): TBA HP TBA StoreServ TBA File Storage TBA Storage TBA NFS TBA HP Discover EMEA TBA Block Storage TBA Performance TBA Capacity TBA 3PAR TBA object storage
news

General Purpose Disk Array Buying Guide

The disk array remains the core element of any storage infrastructure. So it’s appropriate that we delve into it in a lot more detail.

  • Premiered: 02/17/15
  • Author: Taneja Group
  • Published: Enterprise Storage Forum
Topic(s): Disk, Storage, VDI, virtual desktop, NAS, Network Attached Storage, VCE, VNX, HDS, EMC, HP, NetApp, Hitachi Data Systems, Hitachi, IBM, Dell, Syncplicity, VSPEX, IBM SVC, SAN Volume Controller, software-defined, Software Defined Storage, SDS, Storwize, Storwize V7000, replication, Automated Tiering, tiering, Virtualization, 3PAR
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems – the integration of compute, networking, and storage – have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the effort and time to custom-build infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads, such as private cloud, big data, virtualization, database, and desktop virtualization (VDI).

Traditionally these converged infrastructure systems have been built using a three-tier architecture, where compute, networking, and storage integrated at the rack level give businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Recently, a more modular approach to convergence has emerged that we term hyper-convergence. With hyper-convergence, the three-tier architecture is collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage with advanced data services all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
news

Flat backup grows into viable tool for data protection

Flat backups reduce license fees and improve recovery point objectives and recovery time objectives, making them useful for data protection.

  • Premiered: 07/08/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Data Backup
Topic(s): Backup, Data protection, DP, Recovery, Snapshots, flat backup, NetApp, HP, EMC, 3PAR, StoreServ, StoreOnce, VMAX, Data Domain, RPO, RTO, COW, ROW, Disaster Recovery, DR, WAN, Microsoft, Oracle, SAP, SnapProtect, RMC, Recovery Manager Central, ProtectPoint, Virtualization, VMWare
Profiles/Reports

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now triggering IT and business to rethink many practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you account for the fact that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 06/06/16
Profiles/Reports

Free Report - Better Together: HP 3PAR StoreServ Storage and StoreOnce System Opinion

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now triggering IT and business to rethink many practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you account for the fact that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HP 3PAR StoreServ Storage, HP StoreOnce System backup appliances, and HP StoreOnce Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 09/25/15
news

The New Era of Secondary Storage HyperConvergence

The rise of hyperconverged infrastructure platforms has driven tremendous change in the primary storage space, perhaps even greater than the move from direct attached to networked storage in decades past.

  • Premiered: 10/22/15
  • Author: Jim Whalen
  • Published: Enterprise Storage Forum
Topic(s): Storage, secondary storage, Primary Storage, hyperconverged, hyperconverged infrastructure, hyperconvergence, DR, Disaster Recovery, SATA, RTO, RPO, Data protection, DP, Virtualization, Snapshots, VM, Virtual Machine, Disaster Recovery as a Service, DRaaS, DevOps, Hadoop, cluster, Actifio, Zerto, replication, Data Domain, HP, 3PAR, StoreServ, StoreOnce
news

HPE gives 3PAR 3D NAND, Oracle acceleration, XIV migration

HPE joined the 3D NAND bandwagon for 3PAR StoreServ flash arrays, added flash acceleration for Oracle databases with EMC VMAX and added a new StoreOnce lineup.

  • Premiered: 11/17/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): HP, HPE, NAND, Oracle, EMC, VMAX, EMC VMAX, StoreOnce, 3PAR, StoreServ, Storage, IBM, Flash, SSD, Acceleration, Performance, Database Performance, HDS, Hitachi, Hitachi Data Systems, Backup, Capacity, Deduplication, flat backup, Microsoft, SQL
Profiles/Reports

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is that of storage buyers (and operators) looking for more functionality – and at the same time increased simplicity – from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage we see IT folks wanting to eliminate excessive layers in their complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately onerous and unnecessarily complex task that enterprise storage folks have had to face is that of backup and recovery. As a key trusted vendor of both data protection and storage solutions, we note that HPE continues to invest in producing better solutions in this space.

HPE has diligently been working towards integrating data protection functionality natively within their enterprise storage solutions starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array now turns into a single autonomous unit, becoming a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of 3rd party products and the inefficient indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a type of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric array-efficient snapshots.

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment – converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware-assisted, agentless backup and recovery for virtual machines. With HPE’s approach – offloading VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines – virtualized organizations gain on many fronts, including greater backup efficiency, reduced OPEX, greater data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We’ll also look at why HPE is in a unique position to offer this kind of “converging” market leadership, with a complete end-to-end solution stack including innovative research and development, sales, support, and professional services.

Publish date: 12/21/15
news / Blog

What NetApp's Acquisition of SolidFire Means

Last month NetApp acquired SolidFire for $870M. It is by far the biggest acquisition in NetApp’s history.