Taneja Group | HPE
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: HPE

news

HPE gives 3PAR 3D NAND, Oracle acceleration, XIV migration

HPE joined the 3D NAND bandwagon for its 3PAR StoreServ flash arrays, added flash acceleration for Oracle databases along with EMC VMAX migration, and introduced a new StoreOnce lineup.

  • Premiered: 11/17/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): HP, HPE, NAND, Oracle, EMC, VMAX, EMC VMAX, StoreOnce, 3PAR, StoreServ, Storage, IBM, Flash, SSD, Acceleration, Performance, Database Performance, HDS, Hitachi, Hitachi Data Systems, Backup, Capacity, Deduplication, flat backup, Microsoft, SQL
news

Nimboxx folds among growing crop of hyper-converged vendors

Nimboxx execs won't confirm the company's status, but signs point to the hyper-convergence startup pulling the plug on its technology 18 months after launch.

  • Premiered: 12/17/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Nimboxx, hyperconverged, hyperconvergence, Storage, KVM, Ethernet, Nutanix, SimpliVity, Dell, HPE, EMC
Profiles/Reports

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is storage buyers (and operators) looking for more functionality – and at the same time increased simplicity – from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage, we see IT folks wanting to eliminate excessive layers in the complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately most onerous and unnecessarily complex of those tasks is backup and recovery. We note that HPE, as a trusted vendor of both data protection and storage solutions, continues to invest in producing better solutions in this space.

HPE has been working diligently to integrate data protection functionality natively within its enterprise storage solutions, starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array turns into a single autonomous unit, becoming a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of third-party products and the inefficient, indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a kind of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric, array-efficient snapshots.

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment, converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware-assisted, agentless backup and recovery for virtual machines. With HPE’s approach, which offloads VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines, virtualized organizations gain on many fronts: greater backup efficiency, reduced OPEX, broader data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We’ll also look at why HPE is in a unique position to offer this kind of “converging” market leadership, with a complete end-to-end solution stack spanning innovative research and development, sales, support, and professional services.
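
The protection flow described here is easy to picture in code. Below is a minimal, hypothetical Python sketch of snapshot-driven, array-offloaded backup (quiesce the application, trigger an array-side snapshot, resume writes); every class and method name (ArrayClient, AppAgent, protect) is invented for illustration and does not reflect actual HPE Data Protector or 3PAR StoreServ APIs.

```python
# Hypothetical sketch of "software-defined data protection": backup software
# quiesces an application, then triggers an array-efficient snapshot instead
# of streaming data through a backup server. All names are illustrative.

import time


class ArrayClient:
    """Stand-in for a storage array's snapshot API (e.g., a REST wrapper)."""

    def create_snapshot(self, volume: str, label: str) -> str:
        snap_id = f"{volume}-{label}-{int(time.time())}"
        print(f"array: created snapshot {snap_id}")
        return snap_id


class AppAgent:
    """Stand-in for an application quiesce hook (e.g., a database freeze)."""

    def quiesce(self):
        print("app: flushing buffers, holding writes")

    def resume(self):
        print("app: resuming writes")


def protect(app: AppAgent, array: ArrayClient, volume: str) -> str:
    # Quiesce only long enough to take an application-consistent snapshot;
    # the copy work is offloaded to the array, so the window stays tiny.
    app.quiesce()
    try:
        snap_id = array.create_snapshot(volume, label="backup")
    finally:
        app.resume()
    return snap_id


if __name__ == "__main__":
    print("protected:", protect(AppAgent(), ArrayClient(), "vm_datastore_01"))
```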

Publish date: 12/21/15
news

What's the future of data storage in 2016?

Mike Matchett takes a closer look at the future of data storage technology in 2016 based on research from the Taneja Group.

  • Premiered: 01/06/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Storage, Data Storage, software-defined, Flash, SSD, Performance, Density, all-flash, all flash array, AFA, Hybrid, Hybrid Array, hybrid storage, OPEX, Auto-Tiering, Optimization, Capacity, CAPEX, QoS, EMC, Dell, HPE, NetApp, IBM, NAS, 3PAR, StoreOnce, Data Protector, Oracle, ZDLRA
Profiles/Reports

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products. These patchworks never quite address the many disparate needs of most organizations, nor are they simple or cost-effective to operate. Here is where we see HPE as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce and Data Protector. For anyone who has worked with either product alongside other vendors’ offerings, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to the market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HPE business unit components are in the “box”. Significantly, this approach elevates HPE from a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom entirely from its own world-class product lines.

In this report, we’ll examine first why HPE StoreOnce and Data Protector are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16
Profiles/Reports

HP Converges to Mine Big Value from Big Data

The promise of Big Data is engaging the imagination of corporations everywhere, even before they look to big data solutions to handle the accelerating pressures of proliferating new data sources or to manage tremendously increasing amounts of raw and unstructured data. Corporations have long competed on analytically extracting value from their structured transactional data streams, but they are now trying to differentiate with new big data applications that span multiple data types, run in business-interactive timeframes, and deliver more operationally focused (even transactional) value based on multiple types of processing.

This has led to some major rethinking about the best approach, or journey, to success with Big Data. As mainstream enterprises learn how and where their inevitable Big Data opportunities lie (and they all have them – ignoring them is simply not a viable strategy), they are also finding that wholesale adoption of a completely open source approach can lead to many unexpected pitfalls: data islands, batch-analytical timeframes, multiplying scope, and constrained application value. Most of all, IT simply cannot halt existing processes and transition overnight to a different core business model or data platform.

But big data is already here. Companies must figure out how to process different kinds of data, stay on top of their big data “deluge”, remain agile, mine value, and yet hopefully leverage existing staff, resources and analytical investments. Some of the important questions include:

1. How can they build the really exciting and valuable applications that combine multiple analytical and machine learning processes across multiple big data types?

2. How can they avoid setting up two, three, or more parallel environments that require many copies of big data, complex dataflows, and far too many new highly skilled experts?

We find that HP Haven presents an intriguing, proven, and enterprise-ready approach: it converges structured, unstructured, machine-generated, and other kinds of analytical solutions – many already world-class on their own – into a single big data processing platform. This enables leveraging existing data, applications, and experts while offering opportunities to analyze data sets in multiple ways. With this solution it’s possible to build applications that draw on multiple data sources, combine multiple proven solutions, and easily “mash up” whatever might be envisioned. At the same time, the HP Haven approach doesn’t force a monolithic adoption; it can be deployed and built up as a customer’s big data journey progresses.

To help understand the IT challenges of big data and explore this new kind of enterprise data center platform opportunity, we’ve created this special vendor spotlight report. We start with a significant extract from the premium Taneja Group Enterprise Hadoop Infrastructure Market Landscape report to help understand the larger Hadoop market perspective. Then within that context we will review the HP Haven solution for Big Data and look at how it addresses key challenges while presenting a platform on which enterprises can develop their new big data opportunities.

Publish date: 03/16/15
news

Scale-out architecture and new data protection capabilities in 2016

What are the next big things for the data center in 2016? Applications will pilot the course to better data protection and demand more resources from scale-out architecture.

  • Premiered: 02/17/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): scale-out, Data Protection, DP, scale-out architecture, analysis, Data Center, data lake, Hadoop, Hadoop cluster, cluster, Backup, Talena, HPE, 3PAR, flat backup, Snapshot, Snapshots, StoreOnce, Oracle, Oracle ZDLRA, ZDLRA, Zero Data Loss Recovery Appliance, converged, convergence, Cloud, backup server, Virtualization, Storage, Big Data, Lustre
Profiles/Reports

The Mainstream Adoption of All-Flash Storage: HPE Customers Show How Everyone Can Leverage Flash

Flash storage offers higher performance, lower power consumption, decreased footprint and increased reliability over spinning media. It would be the rare IT shop today that doesn’t have some flash acceleration deployed in performance hot spots. But many IT folks are still on the sidelines watching and waiting for the right time to jump into a bigger adoption of flash-based shared storage.

When will flash costs (CAPEX) drop to make it affordable to switch – and for which workloads does all-flash make sense? How much better does it need to be to overcome the pain and cost (OPEX) of adopting and migrating to a whole new storage solution? How much more complex and costly is it to run a mixed storage environment with some all-flash, some tiered, and some capacity arrays?
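
Those questions boil down to a total-cost comparison. As a rough illustration of the math buyers are doing, here is a minimal Python sketch of a five-year TCO break-even; every dollar figure is an assumption invented for this example, not HPE or interviewee data.

```python
# Back-of-the-envelope TCO comparison behind the questions above.
# All numbers are illustrative assumptions.

def five_year_tco(capex, annual_power, annual_admin, annual_maintenance):
    """Simple 5-year TCO: purchase price plus five years of OPEX."""
    return capex + 5 * (annual_power + annual_admin + annual_maintenance)

hdd_array = five_year_tco(capex=250_000, annual_power=18_000,
                          annual_admin=40_000, annual_maintenance=25_000)
all_flash = five_year_tco(capex=400_000, annual_power=6_000,
                          annual_admin=25_000, annual_maintenance=20_000)

print(f"5-year TCO, HDD array: ${hdd_array:,}")
print(f"5-year TCO, all-flash: ${all_flash:,}")
# With these assumed numbers, the higher flash CAPEX is offset by lower
# OPEX just before the 5-year mark -- the break-even point every storage
# buyer is trying to locate for their own workloads.
```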

In this insightful field report we’ve interviewed a half dozen real-world IT storage groups who faced those challenges, and with HPE 3PAR StoreServ have been able to easily transition important performance-sensitive workloads to all-flash storage. By staying within the 3PAR StoreServ family for their larger storage needs, they’ve been able to steer a clear, cost-effective and rewarding course to flash performance.

In each interview, we explore how they executed their transition from HDD to hybrid to all-flash as part of real-world IT initiatives that included consolidation, data center transformation, and performance acceleration. We’ll learn about the particular business value each is successfully producing, and we’ll present some recommendations for others making all-flash storage decisions.

Publish date: 02/24/16
news

Cisco HyperFlex helps play hyper-converged catch-up

Cisco's HyperFlex hyper-converged strategy depends largely on startup Springpath's file system performing better than those from Nutanix, VMware and other competitors.

  • Premiered: 04/11/16
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Cisco, HyperFlex, hyperconverged, hyperconvergence, Springpath, Nutanix, VMware, hyperconverged infrastructure, OEM, HCI, Networking, UCS, HPE, SimpliVity, StorMagic, cluster, SSD, Flash, all-flash, Microsoft, Hyper-V, EMC, Gridstore, Arun Taneja, Storage, SAN, WhipTail, AFA, vBlock, VSPEX
news

Hyper-converged vendors offer new use cases, products

Hyper-converged systems show they are ready to branch out beyond primary storage applications. In fact, it's happening now.

  • Premiered: 04/15/16
  • Author: Arun Taneja
  • Published: TechTarget: Search Converged IT
Topic(s): hyper-converged, hyperconverged, hyperconvergence, Primary Storage, Storage, Nutanix, SimpliVity, scale-out, scale-out architecture, converged, convergence, HPE, ConvergedSystem, NetApp, FlexPod, VCE, Compute, Hypervisor, Virtualization, Mobility, Cloud, cloud integration, web-scale, web-scale storage, secondary storage, VM, VM centricity, VM-centric, Virtual Machine, WAN Optimization
news / Blog

Micron’s Solution Engineering Lab Reveals a New Level of Customer Commitment

I recently attended a Micron launch event where they unveiled a new solution engineering lab in Austin, Texas. As expected, Micron also announced a broad portfolio of PCIe solid state drives (SSDs) that leverage the high-performance NVMe protocol.

Resources

Emerging Technologies in Storage: Disaggregation of Storage Function

Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters because the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates colocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server’s flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions and then intelligently host those components in different layers of the infrastructure. With the help of a lively panel of experts, we will unpack this topic and explore how this approach to intelligently distributing storage functions can bring about better business outcomes for customers.

Moderator:
Jeff Kato, Senior Analyst & Consultant, Taneja Group

Panelists:
Brian Biles, Founder & CEO, Datrium
Kate Davis, Senior Marketing Manager, HPE
Nutanix

  • Premiered: 05/19/16
  • Location: OnDemand
  • Speaker(s): Jeff Kato, Taneja Group; Brian Biles, Datrium; Kate Davis, HPE; Nutanix
Topic(s): Jeff Kato, Datrium, HPE, Nutanix, Storage, hyper-scale, hyperscale, software-defined, software-defined storage, SDS, Software Defined Storage
Profiles/Reports

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now prompting IT and the business to rethink practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you account for the fact that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.
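
To make the data path concrete, here is a minimal, hypothetical Python sketch of the "flat backup" pattern this combination enables: a snapshot taken on the array is copied directly to a backup appliance, with no backup server in the middle. The names (Array, BackupAppliance, rmc_style_backup) and the changed-block detail are invented for illustration and are not HPE RMC or StoreOnce APIs.

```python
# Hypothetical sketch of a flat backup path: array snapshot -> direct copy
# to a dedupe backup appliance, bypassing a traditional backup server.

class Array:
    def snapshot(self, volume: str) -> str:
        snap = f"{volume}@snap1"
        print(f"array: snapshot {snap} (instant, space-efficient)")
        return snap

    def changed_blocks(self, snap: str):
        # Only blocks changed since the last protected snapshot move at all.
        return [b"block-17", b"block-42"]


class BackupAppliance:
    def ingest(self, snap: str, blocks) -> None:
        print(f"appliance: cataloged {snap}, received "
              f"{len(blocks)} changed blocks array-to-appliance")


def rmc_style_backup(array: Array, appliance: BackupAppliance, volume: str):
    snap = array.snapshot(volume)                        # near-zero app impact
    appliance.ingest(snap, array.changed_blocks(snap))   # direct copy


rmc_style_backup(Array(), BackupAppliance(), "oracle_data_01")
```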

Publish date: 06/06/16
Profiles/Reports

High Capacity SSDs are Driving the Shift to the All Flash Data Center

All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they’re expected to approach 50% of total revenue by the end of 2016, roughly a 60% CAGR. This isn’t surprising, really. Even though they’ve historically cost more on a $/GB basis (and that gap is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.
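
As a quick sanity check on that growth figure, a couple of lines of Python confirm that moving from roughly 5% to roughly 50% of revenue over five years implies a compound annual growth rate just under 60%:

```python
# Verify the revenue-share growth claim: ~5% in 2011 to ~50% by end of 2016.
start_share, end_share, years = 0.05, 0.50, 5
cagr = (end_share / start_share) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # -> 58.5%, i.e. "roughly a 60% CAGR"
```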

The most obvious advantage that SSDs have over HDDs is performance. With no moving parts to slow them down, they can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has also allowed storage vendors to implement capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.

While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.

When looking at the total cost envelope, and factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now a new generation of high-capacity SSDs is coming, and it’s poised to accelerate the AFA takeover. We believe the flash revolution in storage that started in 2011 will outpace even the most optimistic forecasts in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let’s take a look at how and why.

Publish date: 06/10/16
news / Blog

Large Capacity SSDs will be the tipping point for the all flash data center

The massive 15.36 TB drives that were first announced at last year’s Flash Memory Summit are now showing up in All Flash Arrays (AFAs). Both NetApp and Hewlett Packard Enterprise (HPE) have introduced these drives into their respective flagship AFAs.

news

Enterprise SSDs: The Case for All-Flash Data Centers

Adding small amounts of flash as cache or dedicated storage is certainly a good way to accelerate a key application or two, but enterprises are increasingly adopting shared all-flash arrays to increase performance for every primary workload in the data center.

  • Premiered: 06/23/16
  • Author: Mike Matchett
  • Published: Enterprise Storage Forum
Topic(s): Flash, SSD, Mike Matchett, Storage, AFA, all-flash, all flash array, ROI, HDD, IOPS, IO performance, flash storage, Datacenter, Data Center, HPE, NetApp, Capacity, simplicity, CAPEX, scalability, scalable, OPEX, resiliency, VDI, Dedupe, Deduplication, Pure Storage, Kaminario, HPE 3PAR, 3PAR
Profiles/Reports

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features – where to dedupe, how much capacity it saves, how fast its backups run – but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydrating it, which costs performance and capacity as data moves between the data center, ROBO, DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.

A few vendors have introduced new dedupe technologies, but most are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage-system-specific dedupe means that data must be rehydrated whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages it had before.

Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source, and virtual machine. This enables data to move freely between physical and virtual platforms and between source and target machines without the need to rehydrate.
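
To see why federated dedupe sidesteps rehydration, consider a minimal Python sketch of the general chunk-and-fingerprint technique: if every tier (application source, media server, target appliance) chunks and fingerprints data identically, tiers can exchange fingerprints and ship only the chunks the other side lacks. The names here (DedupeStore, chunk_fingerprints) are invented for illustration and say nothing about StoreOnce internals.

```python
# Minimal dedupe sketch: identical chunking + hashing at every tier means
# deduped data can replicate between tiers without being rehydrated.

import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, for simplicity


def chunk_fingerprints(data: bytes):
    """Split data into chunks and fingerprint each one."""
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        yield hashlib.sha256(chunk).hexdigest(), chunk


class DedupeStore:
    def __init__(self):
        self.chunks = {}  # fingerprint -> chunk

    def replicate_from(self, fingerprints_and_chunks):
        """Accept only chunks not already stored; nothing is rehydrated."""
        received = 0
        for fp, chunk in fingerprints_and_chunks:
            if fp not in self.chunks:
                self.chunks[fp] = chunk
                received += 1
        return received


data = b"A" * 10_000 + b"B" * 10_000
source, target = DedupeStore(), DedupeStore()
source.replicate_from(chunk_fingerprints(data))
print("chunks shipped to target:", target.replicate_from(source.chunks.items()))
print("chunks shipped on re-run: ", target.replicate_from(source.chunks.items()))
```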

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16
news

Spark speeds up adoption of big data clusters and clouds

Infrastructure that supports big data comes from both the cloud and clusters. Enterprises can mix and match these seven infrastructure choices to meet their needs.

  • Premiered: 07/19/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): Apache Spark, Spark, Mike Matchett, Cloud, cloud cluster, cluster, Big Data, big data analytics, MapReduce, Business Intelligence, BI, MLlib, High Performance, Hadoop cluster, HDFS, Hadoop Distributed File System, IBM, Hortonworks, Cloudera, capacity management, Performance Management, API, SAN, storage area networks, CAPEX, DataDirect Networks, HPC, Lustre, Virtualization, VM
news

What Pat Gelsinger Should Announce at VMworld 2016

Experts put themselves in the CEO's shoes for the keynote address.

  • Premiered: 07/21/16
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Mike Matchett, VMworld, VMworld 2016, EMC, Dell, Virtualization, vSphere, VMware, Microsoft, Hyper-V, VMware VSAN, VSAN, ScaleIO, software-defined, software-defined storage, SDS, Hypervisor, SanDisk, FlashSoft, VAIO, Cloud, ESX, NSX, vRealize, Cloud Management, cluster, cloud cluster, Hybrid Cloud, cloud migration, vMotion
news

Disaggregation marks an evolution in hyper-convergence

Hyper-convergence vendors are pushing forward with products that will offer disaggregation, the latest entry into the data center paradigm.

  • Premiered: 08/03/16
  • Author: Arun Taneja
  • Published: TechTarget: Search Storage
Topic(s): disaggregated storage, Storage, disaggregation, hyperconverged, hyperconvergence, hyper-converged, hyper-convergence, Datacenter, Data Center, converged, convergence, Moore's Law, Hypervisor, software-defined, DataCore, EMC, ScaleIO, HPE, StoreVirtual, StoreVirtual VSA, VSA, VMware, VMware VSAN, Virtual SAN, SAN, Virtualization, cluster, Nutanix, SimpliVity, VxRack