
Profiles/Reports

Profile

Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are found only in mid-sized companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs), departments within large organizations, and remote/branch offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. In reality, companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability, and strong security. The only difference is that IT groups managing mid-sized workloads frequently face significant budget constraints. That tough combination presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences between various storage boxes to create 1) a single virtualized storage pool, 2) a common set of data services, and 3) a common interface for managing storage resources. These capabilities benefit the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
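To make that three-part abstraction concrete, here is a minimal Python sketch of a virtualization layer pooling dissimilar boxes behind one interface. It is our own illustration under stated assumptions, not IBM’s API; every class and method name here is hypothetical.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """One physical array behind the virtualization layer."""
    @abstractmethod
    def write(self, volume: str, data: bytes) -> None: ...
    @abstractmethod
    def free_capacity(self) -> int: ...

class SimpleBackend(StorageBackend):
    """Toy stand-in for one vendor's box."""
    def __init__(self, capacity: int):
        self.capacity, self.data = capacity, {}
    def write(self, volume: str, data: bytes) -> None:
        self.data[volume] = data
        self.capacity -= len(data)
    def free_capacity(self) -> int:
        return self.capacity

class VirtualizedPool:
    """1) one pool, 2) common data services, 3) one management interface."""
    def __init__(self, backends):
        self.backends = backends
    def write(self, volume: str, data: bytes) -> None:
        # Common data service: place data on whichever box has the most
        # free space, regardless of vendor or model.
        max(self.backends, key=lambda b: b.free_capacity()).write(volume, data)
    def total_free_capacity(self) -> int:
        # One management view across every box in the pool.
        return sum(b.free_capacity() for b in self.backends)

pool = VirtualizedPool([SimpleBackend(10_000), SimpleBackend(50_000)])
pool.write("vol1", b"hello")            # lands on the emptier box
print(pool.total_free_capacity())       # 59995
```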

The combination of affordability, manageability, and enterprise-grade functionality is the core strength of the IBM Storwize family, built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade in IBM SAN Volume Controller (SVC). Simply stated, few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we will look at the key requirements for mid-sized storage and evaluate IBM Storwize with Spectrum Virtualize’s ability to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and compare the various models in the Storwize portfolio.

Publish date: 06/24/16
Profile

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings resulting from workload consolidation. However, as an increasing number of apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, aggregate IO demands put tremendous pressure on shared storage, server, and networking resources, with the strain further exacerbated by the IO blender effect, in which IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity, for example by lowering data and transactional throughput and increasing application response time, and may prevent you from meeting performance requirements for your business-critical applications.
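The IO blender effect is easy to see in a few lines of Python. In this toy simulation of ours, each VM issues perfectly sequential block addresses, yet the stream the hypervisor hands to shared storage looks effectively random:

```python
import random

def vm_stream(vm_id: int, n_ios: int):
    # Each VM on its own issues nicely sequential block addresses.
    return [(vm_id, block) for block in range(n_ios)]

def hypervisor_blend(streams):
    # The hypervisor time-slices many VMs onto shared storage; from the
    # array's point of view the merged order looks random. A shuffle is
    # a crude stand-in for that interleaving.
    merged = [io for stream in streams for io in stream]
    random.shuffle(merged)
    return merged

blended = hypervisor_blend([vm_stream(vm, 5) for vm in range(3)])
for vm_id, block in blended[:6]:
    print(f"VM{vm_id} -> block {block}")  # sequential per VM, random overall
```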

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server solutions that combine intelligent caching with high-performance PCIe memory, integrate tightly with the virtualization platform, and enable sharing of storage across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization enables a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.
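As a rough illustration of the server-side caching idea (not SanDisk’s implementation, which is cluster-wide and VMware-integrated), here is a minimal LRU read cache in Python standing in for a flash tier that absorbs hot reads before they reach primary storage:

```python
import time
from collections import OrderedDict

class FlashReadCache:
    """Minimal LRU read cache, standing in for a server-side flash tier."""
    def __init__(self, capacity_blocks: int, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read     # slow path to primary storage
        self.cache = OrderedDict()           # block id -> data

    def read(self, block: int) -> bytes:
        if block in self.cache:              # hit: served from flash
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backend_read(block)      # miss: go to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least recently used
        return data

def slow_array_read(block: int) -> bytes:
    time.sleep(0.01)                         # simulate array latency
    return b"data-%d" % block

cache = FlashReadCache(capacity_blocks=2, backend_read=slow_array_read)
cache.read(7)                                # miss: pays the array latency
cache.read(7)                                # hit: served from the cache
```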

Publish date: 06/14/16
Profile

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs, requiring high performance. Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered, and virtualized storage, hyperconvergence has provided a highly available virtual platform for running applications, allowing IT to stop managing individual hardware components and concentrate on running business applications, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems that these products didn’t address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication, and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage. But, given the disparate use cases embedded in secondary storage and the massive amount of data that resides there, it is an equally big problem to solve, and the solution had to go further than just abstracting and scaling the underlying physical storage devices. True Hyperconverged Secondary Storage also integrates the key secondary storage workflows (Data Protection, DR, Analytics, and Test/Dev) while providing global deduplication for overall file storage efficiency, file indexing and search services for more efficient storage management, and hooks into the cloud for efficient archiving.
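Global deduplication, one of those integrated services, boils down to content addressing: store each unique chunk exactly once, no matter which file or workflow it arrives from. Here is a minimal sketch of ours in Python; the names are hypothetical and this is not Cohesity’s API:

```python
import hashlib

class DedupStore:
    """Content-addressed chunk store: identical chunks, from any file or
    any workflow (backup, test/dev copy, archive), are kept only once."""
    def __init__(self):
        self.chunks = {}          # sha256 digest -> chunk bytes
        self.files = {}           # file name -> ordered list of digests

    def put(self, name: str, data: bytes, chunk_size: int = 4096):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # stored once, globally
            digests.append(digest)
        self.files[name] = digests

    def get(self, name: str) -> bytes:
        # Reassemble the file from its chunk references.
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.put("vm1-backup", b"A" * 8192)
store.put("vm2-backup", b"A" * 8192)    # same content: no new chunks stored
print(len(store.chunks))                # 1 unique chunk for both files
```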

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16
Profile

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today, including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex: increasingly dynamic, heterogeneous, and distributed. For every IT organization, achieving great success today depends on staying in control of rapidly growing and faster-flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products. These assemblies never quite address the many disparate needs of most organizations, nor do they manage to be simple or cost-effective to operate. Here is where we see HPE as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready solution that marries StoreOnce and Data Protector. For those who have worked with either product separately, in conjunction with other vendors’ products, it’s no surprise that each competes favorably one-on-one with other offerings in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions; it is also undergoing a seismic shift in culture that revitalizes its total approach to the market. From products to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HPE business units’ components are in the “box.” Significantly, this approach elevates HPE from a supplier of best-of-breed products into an enterprise-level trusted solution provider that addresses business problems head-on. HPE is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.

In this report, we’ll first examine why HPE StoreOnce and Data Protector are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16
Profile

Full Database Protection Without the Full Backup Plan: Oracle’s Cloud-Scaled Zero Data Loss Recovery

Today’s tidal wave of big data isn’t just made up of loose unstructured documents: huge data growth is happening everywhere, including in the high-value structured datasets kept in databases like Oracle Database 12c. This is any company’s most valuable core data, powering most key business applications, and it’s growing fast. According to Oracle, most enterprises expect 50x data growth within 5 years (by 2020), a compound rate of roughly 2.2x per year, since 50^(1/5) ≈ 2.19. As their scope and coverage grow, these key databases inherently become even more critical to the business. At the same time, the sheer number of database-driven applications and users is also multiplying, and they increasingly need to be online, globally, 24x7. All of which leads to the big burning question: how can we possibly protect all this critical data, data we depend on more and more even as it grows, all the time?

We just can’t keep taking more time out of the 24-hour day for longer and larger database backups. The traditional batch-window backup approach is already often beyond practical limits, and its problems only get worse with data growth: missed backup windows, increased performance degradation, unavailability, fragility, risk, and cost. It’s now time for a data protection approach that does away with batch-window backups entirely, yet still provides immediate backup copies to recover from failures, corruption, and other disasters.

Oracle has stepped up in a big way: marshaling expertise and technologies from across its engineered systems portfolio, it has developed the new Zero Data Loss Recovery Appliance. Note the very intentional name, focused on total recoverability; the Recovery Appliance is definitely not just another backup target. The appliance completely eliminates the pains and risks of the full-backup-window approach through a highly engineered continuous data protection solution for Oracle databases. It is now possible to immediately recover any database to any desired point in time, as the Recovery Appliance provides “virtual” full backups on demand and can scale to protect thousands of databases and petabytes of capacity. In fact, it offloads backup processing from production database servers, which can increase performance in Oracle environments by typically 25%. Adopting this new backup and recovery solution will actually give CPU cycles back to the business.
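The “incremental forever plus virtual full” concept is worth spelling out, since it is what removes the backup window. The Python sketch below is our own and works on toy block maps; the real Recovery Appliance operates on Oracle database blocks, with continuous redo shipping providing the zero-data-loss part.

```python
class RecoveryStore:
    """Incremental-forever sketch: one real full, then only changed
    blocks; any 'virtual full' is synthesized on demand."""
    def __init__(self, initial_full: dict):
        self.base = dict(initial_full)  # the only conventional full ever taken
        self.incrementals = []          # each entry: {block_id: new_data}

    def apply_incremental(self, changed: dict):
        self.incrementals.append(dict(changed))

    def virtual_full(self, point_in_time: int) -> dict:
        # Merge the base image with every incremental up to the chosen
        # point, yielding a complete restore image without ever having
        # taken another full backup.
        image = dict(self.base)
        for inc in self.incrementals[:point_in_time]:
            image.update(inc)
        return image

store = RecoveryStore({0: "a", 1: "b"})
store.apply_incremental({1: "b2"})
store.apply_incremental({0: "a3"})
print(store.virtual_full(1))            # {0: 'a', 1: 'b2'}
print(store.virtual_full(2))            # {0: 'a3', 1: 'b2'}
```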

In this report, we’ll briefly review why conventional data protection approaches based on the backup window are fast becoming obsolete. Then we’ll look at how Oracle has designed the new Recovery Appliance to provide a unique approach to data protection in real time, at scale, for thousands of databases and PBs of data. We’ll see how zero data loss, incremental-forever backups, continuous validation, and other innovations have completely changed the game of database data protection. For the first time there is a real and practical way to fully protect a global corporation’s databases, on-premises and in the cloud, even in the face of today’s tremendous big data growth.

Publish date: 12/22/15
Profile

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is that of storage buyers (and operators) looking for more functionality, and at the same time increased simplicity, from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage, we see IT folks wanting to eliminate excessive layers in the complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately most onerous and unnecessarily complex of these tasks is backup and recovery. As a key trusted vendor of both data protection and storage solutions, HPE continues to invest in producing better solutions in this space.

HPE has been diligently working to integrate data protection functionality natively within its enterprise storage solutions, starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array becomes a single autonomous unit, a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of third-party products and the inefficient, indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a type of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric, array-efficient snapshots.
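In outline, that flow looks like the sketch below: the backup software quiesces the application and then simply directs the array, so no bulk data moves through a backup server. The class and method names are our hypothetical shorthand, not the Data Protector or 3PAR API.

```python
class ArraySnapshots:
    """Stand-in for an array's snapshot service; the array itself does
    the copy work, not the backup host."""
    def __init__(self):
        self.snaps = {}                 # volume -> list of snapshot names

    def create_snapshot(self, volume: str, name: str) -> str:
        self.snaps.setdefault(volume, []).append(name)
        return name

class BackupController:
    """The backup software's role shrinks to coordination: make the
    application consistent, trigger the array, resume, and catalog."""
    def __init__(self, array: ArraySnapshots):
        self.array = array
        self.catalog = []

    def protect(self, quiesce, resume, volume: str, name: str) -> str:
        quiesce()                       # application-consistent point in time
        snap = self.array.create_snapshot(volume, name)
        resume()                        # app was paused only for the snap call
        self.catalog.append((volume, snap))
        return snap

ctl = BackupController(ArraySnapshots())
ctl.protect(lambda: print("quiesce app"), lambda: print("resume app"),
            volume="prod_db_vol", name="snap-0230")
```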

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment, converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware-assisted, agentless backup and recovery for virtual machines. With HPE’s approach, which offloads VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines, virtualized organizations gain on many fronts: greater backup efficiency, reduced OPEX, greater data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We’ll also look at why HPE is in a unique position to offer this kind of “converging” market leadership, with a complete end-to-end solution stack including innovative research and development, sales, support, and professional services.

Publish date: 12/21/15