
Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, controller-gated RAID devices and dedicated storage appliances, simply aren’t designed for enterprise backup at petabyte scale, especially when that data lives in geographically distributed environments. The shortfall is due in large part to their inefficiency and limited data protection, as well as their limited scalability and lack of flexibility.

These constraints force IT teams to manage multiple processes and many separate storage systems. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to run 10-20 different storage systems to reach petabyte capacity, which is inefficient from a management point of view. And when companies want to move data from one storage system to another, the migration can take considerable time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high-priced proprietary storage hardware and, worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge when IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage and scale-out object storage technology that addresses the shortcomings of traditional backup storage.

Software-defined scale-out storage is attractive for large-scale data backup because these solutions offer linear performance and hardware independence: two core capabilities that drive tremendous scalability and enable cost-effective storage. Add to this the high fault tolerance of object storage platforms, and it’s easy to see why software-defined object storage is rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores these benefits: IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%), and the ability to purchase hardware separately from software (32%).

Going a step further, the Scality backup storage solution, built upon the Scality RING platform, offers a rare combination of scalability, durability and affordability, plus the flexibility to handle mixed workloads at petabyte scale. Scality achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that allows migration-free hardware upgrades.
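Because the platform speaks object protocols alongside file protocols, one way to picture the backup path is a job writing archives to an S3-compatible endpoint. The brief Python sketch below illustrates that general idea with the standard boto3 client; the endpoint URL, credentials, bucket and object names are hypothetical placeholders, and the snippet shows a generic S3 workflow rather than Scality's documented configuration.

```python
# Minimal sketch: pushing a backup archive to an S3-compatible object store.
# Endpoint, credentials, bucket and file names are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ring-s3.example.internal",  # hypothetical S3-compatible endpoint
    aws_access_key_id="BACKUP_KEY",
    aws_secret_access_key="BACKUP_SECRET",
)

# Create the target bucket once, then stream nightly backup images into it.
s3.create_bucket(Bucket="nightly-backups")
with open("db-full-2016-10-18.tar.gz", "rb") as archive:
    s3.upload_fileobj(archive, "nightly-backups", "db/db-full-2016-10-18.tar.gz")
```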

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16

FlashSoft 4 for vSphere 6: Acceleration Technology Tailor-Made for VMware Environments

For all the gains server virtualization has brought in compute utilization, flexibility and efficiency, it has created an equally weighty set of challenges on the storage side, particularly in traditional storage environments. As servers become more consolidated, virtualized workloads must increasingly contend for scarce storage and IO resources, preventing them from consistently meeting throughput and response time objectives. On top of that, there is often no way to ensure that the most critical apps or virtual machines can gain priority access to data storage as needed, even in lightly consolidated environments. With a majority (70+%) of all workloads now running virtualized, it can be tough to achieve strong and predictable app performance with traditional shared storage.

To address these challenges, many VMware customers are now turning to server-side acceleration solutions, in which the flash storage resource can be placed closer to the application. But server-side acceleration is not a panacea. While some point solutions have been adapted to work in virtualized infrastructures, they generally lack enterprise features, and are often not well integrated with vSphere and the vCenter management platform. Such offerings are at best band-aid treatments, and at worst second-class citizens in the virtual infrastructure, proving difficult to scale, deploy and manage. To provide true enterprise value, a solution should seamlessly deliver performance to your critical VMware workloads, but without compromising availability, workload portability, or ease of deployment and management.

This is where FlashSoft 4 for VMware vSphere 6 comes in. FlashSoft is an intelligent, software-defined caching solution that accelerates your critical VMware workloads as an integrated vSphere data service, while still allowing you to take full advantage of all the vSphere enterprise capabilities you use today.
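FlashSoft's internal design is not detailed in this summary, but the general idea of a server-side read cache with write-through to primary storage can be sketched in a few lines. The simplified Python sketch below is purely illustrative, assumes a dict-like backend standing in for shared storage, and does not represent FlashSoft's actual implementation.

```python
# Conceptual sketch of a read cache with write-through behavior, loosely
# analogous to a server-side flash cache in front of shared storage.
# Illustrative only; not FlashSoft's design or code.
from collections import OrderedDict

class WriteThroughCache:
    def __init__(self, backend, capacity_blocks=1024):
        self.backend = backend              # slow shared storage (dict-like stand-in)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # LRU ordering: oldest entries first

    def read(self, block_id):
        if block_id in self.cache:          # cache hit: serve from fast local flash
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backend[block_id]       # cache miss: fetch from shared storage
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self.backend[block_id] = data       # write-through keeps primary storage authoritative
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity: # evict the least recently used block
            self.cache.popitem(last=False)
```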

In this paper we examine the technology underlying the FlashSoft 4 for vSphere 6 solution, describe the features and capabilities it enables, and articulate the benefits customers can expect to realize upon deploying the solution.

Publish date: 08/31/16

Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are found only in mid-size companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs), or departments within large organizations and Remote/Branch Offices (ROBOs), have less demanding storage requirements than their larger enterprise counterparts. The reality is that companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability and strong security. The only difference is that IT groups managing mid-sized workloads frequently face significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences between various storage boxes to create 1) a single virtualized storage pool, 2) a common set of data services, and 3) a common interface for managing storage resources. These capabilities benefit the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core capability that drives efficiency and affordability.

The combination of affordability, manageability and enterprise-grade functionality is the core strength of the IBM Storwize family built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade with IBM SAN Volume Controller (SVC). Simply stated – few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we will look at the key requirements for mid-sized storage and evaluate the ability of IBM Storwize with Spectrum Virtualize to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and compare the various models in the Storwize portfolio.

Publish date: 06/24/16

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings resulting from workload consolidation.  However, as an increasing number of apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, aggregate IO demands put tremendous pressure on shared storage, server and networking resources, with the strain further exacerbated by the IO blender effect, in which IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity—e.g. by lowering data and transactional throughput and increasing application response time—and may prevent you from meeting performance requirements for your business-critical applications.
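As a purely illustrative aside, the IO blender effect can be pictured with a toy simulation: each VM issues a perfectly sequential stream of block addresses, but once the hypervisor interleaves them the stream reaching shared storage looks effectively random. The block numbers and scheduling in the Python sketch below are made up for illustration.

```python
# Toy illustration of the "IO blender" effect: each VM reads blocks
# sequentially, but the hypervisor interleaves the streams, so the
# combined stream hitting shared storage is no longer sequential.
import random

vm_streams = {
    "vm1": iter(range(1000, 1010)),   # VM1 reads blocks 1000..1009 in order
    "vm2": iter(range(5000, 5010)),   # VM2 reads blocks 5000..5009 in order
    "vm3": iter(range(9000, 9010)),   # VM3 reads blocks 9000..9009 in order
}

blended = []
active = list(vm_streams)
while active:
    vm = random.choice(active)        # hypervisor services whichever VM is ready
    try:
        blended.append(next(vm_streams[vm]))
    except StopIteration:
        active.remove(vm)

print(blended)  # e.g. [5000, 1000, 9000, 5001, ...]: sequential per VM, scattered overall
```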

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server solutions that combine intelligent caching with high-performance PCIe flash, are tightly integrated with the virtualization platform, and enable sharing of the cache resource across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization enables a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.

Publish date: 06/14/16

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs and high performance requirements. Secondary storage is where everything else typically ends up, and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered and virtualized storage, hyperconvergence has provided a highly available virtual platform for running applications, allowing IT to concentrate on running business applications rather than managing hardware, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems it didn’t address in a complete fashion. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage. But, given the disparate use cases embedded in secondary storage and the massive amount of data that resides there, it is an equally big problem to solve, and the solution has to go further than just abstracting and scaling the underlying physical storage devices. True Hyperconverged Secondary Storage also integrates the key secondary storage workflows (Data Protection, DR, Analytics and Test/Dev), as well as providing global deduplication for overall file storage efficiency, file indexing and search services for more efficient storage management, and hooks into the cloud for efficient archiving.
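As a rough illustration of the deduplication idea (not Cohesity's actual algorithm, which is far more sophisticated and cluster-wide), the Python sketch below splits data into fixed-size chunks, hashes each chunk, and stores only the chunks that have not been seen before.

```python
# Simplified sketch of content-addressed deduplication: store each unique
# chunk once and reference duplicates by hash. Illustrative only; real
# platforms use variable-length chunking and distributed, cluster-wide indexes.
import hashlib

CHUNK_SIZE = 64 * 1024                  # 64 KiB fixed-size chunks (illustrative)
chunk_store = {}                        # hash -> chunk bytes (the deduplicated pool)

def ingest(data: bytes) -> list:
    """Return the list of chunk hashes needed to reconstruct `data`."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunk_store:   # new content: store it once
            chunk_store[digest] = chunk
        recipe.append(digest)           # duplicate content: just reference it
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild the original data from its chunk-hash recipe."""
    return b"".join(chunk_store[d] for d in recipe)
```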

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today, including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex: increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster-flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations, nor do they manage to be very simple or cost-effective to operate. Here is where we see HPE as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce and Data Protector. For those who have worked with either or both of these products separately, or in conjunction with other vendors’ products, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions, it is undergoing a seismic shift in culture that revitalizes its total approach to market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience. One support call, one ticket, one project manager, addressing the customer’s needs regardless of what internal HPE business unit components are in the “box”. And significantly, this approach elevates HPE from just being a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company completely able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.

In this report, we’ll first examine why the HPE StoreOnce and Data Protector products are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16