Profiles/Reports

Profile

Towards the Ultimate Goal of IT Resilience: A Look at the Zerto Cloud Continuity Platform

We live in a digital world where online services, applications and data must always be available. Yet the modern data center remains highly susceptible to interruptions. These opposing realities are challenging traditional backup applications and disaster recovery solutions, causing companies to rethink what is needed to ensure 100% uptime of their IT environments.

The need for availability goes well beyond recovering from disasters. Companies must be able to rapidly recover from many real-world disruptions, such as ransomware, device failures and power outages, as well as natural disasters. Add to this the dynamic nature of virtualization and cloud computing, and it’s not hard to see the difficulty of providing continuous availability while managing a highly variable IT environment that is susceptible to trouble.

Some companies feel their backup devices give them adequate data protection, and others believe their disaster recovery solutions will help them restore normal business operations if an incident occurs. Regrettably, these solutions far too often fall short of user expectations because they don’t provide the rapid recovery and agility needed for full business continuity.

Fortunately, there is a way to ensure a consistent experience in an inconsistent world. It’s called IT resilience. IT resilience is the ability to ensure that business services are always on, applications are available and data is accessible no matter what human errors, events, failures or disasters occur. And true IT resilience goes a step further to provide continuous data protection (CDP), end-to-end recovery automation irrespective of the makeup of a company’s IT environment, and the flexibility to evolve IT strategies and incorporate new technology.

Intrigued by the promise of IT resilience, companies are seeking data protection solutions that can withstand any disaster to enable a reliable online experience and excellent business performance. In a recent Taneja Group survey, nearly half the companies selected “high availability and resilient infrastructure” as one of their top two IT priorities. In the same survey, 67% of respondents also indicated that unplanned application downtime compromised their ability to satisfy customer needs, meet partner and supplier commitments and close new business.

This strong customer interest in IT resilience has many data protection vendors talking about “resilience.” Unfortunately, many backup and disaster recovery solutions don’t provide continuous data protection plus hardware independence, strong virtualization support and tight cloud integration. This is a tough combination and presents a big challenge for data protection vendors striving to provide enterprise-grade IT resilience.

There is, however, one data protection vendor whose replication and disaster recovery technologies were designed from the ground up for IT resilience. The Zerto Cloud Continuity Platform, built on Zerto Virtual Replication, offers CDP, failover (for higher availability), end-to-end process automation, heterogeneous hypervisor support and native cloud integration. As a result, IT resilience with continuous availability, rapid recovery and agility is a core strength of the Zerto Cloud Continuity Platform.
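
To make the CDP concept concrete, below is a minimal sketch of journal-based recovery, the general technique behind hypervisor-based replication: every write is intercepted and appended to an ordered journal, so a replica can be rolled to any point in time rather than only to the last backup. This is an illustration of the idea, not Zerto’s implementation; the class, its names and the sequence-number scheme are our own.

```python
class CDPJournal:
    """Toy journal-based CDP: every write is appended to an ordered journal,
    so a protected volume can be rebuilt as of any prior point in time."""

    def __init__(self):
        self.entries = []  # (sequence_number, block_id, data), in write order
        self.seq = 0

    def record_write(self, block_id, data):
        # In a real product, this hook would sit in the hypervisor's IO path.
        self.seq += 1
        self.entries.append((self.seq, block_id, data))

    def current_point(self):
        """A recovery point is just a position in the journal."""
        return self.seq

    def recover_to(self, point):
        """Replay writes up to the chosen point to rebuild the volume image."""
        image = {}
        for seq, block_id, data in self.entries:
            if seq > point:
                break  # journal is ordered; stop at the chosen instant
            image[block_id] = data
        return image

# Usage: roll back to just before a (simulated) ransomware infection.
journal = CDPJournal()
journal.record_write(0, b"clean data")
checkpoint = journal.current_point()
journal.record_write(0, b"encrypted by ransomware")
print(journal.recover_to(checkpoint))  # {0: b'clean data'}
```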

This paper will explore the functionality needed to tackle modern data protection requirements. We will also discuss the challenges of traditional backup and disaster recovery solutions, outline the key aspects of IT resilience and provide an overview of the Zerto Cloud Continuity Platform as well as the hypervisor-based replication that Zerto pioneered.

Publish date: 09/30/16
Profile

FlashSoft 4 for vSphere 6: Acceleration Technology Tailor-Made for VMware Environments

For all the gains server virtualization has brought in compute utilization, flexibility and efficiency, it has created an equally weighty set of challenges on the storage side, particularly in traditional storage environments. As servers become more consolidated, virtualized workloads must increasingly contend for scarce storage and IO resources, preventing them from consistently meeting throughput and response time objectives. On top of that, there is often no way to ensure that the most critical apps or virtual machines can gain priority access to data storage as needed, even in lightly consolidated environments. With a majority (70+%) of all workloads now running virtualized, it can be tough to achieve strong and predictable app performance with traditional shared storage.

To address these challenges, many VMware customers are now turning to server-side acceleration solutions, in which the flash storage resource is placed closer to the application. But server-side acceleration is not a panacea. While some point solutions have been adapted to work in virtualized infrastructures, they generally lack enterprise features and are often not well integrated with vSphere and the vCenter management platform. Such offerings are at best band-aid treatments, and at worst second-class citizens in the virtual infrastructure, proving difficult to scale, deploy and manage. To provide true enterprise value, a solution should seamlessly deliver performance to your critical VMware workloads without compromising availability, workload portability, or ease of deployment and management.

This is where FlashSoft 4 for VMware vSphere 6 comes in. FlashSoft is an intelligent, software-defined caching solution that accelerates your critical VMware workloads as an integrated vSphere data service, while still allowing you to take full advantage of all the vSphere enterprise capabilities you use today.
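
As a rough illustration of what such a caching layer does, the sketch below shows a generic write-through LRU read cache: hot reads are served from server-side flash while every write passes through to the backing array, so primary storage always holds the authoritative copy. It is not FlashSoft’s actual design; the class and its eviction policy are purely illustrative.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Generic server-side read cache with LRU eviction. Writes go to both
    the backing store and the cache (write-through), so the array always
    holds the authoritative copy and the cache can be lost safely."""

    def __init__(self, backing_store, capacity=1024):
        self.backing = backing_store   # dict-like stand-in for the array
        self.capacity = capacity
        self.cache = OrderedDict()     # block_id -> data, in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # hit: refresh LRU position
            return self.cache[block_id]
        data = self.backing[block_id]          # miss: fetch from the array
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data          # write-through to primary storage
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used

# Usage: the second read of block 3 is served from the (simulated) flash cache.
array = {n: f"block-{n}" for n in range(10)}
cache = WriteThroughCache(array, capacity=4)
print(cache.read(3))  # miss: fetched from the array, now cached
print(cache.read(3))  # hit: no trip to shared storage
```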

In this paper we examine the technology underlying the FlashSoft 4 for vSphere 6 solution, describe the features and capabilities it enables, and articulate the benefits customers can expect to realize upon deploying the solution.

Publish date: 08/31/16
Profile

Dell SC7000 Series Unified Storage Platform

The challenge for mid-sized businesses is that they have smaller IT staffs and smaller budgets than the enterprise, yet they still need high availability, high performance, and robust capacity from their storage systems. Every storage system delivers parts of the solution, but very few deliver simplicity, efficiency, performance, availability, and capacity in a low-cost system.

We’re not blaming the storage system makers; it is hard to design enterprise-grade features into an affordable, mid-range storage system and still yield enough profit to sustain the research and development needed to keep the product viable.

Dell is a master at this game with its Intel-based Storage Center portfolio. The SC series ranges from an entry-level model up to enterprise datacenter class, with most of the middle of the line devoted to delivering enterprise features for mid-market businesses. A few months ago, Taneja Group reviewed and validated high availability features across the economical SC line. Dell is able to deliver those features because the Storage Center operating system (SCOS) and FluidFS software stacks operate across every system in the SC family. Features are developed in such a way that a broad range of products can be deployed with enterprise data services, each with a highly tuned balance of cost and performance.

Dell’s new SC7000 series carries on with this successful game plan as the first truly unified storage platform in the popular SC line. Starting with the SC7020, this series unifies block and file data in an extremely efficient and affordable architecture. And like all SC family members, the SC7020 comes with enterprise capabilities including high performance and availability, centralized management, storage efficiencies and more, all at mid-market pricing.

What distinguishes the SC7020, though, is a level of efficiency and affordability that is rare among enterprise-capable systems. Simple and efficient deployment, consistent management across all Storage Center platforms and investment protection through in-chassis upgrades (the SC series can support multiple types of media within the same enclosure) make the SC7020 an ideal choice for mid-market businesses. Add auto-tiering (which effectively right-sizes the most frequently used data onto the fastest media tier), built-in compression and multi-protocol support, and these customers have a storage solution that evolves with their business needs.
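
To show the auto-tiering idea in general terms, the sketch below promotes the most frequently accessed blocks to the flash tier and leaves the cold tail on disk. This is a simplified, generic heuristic, not Dell’s SCOS algorithm; the function and its inputs are illustrative.

```python
from collections import Counter

def retier(access_counts: Counter, ssd_capacity: int):
    """Simplified auto-tiering: the most frequently accessed blocks are
    placed on the SSD tier; everything else lands on the HDD tier."""
    ranked = [block for block, _ in access_counts.most_common()]
    ssd_tier = set(ranked[:ssd_capacity])   # hottest blocks earn flash
    hdd_tier = set(ranked[ssd_capacity:])   # the long cold tail stays on disk
    return ssd_tier, hdd_tier

# Usage: blocks 7 and 2 are hammered, so they win the two flash slots.
counts = Counter({7: 900, 2: 350, 9: 40, 4: 3, 1: 1})
ssd, hdd = retier(counts, ssd_capacity=2)
print(sorted(ssd))  # [2, 7] -> promoted to the fastest media tier
```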

In this Solution Profile, Taneja Group explores how the cost-effective SC7020 delivers enterprise features to the data-intensive mid-market, and how Dell’s approach mitigates tough customer challenges. 

Publish date: 08/30/16
Profile

Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are only to be found in mid-size companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs) or departments within large organizations and Remote/Branch Offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. The reality is that companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability and strong security. The only difference is that IT groups managing mid-sized workloads frequently have significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences among various storage boxes to create 1) a single virtualized storage pool, 2) a common set of data services and 3) a common interface to manage storage resources. These capabilities benefit the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
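
A minimal sketch of that abstraction appears below: dissimilar backends sit behind a single pool, a common data service (compression, as a stand-in) is applied uniformly, and one interface manages everything. This is illustrative only, not Spectrum Virtualize internals; the class, the placement policy and the backend names are our own.

```python
import zlib

class VirtualizedPool:
    """Toy storage virtualization layer: heterogeneous backend arrays are
    pooled behind one interface, and a common data service (compression,
    as a stand-in) is applied no matter which box holds the data."""

    def __init__(self, backend_names):
        # Each backend is just a dict here; real ones would be vendor arrays.
        self.backends = {name: {} for name in backend_names}
        self.volume_map = {}  # virtual volume name -> owning backend

    def write(self, volume, data: bytes):
        if volume not in self.volume_map:
            # Simple placement: put new volumes on the least-loaded backend.
            target = min(self.backends, key=lambda n: len(self.backends[n]))
            self.volume_map[volume] = target
        backend = self.volume_map[volume]
        # The data service is applied identically across every backend.
        self.backends[backend][volume] = zlib.compress(data)

    def read(self, volume) -> bytes:
        backend = self.volume_map[volume]
        return zlib.decompress(self.backends[backend][volume])

# Usage: one pool, one set of data services, across "different" arrays.
pool = VirtualizedPool(["vendor_a_array", "vendor_b_array"])
pool.write("erp-db", b"business-critical data " * 100)
print(pool.read("erp-db")[:23])  # b'business-critical data '
```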

The combination of affordability, manageability and enterprise-grade functionality is the core strength of the IBM Storwize family built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade with IBM SAN Volume Controller (SVC). Simply stated – few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we will look at the key requirements for mid-sized storage and evaluate the ability of IBM Storwize with Spectrum Virtualize to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and compare the various models in the Storwize portfolio.

Publish date: 06/24/16
Profile

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings resulting from workload consolidation.  However, as an increasing number of apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, aggregate IO demands put tremendous pressure on shared storage, server and networking resources, with the strain further exacerbated by the IO blender effect, in which IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity—e.g. by lowering data and transactional throughput and increasing application response time—and may prevent you from meeting performance requirements for your business-critical applications.

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server-side solutions that combine intelligent caching with high-performance PCIe flash, integrate tightly with the virtualization platform, and enable sharing of the cache across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization enables a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.
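
Conceptually, a cluster-shared cache lets any host benefit from blocks its peers have already fetched, for example by hashing each block to the host that owns its cache segment. The sketch below illustrates that general idea; it is not SanDisk’s implementation, and the hashing scheme and host names are hypothetical.

```python
import hashlib

class ClusterCache:
    """Toy cluster-wide cache: each block is owned by one host's cache
    segment, chosen by hashing, so any host in the cluster benefits from
    data cached by its peers."""

    def __init__(self, hosts):
        self.hosts = hosts
        self.segments = {h: {} for h in hosts}  # per-host flash segment

    def _owner(self, block_id):
        digest = hashlib.sha256(str(block_id).encode()).hexdigest()
        return self.hosts[int(digest, 16) % len(self.hosts)]

    def read(self, block_id, primary_storage):
        segment = self.segments[self._owner(block_id)]
        if block_id not in segment:
            # Miss: one trip to the array, then every host gets cache hits.
            segment[block_id] = primary_storage[block_id]
        return segment[block_id]

# Usage: the first read warms the shared cache; later reads from any
# host in the cluster avoid touching primary storage.
storage = {n: f"data-{n}" for n in range(100)}
cache = ClusterCache(["esx-01", "esx-02", "esx-03"])
cache.read(42, storage)         # miss: fetched from the array
print(cache.read(42, storage))  # hit: served from the shared cache
```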

Publish date: 06/14/16
Profile

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs that require high performance. Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight. Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage. By shifting the focus from individual hardware components to commoditized, clustered and virtualized storage, hyperconvergence has provided a highly available virtual platform for running applications, allowing IT to concentrate on business applications rather than hardware, increasing productivity and reducing costs.

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems that hyperconvergence didn’t address in a complete fashion. On the secondary storage side, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage. But given the disparate use cases embedded in secondary storage and the massive amount of data that resides there, it is an equally big problem to solve, and it had to go further than just abstracting and scaling the underlying physical storage devices. True Hyperconverged Secondary Storage also integrates the key secondary storage workflows (Data Protection, DR, Analytics and Test/Dev) and provides global deduplication for overall file storage efficiency, file indexing and search services for more efficient storage management, and hooks into the cloud for efficient archiving.
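
To ground the global deduplication point, below is a minimal content-addressed sketch in which chunks are keyed by their hash, so identical data arriving from any workflow (a backup, a test/dev clone, an archive) is stored exactly once. This illustrates the general technique, not Cohesity’s internals; the chunking scheme and names are illustrative.

```python
import hashlib

class DedupStore:
    """Toy global deduplication: chunks are stored once, keyed by content
    hash, no matter which secondary-storage workflow wrote them."""

    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> chunk bytes (stored once)
        self.files = {}   # file name -> ordered list of chunk digests

    def put(self, name, data, chunk_size=4096):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # new chunks only
            digests.append(digest)
        self.files[name] = digests

    def get(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

# Usage: a backup and a test/dev clone with identical contents share chunks.
store = DedupStore()
payload = b"x" * 8192
store.put("backup/monday.vmdk", payload)
store.put("testdev/clone.vmdk", payload)  # duplicate data, no new chunks
print(len(store.chunks))                  # 1: both files share one chunk
```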

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16