
Profiles/Reports

Profile

Converging Branch IT Infrastructure the Right Way: Riverbed SteelFusion

Companies with significant IT infrastructure outside the data center, often widely distributed, face many challenges. It is difficult enough to manage tens, hundreds, or even thousands of remote or branch office locations, but many of these sites are also in dirty or dangerous environments that are simply not suited to standard data center infrastructure. It is also hard, if not impossible, to forward-deploy the IT expertise needed to manage locally placed resources. The key challenge, then, and one that can be competitively differentiating on cost alone, is to simplify branch IT as much as possible while still supporting the branch business.

Converged solutions have become widely popular in the data center, particularly in virtualized environments. Tightly integrating multiple functions into one package leaves IT with fewer separate moving parts to manage, while intimately integrated components allow capabilities to be optimized. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious gain in IT simplicity there are many other cascading benefits: converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. And a big benefit is a drastically lowered TCO.

Yet for a number of reasons, data center convergence approaches haven’t translated into equally beneficial convergence in the branch. No matter how tightly integrated a “branch in a box” is, if it is just an assemblage of the usual storage, server, and networking silo components it will still suffer from traditional branch infrastructure challenges – second-class performance, low reliability, high OPEX, and data that is difficult to protect and recover. Branches have unique needs, and data center infrastructure, converged or otherwise, isn’t designed to meet them. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch, one that provides simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration for remote performance, and greatly lowered OPEX.

In this paper we will review Riverbed’s SteelFusion (formerly known as Granite) branch converged infrastructure solution and see how it marries multiple technical advances – WAN optimization, stateless compute, and “projected” data center storage – to solve those branch challenges and bring the benefits of convergence out to branch IT. We’ll see how SteelFusion not only fulfills the promise of a converged branch infrastructure that supports distributed IT, but also accelerates the business built on it.

Publish date: 04/15/14
Profile

Software-Driven Mid-Range Storage: Customer Value and the Software-Driven IBM Storwize V5000

Whether a customer is making their first foray into external storage technology, or buying their 100th storage array, there is little doubt in most customers' minds that storage can be hard. Specialized storage technology, combined with significant cost and the critical nature of stored data, mix together to make storage one of the riskiest endeavors most IT practitioners will undertake.

Over the past two years, the storage market has exploded with offerings that provide more storage system choices than ever before. In part, this is due to the recent and rapid introduction of technologies like flash storage that have enabled new companies to bring to market fairly competent storage systems with significantly less engineering effort.

There is little doubt that the resulting competition and choice are a boon to the customer, as they drive down prices and compel vendors to innovate and deliver new features more aggressively. But sometimes new technologies leave lingering surprises for the customer – especially for those trying to build a long-term, lasting storage strategy. Moreover, storage technology is changing in multiple dimensions: there is a revolutionary shift toward software-defined capabilities, while media, controller architectures, virtual infrastructure integrations, and workload patterns are all changing at the same time. In the midst of such change, it is more important than ever to be attentive to what really matters, and in a changing market, what matters is not always clear.

In our view, the storage practitioner's considerations must broaden into a careful balancing act that weighs both new capabilities – such as agile, cost-optimizing software-defined functionality – and the foundational storage underpinnings that are too easy to take for granted. In this product profile, we've turned our sights on a recent product introduction from IBM – the Storwize V5000 – to consider how IBM is integrating a broad swath of new capabilities while building those capabilities on a field-proven and deeply architected storage foundation.

Publish date: 04/01/14
Profile

Software Storage Solutions for Virtualization

Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are felt most acutely in the virtual infrastructure. The challenges spring from the physicality of storage – while the virtual infrastructure has made IT more agile and adaptable than ever before, storage still depends on digital bits persistently stored on a physical device somewhere within the data center.

For practitioners who have experienced the pain this causes – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the concept also holds tremendous potential to change the way IT is done by tackling this last vestige of the traditional, inflexible IT infrastructure.

The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that bring storage remarkably close to software-defined. These solutions let administrators easily and rapidly deploy storage controllers within the virtual infrastructure, and equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure runs (in the cloud or on premises). Such solutions can make comprehensive storage functionality available in places where it could never be had before, allow for higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.

The 2012-2013 calendar years brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor – VMware – introduced a similar software-based, scale-out storage solution for the virtual infrastructure – VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, its architecture is very similar to the latest generation of HP’s StoreVirtual VSA. Both products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement meant that HP was finally no longer the sole major (Fortune 500) vendor with a primary-storage VSA approach. This only adds validation for other vendors who have long offered VSA-based solutions – vendors like FalconStor, Nexenta, and StorMagic.
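
To make the capacity economics of such scale-out designs concrete, here is a minimal sketch assuming a simple replica-based protection scheme; the host count, per-host capacities, and replica count are hypothetical illustrations and not taken from either product.

```python
# Hypothetical capacity model for a scale-out, software-based storage pool.
# Host count, per-host capacities, and the replica count are illustrative
# assumptions, not specifications of any particular product.

def usable_capacity_tb(hosts: int, raw_tb_per_host: float, replicas: int) -> float:
    """Raw capacity contributed by all hosts, divided by the number of
    replicas kept across nodes to protect the data."""
    return (hosts * raw_tb_per_host) / replicas

def flash_cache_tb(hosts: int, ssd_tb_per_host: float) -> float:
    """Aggregate solid-state capacity available for caching/tiering."""
    return hosts * ssd_tb_per_host

if __name__ == "__main__":
    pool = usable_capacity_tb(hosts=4, raw_tb_per_host=8.0, replicas=2)
    cache = flash_cache_tb(hosts=4, ssd_tb_per_host=0.8)
    print(f"Usable pool: {pool:.1f} TB, flash cache: {cache:.1f} TB")
```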

We’ve turned to a high-level assessment of five market leaders that today offer VSA or software storage solutions for the virtual infrastructure. We’ve assessed these solutions with an eye toward how they fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment, each solution clearly has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.

Publish date: 01/03/14
Profile

Hybrid Cloud Storage from Microsoft: Leveraging Windows Azure and StorSimple

Cloud computing does some things very well. It delivers applications and upgrades. It runs analysis on cloud-based big data. It connects distributed groups sharing communications and files. It provides a great environment for developing web applications and running test/dev processes.

But public cloud storage is a different story. The cloud does deliver long-term, cost-effective storage for inactive backup and archive data. Once the backup and archive data streams are scheduled and running, they can use relatively low bandwidth as long as they are deduplicated on-site before transport. (And as long as they do not have to be rehydrated pre-upload, which is another story.) This alone helps save on-premises storage capacity and can replace off-site tape vaulting.
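
A rough, hedged illustration of why on-site deduplication keeps bandwidth needs low; the change rate, deduplication ratio, and link speed below are assumed figures, not measurements.

```python
# Back-of-the-envelope upload time for a nightly cloud backup stream.
# All figures below are illustrative assumptions.

def upload_hours(changed_gb: float, dedupe_ratio: float, link_mbps: float) -> float:
    """Hours needed to push deduplicated changes over a WAN link."""
    transmitted_gb = changed_gb / dedupe_ratio
    seconds = (transmitted_gb * 8 * 1000) / link_mbps  # GB -> megabits -> seconds
    return seconds / 3600

# 500 GB of changed data over a 100 Mbps link
print(f"With 10:1 dedupe: {upload_hours(500, 10.0, 100):.1f} h")  # ~1.1 h
print(f"Without dedupe:   {upload_hours(500, 1.0, 100):.1f} h")   # ~11.1 h
```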

But cloud storage users want more. They want the cost and agility advantages of the public cloud without incurring the huge expense of building one. They want to keep using the public cloud for cost-effective backup and archive, but they also want to use it for more active – i.e. primary – data. This is especially true for workloads whose data sets grow rapidly but age quickly, such as collaboration and file shares. Some of this data needs to reside locally, but the majority can be moved, or tiered, to public cloud storage.

What does the cloud need to work for this enterprise wish list? Above all it needs to make public cloud storage an integral part of the on-premises primary storage architecture. This requires intelligent and automated storage tiering, high performance for baseline uploads and continual snapshots, no geographical lock-in, and a central storage management console that integrates cloud and on-premises storage.
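
As a minimal sketch of what automated tiering can look like in principle, the following uses a simple age-based policy; the threshold, tier names, and logic are hypothetical illustrations, not Microsoft's implementation.

```python
# Hypothetical age-based tiering decision; purely illustrative.
from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime, local_days: int = 30) -> str:
    """Keep recently accessed data on premises; tier colder data to the cloud."""
    if now - last_access <= timedelta(days=local_days):
        return "on-premises tier"
    return "cloud tier"

now = datetime(2013, 8, 1)
print(choose_tier(datetime(2013, 7, 20), now))  # on-premises tier
print(choose_tier(datetime(2013, 2, 1), now))   # cloud tier
```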

Hybrid cloud storage, or HCS, meets this challenge. HCS turns the public cloud into a true active storage tier for less active production data that is not ready to be put out to backup pasture. Hybrid cloud storage integrates on-premises storage with public cloud storage services: not as another backup target but as integrated storage infrastructure. The storage system uses both the on-premises array and scalable cloud storage resources for primary data, expanding that data and data protection to a cost-effective cloud storage tier.

Microsoft’s innovative and broad set of technology enables a true, integrated solution for hybrid cloud storage for business and government organizations – not just a heterogeneous combination of private cloud and public cloud storage offerings. Composed of StorSimple cloud-integrated storage and the Windows Azure Storage service, HCS from Microsoft serves demanding enterprise storage environments well, enabling customers to realize huge data management efficiencies in their Microsoft applications and Windows and VMware environments.

This paper discusses how the Microsoft solution for hybrid cloud storage, consisting of Windows Azure and StorSimple, differs from traditional storage, lays out best practices for leveraging it, and presents real-world results from multiple customer deployments.

Publish date: 08/31/13
Profile

Why VM Density Matters: HP Innovation Delivers Validated “2x” Advantage

Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.

But the further challenge for IT investors is to choose the virtualization solution that delivers the biggest bang for the buck. Unfortunately, the traditional cost-per-infrastructure metrics (server CPU, storage GB, etc.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized and cloudy world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. A key comparison metric between virtualization solutions is therefore virtual machine (VM) density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides better value.
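
To see why density dominates the economics, consider a back-of-the-envelope comparison; the dollar figures and VM counts below are hypothetical assumptions, not validated results.

```python
# Illustrative cost-per-VM comparison for two storage investments of equal price.
# All figures are hypothetical assumptions.

def cost_per_vm(total_cost: float, vm_density: int) -> float:
    """Cost per hosted VM for a given infrastructure investment."""
    return total_cost / vm_density

baseline = cost_per_vm(total_cost=100_000, vm_density=100)  # $1,000 per VM
doubled = cost_per_vm(total_cost=100_000, vm_density=200)   # $500 per VM

print(f"Baseline: ${baseline:,.0f}/VM; 2x density: ${doubled:,.0f}/VM")
```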

As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and concentrates IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. It is therefore critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.

Publish date: 07/31/13
Profile

Dell AppAssure 5: Unified Platform for Business Resiliency

Backup applications with large user bases have been vendor cash cows because their customers are reluctant to change such deeply embedded products. As long as the backup worked, it was out of sight and out of mind.

But the field is rapidly changing.

The push to virtualize applications saw traditional backup foundering. Traditional backup in the virtual arena suffered from heavy operational overhead at the server, application host, network, and storage levels. The growing number of VMs and the amount of virtualized data had a serious impact on storage resources. For example, each VMDK file represented an entire VM file system image, typically at least 2GB in size. These file sizes led to issues for bandwidth, monitoring, and storage resources.
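
The arithmetic behind that pressure is straightforward; the sketch below uses hypothetical VM counts, image sizes, and network throughput to show how quickly image-level backups consume a backup window.

```python
# Illustrative full-image backup window for image-level VM backups.
# VM count, image size, and throughput are hypothetical assumptions.

def backup_window_hours(vm_count: int, image_gb: float, gbps: float) -> float:
    """Hours to move every VM image at the given sustained line rate."""
    total_gigabits = vm_count * image_gb * 8
    return (total_gigabits / gbps) / 3600

# 200 VMs over a 1 Gb/s backup network
print(f"{backup_window_hours(200, 2.0, 1.0):.1f} h at the 2 GB minimum image size")
print(f"{backup_window_hours(200, 40.0, 1.0):.1f} h at a 40 GB average image size")
```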

In response, some vendors developed innovative virtual backup products that made virtual backup much more resource-efficient and easily manageable. Increased performance shrank backup window requirements, provided effective RPO and RTO, simplified the backup process, and improved recovery integrity. These tools changed the virtual data protection landscape for the better.

However, many of these startups offered limited solutions that supported only a single type of hypervisor and a handful of physical machines. This left virtual and physical networks essentially siloed – not to mention the problem of multiple point products creating even more silos within both environments. Managing cross-domain data protection with a variety of point products became inefficient and costly for IT.

Traditional backup makers also scrambled to add virtualization backup support and succeeded to a point, but only to a point. Their backup code bases were written well before the mass appearance of the cloud and virtualization, and retrofitting existing applications only went so far toward providing scalability and integration. They also could not solve a problem that has plagued IT since the early days of backup tape – restore assurance. It has always been risky to find out after the fact that the backup you depended on is not usable for recovery. With data sets doubling every 18 months, the risk of data loss has risen significantly.

More modern backup solves some of these problems but causes new ones. Modern backup offers automated scheduling, manual operations, policy setting, multiple types of backup targets, replication schemes, application optimization, and more. These are useful features, but they are also costly and resource-hungry: roughly 30% of storage costs go to IT operations alone. Another problem with these new features is their complexity. It is difficult to optimize and monitor the data protection environment, leading to a conservative estimate that about 20% of backup or recovery jobs fail.

In addition, most data protection products offer average-to-poor awareness of, and integration with, their backup tape and disk targets. This makes it difficult to set and test Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for business applications. The last thing IT wants is to cripple application recovery, but it is challenging to set meaningful RTO and RPO values across multiple environments and applications, and extremely difficult to test them.
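
As a simple illustration of the kind of check that is hard to run consistently across many applications, here is a minimal sketch that compares an achieved recovery point against a target RPO; the timestamps and target are hypothetical.

```python
# Hypothetical RPO check: does the last good recovery point meet the target?
from datetime import datetime, timedelta

def meets_rpo(last_good_copy: datetime, failure_time: datetime, rpo: timedelta) -> bool:
    """True if the data lost between the last good copy and the failure
    stays within the stated Recovery Point Objective."""
    return (failure_time - last_good_copy) <= rpo

failure = datetime(2013, 6, 1, 14, 30)
print(meets_rpo(datetime(2013, 6, 1, 14, 0), failure, timedelta(hours=1)))  # True
print(meets_rpo(datetime(2013, 6, 1, 2, 0), failure, timedelta(hours=1)))   # False
```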

Even newer VM backup products are inadequate for modern enterprise data centers with physical and virtual layers running critical applications. Combine this with complex, mixed IT environments and the result is a very serious challenge for IT professionals charged with protecting data and application productivity.

What we are seeing now is next-generation data protection that protects both virtual and physical environments in one flexible platform. Dell AppAssure is a leading pioneer in this promising field. AppAssure is rewriting the data protection book, moving from limited point products to a highly agile data center protection platform with continuous backup, instantaneous restore, backup assurance, and a host of additional benefits.
 

Publish date: 06/27/13