Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Profile

Memory is the Hidden Secret to Success with Big Data: GridGain’s In-Memory Hadoop Accelerator

Two big trends are driving IT today. One, of course, is big data. Growth in big data IT is tremendous, both in data volume and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent; as data sets grow, resources are growing too. It's not surprising to see traditional scale-up databases come to the broader market with new in-memory options for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and those new big data analytical applications.

A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By transitioning data processing onto what is now really an in-memory data management platform, performance can be accelerated across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can both functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.

While IT shops may be generally familiar with traditional in-memory databases - and IT resource economics are shifting rapidly in favor of in-memory options - less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects - those that are more interactive, real-time, streaming, and operationally focused - have needed to keep looking for newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug and play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance, and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.
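The architectural idea behind this kind of acceleration is simple: keep the working set in memory, partitioned across many nodes, so repeated reads never pay the disk penalty. As a rough illustration only (this is not GridGain's actual API; `SlowDiskStore` and `InMemoryGrid` are hypothetical names for this sketch), a read-through, partitioned in-memory cache in front of a slow store might look like:

```python
import time

class SlowDiskStore:
    """Stand-in for a disk-based store; every read pays a simulated I/O penalty."""
    def __init__(self, data):
        self._data = dict(data)

    def read(self, key):
        time.sleep(0.005)  # simulated disk latency per read
        return self._data[key]

class InMemoryGrid:
    """Illustrative read-through cache, hash-partitioned across 'nodes'."""
    def __init__(self, backing, nodes=4):
        self._backing = backing
        self._partitions = [{} for _ in range(nodes)]

    def _partition(self, key):
        # Each key is owned by exactly one partition (one "node").
        return self._partitions[hash(key) % len(self._partitions)]

    def read(self, key):
        part = self._partition(key)
        if key not in part:
            # Cache miss: fetch from the slow store once, then serve from memory.
            part[key] = self._backing.read(key)
        return part[key]
```

After the first read of a key, subsequent reads are served from the owning partition at memory speed; the same read-through pattern is what lets an in-memory layer sit transparently in front of an existing disk-based system.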

Publish date: 07/08/14
Profile

Violin Concerto 7000 All Flash Array: Performance Packed with Data Services

All Flash Arrays (AFAs) are plentiful in the market. At one level, every AFA delivers phenomenal performance compared to an HDD array. But comparing an AFA to an HDD-based system is like comparing a Ford Focus to a Lamborghini. The meaningful comparison is between AFAs, and when one looks under the hood one finds that the AFAs on the market vary in performance, resiliency, consistency of performance, density, scalability, and almost every other dimension one can think of.

An AFA has to be viewed as a business transformation technology. A well-designed AFA, applied to the right applications, will not only speed up application performance but, by doing so, enable you to make fundamental changes to your business. It may enable you to offer new services to your customers, serve your current customers faster and better, or improve internal procedures in a way that boosts employee morale and productivity. To not view an AFA through the business lens would be to miss the point.

In this Product Profile we describe the major criteria that should be used to evaluate AFAs, and then look at Violin's new entry, the Concerto 7000 All Flash Array, to see how it fares against these measures.

Publish date: 06/24/14
Profile

The HP Solution to Backup Complexity and Scale

There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products. These composites rarely address all of the disparate needs of most organizations, nor are they simple or cost-effective to operate. Here is where we see HP as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.

First, HP is pulling together its top-notch products into a user-ready "solution" that marries StoreOnce storage with Data Protector. For those who have worked with either or both of these in conjunction with other vendors' products, it's no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HP hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to the market. From products to services to support, HP people have taken to heart a “customer first” message to provide a truly solution-focused HP experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HP business unit’s components are in the “box.” Significantly, this approach elevates HP from a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HP is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom entirely out of its own world-class product lines.

In this report, we’ll first examine why the HP StoreOnce and Data Protector products are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 05/30/14
Profile

Converging Branch IT Infrastructure the Right Way: Riverbed SteelFusion

Companies with significant non-data center, often widely distributed IT infrastructure face many challenges. It can be difficult enough to manage tens, hundreds, or even thousands of remote or branch office locations, but many of these also sit in dirty or dangerous environments that are simply not suited to standard data center infrastructure. It is also hard, if not impossible, to forward-deploy the IT expertise needed to manage any locally placed resources. The key challenge, then - and one that can be competitively differentiating on cost alone - is to simplify branch IT as much as possible while still supporting the branch business.

Converged solutions have become widely popular in the data center, particularly in virtualized environments. By tightly integrating multiple functions into one package, there are fewer separate moving parts for IT to manage, while capabilities are optimized through intimately integrated components. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious increase in IT simplicity there are many other cascading benefits: converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. A further big benefit is a drastically lowered TCO.

Yet for a number of reasons, data center convergence approaches haven’t translated into equally beneficial convergence in the branch. No matter how tightly integrated a “branch in a box” is, if it’s just an assemblage of the usual storage, server, and networking silo components it will still suffer from the traditional branch infrastructure challenges - second-class performance, low reliability, high OPEX, and difficult protection and recovery. Branches have unique needs, and data center infrastructure, converged or otherwise, isn’t designed to meet them. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch, one that provides simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration for remote performance, and greatly lowered OPEX.

In this paper we review Riverbed’s SteelFusion (formerly known as Granite) branch converged infrastructure solution, and see how it marries multiple technical advances - including WAN optimization, stateless compute, and “projected” datacenter storage - to solve those branch challenges and bring the benefits of convergence out to branch IT. We’ll see how SteelFusion not only fulfills the promise of a converged “branch” infrastructure that supports distributed IT, but also accelerates the business built on it.

Publish date: 04/15/14
Profile

Software-Driven Mid-Range Storage: Customer Value and the Software-driven IBM Storwize V5000

Whether a customer is making their first foray into external storage technology or buying their 100th storage array, there is little doubt in most customers' minds that storage can be hard. Specialized technology, significant cost, and the critical nature of stored data combine to make storage one of the riskiest endeavors most IT practitioners will undertake.

Over the past two years, the storage market has exploded with offerings that provide more storage system choices than ever before. In part, this is due to the recent and rapid introduction of technologies like flash storage, which have enabled new companies to bring fairly competent storage systems to market with significantly less engineering effort.

There is little doubt that the resulting competition and choice are a boon to the customer, driving down prices and compelling vendors to innovate and deliver new features more aggressively. But sometimes new technologies leave lingering surprises for the customer - especially for those trying to build a long-term, lasting storage strategy. Moreover, storage technology is changing in multiple dimensions: there is a revolutionary shift toward software-defined capabilities, while media, controller architectures, virtual infrastructure integrations, and workload patterns are all changing simultaneously. In the midst of such change, it is more important than ever to be attentive to what really matters, and in a changing market, what matters is not always clear. In our view, the storage practitioner's considerations must broaden into a careful balancing act that weighs both new capabilities - like agility and cost-optimizing software-defined functionality - and foundational storage underpinnings that are too easy to take for granted. In this product profile, we've turned our sights on a recent product introduction from IBM - the Storwize V5000 - to consider how IBM is integrating a broad swath of new capabilities while building them on a field-proven and deeply architected storage foundation.

Publish date: 04/01/14
Profile

Software Storage Solutions for Virtualization

Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are felt most in the virtual infrastructure. They spring from the physicality of storage - while the virtual infrastructure has made IT more agile and adaptable than ever before, storage still depends upon digital bits permanently stored on a physical device somewhere within the data center.

For practitioners who have experienced the pain caused by this – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the term also holds tremendous potential to change the way IT is done by tackling this one last vestige of the traditional, inflexible IT infrastructure.

The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that bring storage remarkably close to software-defined. These solutions allow administrators to easily and rapidly deploy storage controllers within the virtual infrastructure, and to equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure runs (in the cloud or on premises). Such solutions can make comprehensive storage functionality available in places where it could never be had before, allow higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.

The 2012-2013 calendar years brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor - VMware - introduced a similar software-based, scale-out storage solution for the virtual infrastructure: VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, VSAN has an architecture very similar to the latest generation of HP’s StoreVirtual VSA. Both products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement means HP is no longer the sole major (Fortune 500) vendor with a primary storage VSA approach. This only adds validation for other vendors who have long offered VSA-based solutions - vendors like FalconStor, Nexenta, and StorMagic.

We’ve turned to a high-level assessment of five market leaders who today offer VSA or software storage in the virtual infrastructure, with an eye toward how these solutions fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment, each solution clearly has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.

Publish date: 01/03/14