Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Profile

IBM FlashSystem V840: Transforming the Traditional Datacenter

Within the past few months, IBM announced a new member of its FlashSystem family of all-flash storage platforms: the IBM FlashSystem V840. FlashSystem V840 adds a rich set of storage virtualization features to the baseline FlashSystem 840 model. V840 combines two venerable technology heritages: the hardware hails from the long lineage of Texas Memory Systems flash storage arrays, while the storage services feature set is inherited from the IBM storage virtualization software that powers the SAN Volume Controller (SVC). One was created to deliver the highest performance from flash technology; the other was a forerunner of what is now termed software-defined storage. Together, these two technology streams represent decades of successful customer deployments across a wide variety of enterprise environments.

It is easy to be impressed by the performance and the tight integration of SVC functionality built into the FlashSystem V840. It is also easy to appreciate the wide variety of storage services built on top of SVC that are now an integral part of FlashSystem V840. But we believe the real impact of FlashSystem V840 becomes clear when one considers how this product affects the cost of flash appliances, and more generally how this new cost profile will undoubtedly affect traditional data center architecture and deployment strategies. This Solution Profile will discuss how IBM FlashSystem V840 combines software-defined storage with the extreme performance of flash, and why the cost profile of this new product, essentially equivalent to that of current high-performance disk storage, will have a major positive impact on data center storage architecture and the businesses that these data centers support.

Publish date: 09/16/14
Profile

Memory is the Hidden Secret to Success with Big Data: GridGain’s In-Memory Hadoop Accelerator

Two big trends are driving IT today. One, of course, is big data. The growth in big data IT is tremendous, both in terms of data volume and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent: as data sets grow, resources are also growing. It's not surprising to see traditional scale-up databases with new in-memory options coming to the broader market for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and new big data analytical applications.

A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By transitioning data processing onto what is now really an in-memory data management platform, performance can be accelerated across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.
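To make that concrete, below is a minimal sketch of putting and getting data in an in-memory data grid. GridGain's core platform was subsequently open-sourced as Apache Ignite, so the sketch uses the Ignite API for illustration; the cache name and sample data are hypothetical.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class InMemoryGridSketch {
    public static void main(String[] args) {
        // Start a grid node; in production this node would discover peers
        // and the cache would be partitioned across their combined RAM.
        try (Ignite ignite = Ignition.start()) {
            // "orders" is a hypothetical cache name for this sketch.
            IgniteCache<Long, String> orders = ignite.getOrCreateCache("orders");

            // Reads and writes are served from memory somewhere in the
            // grid rather than from a disk-based store.
            orders.put(1L, "order 1: 3 widgets");
            System.out.println(orders.get(1L));
        }
    }
}
```

The same in-memory data can also be queried with SQL, which is how a grid of this kind can stand in for a slower disk-based SQL database.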

While IT shops may be generally familiar with traditional in-memory databases - and IT resource economics are shifting rapidly in favor of in-memory options - less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects - those that are more interactive, real-time, streaming, and operationally focused - have needed to keep looking for yet newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug and play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.
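As an illustration of the "plug and play" point: Hadoop allows an alternative file system implementation to be swapped in purely through configuration, which is the mechanism an in-memory accelerator can exploit. The sketch below assumes the accelerator's open-source successor (Apache Ignite's IGFS Hadoop module) and points a Hadoop client at an in-memory file system instead of HDFS; the class name, URI authority, and port shown are assumptions for illustration.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class IgfsPlugInSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Assumption: the Ignite Hadoop accelerator jar is on the classpath
        // and an Ignite node with IGFS enabled is running locally.
        conf.set("fs.igfs.impl",
                "org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem");

        // The igfs:// authority and port are assumptions for this sketch;
        // jobs pointed at this URI read and write RAM rather than disk.
        FileSystem fs = FileSystem.get(URI.create("igfs://igfs@localhost:10500/"), conf);
        fs.mkdirs(new Path("/warehouse"));
        System.out.println("Connected to: " + fs.getUri());
    }
}
```

Because the swap happens at the file system layer, existing MapReduce jobs need no code changes, which is what makes the approach transparent to current Hadoop applications.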

Publish date: 07/08/14
Profile

Violin Concerto 7000 All Flash Array: Performance Packed with Data Services

All Flash Arrays (AFAs) are plentiful in the market. At one level, all AFAs deliver phenomenal performance compared to an HDD array. But comparing an AFA to an HDD-based system is like comparing a Lamborghini to a Ford Focus. The meaningful comparison is between AFAs, and when one looks under the hood one finds that the AFAs on the market vary in performance, resiliency, consistency of performance, density, scalability, and almost every other dimension one can think of.

An AFA has to be viewed as a business transformation technology. A well-designed AFA, applied to the right applications, will not only speed up application performance but, by doing so, enable you to make fundamental changes to your business. It may enable you to offer new services to your customers. Or serve your current customers faster and better. Or improve internal procedures in a way that improves employee morale and productivity. To not view an AFA through the business lens would be missing the point.

In this Product Profile we describe all the major criteria that should be used to evaluate AFAs, and then look at Violin's new entry, the Concerto 7000 All Flash Array, to see how it fares against these measures.

Publish date: 06/24/14
Profile

The HP Solution to Backup Complexity and Scale

There are a lot of game-changing trends in IT today, including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex: increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster-flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendors' products and solutions. These never quite fully address the many disparate needs of most organizations, nor do they manage to be simple or cost-effective to operate. Here is where we see HP as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.

First, HP is pulling together its top-notch products into a user-ready "solution" that marries StoreOnce storage with Data Protector. For those who have worked with either or both of these products in the past, in conjunction with other vendors' products, it's no surprise that each competes favorably one-on-one with other offerings in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HP hasn't just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to the market. From products to services to support, HP people have taken to heart a "customer first" message to provide a truly solution-focused HP experience: one support call, one ticket, one project manager, addressing the customer's needs regardless of which internal HP business unit components are in the "box". Significantly, this approach elevates HP from a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HP is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own world-class product lines.

In this report, we'll first examine why HP StoreOnce and Data Protector are truly game-changing products in their own right. Then we will look at why they get even "better together" as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.

Publish date: 05/30/14
Profile

Converging Branch IT Infrastructure the Right Way: Riverbed SteelFusion

Companies with significant non-data center and often widely distributed IT infrastructure requirements face many challenges. It can be difficult enough to manage tens, hundreds, or even thousands of remote or branch office locations, but many of these are also located in dirty or dangerous environments that are simply not suited to standard data center infrastructure. It is also hard, if not impossible, to forward-deploy the necessary IT expertise to manage any locally placed resources. The key challenge, then, and one that can be competitively differentiating on cost alone, is to simplify branch IT as much as possible while still supporting branch business.

Converged solutions have become widely popular in the data center, particularly in virtualized environments. Because multiple functions are tightly integrated into one package, there are fewer separate moving parts for IT to manage, while capabilities are optimized across intimately integrated components. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious increase in IT simplicity, there are many other cascading benefits: the converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. And a big benefit is a drastically lowered TCO.

Yet for a number of reasons, data center convergence approaches haven't translated into equally beneficial convergence in the branch. No matter how tightly integrated a "branch in a box" is, if it's just an assemblage of the usual storage, server, and networking silo components it will still suffer from traditional branch infrastructure challenges: second-class performance, low reliability, high OPEX, and difficult protection and recovery. Branches have unique needs, and data center infrastructure, converged or otherwise, isn't designed to meet them. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch, one that provides simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration of remote performance, and greatly lowered OPEX.

In this paper we will review Riverbed's SteelFusion (formerly known as Granite) branch converged infrastructure solution, and see how it marries multiple technical advances, including WAN optimization, stateless compute, and "projected" data center storage, to solve those branch challenges and bring the benefits of convergence out to branch IT. We'll see how SteelFusion not only fulfills the promise of a converged "branch" infrastructure that supports distributed IT, but also accelerates the business built on it.

Publish date: 04/15/14
Profile

Software-Driven Mid-Range Storage: Customer Value and the Software-driven IBM Storwize V5000

Whether a customer is making a first foray into external storage technology or buying a 100th storage array, there is little doubt in most customers' minds that storage can be hard. Specialized storage technology, significant cost, and the critical nature of stored data combine to make storage one of the riskiest endeavors most IT practitioners will undertake.

Over the past two years, the storage market has exploded with offerings that provide more storage system choices than ever before. In part, this is due to the recent and rapid introduction of technologies like flash storage that have enabled new companies to bring to market fairly competent storage systems with significantly less engineering effort.

There is little doubt that the resulting competition and choice are a boon to the customer, as they can drive down prices and compel vendors to innovate and deliver new features more aggressively. But sometimes new technologies leave lingering surprises for the customer, especially for those trying to build a long-term and lasting storage strategy. Moreover, storage technology is changing in multiple dimensions: there is a revolutionary shift toward software-defined capabilities, while media, controller architectures, virtual infrastructure integrations, and workload patterns are all changing at the same time. In the midst of such change, it is more important than ever to be attentive to what really matters, and in a changing market, what matters is not always clear. In our view, the storage practitioner's perspective must broaden into a careful balancing act that weighs both new capabilities - like agility and cost-optimizing software-defined functionality - and foundational storage underpinnings that are too easy to take for granted. In this Product Profile, we've turned our sights on a recent product introduction from IBM - the Storwize V5000 - to consider how IBM is integrating a broad swath of new capabilities while building them on a field-proven and deeply architected storage foundation.

Publish date: 04/01/14