Items Tagged: convergence
IBM PureSystems looks like a major convergence play. Let’s take a quick look at convergence history to see why IBM’s approach might be a very different take on a long-term industry development.
The term “convergence” may have been overused by IT marketers, but it remains an important concept: one that should serve as an ongoing design principle and can help maximize storage investments.
Recently we posted a new market assessment of InfiniBand and its growing role in enterprise data centers. Here is a more philosophical thought about the optimized design of InfiniBand and its role as data center communication virtualization...
SimpliVity: Transforming the Data Center with Virtualization and Storage Convergence
The data center has been rapidly transformed by the emergence and mainstream adoption of virtualization over merely the past decade. Virtualization has changed the ability of IT to deploy and manage workloads, and has lent tremendous power to the administrator for manipulating those workloads in clever ways. At the onset, virtualization appealed to most users because it promised to homogenize a rather difficult part of the physical data center through a layer of software abstraction. Such abstraction would make the configuration and deployment difficulties associated with physical server hardware and fat operating systems melt away. The rewards from this undertaking were tremendous: untold buckets of operational dollars were saved by avoiding the time- and effort-intensive rack, power, install, and configure cycles that would otherwise recur to support application development, testing, production deployment, cycle replacements, break/fix, and more. The changes have infused the business with a new ability to leverage IT, and to do so at lower cost and with less risk of disruption.
The transformation is not yet complete, as the full promise of virtualization is inhibited by the underlying physical infrastructure of the data center. Recognition of the need to address this infrastructure complexity problem is driving a flurry of innovation in the market. With an eye on what we call hyperconvergence, we’ll briefly survey innovations emerging in response to ongoing virtualization challenges, evaluate how these technologies will impact the data center over the next few years, and highlight one vendor as an early innovator bringing these changes to market: SimpliVity. Unlike early players that converged only a few aspects of the IT infrastructure stack (rudimentary storage and server functionality), SimpliVity has assimilated all of the functionality of the IT infrastructure onto a single platform. Each such unit (OmniCube, as SimpliVity calls it) offers a complete set of data center infrastructure functionality at a fraction of the acquisition and operating costs of separate IT systems. But hyperconvergence will bring about change far bigger than cost savings, and may well transform how IT is done. Let’s take a look.
SimpliVity Corp. came out of stealth today, promising an integrated stack of storage, compute and networking in 2U appliances that can be managed through VMware's vCenter console.
EMC Corp. (NYSE: EMC) has helped to define the concept of "IT convergence" by offering its hardware paired with components from Cisco (Nasdaq: CSCO) and its VMware (NYSE: VMW) arm, in a bid to simplify data center infrastructure.
Over the past decade the data center has been transformed by the emergence and mainstream adoption of virtualization. Today, the data center is a far different creature than any architect would have imagined prior to the year 2000.
Convergence -- the bundling of storage, compute, network and virtualization -- is already evolving with new products that redefine ease of use.
Looking back over 2012, it has really been the year of convergence. IT resource domain silos have been breaking down in favor of more coherent, cohesive, and unified architectures ... Power and cooling have always been integral to the data center, but have been managed disparately from the occupying compute and storage infrastructures. However, there are emerging technologies driven by green initiatives and by cost efficiency projects that... will enable a level of convergence between facilities and systems management.
Could a Riverbed and HP partnership on a converged infrastructure platform be the next big thing for Riverbed?
I recently had the chance at VMware PEX to spend some extended in-depth time with HP in a bootcamp session intended for their field and channel. I've blogged a bit already on a couple of observations stemming from this, but there is at least one other observation to be had.
Local, Shared, Cloud-Based, & Beyond.
Once upon a time as IT shops grew and matured, infrastructure subgroups would form to focus on complex domain-specific technologies. Servers, storage and networking all required deep subject matter expertise and a single-minded focus to keep up with the varying intricacies of implementation, operations and management.
As an unusual exception to precedent, Taneja Group finds itself at HP Discover this year. HP Discover tends to be a longer and more in-depth event than Taneja Group has historically covered, but the HP innovation portfolio continues to revolve around storage to such a degree that it can't be missed. With feet on the ground in Las Vegas, what do we think we have to look forward to at Discover 2013?
HP StoreOnce Boldly Goes Where No Deduplication Has Gone Before
Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity it saves, how fast it backs up), but everyone knows how central dedupe is to backup success.
However, serious pressures are forcing changes to backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: backup silos are being replaced with expanded data protection platforms. These comprehensive systems back up multiple sources to distributed storage targets, with single-console management for increased control.
Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe, or Dedupe 1.0 as it is sometimes called, is suited to backup silos. Moving deduped data outside the system requires rehydration, which impacts performance and capacity between the data center, ROBO, DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with pairing hardware and software to increase sales, but storage-system-specific dedupe means that data must be rehydrated whenever it moves beyond that system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.
Federating dedupe across systems goes a long way toward solving that problem. HP StoreOnce extends consistent dedupe across the infrastructure. Only HP implements the same deduplication technology in four places: the target appliance, the backup/media server, the application source, and the virtual machine. This enables data to move freely between physical and virtual platforms, and between source and target machines, without the need to rehydrate.
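To make the rehydration point concrete, here is a minimal Python sketch of chunk-level dedupe shared between two stores. It is purely illustrative and not HP StoreOnce code: the fixed-size chunks and SHA-256 fingerprints are assumptions standing in for whatever a real product uses. The point it shows is that when sender and receiver speak the same dedupe, a backup can replicate by sending only the chunks the receiver lacks, with rehydration deferred until an application actually restores the data.

```python
# Illustrative sketch of shared (federated) chunk-level dedupe.
# Hypothetical example; not HP StoreOnce's implementation.

import hashlib

CHUNK_SIZE = 4096  # bytes; real systems often use variable-size chunking


def chunk(data: bytes):
    """Split a byte stream into fixed-size chunks."""
    return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]


class ChunkStore:
    """A dedupe store: unique chunks indexed by fingerprint."""

    def __init__(self):
        self.chunks = {}  # fingerprint -> chunk bytes

    def ingest(self, data: bytes):
        """Store a backup stream as a recipe of chunk fingerprints."""
        recipe = []
        for c in chunk(data):
            fp = hashlib.sha256(c).hexdigest()
            self.chunks.setdefault(fp, c)   # duplicate chunks stored once
            recipe.append(fp)
        return recipe

    def replicate(self, recipe, target: "ChunkStore") -> int:
        """Move a backup to another store using the same dedupe scheme:
        only chunks the target lacks cross the wire -- no rehydration."""
        sent = 0
        for fp in recipe:
            if fp not in target.chunks:
                target.chunks[fp] = self.chunks[fp]
                sent += 1
        return sent

    def restore(self, recipe) -> bytes:
        """Rehydrate only when the data is actually needed."""
        return b"".join(self.chunks[fp] for fp in recipe)


# Two sites sharing the same dedupe: the second, near-identical backup
# replicates by sending only the chunk that changed.
site_a, site_b = ChunkStore(), ChunkStore()
backup1 = site_a.ingest(b"A" * 8192 + b"B" * 4096)
site_a.replicate(backup1, site_b)            # first copy: all unique chunks sent
backup2 = site_a.ingest(b"A" * 8192 + b"C" * 4096)
print(site_a.replicate(backup2, site_b))     # prints 1: only the new chunk moves
```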
This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, how HP is achieving its vision of federated dedupe with StoreOnce, and what HP’s StoreOnce VSA announcement and achievement mean to backup service providers, enterprise ROBO, and SMB customers.
Why VM Density Matters: HP Innovation Delivers Validated "2x" Advantage
Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.
But the further challenge for IT investors now is to choose the best virtualization solution to get the “biggest bang for the buck”. Unfortunately, the traditional cost-per-infrastructure metrics (cost per server CPU, per storage GB, etc.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized and cloud-oriented world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. Therefore a key comparison metric between virtualization solutions is VM density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides better value.
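To make the cost-per-application arithmetic concrete, here is a small illustrative calculation. The dollar figures and VM counts are hypothetical assumptions, not measured results for any vendor's product; the only point is how density drives the metric.

```python
# Hypothetical numbers purely to illustrate the cost-per-application metric.

def cost_per_vm(total_infrastructure_cost: float,
                vm_density_per_host: int,
                hosts: int) -> float:
    """Cost per application ~= total infrastructure cost / total VMs hosted."""
    total_vms = vm_density_per_host * hosts
    return total_infrastructure_cost / total_vms

# Same $500K spend on servers and storage, identical hosts and policies:
baseline = cost_per_vm(500_000, vm_density_per_host=20, hosts=10)  # $2,500 per VM
doubled  = cost_per_vm(500_000, vm_density_per_host=40, hosts=10)  # $1,250 per VM

print(f"baseline: ${baseline:,.0f}/VM, 2x density: ${doubled:,.0f}/VM")
# Doubling VM density halves the cost per application for the same investment.
```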
As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and condenses IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. Therefore it’s critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.
There are plenty of technologies touted as the next big thing. Big data, flash, high-performance computing, in-memory processing, NoSQL, virtualization, convergence, and software-defined everything all represent wild new forces that could bring real disruption, but also big opportunities, to your local data center.
- Premiered: 03/19/14
- Author: Mike Matchett
- Published: Tech Target: Search Data Center
With today's rebranding of Riverbed Granite as SteelFusion, Riverbed is prodding all branch IT owners to step up and consider what branch IT should ideally look like. Instead of a disparate package of network optimization, remote servers and storage arrays, difficult (if not forsworn) data protection approaches, and independently maintained branch applications... converged SteelFusion edge appliances sit in the branch to provide local computing performance but work on "projected" data... A big part of the unique magic in SteelFusion is that it enables branch locations to leverage enterprise storage actually located in the data center as if it were local high-performance storage....
Converging Branch IT Infrastructure the Right Way: Riverbed SteelFusion
Companies with significant non-data-center and often widely distributed IT infrastructure requirements face many challenges. It can be difficult enough to manage tens, hundreds, or even thousands of remote or branch office locations, but many of these can also be located in dirty or dangerous environments that are simply not suited for standard data center infrastructure. It is also hard, if not impossible, to forward-deploy the necessary IT expertise to manage any locally placed resources. The key challenge then, and one that can be competitively differentiating on cost alone, is to simplify branch IT as much as possible while still supporting branch business.
Converged solutions have become widely popular in the data center, particularly in virtualized environments. By tightly integrating multiple functions into one package, there are fewer separate moving parts for IT to manage, while capabilities are optimized through intimately integrated components. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious increase in IT simplicity there are many other cascading benefits: the converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. And a big benefit is a drastically lowered TCO.
Yet for a number of reasons, data center convergence approaches haven’t translated into equally beneficial convergence in the branch. No matter how tightly integrated a “branch in a box” is, if it’s just an assemblage of the usual storage, server, and networking silo components it will still suffer from the traditional branch infrastructure challenges: second-class performance, low reliability, high OPEX, and difficulty of protection and recovery. Branches have unique needs, and data center infrastructure, converged or otherwise, isn’t designed to meet them. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch, one that provides simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration for remote performance, and greatly lowered OPEX.
In this paper we will review Riverbed’s SteelFusion (formerly known as Granite) branch converged infrastructure solution, and see how it marries multiple technical advances, including WAN optimization, stateless compute, and “projected” data center storage, to solve those branch challenges and bring the benefits of convergence out to branch IT. We’ll see how SteelFusion not only fulfills the promise of a converged “branch” infrastructure that supports distributed IT, but also accelerates the business that runs on it.
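For readers unfamiliar with the "projected" storage idea, here is a deliberately simplified Python sketch of the general pattern, not Riverbed's implementation or API: a branch edge serves reads from a local working-set cache of blocks whose authoritative copy lives on a data center LUN, acknowledges writes locally, and trickles changes back across the WAN, so the branch sees local performance while the data of record stays centrally protected. The class names and block model are assumptions for illustration only.

```python
# Conceptual sketch of "projected" branch storage -- hypothetical, not SteelFusion code.

class DataCenterLUN:
    """Authoritative block storage that stays in the data center."""
    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba, b"\x00")

    def write(self, lba, data):
        self.blocks[lba] = data


class BranchEdge:
    """Branch appliance: local working-set cache plus asynchronous write-back over the WAN."""
    def __init__(self, lun: DataCenterLUN):
        self.lun = lun
        self.cache = {}       # blocks held locally at the branch
        self.dirty = set()    # blocks written locally, not yet synced back

    def read(self, lba):
        if lba not in self.cache:        # cache miss: fetch over the WAN once
            self.cache[lba] = self.lun.read(lba)
        return self.cache[lba]           # subsequent reads are local-speed

    def write(self, lba, data):
        self.cache[lba] = data           # acknowledged locally
        self.dirty.add(lba)              # queued for write-back

    def sync(self):
        """Background replication of changed blocks to the data center copy."""
        for lba in sorted(self.dirty):
            self.lun.write(lba, self.cache[lba])
        self.dirty.clear()


# Once sync() has run, the authoritative blocks are back in the data center,
# so backup and recovery of branch data can happen centrally.
lun = DataCenterLUN()
edge = BranchEdge(lun)
edge.write(7, b"branch-local update")
edge.sync()
assert lun.read(7) == b"branch-local update"
```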
SteelFusion radically improves branch economics by fusing servers, storage, virtualization, and networking into a single solution, speeding provisioning by up to 30x and recovery by up to 96x.