Includes WAFS, Wide Area Data Services (WDS), and replication.
Remote offices have been the bane of IT for decades. Issues surrounding performance, collaboration, bandwidth management and latency abound. In this area of research, Taneja Group analysts define a variety of new technologies and their pros and cons, the issues they solve, the players that are active and how they differentiate themselves from each other. Taneja Group analysts also provide guidance to IT as it struggles to evaluate and implement these new technologies in a coherent manner.
Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are only to be found in mid-sized companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-sized enterprises (SMEs), departments within large organizations, and Remote/Branch Offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. The reality is that companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability and strong security. The only difference is that IT groups managing mid-sized workloads frequently have significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.
A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences among various storage boxes to create: 1) a single virtualized storage pool, 2) a common set of data services, and 3) a common interface to manage storage resources. These storage virtualization capabilities benefit the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
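The three capabilities above can be illustrated with a toy sketch. The class and method names below are purely illustrative (not IBM or any vendor's API); the point is simply that heterogeneous backend arrays disappear behind one pool, one placement policy, and one provisioning call:

```python
# Conceptual sketch only: a toy virtualization layer that pools capacity
# from heterogeneous backend arrays behind one provisioning interface.
# All names here are hypothetical, not any vendor's actual API.

class BackendArray:
    """A physical storage box with its own free capacity (GB)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """Aggregates backends into a single pool with a common interface."""
    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        # The administrator sees one pool, not individual boxes.
        return sum(b.free_gb for b in self.backends)

    def provision(self, volume_name, size_gb):
        # Place the volume on whichever backend has the most free space;
        # the caller never needs to know which box was chosen.
        target = max(self.backends, key=lambda b: b.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.free_gb -= size_gb
        return {"volume": volume_name, "backend": target.name, "size_gb": size_gb}

pool = VirtualPool([BackendArray("array-a", 500), BackendArray("array-b", 200)])
vol = pool.provision("app-data", 100)   # lands on array-a, the emptiest box
```

In a real product the placement policy would of course weigh tiering, performance and redundancy rather than free space alone; the sketch only shows the abstraction boundary.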
The combination of affordability, manageability and enterprise-grade functionality is the core strength of the IBM Storwize family built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade with IBM SAN Volume Controller (SVC). Simply stated – few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.
In this paper, we will look at the key requirements for mid-sized storage and evaluate IBM Storwize with Spectrum Virtualize's ability to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and provide a comparison of the various models in the Storwize portfolio.
Hyperconvergence is one of the hottest IT trends going into 2016. In a recent Taneja Group survey of senior enterprise IT folks, we found that over 25% of organizations are looking to adopt hyperconvergence as their primary data center architecture. Yet the centralized enterprise datacenter may just be the tip of the iceberg when it comes to the vast opportunity for hyperconverged solutions. Where there are remote or branch office (ROBO) requirements demanding localized computing, some form of hyperconvergence would seem the ideal way to address the scale, distribution, protection and remote management challenges involved in putting IT infrastructure "out there" remotely and in large numbers.
However, most of today's popular hyperconverged appliances were designed as data center infrastructure, converging data center IT resources like servers, storage, virtualization and networking into Lego™-like IT building blocks. While these might at first seem ideal for ROBOs – dropping in "whole" modular appliances avoids any number of onsite integration and maintenance challenges – ROBOs have different and often more demanding requirements than a datacenter. A ROBO does not often come with trained IT staff or a protected datacenter environment. ROBOs are, by definition, located remotely across relatively unreliable networks. And they fan out to thousands (or tens of thousands) of locations.
Certainly any amount of convergence simplifies infrastructure, making it easier to deploy and maintain. But in general, popular hyperconvergence appliances haven't been designed to be remotely managed en masse, don't address unreliable networks, and converge storage locally and directly within themselves. Persisting data in the ROBO is a recipe for a myriad of ROBO data protection issues. In ROBO scenarios, the datacenter form of hyperconvergence is not significantly better than simple converged infrastructure (e.g. a pre-configured rack or blades in a box).
We feel Riverbed's SteelFusion has brought full hyperconvergence benefits to the ROBO edge of the organization. Riverbed has married its world-class WAN optimization (WANO) technologies, virtualization, and remote storage "projection" to create what we might call "Edge Hyperconvergence". We see the edge-hyperconverged SteelFusion as purpose-designed for companies with any number of ROBOs that each require local IT processing.
The era of the software-defined data center is upon us. The promise of a software-defined strategy is a virtualized data center created from compute, network and storage building blocks. A Software-Defined Data Center (SDDC) moves the provisioning, management, and other advanced features into the software layer so that the entire system delivers improved agility and greater cost savings. This tectonic shift in the data center is as great as the shift to virtualized servers during the last decade and may prove to be greater in the long run.
This approach to IT infrastructure started over a decade ago when compute virtualization – through the use of hypervisors – turned compute and server platforms into software objects. This same approach to virtualizing resources is now gaining acceptance in networking and storage architectures. When combined with overarching automation software, a business can now virtualize and manage an entire data center. The abstraction, pooling and virtualized operation of compute, storage and networking functions on shared hardware bring unprecedented agility and flexibility to the data center while driving costs down.
In this paper, Taneja Group takes an in-depth look at the capital expenditure (CapEx) savings that can be achieved by creating a state-of-the-art SDDC, based on currently available technology. We performed a comparative cost study of two different environments: one using the latest software solutions from VMware running on industry-standard and white-label hardware components; and the other running a more typical VMware virtualization environment on mostly traditional, feature-rich hardware components, which we will describe as the Hardware-Dependent Data Center (HDDC). The CapEx savings we calculated were based on creating brand new (greenfield) data centers for each scenario (an additional comparison for upgrading an existing data center is included at the end of this white paper).
Our analysis indicates that dramatic cost savings, up to 49%, can be realized when using today's SDDC capabilities combined with low-cost white-label hardware, compared to a best-in-class HDDC. In addition, just by adopting VMware Virtual SAN and NSX in their current virtualized environment, users can lower CapEx by 32%. By investing in SDDC technology, businesses can be assured their data center solution can be more easily upgraded and enhanced over the life of the hardware, providing considerable investment protection. Rapidly improving SDDC software capabilities, combined with declining hardware prices, promise to reduce total costs even further as complex embedded hardware features are moved into a more agile and flexible software environment.
Depending on customers' needs and the choice of deployment model, an SDDC architecture offers a full spectrum of savings. VMware Virtual SAN is software-defined storage that pools inexpensive hard drives and common solid state drives installed in the virtualization hosts to lower capital expenses and simplify the overall storage architecture. VMware NSX aims to make these same advances for network virtualization by moving security and network functions to a software layer that can run on top of any physical network equipment. The SDDC approach is to "virtualize everything" and add data center automation, enabling a private cloud with connectors to the public cloud if needed.
Companies with significant non-data center and often widely distributed IT infrastructure requirements face many challenges. It can be difficult enough to manage tens, hundreds or even thousands of remote or branch office locations, but many of these can also be located in dirty or dangerous environments that are simply not suited for standard data center infrastructure. It is also hard, if not impossible, to forward-deploy the IT expertise needed to manage any locally placed resources. The key challenge then, and one that can be competitively differentiating on cost alone, is to simplify branch IT as much as possible while still supporting branch business.
Converged solutions have become widely popular in the data center, particularly in virtualized environments. By tightly integrating multiple functionalities into one package, there are fewer separate moving parts for IT to manage, while capabilities are optimized through tightly integrated components. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious increase in IT simplicity there are many other cascading benefits. The converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. And a big benefit is a drastically lowered TCO.
Yet for a number of reasons, data center convergence approaches haven't translated as usefully into beneficial convergence in the branch. No matter how tightly integrated a "branch in a box" is, if it's just an assemblage of the usual storage, server, and networking silo components, it will still suffer from traditional branch infrastructure challenges – second-class performance, low reliability, high OPEX, and difficult protection and recovery. Branches have unique needs, and data center infrastructure, converged or otherwise, isn't designed to meet them. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch. It provides simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration for remote performance, and a greatly lowered OPEX.
In this paper we will review Riverbed’s SteelFusion (formerly known as Granite) branch converged infrastructure solution, and see how it marries together multiple technical advances including WAN optimization, stateless compute, and “projected” datacenter storage to solve those branch challenges and bring the benefits of convergence out to branch IT. We’ll see how SteelFusion is not only fulfilling the promise of a converged “branch” infrastructure that supports distributed IT, but also accelerates the business based on it.
The branch office has long been a critical dilemma for the IT organization. Branch offices for many organizations are a critical point of productivity and revenue generation, yet the branch has always come with a tremendous amount of operational overhead and risk. Worse yet, challenges are often exacerbated because the branch office too often looks like a carryover of outdated IT practices.
More often than not, the branch office is still a highly manual, human-effort-driven administration exercise. Physical equipment too often sits at a remote physical office, and requires significant human management and intervention for activities like data protection and recovery, or replacement of failed hardware. Given the remote nature of the branch office, such human intervention often comes with significant overhead in the form of telephone support, less-than-efficient over-the-wire system configuration, equipment build-and-ship processes, or even significant travel to remote locations. Moreover, in an attempt to avoid issues, the branch office is often over-provisioned with equipment to reduce the impact of outages, or is designed in such a way as to be too dependent on services delivered across the Wide Area Network (WAN), which impairs user productivity and simply exchanges the risk of equipment failure for the risk of WAN outage. But while such practices come with significant operational cost, there's a subtler cost lurking below the surface – any branch office outage carries data consequences. Data protection may be a slower process for the branch office, subjecting the branch to greater risks from equipment failure or disaster, and restoring branch office data and productivity after a disaster can be a long, slow process compared to the capabilities of the modern datacenter.
When branch offices are a key part of a business, these practices that are routinely accepted as the standard can make the branch office one of the costliest and riskiest areas of the IT infrastructure. Worse yet, for many enterprises, the branch office has only increased its importance over time, and may generate more revenue and require more responsive and available IT systems than ever before. The branch office clearly requires better agility and efficiency than it receives today.
Riverbed Technology has long proven its mettle in helping enterprises optimize and better enable connectivity and data sharing for distributed work teams. Over the past decade, Riverbed has come to dominate the market for WAN optimization technologies that compress data and optimize the connection between branch or remote offices and the datacenter. But Riverbed rose to this position of dominance because its SteelHead appliances do far more than just optimize a connection – Riverbed's dominance of this market sprang from deep collaboration and interaction optimization of CIFS/SMB and other protocols, by way of intelligent interception and caching of the right data to make the remote experience feel like a local experience. Moreover, Riverbed SteelHead could do this while making that remote connection effectively stateless, eliminating the need to protect or manage data in the branch office.
Almost two years ago, Riverbed announced a continuing evolution of its "location independent computing" focus with the introduction of the SteelFusion family of solutions. The vision behind SteelFusion was to deliver far more performance and capability in branch offices, while doing away with the complexity of multiple component parts and scattered data. SteelFusion does this by transforming the branch office into a stateless "projection" of data, applications, and VMs stored in the datacenter. Moreover, SteelFusion does this with a converged solution that combines storage, networking, and compute all in one device – the first comprehensive converged infrastructure solution purpose-built for the branch. This converged offering, though, is built on branch office "statelessness" that, as we'll review, transparently stores data in the datacenter, and allows the business to configure, change, protect, and manage the branch office with enterprise tools, while eradicating the risk associated with traditional branch office infrastructure.
SteelFusion today does this by running VMware ESXi VMs on a stateless appliance that in essence "projects" data from the datacenter to a remote location, while maintaining localized speed of access and resilient availability that can tolerate even severe network outages. The three innovative technology components that make up Riverbed's SteelFusion allow it to host virtual machines whose primary data resides in the datacenter and is cached on the SteelFusion appliance, which maintains a highly efficient, near-synchronous connection back to datacenter storage. In turn, SteelFusion makes it possible to run many local applications in a rich, complex branch office while requiring no other servers or devices. Riverbed promises that SteelFusion's architecture can tolerate outages yet synchronize data so effectively that it operates as a stateless appliance, enabling branch data to be completely protected by datacenter synchronization and backup, with more up-to-date protection and faster recovery regardless of whether a single file or an entire system is lost. In short, this is a promise to comprehensively revolutionize the practice of branch office IT.
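The "projection" pattern described above – reads served from a local cache, writes acknowledged locally and then streamed back to authoritative datacenter storage – can be sketched conceptually. This is a minimal, hypothetical illustration of the general write-back caching idea, not Riverbed's actual design or API:

```python
# Conceptual sketch only: an edge appliance that serves reads from a local
# cache and drains writes back to the authoritative datacenter copy.
# The edge holds no unique state once its pending queue is empty.
# Names are illustrative, not Riverbed's implementation.

class EdgeCache:
    def __init__(self, datacenter_store):
        self.datacenter = datacenter_store   # authoritative copy lives here
        self.cache = {}                      # local working set at the branch
        self.pending = []                    # writes not yet persisted remotely

    def read(self, block_id):
        # Cache miss: fetch the block from the datacenter over the WAN.
        if block_id not in self.cache:
            self.cache[block_id] = self.datacenter.get(block_id)
        return self.cache[block_id]

    def write(self, block_id, data):
        # Local write completes immediately; replication happens asynchronously.
        self.cache[block_id] = data
        self.pending.append((block_id, data))

    def flush(self):
        # Drain pending writes back to the datacenter. Once this queue is
        # empty, the branch appliance is effectively stateless: losing it
        # loses no data that doesn't already exist centrally.
        while self.pending:
            block_id, data = self.pending.pop(0)
            self.datacenter[block_id] = data

store = {"blk1": b"old"}     # stand-in for datacenter storage
edge = EdgeCache(store)
edge.write("blk1", b"new")   # branch sees the update instantly
edge.flush()                 # datacenter copy catches up
```

A real near-synchronous implementation would of course flush continuously and handle WAN outages by queueing durably, which is what lets such an appliance ride out network failures while still protecting branch data centrally.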
In January of 2014, Taneja Group took a deeper look at what Riverbed is doing with SteelFusion. While we’ve provided other written assessments on the use case and value of Riverbed SteelFusion, we also wanted to take a hands-on look at how the technology works, and whether in real world use it really delivers management effort reductions, availability improvements, and increased IT capabilities along with consequent improvements in the risks around branch office IT. To do this, we turned to a hands-on lab exercise – what we call a Technology Validation.
What did we find? We found that Riverbed SteelFusion does indeed deliver a transformation of branch office management and capabilities, by fundamentally reducing complexity, injecting a number of powerful capabilities (such as enterprise snapshots and access to all data, copies, and tools in the enterprise) and making the branch office resilient, constantly protected, and instantly recoverable. While the change in capabilities is significant, this also translates into a significant impact on time and effort, and we captured a number of metrics throughout our hands-on look at SteelFusion. For the details, we turn to the full report.
At its core, Software Defined Storage decouples storage management from the physical storage system. In practice, Software Defined Storage vendors implement the solution using a variety of technologies: orchestration layers, virtual appliances and server-side products are all in the market now. These solutions are valuable for storage administrators who struggle to manage multiple storage systems in the data center as well as remote data repositories.
What Software Defined Storage does not do is yield more value for the data under its control, or address global information governance requirements. To that end, Data Defined Storage yields the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we will explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits and also drives reduced risk and added business value for distributed unstructured data with Data Defined Storage.