Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBA, CNA, Routers, Components, Semiconductors, Server Blades.
Taneja Group analysts cover all form and manner of storage arrays: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, whether FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have especially deep backgrounds in the file systems area. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.
The era of the software-defined data center is upon us. The promise of a software-defined strategy is a virtualized data center created from compute, network and storage building blocks. A Software-Defined Data Center (SDDC) moves the provisioning, management, and other advanced features into the software layer so that the entire system delivers improved agility and greater cost savings. This tectonic shift in the data center is as great as the shift to virtualized servers during the last decade and may prove to be greater in the long run.
This approach to IT infrastructure started over a decade ago when compute virtualization, through the use of hypervisors, turned compute and server platforms into software objects. This same approach to virtualizing resources is now gaining acceptance in networking and storage architectures. When combined with overarching automation software, a business can now virtualize and manage an entire data center. Abstracting, pooling, and running compute, storage, and networking functions virtually on shared hardware brings unprecedented agility and flexibility to the data center while driving costs down.
In this paper, Taneja Group takes an in-depth look at the capital expenditure (CapEx) savings that can be achieved by creating a state-of-the-art SDDC based on currently available technology. We performed a comparative cost study of two different environments: one using the latest software solutions from VMware running on industry-standard and white-label hardware components; the other running a more typical VMware virtualization environment on mostly traditional, feature-rich hardware components, which we will describe as the Hardware-Dependent Data Center (HDDC). The CapEx savings we calculated were based on creating brand-new (greenfield) data centers for each scenario (an additional comparison for upgrading an existing data center is included at the end of this white paper).
Our analysis indicates that dramatic cost savings, up to 49%, can be realized when using today’s SDDC capabilities combined with low-cost white-label hardware, compared to a best-in-class HDDC. In addition, just by adopting VMware Virtual SAN and NSX in their current virtualized environment, users can lower CapEx by 32%. By investing in SDDC technology, businesses can be assured that their data center solution can be more easily upgraded and enhanced over the life of the hardware, providing considerable investment protection. Rapidly improving SDDC software capabilities, combined with declining hardware prices, promise to reduce total costs even further as complex embedded hardware features are moved into a more agile and flexible software environment.
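The percentage comparisons above reduce to simple arithmetic against the baseline build cost. The sketch below uses purely hypothetical dollar figures (not the study's actual bill of materials) to show how a savings percentage like the 49% figure is computed:

```python
# Illustrative only: hypothetical CapEx figures, NOT the figures from the
# Taneja Group study, showing how a percentage savings is derived.

def capex_savings(baseline_cost: float, sddc_cost: float) -> float:
    """Percent savings of an SDDC build versus an HDDC baseline build."""
    return (baseline_cost - sddc_cost) / baseline_cost * 100

# Hypothetical example: a $1.0M HDDC build versus a $510K SDDC build
# on white-label hardware works out to 49% savings.
print(round(capex_savings(1_000_000, 510_000)))  # -> 49
```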
Depending on customers’ needs and the choice of deployment model, an SDDC architecture offers a full spectrum of savings. VMware Virtual SAN is software-defined storage that pools inexpensive hard drives and common solid state drives installed in the virtualization hosts to lower capital expenses and simplify the overall storage architecture. VMware NSX aims to make the same advances for network virtualization by moving security and network functions into a software layer that can run on top of any physical network equipment. The SDDC approach is to “virtualize everything” and add data center automation, enabling a private cloud with connectors to the public cloud if needed.
The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.
Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions even enable storage provisioning and management at an object, database, or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with an automatic policy-based approach to management, enables virtual admins to deal with storage in the same mindset and in the same flow as other virtual admin tasks.
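The VM-centric, policy-based model can be pictured as each VM carrying its own storage policy, with the platform deriving data placement from that policy rather than exposing LUNs or volumes to the admin. The sketch below is a minimal illustration with hypothetical class and field names, not the Virtual SAN API:

```python
# A minimal sketch (hypothetical names, not VMware's actual API) of
# VM-centric, policy-based storage: each VM carries a policy object,
# and placement decisions are derived from it instead of from LUNs.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    failures_to_tolerate: int   # host/disk failures the VM's data must survive
    stripe_width: int           # capacity devices to stripe each object across

@dataclass
class VirtualMachine:
    name: str
    policy: StoragePolicy

def replicas_required(policy: StoragePolicy) -> int:
    # Mirroring a VM's objects across N+1 hosts tolerates N failures.
    return policy.failures_to_tolerate + 1

vm = VirtualMachine("web01", StoragePolicy(failures_to_tolerate=1, stripe_width=2))
print(replicas_required(vm.policy))  # -> 2
```

The point of the model is that the admin reasons about a VM's availability requirement, and the replica math falls out of the policy automatically.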
In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters, fabrics, and so on), what we at Taneja Group find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the ongoing operational (OPEX) costs of storage.
In this report, we will look at how Software-Defined Storage stands to transform the long term OPEX for storage by examining VMware’s Virtual SAN product. We’ll do this by examining a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab. We’ll examine the key data points recorded from our comparative hands-on examination, estimating the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.
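The methodology described above, estimating per-task time and effort and rolling it up into an annual figure, can be sketched with hypothetical numbers. The task names, frequencies, and minutes below are illustrative placeholders, not the measurements from our validation lab:

```python
# Illustrative only: hypothetical task frequencies and durations (NOT the
# lab's recorded data), showing how per-task estimates roll up into an
# annual OPEX effort comparison between two storage environments.

tasks_per_year = {"provision_vm_storage": 120, "expand_capacity": 4, "firmware_update": 2}

minutes_traditional = {"provision_vm_storage": 30, "expand_capacity": 240, "firmware_update": 180}
minutes_vsan        = {"provision_vm_storage": 5,  "expand_capacity": 60,  "firmware_update": 60}

def annual_hours(minutes_per_task: dict) -> float:
    """Total admin hours per year given per-task minutes and frequencies."""
    return sum(tasks_per_year[t] * m for t, m in minutes_per_task.items()) / 60

print(annual_hours(minutes_traditional))  # -> 82.0
print(annual_hours(minutes_vsan))         # -> 16.0
```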
Two big trends are driving IT today. One, of course, is big data. The growth in big data IT is tremendous, both in data volume and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent; as data sets grow, resources are also growing. It's not surprising to see traditional scale-up databases with new in-memory options coming to the broader market for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and those new big data analytical applications.
A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By transitioning data processing onto what is really now an in-memory data management platform, performance can be accelerated across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.
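The core mechanism behind a scale-out in-memory grid is hash-partitioning keys across nodes, so that both capacity and throughput grow horizontally as nodes are added. The toy sketch below illustrates only that idea; it is not GridGain's actual API, and a real grid adds replication, rebalancing, and co-located compute:

```python
# A toy sketch of the scale-out idea behind an in-memory data grid:
# keys are hash-partitioned across nodes so memory capacity and
# throughput grow horizontally. (Illustrative only; real products such
# as GridGain add replication, rebalancing, and distributed compute.)
from hashlib import sha256

class InMemoryGrid:
    def __init__(self, node_count: int):
        # Each "node" is modeled as a local dict standing in for a JVM/process.
        self.nodes = [dict() for _ in range(node_count)]

    def _node_for(self, key: str) -> dict:
        # Stable hash decides which node owns the key.
        h = int(sha256(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, key: str, value):
        self._node_for(key)[key] = value

    def get(self, key: str):
        return self._node_for(key).get(key)

grid = InMemoryGrid(node_count=4)
grid.put("order:42", {"total": 99.5})
print(grid.get("order:42"))  # -> {'total': 99.5}
```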
While IT shops may be generally familiar with traditional in-memory databases, and IT resource economics are shifting rapidly in favor of in-memory options, less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects (those that are more interactive, real-time, streaming, and operationally focused) have needed to keep looking for newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug and play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance, and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.
The era of IT infrastructure convergence is upon us. Over the past few years, Integrated Computing systems, which integrate compute, networking, and storage, have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being addressed by these systems today include Cloud, Big Data, Virtualization, Database, and VDI, or even combinations of two or more.
In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their Integrated Computing systems approach, and now practically every vendor seems to be touting one integrated system or another focused on solving a particular workload problem. The promised set of business benefits delivered by these new systems falls into these key areas:
· Implementation efficiency that accelerates time to realizing value from integrated systems
· Operational efficiency through optimized workload density and an ideally right sized set of infrastructure
· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
· Scale and agility efficiency unlocked through a repeatedly deployable building block approach
· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for an entire set of infrastructure
In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem, a family of systems that includes a specifically designed virtualization offering. ConvergedSystem was designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to deliver an entirely new level of agility in speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP’s latest advancement of these types of systems.
All Flash Arrays (AFAs) are plentiful in the market. At one level, all AFAs deliver phenomenal performance compared to an HDD array. But comparing an AFA to an HDD-based system is like comparing a Lamborghini to a Ford Focus. The meaningful comparison is among AFAs themselves, and when one looks under the hood, one finds that the AFAs in the market vary in performance, resiliency, consistency of performance, density, scalability, and almost every other dimension one can think of.
An AFA has to be viewed as a business transformation technology. A well-designed AFA, applied to the right applications, will not only speed up application performance but, by doing so, enable you to make fundamental changes to your business. It may enable you to offer new services to your customers. Or serve your current customers faster and better. Or improve internal procedures in a way that improves employee morale and productivity. To not view an AFA through the business lens would be missing the point.
In this Product Profile, we describe the major criteria that should be used to evaluate AFAs and then look at Violin’s new entry, the Concerto 7000 All Flash Array, to see how it fares against these measures.
There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.
While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.
For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations, nor do they manage to be very simple or cost-effective to operate. This is where we see HP as a key vendor today, with all the right parts coming together to create a significant change in the BURA marketplace.
First, HP is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce storage with Data Protector. For those who have worked with either or both separately in the past, in conjunction with other vendors’ products, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.
But HP hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to market. From products to services to support, HP people have taken to heart a “customer first” message to provide a truly solution-focused HP experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HP business unit components are in the “box”. Significantly, this approach elevates HP from just a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HP is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.
In this report, we’ll first examine why the HP StoreOnce and Data Protector products are truly game-changing in their own right. Then we will look at why they get even “better together” as a complete BURA solution that can be deployed more flexibly to meet backup challenges than any other solution in the market today.