Trusted Business Advisors, Expert Technology Analysts

Research Areas

Systems

Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBAs, CNAs, Routers, Components, Semiconductors, and Server Blades.

Taneja Group analysts cover storage arrays of every form and manner: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, whether FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have particularly deep backgrounds in the file systems area. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section.

Profile

IBM FlashSystem V840: Transforming the Traditional Datacenter

Within the past few months IBM announced a new member of its FlashSystem family of all-flash storage platforms, the IBM FlashSystem V840. FlashSystem V840 adds a rich set of storage virtualization features to the baseline FlashSystem 840 model. V840 combines two venerable technology heritages: the hardware hails from the long lineage of Texas Memory Systems flash storage arrays, while the storage services feature set is inherited from the IBM storage virtualization software that powers the SAN Volume Controller (SVC). One was created to deliver the highest performance out of flash technology; the other was a forerunner of what is now termed software-defined storage. Together, these two technology streams represent decades of successful customer deployments in a wide variety of enterprise environments.

It is easy to be impressed with the performance and the tight integration of SVC functionality built into the FlashSystem V840. It is also easy to appreciate the wide variety of storage services built on top of SVC that are now an integral part of FlashSystem V840. But we believe the real impact of FlashSystem V840 is best understood by looking at how the product affects the cost of flash appliances, and more generally at how this new cost profile will affect traditional data center architecture and deployment strategies. This Solution Profile discusses how IBM FlashSystem V840 combines software-defined storage with the extreme performance of flash, and why the cost profile of this new product, essentially equivalent to that of current high-performance disk storage, will have a major positive impact on data center storage architecture and the businesses those data centers support.

Publish date: 09/16/14
Report

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years Integrated Computing systems, which combine compute, networking, and storage, have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and pairing them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being integrated into these systems today include Cloud, Big Data, Virtualization, Database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their Integrated Computing systems approach, and now practically every vendor is touting one integrated system or another focused on solving a particular workload problem. The promised business benefits delivered by these new systems fall into these key areas:

· Implementation efficiency that accelerates time to realizing value from integrated systems

· Operational efficiency through optimized workload density and a right-sized set of infrastructure

· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together

· Scale and agility efficiency unlocked through a repeatable, building-block deployment approach

· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio called HP ConvergedSystem, a family of systems that includes a specifically designed virtualization offering. ConvergedSystem was designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to deliver a new level of agility in ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and evaluate HP’s latest advancement of these types of systems.

Publish date: 09/02/14
Free Reports

For Lowest Cost and Greatest Agility, Choose Software-Defined Data Center Architectures

The era of the software-defined data center is upon us. The promise of a software-defined strategy is a virtualized data center created from compute, network and storage building blocks. A Software-Defined Data Center (SDDC) moves the provisioning, management, and other advanced features into the software layer so that the entire system delivers improved agility and greater cost savings. This tectonic shift in the data center is as great as the shift to virtualized servers during the last decade and may prove to be greater in the long run.

This approach to IT infrastructure started over a decade ago when compute virtualization, through the use of hypervisors, turned compute and server platforms into software objects. The same approach to virtualizing resources is now gaining acceptance in networking and storage architectures. When combined with overarching automation software, a business can now virtualize and manage an entire data center. Abstracting, pooling, and running compute, storage, and networking functions virtually, on shared hardware, brings unprecedented agility and flexibility to the data center while driving costs down.

In this paper, Taneja Group takes an in-depth look at the capital expenditure (CapEx) savings that can be achieved by creating a state-of-the-art SDDC based on currently available technology. We performed a comparative cost study of two environments: one running the latest software solutions from VMware on industry-standard and white-label hardware components, and the other running a more typical VMware virtualization environment on mostly traditional, feature-rich hardware components, which we describe as the Hardware-Dependent Data Center (HDDC). The CapEx savings we calculated were based on creating brand-new (greenfield) data centers for each scenario; an additional comparison for upgrading an existing data center is included at the end of this white paper.

Our analysis indicates that dramatic cost savings, up to 49%, can be realized when using today’s SDDC capabilities combined with low-cost white-label hardware, compared to a best-in-class HDDC. In addition, just by adopting VMware Virtual SAN and NSX in their current virtualized environment, users can lower CapEx by 32%. By investing in SDDC technology, businesses can be assured that their data center solution can be more easily upgraded and enhanced over the life of the hardware, providing considerable investment protection. Rapidly improving SDDC software capabilities, combined with declining hardware prices, promise to reduce total costs even further as complex embedded hardware features move into a more agile and flexible software environment.
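To show how a CapEx savings percentage of this kind is derived, here is a minimal sketch. The cost categories and dollar figures below are hypothetical placeholders, not the actual inputs or results of the Taneja Group study; only the formula (savings as the difference between the two build totals, divided by the HDDC total) is what the comparison rests on.

```python
# Illustrative only: hypothetical hardware/software cost figures, not the
# actual inputs to the study. Savings = (HDDC total - SDDC total) / HDDC total.

def capex_savings(hddc_total: float, sddc_total: float) -> float:
    """Return the fractional CapEx savings of an SDDC build versus an HDDC build."""
    return (hddc_total - sddc_total) / hddc_total

# Hypothetical greenfield build costs (USD) for illustration.
hddc = {"servers": 400_000, "SAN storage": 350_000, "network": 150_000, "software": 100_000}
sddc = {"white-label servers": 300_000, "Virtual SAN drives": 90_000,
        "network": 60_000, "VMware software (incl. NSX/Virtual SAN)": 160_000}

hddc_total = sum(hddc.values())
sddc_total = sum(sddc.values())
print(f"HDDC total: ${hddc_total:,}  SDDC total: ${sddc_total:,}")
print(f"CapEx savings: {capex_savings(hddc_total, sddc_total):.0%}")
```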

Depending on customers’ needs and the choice of deployment model, an SDDC architecture offers a full spectrum of savings. VMware Virtual SAN is software-defined storage that pools inexpensive hard drives and common solid-state drives installed in the virtualization hosts to lower capital expenses and simplify the overall storage architecture. VMware NSX aims to make the same advances for network virtualization by moving security and network functions into a software layer that can run on top of any physical network equipment. The SDDC approach is to “virtualize everything” and add data center automation, enabling a private cloud with connectors to the public cloud if needed.

Publish date: 08/19/14
Report

Software-Defined Storage and VMware’s Virtual SAN: Redefining Storage Operations

The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.

Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions even enable storage provisioning and management at an object, database, or per-VM level instead of struggling with block storage LUNs or file volumes. VM-centricity in particular, especially when combined with an automatic policy-based approach to management, enables virtual admins to deal with storage in the same mindset and the same flow as other virtual admin tasks.
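To make the idea of per-VM, policy-based storage management concrete, here is a small sketch. The class and field names are hypothetical illustrations of the concept, not VMware’s actual storage policy (SPBM) API; the point is simply that storage behavior is attached to the VM as a named policy rather than to a LUN or volume.

```python
# Hypothetical sketch of VM-centric, policy-based storage provisioning.
# Class and field names are illustrative, not VMware's SPBM API.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int    # how many host/disk failures the VM's objects survive
    stripe_width: int            # how many capacity devices each replica is striped across
    flash_read_cache_pct: float  # share of the VM's working set reserved in flash cache

@dataclass
class VirtualMachine:
    name: str
    policy: StoragePolicy        # storage is assigned per VM, not per LUN or volume

gold = StoragePolicy("gold", failures_to_tolerate=2, stripe_width=2, flash_read_cache_pct=0.10)
vm = VirtualMachine("sql-prod-01", policy=gold)
print(f"{vm.name} provisioned under policy '{vm.policy.name}' "
      f"(FTT={vm.policy.failures_to_tolerate})")
```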

In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters, and fabrics, among other things), what we at Taneja Group find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the ongoing operational (OPEX) costs of storage.

In this report, we look at how Software-Defined Storage stands to transform the long-term OPEX of storage by examining VMware’s Virtual SAN product. We do this by working through a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab, then reviewing the key data points recorded from our comparative hands-on testing to estimate the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.
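As a rough illustration of the kind of OPEX comparison described above, the sketch below multiplies per-task administrator minutes by annual frequency and a loaded labor rate. The task list, minute values, frequencies, and rate are hypothetical placeholders, not the lab’s recorded data.

```python
# Hypothetical OPEX comparison: admin minutes per task, annual frequency,
# and a loaded labor rate. All figures are placeholders, not lab results.
LABOR_RATE_PER_HOUR = 100.0

tasks = {
    # task: (minutes on traditional SAN, minutes on VM-centric storage, times per year)
    "provision storage for a new VM":  (45, 5, 200),
    "expand capacity for a VM":        (30, 5, 100),
    "troubleshoot a VM storage issue": (90, 30, 50),
}

def annual_cost(minutes: float, frequency: int) -> float:
    """Annual labor cost for one task, in dollars."""
    return minutes / 60.0 * frequency * LABOR_RATE_PER_HOUR

san_cost  = sum(annual_cost(san, freq)  for san, _, freq in tasks.values())
vsan_cost = sum(annual_cost(vsan, freq) for _, vsan, freq in tasks.values())
print(f"Traditional SAN OPEX:  ${san_cost:,.0f}/yr")
print(f"VM-centric storage:    ${vsan_cost:,.0f}/yr "
      f"({(san_cost - vsan_cost) / san_cost:.0%} lower)")
```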

Publish date: 08/08/14
Profile

Memory is the Hidden Secret to Success with Big Data: GridGain’s In-Memory Hadoop Accelerator

Two big trends are driving IT today. One, of course, is big data. The growth in big data IT is tremendous, both in terms of data volume and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent: as data sets grow, resources are also growing. It's not surprising to see traditional scale-up databases with new in-memory options coming to the broader market for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and the new big data analytical applications.

A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By moving data processing onto what is really an in-memory data management platform, performance can be competitively accelerated across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.
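A back-of-envelope sketch helps show why moving a working set from disk into a memory grid can turn a batch-length scan into an interactive one. The throughput figures below are assumed round numbers for illustration only, not benchmarks of GridGain or any other product.

```python
# Back-of-envelope scan-time estimate. Throughput numbers are rough assumed
# figures for illustration, not measurements of any product.
def scan_seconds(dataset_gb: float, throughput_gb_per_s: float, nodes: int = 1) -> float:
    """Time to scan a dataset spread evenly across 'nodes' workers."""
    return dataset_gb / (throughput_gb_per_s * nodes)

dataset_gb = 1_000                     # 1 TB working set
disk_gbps, memory_gbps = 0.2, 10.0     # ~200 MB/s per disk path vs ~10 GB/s from RAM

print(f"Disk-based scan, 10 nodes:   {scan_seconds(dataset_gb, disk_gbps, 10):,.0f} s")
print(f"In-memory scan, 10 nodes:    {scan_seconds(dataset_gb, memory_gbps, 10):,.0f} s")
```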

While IT shops may be generally familiar with traditional in-memory databases, and IT resource economics are shifting rapidly in favor of in-memory options, less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects, those that are more interactive, real-time, streaming, and operationally focused, have needed to keep looking for newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug and play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance, and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.

Publish date: 07/08/14
Profile

Violin Concerto 7000 All Flash Array: Performance Packed with Data Services

All Flash Arrays (AFAs) are plentiful in the market. At one level, all AFAs deliver phenomenal performance compared to an HDD array. But comparing an AFA to an HDD-based system is like comparing a Lamborghini to a Ford Focus. The meaningful comparison is between AFAs, and when one looks under the hood, one finds that the AFAs in the market vary in performance, resiliency, consistency of performance, density, scalability, and almost every other dimension one can think of.

An AFA has to be viewed as a business transformation technology. A well-designed AFA, applied to the right applications, will not only speed up application performance but, by doing so, enable you to make fundamental changes to your business. It may enable you to offer new services to your customers, serve your current customers faster and better, or improve internal procedures in ways that boost employee morale and productivity. Not viewing an AFA through the business lens would be missing the point.

In this Product Profile we describe the major criteria that should be used to evaluate AFAs, and then look at Violin’s new entry, the Concerto 7000 All Flash Array, to see how it fares against those measures.
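One simple way to structure an evaluation across criteria like these is a weighted scorecard. The sketch below is hypothetical: the criteria mirror the dimensions named above, but the weights and scores are made-up examples, not Taneja Group's ratings of the Concerto 7000 or any other array.

```python
# Hypothetical weighted scorecard for comparing all-flash arrays.
# Criteria mirror the dimensions named in the profile; weights and scores are made up.
weights = {"performance": 0.25, "consistency": 0.20, "resiliency": 0.20,
           "data services": 0.15, "density": 0.10, "scalability": 0.10}

def weighted_score(scores: dict) -> float:
    """Scores are 1-5 per criterion; the result is the weighted average."""
    return sum(weights[criterion] * score for criterion, score in scores.items())

array_a = {"performance": 5, "consistency": 4, "resiliency": 4,
           "data services": 5, "density": 3, "scalability": 4}
print(f"Array A weighted score: {weighted_score(array_a):.2f} / 5")
```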

Publish date: 06/24/14