
Research Areas

Cloud Platforms and Apps

Includes cloud platforms (IaaS, PaaS, and SaaS); enabling infrastructure technologies; deployment types; and cloud development technologies and approaches.

This category covers all types of cloud platforms, including IaaS, PaaS, and SaaS, along with all types of cloud deployments, such as private, public, hybrid, and multi-cloud. We cover enabling cloud infrastructure technologies in areas such as compute, storage, and networking. This practice spans cloud application development and deployment, including containers and microservices architectures, and the DevOps functions to manage them. We look at cloud platforms in the context of displacing traditional on-premises IT infrastructure and enabling new on-demand apps and services, providing customers with greater flexibility and agility. Though cloud is growing rapidly, we believe that cloud and traditional datacenter infrastructure will co-exist for many years to come, with companies electing to maintain some of their IT workloads and processes on-premises for reasons such as security, control, and/or cost. We address the pain points and opportunities resulting from this transformation to help vendors and end users optimize their cloud investments.

Profile

IBM FlashSystem V840: Transforming the Traditional Datacenter

Within the past few months, IBM announced a new member of its FlashSystem family of all-flash storage platforms – the IBM FlashSystem V840. FlashSystem V840 adds a rich set of storage virtualization features to the baseline FlashSystem 840 model. V840 combines two venerable technology heritages: the hardware hails from the long lineage of Texas Memory Systems flash storage arrays, while the storage services feature set is inherited from the IBM storage virtualization software that powers the SAN Volume Controller (SVC). One was created to deliver the highest performance out of flash technology; the other was a forerunner of what is now being termed software-defined storage. Together, these two technology streams represent decades of successful customer deployments in a wide variety of enterprise environments.

It is easy to be impressed with the performance and the tight integration of SVC functionality built into the FlashSystem V840. It is also easy to appreciate the wide variety of storage services built on top of SVC that are now an integral part of FlashSystem V840. But we believe the real impact of FlashSystem V840 is understood when one views how this product affects the cost of flash appliances, and more generally how this new cost profile will undoubtedly affect traditional data center architecture and deployment strategies. This Solution Profile will discuss how IBM FlashSystem V840 combines software-defined storage with the extreme performance of flash, and why the cost profile of this new product – equivalent essentially to current high performance disk storage – will have a major positive impact on data center storage architecture and the businesses that these data centers support.

Publish date: 09/16/14
Report

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years, integrated computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being integrated into such systems today include cloud, big data, virtualization, database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor seems to be touting one integrated system or another focused on solving a particular workload problem. The promised business benefits delivered by these new systems fall into these key areas:

·  Implementation efficiency that accelerates time to value from integrated systems
·  Operational efficiency through optimized workload density and an ideally right-sized set of infrastructure
·  Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
·  Scale and agility efficiency unlocked through a repeatably deployable building-block approach
·  Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem marked a new offering, designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to herald an entirely new level of agility around speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP’s latest advancement of these types of systems.

Publish date: 09/02/14
Technology Validation

Accelerating the VM with FlashSoft: Software-Driven Flash-Caching for the Virtual Infrastructure

Storage performance has long been the bane of the enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to start shaping clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. It is our opinion that the most innovative of these solutions are the ones that require no real alteration in the storage infrastructure, nor a change in data management and protection practices.

This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND Flash or SAS/SATA SSDs installed in the server alongside a hardware or software IO handler component that mirrors commonly utilized data blocks onto the local high speed solid-state storage. Then the IO handler redirects server requests for data blocks to those local copies that are served up with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is simply cached, instead of moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, and can allow a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure and address storage performance problems.
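To make the read-through/write-through pattern described above concrete, here is a minimal, generic sketch in Python. It is purely illustrative: the class, parameters, and the naive eviction policy are our own stand-ins for the concept, not FlashSoft’s actual design or API.

```python
class ReadThroughCache:
    """Minimal sketch of a server-side read cache: hot blocks are mirrored
    onto fast local flash, reads are served locally when possible, and all
    writes still land on the backend array so data stays consolidated there
    and existing snapshot/backup practices keep working."""

    def __init__(self, backend_read, backend_write, capacity_blocks=1024):
        self.backend_read = backend_read      # function: block_id -> data (slow, e.g. SAN)
        self.backend_write = backend_write    # function: (block_id, data) -> None
        self.capacity = capacity_blocks
        self.local = {}                       # stand-in for the local SSD/PCIe flash copy

    def read(self, block_id):
        if block_id in self.local:            # cache hit: microsecond-class local read
            return self.local[block_id]
        data = self.backend_read(block_id)    # cache miss: go to backend storage
        self._cache(block_id, data)
        return data

    def write(self, block_id, data):
        self.backend_write(block_id, data)    # write-through: the backend stays authoritative
        self._cache(block_id, data)

    def _cache(self, block_id, data):
        if len(self.local) >= self.capacity:  # naive eviction; real caches use LRU/ARC etc.
            self.local.pop(next(iter(self.local)))
        self.local[block_id] = data


# Toy usage, with plain dictionaries standing in for the backend array.
backend = {i: f"block-{i}".encode() for i in range(10)}
cache = ReadThroughCache(backend_read=backend.__getitem__,
                         backend_write=backend.__setitem__,
                         capacity_blocks=4)
print(cache.read(3))   # miss: fetched from the "backend", now cached locally
print(cache.read(3))   # hit: served from the local flash stand-in
```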

Publish date: 08/25/14
Free Reports

HP StoreVirtual VSA and VMware Virtual SAN - A Closer Look

The age of the software-defined datacenter (SDDC) and converged infrastructure is upon us. The benefits of abstracting, pooling, and running compute, storage, and networking functions together on shared commodity hardware bring unprecedented agility and flexibility to the datacenter while driving actual costs down. The tectonic shift in the datacenter caused by software-defined storage and networking will prove to be as great as, and may prove to be greater than, the shift to virtualized servers during the last decade. While software-defined networking (SDN) is still in its infancy, software-defined storage (SDS) has been developing for quite some time.

LeftHand Networks (now HP StoreVirtual) released its first iSCSI VSA (virtual storage appliance) in 2007, which brought the advantages of software-based storage to small and midsize company environments. LeftHand Networks’ VSA was a virtual machine that hosted a software implementation of LeftHand’s well-regarded iSCSI hardware storage array. Since that time many other vendors have released VSAs, but none have captured the market share of HP’s StoreVirtual VSA. The release of VMware Virtual SAN (VSAN) in March 2014 could change that, however, as VSAN, with the backing of the virtualization giant, is poised to be a serious contender in the SDS marketplace. Taneja Group thought it would be interesting to take a closer look at how a mature, well-regarded, and widely deployed SDS product such as HP StoreVirtual VSA compares to the newest entry in the SDS market: VMware’s VSAN.

The observations we have made for both products are based on hands-on lab testing, but we do not consider this a Technology Validation exercise because we were not able to conduct an apples-to-apples comparison between the offerings, primarily due to the limited hardware compatibility list (HCL) for VMware VSAN. However, the hands-on testing that we were able to conduct gave us a very good understanding of both products. Both products surprised us and, more often than not, did not disappoint. In an ideal world without budgetary constraints, both products may have a place in your datacenter, but they are not by any means interchangeable. We found that one of the products would be more useful for a variety of datacenter storage needs, including some tier 1 use cases, while the other is more suited today to supporting the needs of some tier 2 and tier 3 applications.

Publish date: 08/21/14
Profile

Memory is the Hidden Secret to Success with Big Data: GridGain’s In-Memory Hadoop Accelerator

Two big trends are driving IT today. One, of course, is big data. The growth in big data IT is tremendous, both in terms of data volume and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent: as data sets grow, resources are also growing. It's not surprising to see traditional scale-up databases with new in-memory options coming to the broader market for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and those new big data analytical applications.

A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By transitioning data processing onto what's really now an in-memory data management platform, performance can be competitively accelerated across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can both functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.
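As a purely illustrative aside (this is not GridGain's actual API; the class and method names below are hypothetical), the following Python sketch shows the general partitioned in-memory grid pattern the paragraph above describes: records are hashed across memory-resident partitions, and aggregations run against RAM rather than disk, which is where the acceleration comes from.

```python
class InMemoryGrid:
    """Toy sketch of a partitioned in-memory data grid: records are hashed
    across node-local dictionaries (standing in for the RAM of separate
    servers), and aggregations run against memory rather than disk."""

    def __init__(self, nodes=4):
        self.partitions = [dict() for _ in range(nodes)]

    def _partition_for(self, key):
        return self.partitions[hash(key) % len(self.partitions)]

    def put(self, key, value):
        self._partition_for(key)[key] = value

    def get(self, key):
        return self._partition_for(key).get(key)

    def map_reduce(self, map_fn, reduce_fn, initial):
        # Each partition is processed "locally" in memory, then the partial
        # results are combined; a disk-based batch job follows the same
        # pattern but pays for disk reads and a shuffle at every stage.
        result = initial
        for part in self.partitions:
            result = reduce_fn(result, map_fn(part.values()))
        return result


# Toy usage: sum a metric across all cached records without touching disk.
grid = InMemoryGrid()
for i in range(10_000):
    grid.put(f"rec-{i}", {"value": i % 100})

total = grid.map_reduce(
    map_fn=lambda rows: sum(r["value"] for r in rows),
    reduce_fn=lambda acc, partial: acc + partial,
    initial=0,
)
print(total)  # 495000
```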

While IT shops may be generally familiar with traditional in-memory databases, and IT resource economics are shifting rapidly in favor of in-memory options, less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects (those that are more interactive, real-time, streaming, and operationally focused) have needed to keep looking for yet newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug-and-play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance, and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.

Publish date: 07/08/14
Profile

Violin Concerto 7000 All Flash Array: Performance Packed with Data Services

All Flash Arrays (AFAs) are plentiful in the market. At one level, all AFAs deliver phenomenal performance compared to an HDD array. But comparing an AFA to an HDD-based system is like comparing a Lamborghini to a Ford Focus. The meaningful comparison is between AFAs, and when one looks under the hood, one finds that the AFAs on the market vary in performance, resiliency, consistency of performance, density, scalability, and almost every other dimension one can think of.

An AFA has to be viewed as a business transformation technology. A well-designed AFA, applied to the right applications, will not only speed up application performance but, in doing so, enable you to make fundamental changes to your business. It may enable you to offer new services to your customers, serve your current customers faster and better, or improve internal procedures in a way that improves employee morale and productivity. Not to view an AFA through the business lens would be to miss the point.

In this Product Profile, we describe all the major criteria that should be used to evaluate AFAs and then look at Violin’s new entry, the Concerto 7000 All Flash Array, to see how it fares against these measures.

Publish date: 06/24/14