Profiles/Reports

Report

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these opposing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps toward the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already broadly deployed and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, thanks to the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is key, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for a prioritized migration from the old environment into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15
Report

EMC PowerPath: Optimized IO Multipathing for All Flash Arrays

All-flash arrays are changing the datacenter for the better. No longer do we worry about IOPS bottlenecks from the array: all-flash arrays (AFAs) can deliver a staggering number of IOPS, and AFAs able to deliver hundreds of thousands of IOPS are not uncommon. The problem now, however, is how to get those IOPS from the array to the servers. We recently had a chance to see how well an AFA using the EMC PowerPath driver works to eliminate this bottleneck, and we were blown away. Most comparisons of datacenter infrastructure show a 10-30% improvement in performance, but the performance improvement that we saw with PowerPath was extraordinary.

Getting bits from an array to a server is easy; very easy, in fact. The trick is getting the bits from a server to an array efficiently when many virtual machines (VMs) on multiple physical hosts are transmitting bits over a physical network with a virtual fabric overlay; this is much more difficult. Errors can be introduced and must be dealt with; the most efficient path must be discovered and established, then re-evaluated and re-established continually; and any misconfiguration can produce less than optimal performance, and in some cases outages or even data loss. To deal with the “pathing,” that is, how I/O travels from the VM to storage, the OS running on the host needs a driver; and where multiple paths can be taken from the server to the array, a multipathing driver is needed to direct the traffic.

Windows, Linux, VMware and most other modern operating systems include a basic multipath driver; however, these drivers tend to be generic, are not optimized to extract the maximum performance from a given array, and come with only rudimentary traffic optimization and management functions. In some cases these generic drivers are fine, but in the majority of datacenters the infrastructure is overtaxed and its equipment needs to be used as efficiently as possible. Fortunately, storage companies such as EMC are committed to making their arrays perform as well as possible and spend considerable time and research developing multipathing drivers optimized for their arrays. EMC invited us to take a look at how PowerPath, their optimized “intelligent” multipath driver, performed on an XtremIO flash array connected to a Dell PowerEdge R710 server running ESXi 6.0 while simulating an Oracle workload. We looked at the results of the various tests EMC ran comparing the PowerPath/VE multipath driver against VMware’s ESXi Native Multipath driver, and we were impressed, very impressed, by the difference that an optimized multipath driver like PowerPath can make in a high-I/O traffic scenario.
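To make the role of a path selection policy more concrete, the toy Python sketch below contrasts a generic round-robin policy with a simple load-aware one across two hypothetical paths to a LUN. It is purely illustrative: the Path cost model, the least_loaded policy, the path names and the simulation are our own simplified assumptions, not the actual logic of PowerPath/VE or of VMware’s native multipathing.

```python
# Conceptual sketch only: NOT PowerPath's or VMware NMP's code.
# It illustrates the idea behind path selection policies that a multipathing
# driver applies when several paths exist between a host and an array.

import itertools
import random


class Path:
    """One hypothetical route (HBA -> fabric -> array port) to a LUN."""

    def __init__(self, name, base_latency_ms):
        self.name = name
        self.base_latency_ms = base_latency_ms
        self.outstanding_ios = 0  # I/O currently queued on this path

    def estimated_latency(self):
        # Assumed cost model: latency grows with the path's queue depth.
        return self.base_latency_ms * (1 + self.outstanding_ios)


def round_robin(paths):
    """Generic policy: rotate through paths regardless of their load."""
    for path in itertools.cycle(paths):
        yield path


def least_loaded(paths):
    """Load-aware policy: always pick the path that currently looks cheapest."""
    while True:
        yield min(paths, key=lambda p: p.estimated_latency())


def simulate(policy, paths, ios=10_000):
    """Issue a stream of I/Os with the given policy and return mean latency."""
    total_latency = 0.0
    chooser = policy(paths)
    for _ in range(ios):
        path = next(chooser)
        path.outstanding_ios += 1
        total_latency += path.estimated_latency()
        # Randomly complete some in-flight I/O to keep the queues moving.
        for p in paths:
            if p.outstanding_ios and random.random() < 0.5:
                p.outstanding_ios -= 1
    return total_latency / ios


if __name__ == "__main__":
    def make_paths():
        # One fast path and one slower, more congested path to the same LUN.
        return [Path("vmhba1:C0:T0:L1", 0.2), Path("vmhba2:C0:T1:L1", 0.8)]

    print("round-robin  avg latency (ms):", round(simulate(round_robin, make_paths()), 3))
    print("least-loaded avg latency (ms):", round(simulate(least_loaded, make_paths()), 3))
```

The point of the sketch is simply that a policy which accounts for per-path conditions can keep I/O off congested paths; that kind of path intelligence is what an optimized multipathing driver aims to provide at much greater sophistication.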

Publish date: 04/30/15
Report

Scale Computing Field Report

Virtualization is mature and widely adopted in the enterprise market, and convergence/hyperconvergence with virtualization is taking the market by storm. But what about mid-sized businesses and SMBs? Are they falling behind?

Many of them are. Generalist IT staff, low virtualization budgets, and small team sizes all militate against complex virtualization projects and high costs. This means that when mid-sized businesses and SMBs want to virtualize, they either get sticker shock from high prices and high complexity, or end up dissatisfied with cheap, poorly scalable and unreliable solutions. What they want and need is hyperconvergence: ease of management, lower CapEx and OpEx, and a simplified but highly scalable and available virtualization platform.

This is a tall order but not an impossible one: Scale Computing claims to meet these requirements for this large market segment, and Taneja Group’s HC3 Validation Report supports those claims. However, although lab results are vital to knowing the real story, they are only part of that story. We also wanted to hear directly from IT about Scale in the real world of the mid-sized and SMB data center.

We undertook a Field Report project in which we spoke at length with eight Scale customers. This report details the common themes we found across those eight environments: exceptional simplicity, excellent support, clear value, painless scalability, and high availability, all at a low price. These key features make a hyperconverged platform a reality for SMB and mid-market virtualization customers.

Publish date: 01/05/15
Report

Field Report: Nutanix vs. VCE - Web-Scale Vs. Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix. The Taneja Group analyzed the experiences of seven Nutanix Virtual Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership. VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’ advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments.

Publish date: 10/16/14
Report

Massively Scalable, Intrinsically Simple: Tintri’s Low TCO for the Virtualized Data Center

Fast-growing virtualized environments present a thorny storage challenge to IT. Whether it is mission-critical applications with demanding SLAs, VDI rollouts with boot storms, or a private cloud for large dev and test environments, delivering virtualized environments and cloud deployments using traditional storage can stall or break a virtualization project.

Flash technology is certainly part of the solution to the performance challenges posed by virtualized workloads, but it can be prohibitively expensive to implement broadly across the environment. Although flash can be deployed in a number of targeted ways and placed at various points in the infrastructure, the more it is tied to specific hosts and workloads, the less benefit it provides to the overall production environment, and the more management overhead it creates.

Recently Taneja Group ran Tintri VMstore storage through our hands-on validation lab and documented large factors of improvement over traditional storage. Those factors accrue through Tintri’s cost-effective acquisition; its simplicity and ease of deployment and data migration; its high performance and availability; and its smooth expansion over time.

This Field Report validates our impressive lab findings with feedback from the field: six customers who have Tintri storage in production environments. While each customer has a unique story to tell, we found that every one of them described a compelling value proposition based on TCO factors. Throughout our research we found that Tintri’s approach provides significantly lower TCO than traditional storage solutions.

Publish date: 10/13/14
Report

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years Integrated Computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being integrated into these systems today include cloud, big data, virtualization, database and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains, and integration and validation could take months of on-premise work. Fortunately, technology vendors have matured along with their Integrated Computing systems approach, and now practically every vendor seems to be touting one integrated system or another focused on solving a particular workload problem. The promised business benefits delivered by these new systems fall into these key areas:

· Implementation efficiency that accelerates time to realizing value from integrated systems
· Operational efficiency through optimized workload density and an ideally right-sized set of infrastructure
· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
· Scale and agility efficiency unlocked through a repeatedly deployable building-block approach
· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem marked a new offering designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to herald an entirely new level of agility around speed of ordering and implementation. In this profile, we’ll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP’s latest advancement of these types of systems.

Publish date: 09/02/14