
Research Areas


This research area includes virtual infrastructure technologies (server, desktop, I/O), virtual infrastructure management (monitoring, optimization and performance), and virtualized data center operations and strategies (automation and Cloud computing).

Virtualization is arguably the most disruptive technology shift in data center infrastructure and management in the last decade. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization. We also explore application virtualization and delivery strategies. In addition, Taneja is uniquely focused on the end-to-end impact of virtualization on IT management, from the desktop to the Cloud, including: virtual server lifecycle management; virtual infrastructure instrumentation, performance management, and optimization; data protection, backup, and HA/DR for virtual environments; data center and run-book automation; and virtual infrastructure security and compliance management.


FlashSoft 4 for vSphere 6: Acceleration Technology Tailor-Made for VMware Environments

For all the gains server virtualization has brought in compute utilization, flexibility and efficiency, it has created an equally weighty set of challenges on the storage side, particularly in traditional storage environments. As servers become more consolidated, virtualized workloads must increasingly contend for scarce storage and IO resources, preventing them from consistently meeting throughput and response time objectives. On top of that, there is often no way to ensure that the most critical apps or virtual machines can gain priority access to data storage as needed, even in lightly consolidated environments. With a majority (70+%) of all workloads now running virtualized, it can be tough to achieve strong and predictable app performance with traditional shared storage.

To address these challenges, many VMware customers are now turning to server-side acceleration solutions, in which the flash storage resource can be placed closer to the application. But server-side acceleration is not a panacea. While some point solutions have been adapted to work in virtualized infrastructures, they generally lack enterprise features, and are often not well integrated with vSphere and the vCenter management platform. Such offerings are at best band-aid treatments, and at worst second-class citizens in the virtual infrastructure, proving difficult to scale, deploy and manage. To provide true enterprise value, a solution should seamlessly deliver performance to your critical VMware workloads without compromising availability, workload portability, or ease of deployment and management.

This is where FlashSoft 4 for VMware vSphere 6 comes in. FlashSoft is an intelligent, software-defined caching solution that accelerates your critical VMware workloads as an integrated vSphere data service, while still allowing you to take full advantage of all the vSphere enterprise capabilities you use today.
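
To make this caching approach concrete, the sketch below shows the basic read and write path of a host-side block cache using a write-through policy, one common way such acceleration is implemented. It is a simplified, hypothetical illustration only, not FlashSoft code or a VMware API; the FlashCache class and the backend_read/backend_write callables are names invented for the example.

# Minimal sketch of a server-side read cache with a write-through policy.
# Hypothetical illustration; not FlashSoft source code or a vSphere interface.

class FlashCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}    # block address -> data held on local flash
        self.lru = []       # least-recently-used ordering of cached addresses

    def read(self, addr, backend_read):
        """Serve hot blocks from local flash; fall back to shared storage on a miss."""
        if addr in self.blocks:
            self.lru.remove(addr)
            self.lru.append(addr)
            return self.blocks[addr]      # cache hit: no round trip to the array
        data = backend_read(addr)         # cache miss: read from shared storage
        self._insert(addr, data)
        return data

    def write(self, addr, data, backend_write):
        """Write-through: persist to shared storage first, then update the cache."""
        backend_write(addr, data)         # durability stays with the array
        self._insert(addr, data)

    def _insert(self, addr, data):
        if addr not in self.blocks and len(self.blocks) >= self.capacity:
            coldest = self.lru.pop(0)     # evict the least recently used block
            del self.blocks[coldest]
        self.blocks[addr] = data
        if addr in self.lru:
            self.lru.remove(addr)
        self.lru.append(addr)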

In this paper we examine the technology underlying the FlashSoft 4 for vSphere 6 solution, describe the features and capabilities it enables, and articulate the benefits customers can expect to realize upon deploying the solution.

Publish date: 08/31/16

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings resulting from workload consolidation.  However, as an increasing number of apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, aggregate IO demands put tremendous pressure on shared storage, server and networking resources, with the strain further exacerbated by the IO blender effect, in which IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity—e.g. by lowering data and transactional throughput and increasing application response time—and may prevent you from meeting performance requirements for your business-critical applications.
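
The IO blender effect is easy to picture with a toy model: each VM issues a perfectly sequential read stream against its own virtual disk, but once the hypervisor interleaves those streams onto a shared datastore queue, the array sees scattered, effectively random block addresses. The short Python sketch below is purely illustrative; the VM names and block ranges are hypothetical.

# Toy illustration of the IO blender effect: individually sequential VM
# streams become a mixed, unpredictable stream once interleaved by the host.
import random

def vm_sequential_stream(vm_id, start_block, length):
    """One VM reading its virtual disk sequentially."""
    return [(vm_id, start_block + i) for i in range(length)]

# Three consolidated VMs, each perfectly sequential on its own virtual disk
streams = [
    vm_sequential_stream("vm1", 0, 5),
    vm_sequential_stream("vm2", 1000, 5),
    vm_sequential_stream("vm3", 2000, 5),
]

# The hypervisor services the VMs concurrently, so their requests interleave;
# scheduling jitter makes the final ordering unpredictable.
blended = [io for batch in zip(*streams) for io in batch]
random.shuffle(blended)

print(blended)   # block addresses now jump across the disk: random IO to the array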

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server-side solutions that combine intelligent caching with high-performance PCIe flash, integrate tightly with the virtualization platform, and enable sharing of the cache across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization enables a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.

Publish date: 06/14/16

Making Your Virtual Infrastructure Non-Stop with Veritas Products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for deploying applications and services. Some of these applications and services are classified as business critical, and they require a higher level of resilience and protection to minimize the impact on a business’s operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today’s datacenter. Various methods offer different levels of protection to maintain application uptime, ranging from minimizing downtime at the application level to VM recovery to physical system recovery. Prior to virtualization, the mechanisms in place to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
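
The gap between VM-level and application-level protection can be illustrated with a simple monitoring check: a guest can still answer on the network while the service inside it is down, which is precisely the case that VM-centric protection does not catch. The sketch below is a conceptual illustration only, not Veritas or VMware code; the host, port and restart_app callback are hypothetical.

# Conceptual contrast between VM-level and application-level failure detection.
# Hypothetical example; not Veritas ApplicationHA or vSphere HA code.
import socket
import subprocess

def vm_is_up(host):
    """VM-level check: does the guest answer on the network at all?"""
    # Unix-style ping; Windows would use "-n" instead of "-c"
    return subprocess.call(["ping", "-c", "1", host],
                           stdout=subprocess.DEVNULL) == 0

def app_is_up(host, port, timeout=2.0):
    """Application-level check: does the service actually accept connections?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_once(host, port, restart_app):
    if not vm_is_up(host):
        print("VM failure: VM-level HA can restart the virtual machine")
    elif not app_is_up(host, port):
        print("Application failure: the VM is healthy, so VM-level HA alone will not help")
        restart_app()    # remediate at the application or service level
    else:
        print("Service healthy")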

With application and service availability in mind, companies such as Veritas have stepped in to provide availability and resiliency at the application and service level. Focusing on improving how VMware delivers application availability, Veritas Technologies LLC has developed a set of solutions to meet the high availability and disaster recovery requirements of business critical applications. These solutions include Veritas ApplicationHA (developed in partnership with VMware) and Veritas InfoScale Availability (formerly Veritas Cluster Server). Both products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 09/04/15

The Promise of VM-Centric Storage and VVols: Tintri VMstore Delivers the Future Promise Now

The din surrounding VMware vSphere Virtual Volumes (VVols) is deafening. It started in 2011, when VMware announced the concept of VVols and the storage industry reacted with enthusiasm, and culminated with its introduction as part of the vSphere 6 release in April 2015. Viewed simply, VVols is an API that enables storage arrays that support it to provision and manage storage at the granularity of a VM, rather than at the level of LUNs, volumes or mount points, as they do today. Without question, VVols is an incredibly powerful concept and will fundamentally change the interaction between storage and VMs in a way not seen since server virtualization first came to market. No surprise, then, that every storage vendor in the market is feverishly trying to build in VVols support and competing on the superiority of its implementation.
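
The shift that VVols represents can be pictured with a simple data model: under LUN-centric provisioning, every virtual disk placed on a datastore inherits that LUN’s single policy, whereas with VVols each virtual disk becomes its own array-managed object with its own policy. The Python sketch below is a conceptual contrast only; it does not reflect the actual VASA/VVols interfaces, and all names and policies are hypothetical.

# Conceptual contrast between LUN-centric and VVol-style (per-VM) provisioning.
# Hypothetical data model; not the VMware VASA/VVols API.
from dataclasses import dataclass, field

@dataclass
class LUN:
    name: str
    policy: str                       # one policy shared by every VM on this LUN
    vmdks: list = field(default_factory=list)

@dataclass
class VirtualVolume:
    vm: str
    disk: str
    policy: str                       # per-VM, even per-disk, storage policy

# Traditional: snapshots, replication and QoS apply to the whole LUN
lun = LUN("datastore01", policy="silver")
lun.vmdks += ["sql01.vmdk", "web01.vmdk", "test03.vmdk"]

# VVol-style: the array provisions and manages each virtual disk individually
vvols = [
    VirtualVolume("sql01", "data.vmdk", policy="gold-replicated"),
    VirtualVolume("web01", "os.vmdk", policy="silver"),
    VirtualVolume("test03", "os.vmdk", policy="bronze-no-snapshots"),
]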

Yet one storage player, Tintri, has been delivering products with VM-centric features for four years without the benefit of VVols. How can this be so? How could Tintri do this? And what does it mean for them now that VVols are here? To do justice to these questions we will briefly look at what VVols are and how they work, and then dive into how Tintri has delivered the benefits of VVols for several years. We will also look at what the buyer of Tintri gets today and how Tintri plans to integrate VVols. Read on…

Publish date: 06/26/15

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15

Market Landscape Abstract: Survey of VVol Implementation by Various Storage Vendors

VMware Virtual Volumes (VVols) is one of the most important technologies affecting how storage interacts with virtual machines. In April and May 2015, Taneja Group surveyed eleven storage vendors to understand how each was implementing VVols in their storage arrays. The survey consisted of 32 questions that explored which storage array features were exposed to vSphere 6 and how VMs were provisioned and managed. We were surprised at the level of differences and the variety of methods used to enable VVols. It was also clear from the analysis that the underlying limitations of an array will limit what is achievable with VVols. However, it is also important to understand that many other aspects of a storage array matter; the VVol implementation is but one major factor. Moreover, VVol implementation is a work in progress, and this survey represents only a first pass.

We categorized these implementations into three levels: Type 1, 2 and 3, with Type 3 delivering the most sophisticated VVol benefits. The definitions of the three types are shown below, as is a summary of findings.

Most storage array vendors participated in our survey, but a few chose not to, often because they already delivered the most important benefit of VVols: the ability to provision and manage storage at the VM level, rather than at the LUN, volume or mount point level. That list notably included hyperconverged players such as Nutanix and SimpliVity, as well as players like Tintri.

Publish date: 06/08/15