
Research Areas

Software Defined/Virtualized Infrastructure

Includes Software-Defined Infrastructure (compute, storage and networking), Virtual Infrastructure technologies (server virtualization, desktop virtualization, I/O virtualization), and the interplay between these technologies and traditional storage. Covers different types of Software-Defined Storage, such as Scale-out NAS, in depth.

Taneja Group has been at the forefront of assessing and characterizing virtualization and software-defined infrastructure technologies since they began to emerge in the early 2000s. Virtualization has caused one of the most disruptive technology shifts in data center infrastructure in the last 15 years. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” We focus on the interplay of server and client virtualization technologies with storage, and study the impact on performance, security and management of the IT infrastructure. Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization.

Profile

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were the tier-1 apps. Examples of these include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest-growing product segments in the market.

HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications? In this paper we will take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step - a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V and their own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability that is unique across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
Technology Validation

Scale Computing HC3: A Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware systems, and enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than before. Moreover, for the first time in what seems like decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before - tools like increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. 

For certain, much of this complexity exists at the boundaries between the individual physical infrastructures that IT must touch, and in the duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

Scale Computing, an early pioneer in HyperConverged solutions, has released multiple versions of HC3 appliances, which now include the 6th generation of Scale’s HyperCore Operating System. Scale Computing continues to push the boundary on the simplicity, value and availability that SMB IT departments everywhere have come to rely on. HC3 is an integration of storage and virtualized compute within a scale-out building block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system that is designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click-easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond the features found in traditional integrations of discrete compute and storage systems.

While we did touch upon whether Scale's architecture could scale performance as well as capacity, we focused our testing upon how the seamless integration of storage and compute within HC3 tackles key complexity challenges in the traditional virtual infrastructure.

As it turns out, HC3 is a far different system than the traditional compute and storage systems that we've looked at before. HC3's combination of compute and storage takes place within a scale-out paradigm, where adding more resources is simply a matter of adding additional nodes to a cluster. This immediately brings on more storage and compute resources, and makes adapting and growing the IT infrastructure a no-brainer exercise. On top of this adaptability, virtual machines (VMs) can run on any of the nodes, without any complex external networking. This delivers seamless utilization of all datacenter resources, in a dense and power efficient footprint, while significantly enhancing storage performance.

Meanwhile, within an HC3 cluster, these capabilities are all delivered on top of a uniquely robust system architecture that can tolerate any failure - from a disk to an entire cluster node - and guarantee a level of availability seldom seen by mid-sized customers. Moreover, that uniquely robust, clustered, scale-out architecture can also intermix different generations of nodes in a way that will put an end to painful upgrades by reducing them to simply decommissioning old nodes as new ones are introduced.

HC3’s flexibility, ease of deployment, and robustness are matched by a management interface that is the simplest and easiest to use that we have seen. This makes HC3 a disruptive game changer for SMB and SME businesses. HC3 stands to banish complex IT infrastructure deployment, permanently alter on-going operational costs, and take application availability to a new level. With those capabilities in focus, single bottom-line observations don’t do HC3 justice. In our assessment, HC3 may take as little as 1/10th the effort to set up and install as traditional infrastructure, 1/4th the effort to configure and deploy a virtual machine (VM) versus doing so using traditional infrastructure, and can banish the planning, performance troubleshooting, and reconfiguration exercises that can consume as much as 25-50% of an IT administrator’s time. HC3 is about delivering on all of these promises simultaneously, and with the additional features we'll discuss, transforming the way SMB/SME IT is done.

Publish date: 09/30/15
Report

Business Continuity Best Practices for SMB

Virtualization’s biggest driver is big savings: slashing expenditures on servers, licenses, management, and energy. Another major benefit is the increased ease of disaster recovery and business continuity (DR/BC) in virtualized environments.

Note that disaster recovery and business continuity are closely aligned but not identical. We define disaster recovery as the process of restoring lost data, applications and systems following a profound data loss event, such as a natural disaster, a deliberate data breach or employee negligence. Business continuity takes DR a step further. BC’s goal is not only to recover the computing environment but to recover it swiftly and with zero data loss. This is where recovery point objectives (RPO) and recovery time objectives (RTO) enter the picture, with IT assigning differing RPO and RTO targets according to application priority.
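
To make the RPO/RTO distinction concrete, the sketch below shows one hypothetical way IT might record tiered recovery targets per application. It is an illustrative assumption only; the tier names, applications and time values are examples, not figures from this report.

```python
# Illustrative sketch: hypothetical tiering of applications by recovery objectives.
# Tier names, applications, and targets are examples, not report recommendations.
from datetime import timedelta

# Each tier maps to a maximum tolerable data loss (RPO) and downtime (RTO).
recovery_tiers = {
    "tier-1": {"rpo": timedelta(minutes=0), "rto": timedelta(minutes=15)},
    "tier-2": {"rpo": timedelta(hours=1),   "rto": timedelta(hours=4)},
    "tier-3": {"rpo": timedelta(hours=24),  "rto": timedelta(hours=48)},
}

# Hypothetical assignment of applications to tiers by business priority.
applications = {
    "erp":          "tier-1",
    "email":        "tier-2",
    "file-archive": "tier-3",
}

for app, tier in applications.items():
    targets = recovery_tiers[tier]
    print(f"{app}: RPO <= {targets['rpo']}, RTO <= {targets['rto']}")
```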

DR/BC can be difficult to do well in data centers with traditional physical servers, particularly in SMBs with limited IT budgets and generalist IT staff. Many of these servers are siloed with direct-attached storage and individual data protection processes. Mirroring and replication have traditionally required one-to-one hardware correspondence and can be expensive, leading to a near-universal reliance on localized backup as data protection. In addition, small IT staffs do not always take the time to perfect their backup processes across disparate servers. Either they do not do it at all - rolling the dice and hoping there won’t be a disaster - or they slap backups on tape or USB drives and stick them on a shelf.

Virtualization can transform this environment into a much more efficient and protected data center. Backing up VMs from a handful of host servers is faster and less resource-intensive than backing up tens or hundreds of physical servers. And with scheduled replication, companies achieve faster backup and much improved recovery objectives.

However, many SMBs avoid virtualization. They cite factors such as cost, unfamiliarity with hypervisors, and added complexity. And they are not wrong: virtualization can introduce complexity, it can be expensive, and it can require familiarity with hypervisors. Virtualization cuts down on physical servers but is resource-intensive, especially as the virtualized environment grows. This means capital costs for high performance CPUs and storage. SMBs may also have to deal with VM licensing and management costs, administrative burdens, and the challenge of protecting and replicating virtualized data on a strict budget.

For all its complexity and learning curve, is virtualization worth it for SMBs? Definitely. Its benefits far outweigh its problems, particularly its advantages for DR/BC. But for many SMBs, traditional virtualization is often too expensive and complex to warrant the effort. We believe that the answer is HyperConverged Infrastructure: HCI. Of HCI providers, Scale Computing is exceptionally attractive to the SMB. This paper will explain why. 

Publish date: 09/30/15
Report

Making Your Virtual Infrastructure Non-Stop with Veritas Products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for the deployment of applications and services. Some applications and services are classified as business critical. These business-critical applications require a higher level of resilience and protection to minimize the impact on a business’s operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today’s datacenter. There are various methods that offer different levels of protection to maintain application uptime. These methods range from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, mechanisms were in place to protect physical systems, based on having secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
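
As a rough illustration of the gap between VM-level and application-level protection, the sketch below probes the application itself rather than relying on a VM heartbeat. It is a generic sketch of the concept only, not the Veritas ApplicationHA or InfoScale mechanism; the health URL, service name and restart command are hypothetical.

```python
# Minimal sketch of application-level (rather than VM-level) failure detection.
# Generic illustration of the concept, not the Veritas or VMware implementation;
# the URL, service name, and restart command are hypothetical.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://127.0.0.1:8080/health"  # hypothetical app health endpoint
SERVICE_NAME = "example-app"                  # hypothetical service unit name

def app_is_healthy() -> bool:
    """Probe the application itself; a running VM with a hung service still fails."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if not app_is_healthy():
        # Remediate at the service level first; a real HA product would escalate
        # to VM restart or failover if service restarts do not restore health.
        subprocess.run(["systemctl", "restart", SERVICE_NAME], check=False)
    time.sleep(30)
```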

With application and service availability in mind, companies such as Veritas have stepped in to provide availability and resiliency at the application and service level. Focusing on improving how VMware can deliver application availability, Veritas Technologies LLC has developed a set of solutions to meet the high availability and disaster recovery requirements of business-critical applications. These solutions include Veritas ApplicationHA (developed in partnership with VMware) and Veritas InfoScale Availability (formerly Veritas Cluster Server). Both of these products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 09/04/15
Report

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and less staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is key, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15
Free Reports

Market Landscape Abstract: Survey of VVol Implementation by Various Storage Vendors

VMware Virtual Volumes (VVols) is one of the most important technologies affecting how storage interacts with virtual machines. In April and May 2015, Taneja Group surveyed eleven storage vendors to understand how each was implementing VVols in their storage arrays. This survey consisted of 32 questions that explored which storage array features were exported to vSphere 6 and how VMs were provisioned and managed. We were surprised at the level of differences and the variety of methods used to enable VVols. It was also clear from the analysis that underlying limitations of an array will limit what is achievable with VVols. However, it is also important to understand that there are many other aspects of a storage array that matter; the VVol implementation is but one major factor. And VVol implementation is a work in progress; this survey represents only the first pass.

We categorized these implementations in three levels: Type 1, 2 and 3, with Type 3 delivering the most sophisticated VVol benefits. The definitions of these three types are shown below, as is the summary of findings.

Most storage array vendors participated in our survey, but a few chose not to, often because they already deliver the most important benefit of VVols, i.e. the ability to provision and manage storage at a VM level rather than at a LUN, volume or mount point level. In particular, that list included the hyperconverged players, such as Nutanix and SimpliVity, but also players like Tintri.

Publish date: 06/08/15