
Research Areas

Virtualization

Includes virtual infrastructure technologies (server, desktop, I/O), virtual infrastructure management (monitoring, optimization and performance), and virtualized data center operations and strategies (automation and Cloud computing).

Virtualization is arguably the most disruptive technology shift in data center infrastructure and management in the last decade. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization. We also explore application virtualization and delivery strategies. In addition, Taneja is uniquely focused on the end-to-end impact of virtualization on IT management, from the desktop to the Cloud, including: virtual server lifecycle management; virtual infrastructure instrumentation, performance management, and optimization; data protection, backup, and HA/DR for virtual environments; data center and run-book automation; and virtual infrastructure security and compliance management.

Profile

Qumulo Core: Data-Aware Scale-Out NAS Software (Product Profile)

Let's face it: today's storage is dumb. Mostly it is a dumping ground for data. As we produce more data, we simply buy more storage and fill it up. We don't know who is using what storage at a given point in time, which applications are hogging storage or have gone rogue, what and how much sensitive information is stored, moved, or accessed by whom, and so on. Basically, we are blind to whatever is happening inside that storage array. Yet storage should just work: users should see it as an endless, invisible resource, while administrators should be able to unlock the value of the data itself through real-time analytical insight rather than fighting fires just to keep storage running and provisioned.

Storage systems these days are often quoted in petabytes and will eventually move to exabytes and beyond. Businesses are being crushed under the weight of this data sprawl, and a new tsunami of data is coming their way as the Internet of Things fully comes online in the next decade. How are administrators dealing with this ever-increasing appetite to store more data? It is time for a radical new approach to building a storage system, one that is aware of the information stored within it while dramatically reducing the time administrators spend managing the system.

Welcome to the new era of data-aware storage. It could not have come at a better time. Storage growth, as we all know, is out of control. Granted, the cost per GB keeps falling at roughly 40% per year, but capacity keeps growing at roughly 60% per year, so even as unit prices fall, the overall storage bill never really shrinks and the capacity under management keeps climbing. Rising cost is certainly an issue, but the bigger issue is manageability, and not knowing what is buried in those mounds of data is bigger still. Instead of being an asset, data becomes a dead weight that keeps getting heavier. If we don't do something about it, we will simply be overwhelmed, if we are not already.
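
To put rough numbers on that squeeze, here is a back-of-the-envelope sketch in Python. It is our illustration only, using the approximate 60% growth and 40% price-decline figures cited above; the starting capacity and unit cost are arbitrary.

```python
# Back-of-the-envelope: capacity compounds at ~60%/year while $/GB falls ~40%/year.
# The rates are the rough industry figures cited above; starting values are arbitrary.
capacity_tb = 100.0       # starting capacity (illustrative)
cost_per_tb = 1000.0      # starting unit cost in dollars (illustrative)

for year in range(1, 6):
    capacity_tb *= 1.60           # ~60% annual capacity growth
    cost_per_tb *= (1 - 0.40)     # ~40% annual unit-price decline
    spend = capacity_tb * cost_per_tb
    print(f"Year {year}: {capacity_tb:8.0f} TB under management, annual spend ${spend:12,.0f}")

# At these rates annual spend stays roughly flat (1.60 * 0.60 = 0.96 per year),
# while capacity grows about 10x in five years -- it is the management burden,
# more than the bill, that compounds.
```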

The question we ask is this: why is it possible to build data-aware storage today when we couldn't yesterday? The answer is simple: flash technology, virtualization, and the availability of "free" CPU cycles make it possible to build storage that can do a lot of heavy lifting from the inside. Such intelligence was technically feasible before, but implementing it would have dragged primary storage performance down to the point of uselessness. So, in the past, we simply let storage store data. Today, we can build in a lot of intelligence without impacting performance or quality of service. We call this new type of storage data-aware storage.

When implemented correctly, data-aware storage can provide insights that were not possible yesterday. It would reduce the risk of non-compliance. It would improve governance. It would automate many of the storage management processes that are manual today. It would show how well the storage is being utilized. It would identify when a dangerous situation is about to occur, whether in compliance, capacity, performance, or SLA. You get the point: storage that is inherently smart and knows what type of data it holds, how that data is growing, who is using it, who is abusing it, and so on.

In this profile, we dive deep into a new technology called Qumulo Core, the industry's first data-aware scale-out NAS platform. Qumulo Core promises to radically change the scale-out NAS product category by using built-in data awareness to massively scale a distributed file system while radically reducing the time needed to administer a system that can hold billions of files. File systems in the past could not scale to this level because the administrative tools would collapse under the weight of the system.
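
To make "data-aware" concrete, here is a toy sketch of the underlying idea: keep per-directory aggregates (bytes, file counts) rolled up the directory tree as writes happen, so questions like "who is consuming what, and where?" are answered from precomputed metadata instead of by crawling billions of files. This is our own illustration of the concept, not Qumulo's implementation; the class and field names are made up.

```python
# Toy illustration of data-aware storage: maintain rolled-up aggregates per
# directory as files change, so usage queries read metadata rather than walk
# the whole namespace. Our sketch, not Qumulo Core's code.
from dataclasses import dataclass, field

@dataclass
class DirNode:
    name: str
    parent: "DirNode | None" = None
    children: dict = field(default_factory=dict)
    total_bytes: int = 0    # aggregate: bytes in this subtree
    total_files: int = 0    # aggregate: files in this subtree

    def subdir(self, name: str) -> "DirNode":
        return self.children.setdefault(name, DirNode(name, parent=self))

    def add_file(self, size: int) -> None:
        # Update aggregates up the tree at write time: O(depth), not O(files).
        node = self
        while node is not None:
            node.total_bytes += size
            node.total_files += 1
            node = node.parent

root = DirNode("/")
root.subdir("projects").subdir("genomics").add_file(4_000_000_000)
root.subdir("home").subdir("alice").add_file(250_000_000)

# "Which top-level directory is hogging capacity?" -- answered from aggregates.
for child in root.children.values():
    print(child.name, child.total_bytes, "bytes in", child.total_files, "files")
```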

Publish date: 05/14/15
Profile

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems (the integration of compute, networking, and storage) have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT organizations can no longer afford the effort and time to custom-build their infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads like Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI).

Traditionally, these converged infrastructure systems have been built on a three-tier architecture in which compute, networking, and storage, integrated at the rack level, give businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Recently, a more modular approach to convergence has emerged: what we term Hyper-Convergence. With hyper-convergence, the three-tier architecture is collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage, along with advanced data services, all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
Profile

Dell XC Web-Scale Hyperconverged Series: A Solution for your Most Dynamic Virtualized Environments

Over the past few years, to reduce cost and improve time-to-value, converged infrastructure systems (the integration of compute, networking, and storage) have been readily adopted by large enterprise users. The success of these systems results from the deployment of purpose-built, integrated converged infrastructure optimized for the most common IT workloads like Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI). Traditionally, these converged infrastructure systems have been built on a three-tier architecture in which compute, networking, and storage, while integrated in the same rack, still consist of best-in-breed standalone devices. These systems work well in stable, predictable environments. However, many virtualized environments are now dynamic, with unpredictable growth, and traditional three-tier architectures often lack the simplicity, scalability, and flexibility needed to operate in such environments.

Enter HyperConvergence, where the three-tier architecture is collapsed into a single system purpose-built for virtualization from the ground up: hypervisor, compute, and storage, along with advanced features such as deduplication, compression, and data protection, are all integrated into an x86 industry-standard building-block node. These devices are built on scale-out architectures with a 100% VM-centric management paradigm. The simplicity, scalability, and flexibility of this architecture make it a perfect fit for many virtualized environments.

Dell XC Web-scale Converged Appliances, powered by Nutanix software, are delivered as a series of HyperConverged products that are extremely flexible and scalable and can fit many enterprise workloads. In this solution brief we examine what constitutes a dynamic virtualized environment and how the Dell XC Web-scale Appliance series fits into such an environment. We can confidently state that by implementing Dell's flexible range of XC Web-scale appliances, businesses can deploy solutions across a broad spectrum of virtualized workloads where flexibility, scalability, and simplicity are critical requirements. Dell is an ideal partner to deliver Nutanix software because of its global reach, streamlined operations, and enterprise systems solutions expertise. The company is bringing HyperConverged platforms to the masses, and the introduction of the second generation of XC Series appliances enables it to reach an even broader set of customers.

Publish date: 04/30/15
Technology Validation

Better Together: Optimizing VMware vSphere 6.0 Deployments with Dell EqualLogic PS Series

Although server virtualization provides enormous benefits for the modern data center, it can also be daunting from a storage perspective. Provisioning storage to match exact virtual machine (VM) requirements has always been challenging, often ending in a series of compromises. Each VM has its own unique performance and storage requirements, which can lead to over-provisioning and other inefficient uses of storage. A virtualization administrator has to try to match a VM's storage requirements as closely as possible to storage that has been pre-provisioned. Provisioning in this way is time-consuming and lacks the VM-level granularity required to meet the specific needs of the applications running in a VM. To exacerbate the problem, a VM's storage requirements often change over its lifetime, which requires ongoing diligence, review of the storage platform, and manual intervention to meet the new requirements.

VMware vSphere 6.0 introduced the biggest change to the ESXi storage stack since the company's inception with the inclusion of vSphere Virtual Volumes (VVols). VVols help solve the challenge of matching a VM's storage requirements with external storage capabilities on a per-VM basis. We found that VVols, when combined with Dell EqualLogic PS Series arrays, become a powerful force in the data center.
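
Conceptually, VVols pair with storage policy-based management: the administrator states what a VM needs, and the platform matches those requirements against the capabilities the array advertises. The sketch below is our own toy illustration of that matching idea; the datastore names and capability fields are invented for the example and this is not the vSphere or EqualLogic API.

```python
# Toy illustration of per-VM, policy-based placement (the idea behind SPBM/VVols).
# Names and capability fields are hypothetical; this is not VMware's or Dell's API.

# Capabilities each datastore/array advertises.
datastores = {
    "eql-gold":   {"max_latency_ms": 2,  "raid": "RAID10", "snapshots": True},
    "eql-silver": {"max_latency_ms": 10, "raid": "RAID6",  "snapshots": True},
    "eql-bronze": {"max_latency_ms": 25, "raid": "RAID6",  "snapshots": False},
}

# Per-VM storage policy: what the application actually needs.
vm_policy = {"max_latency_ms": 5, "snapshots": True}

def matches(capabilities: dict, policy: dict) -> bool:
    """A placement is compliant only if it meets every requirement in the policy."""
    if capabilities["max_latency_ms"] > policy["max_latency_ms"]:
        return False
    if policy.get("snapshots") and not capabilities["snapshots"]:
        return False
    return True

compliant = [name for name, caps in datastores.items() if matches(caps, vm_policy)]
print("Compliant placements for this VM:", compliant)   # -> ['eql-gold']
```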

In this brief, we call out some storage-related highlights of vSphere 6.0, such as Virtual Volumes, and then take a close look at how they, as well as VMs stored on traditional datastores, have been enhanced and packaged by Dell Storage into the EqualLogic PS Series storage solution. We show how Dell and VMware have combined forces to deliver an enterprise-class virtual server and storage environment that is highly optimized and directly addresses the performance, availability, data protection, and complexity challenges common in today's business-critical virtualized data centers.

Publish date: 04/28/15
Technology Validation

Making your Virtual Infrastructure Non-Stop: Making availability efficient with Symantec products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth; they are the lifeblood of the business. Many of these applications and services run in virtual machines (VMs), as virtualization has become the de facto standard in the datacenter for deploying applications and services over the last decade. Some applications and services are classified as business-critical, and these business-critical applications require a higher level of resilience and protection to minimize the impact on the business's operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today's datacenter. There are various methods that offer different levels of protection to maintain application uptime. These methods range from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, the mechanisms in place to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, most systems today have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA), and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
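
The distinction matters in practice: a hypervisor-level heartbeat can report a VM as healthy while the application inside it has hung. Application-aware availability tools add an in-guest health check plus a restart-and-escalate policy, roughly along the lines of the simplified sketch below. This is a generic illustration, not Symantec ApplicationHA or VCS code; the service name, probe, and escalation step are placeholders.

```python
# Simplified illustration of application-level monitoring inside a guest:
# probe the service itself, restart it on failure, and escalate to VM-level HA
# if restarts don't help. Generic sketch, not Symantec's implementation.
import subprocess
import time

SERVICE = "my-critical-app"   # placeholder service name
MAX_RESTARTS = 3

def app_is_healthy() -> bool:
    # Placeholder probe: systemd status check. A real agent would also run an
    # application-specific check (e.g., a SQL query or an HTTP health endpoint).
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", SERVICE]
    ).returncode == 0

def monitor() -> None:
    restarts = 0
    while True:
        if not app_is_healthy():
            if restarts < MAX_RESTARTS:
                restarts += 1
                subprocess.run(["systemctl", "restart", SERVICE])
            else:
                # Application-level remediation failed; hand off to the VM/HA layer
                # (e.g., request a VM reset or failover) -- placeholder action.
                print("escalate: request VM restart/failover from the HA layer")
                return
        else:
            restarts = 0
        time.sleep(30)   # polling interval

if __name__ == "__main__":
    monitor()
```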

With application and service availability in mind, companies such as Symantec have stepped in to provide that availability and resiliency. Focusing on improving how VMware delivers application availability, Symantec has developed a set of solutions to meet the high availability and disaster recovery requirements of business-critical applications. These solutions include Symantec ApplicationHA (developed in partnership with VMware) and Symantec Cluster Server powered by Veritas (VCS). Both products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 04/13/15
Report

Scale Computing Field Report

Virtualization is mature and widely adopted in the enterprise market, and convergence/hyperconvergence with virtualization is taking the market by storm. But what about mid-sized businesses and SMBs? Are they falling behind?

Many of them are. Generalist IT staff, low virtualization budgets, and small team sizes all militate against complex, high-cost virtualization projects. This means that when mid-sized businesses and SMBs want to virtualize, they get either sticker shock from high prices and high complexity, or dissatisfaction with cheap, poorly scalable, unreliable solutions. What they want and need is hyperconvergence: ease of management, lower CapEx and OpEx, and a simplified but highly scalable and available virtualization platform.

This is a tall order but not an impossible one: Scale Computing claims to meet these requirements for this large market segment, and Taneja Group's HC3 Validation Report supports those claims. However, lab results, vital as they are to knowing the real story, are only part of that story. We also wanted to hear directly from IT about Scale in the real world of the mid-sized and SMB data center.

We undertook a Field Report project in which we spoke at length with eight Scale customers. This report details the common threads we found across those eight environments: exceptional simplicity, excellent support, clear value, painless scalability, and high availability – all at a low price. These qualities make a hyperconverged platform a reality for SMB and mid-market virtualization customers.

Publish date: 01/05/15