
Research Areas

Software Defined/Virtualized Infrastructure

Includes Software-Defined Infrastructure (compute, storage and networking), Virtual Infrastructure technologies (server virtualization, desktop virtualization, I/O virtualization), and the interplay between these technologies and traditional storage. Covers different types of Software-Defined Storage, such as Scale-out NAS, in depth.

Taneja Group has been at the forefront of assessing and characterizing virtualization and software-defined infrastructure technologies since they began to emerge in the early 2000s. Virtualization has caused one of the most disruptive technology shifts in data center infrastructure in the last 15 years. While its basic principles may not be new, virtualization has never been so widespread, nor has it been applied to as many platforms as it is today. Taneja Group analysts combine expert knowledge of server and storage virtualization with keen insight into their impact on all aspects of IT operations and management to give our clients the research and analysis required to take advantage of this “virtual evolution.” We focus on the interplay of server and client virtualization technologies with storage, and study the impact on performance, security and management of the IT infrastructure. Our virtualization practice covers all virtual infrastructure components: server virtualization/hypervisors, desktop/client virtualization, storage virtualization, and network and I/O virtualization.

Report

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive pressures are forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps toward the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already broadly deployed and hosts between half and two-thirds of all server instances means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, thanks to the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, since a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment with no existing infrastructure and workloads to worry about), one must be cognizant of the need for a prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15
Free Reports

Market Landscape Abstract: Survey of VVol Implementation by Various Storage Vendors

VMware Virtual Volumes (VVols) is one of the most important technologies affecting how storage interacts with virtual machines. In April and May 2015, Taneja Group surveyed eleven storage vendors to understand how each was implementing VVols in their storage arrays. The survey consisted of 32 questions that explored which storage array features were exported to vSphere 6 and how VMs were provisioned and managed. We were surprised at the level of differences and the variety of methods used to enable VVols. It was also clear from the analysis that the underlying limitations of an array will limit what is achievable with VVols. However, it is also important to understand that many other aspects of a storage array matter—the VVol implementation is but one major factor. Moreover, VVol implementation is a work in progress, and this survey represents only the first pass.

We categorized these implementations into three levels: Type 1, 2 and 3, with Type 3 delivering the most sophisticated VVol benefits. The definitions of these three types are shown below, as is a summary of the findings.

Most storage array vendors participated in our survey, but a few chose not to, often because they already deliver the most important benefit that VVols provide, i.e., the ability to provision and manage storage at the VM level rather than at the LUN, volume or mount point level. That list notably included the hyperconverged players, such as Nutanix and SimpliVity, as well as vendors like Tintri.

Publish date: 06/08/15
Profile

Qumulo Core: Data-Aware Scale-Out NAS Software (Product Profile)

Let's face it: today’s storage is dumb. Mostly it is a dumping ground for data. As we produce more data we simply buy more storage and fill it up. We don't know who is using what storage at a given point in time, which applications are hogging storage or have gone rogue, what and how much sensitive information is stored, moved or accessed by whom, and so on. Basically, we are blind to whatever is happening inside that storage array. Yet storage should just work: users should see it as an endless, invisible resource, while administrators should be able to unlock the value of the data itself through real-time analytical insight rather than fighting fires just to keep storage running and provisioned.

Storage systems these days are often quoted in petabytes and will eventually move to exabytes and beyond. Businesses are being crushed under the weight of this data sprawl, and a new tsunami of data is coming their way as the Internet of Things fully comes online in the next decade. How are administrators to deal with this ever-increasing appetite to store more data? It is time for a radical new approach to building a storage system, one that is aware of the information stored within it while dramatically reducing the time administrators spend managing the system.

Welcome to the new era of data-aware storage. It could not have come at a better time. Storage growth, as we all know, is out of control. Granted, the cost per GB keeps falling at roughly 40% per year, but capacity keeps growing at roughly 60% per year, so the price decline is largely canceled out: total spend stays stubbornly high while the amount of data under management keeps compounding. And while cost is certainly an issue, the bigger issue is manageability, along with not knowing what we have buried in those mounds of data. Instead of data being an asset, it is a dead weight that keeps getting heavier. If we don't do something about it, we will simply be overwhelmed, if we are not already.
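
To put that growth arithmetic in perspective, here is a rough back-of-the-envelope model using the figures above; the starting price and capacity are purely illustrative assumptions, not vendor data.

    # Back-of-the-envelope model of the storage trend described above, assuming a
    # ~40% annual decline in cost per GB and ~60% annual growth in capacity.
    # Starting values are illustrative assumptions.
    price_per_gb = 0.05       # $/GB, assumed starting point
    capacity_gb = 1_000_000   # 1 PB under management, assumed starting point

    for year in range(1, 6):
        price_per_gb *= 1 - 0.40   # unit price falls ~40% per year
        capacity_gb *= 1 + 0.60    # capacity grows ~60% per year
        spend = price_per_gb * capacity_gb
        print(f"Year {year}: {capacity_gb / 1e6:.1f} PB, spend ${spend:,.0f}")

    # Net spend multiplier per year is 0.60 * 1.60 = 0.96: spend stays roughly
    # flat while the amount of data to manage compounds at 60% per year.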

The question we ask is: why is it possible to build data-aware storage today when we couldn’t yesterday? The answer is simple: flash technology, virtualization, and the availability of “free” CPU cycles make it possible to build storage that can do a lot of heavy lifting from the inside. While this was technically possible before, implementing it would have slowed primary storage to the point of being useless. So, in the past, we simply let storage store data. But today, we can build in a lot of intelligence without impacting performance or quality of service. We call this new type of storage data-aware storage.

When implemented correctly, data-aware storage can provide insights that were not possible yesterday. It would reduce the risk of non-compliance. It would improve governance. It would automate many of the storage management processes that are manual today. It would provide insight into how well the storage is being utilized. It would identify whether a dangerous situation was about to occur, whether related to compliance, capacity, performance or SLAs. You get the point: storage that is inherently smart knows what type of data it has, how it is growing, who is using it, who is abusing it, and so on.
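
As a purely hypothetical illustration of the kind of metadata analytics such a system runs, and not a description of any vendor's implementation, consider a sketch that aggregates usage by owner and flags abnormal growth (the field names and threshold are assumptions):

    # Hypothetical sketch of data-aware storage analytics: aggregate file
    # metadata by owner and flag abnormal growth between two samples.
    # Illustrative only; a real system computes this continuously in the file system.
    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class FileMeta:
        path: str
        owner: str
        size_bytes: int

    def usage_by_owner(files):
        """Total capacity consumed per owner."""
        totals = defaultdict(int)
        for f in files:
            totals[f.owner] += f.size_bytes
        return dict(totals)

    def flag_abusers(today, yesterday, growth_threshold=2.0):
        """Owners whose footprint more than doubled since the previous sample."""
        return [owner for owner, size in today.items()
                if size > growth_threshold * yesterday.get(owner, size)]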

In this profile, we dive deep into a new technology, called Qumulo Core, the industry’s first data-aware scale-out NAS platform. Qumulo Core promises to radically change the scale-out NAS product category by using built-in data awareness to massively scale a distributed file system, while at the same time radically reducing the time needed to administer a system that can hold billions of files. File systems in the past could not scale to this level because the administrative tools would buckle under the weight of the system.

Publish date: 05/14/15
Technology Validation

Better Together: Optimizing VMware vSphere 6.0 Deployments with Dell EqualLogic PS Series

Although server virtualization provides enormous benefits for the modern data center, it can also be daunting from a storage perspective. Provisioning storage to match exact virtual machine (VM) requirements has always been challenging, often ending in a series of compromises. Each VM has its own unique performance and storage requirements, which can lead to over-provisioning and other inefficient uses of storage. A virtualization administrator has to try to match a VM’s storage requirements as closely as possible to storage that has been pre-provisioned. Provisioning in this way is time consuming and lacks the VM-level granularity required to meet the specific needs of the applications running in a VM. To exacerbate the problem, a VM’s storage requirements often change over its lifetime, which requires ongoing diligence, review of the storage platform, and manual intervention to meet the new requirements.

VMware vSphere 6.0 introduced the biggest change to the ESXi storage stack since the company’s inception with the inclusion of vSphere Virtual Volumes (VVol). VVol helps solve the challenge of matching a VM’s storage requirements with external storage capabilities on a per-VM basis. We found that VVol, when combined with Dell EqualLogic PS Series arrays, becomes a powerful force in the datacenter.
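
Conceptually, VVol replaces “pick a pre-provisioned LUN” with “declare per-VM requirements and let the array report which of its capabilities can satisfy them.” The sketch below is a simplified, hypothetical illustration of that capability-matching idea; it is not the vSphere or SPBM API, and the policy fields and pool names are assumptions:

    # Simplified, hypothetical illustration of VVol-style policy matching:
    # the VM declares requirements, the array advertises pool capabilities.
    # Conceptual sketch only; not the vSphere/SPBM API.
    vm_policy = {"min_iops": 5000, "snapshots": True, "encryption": False}

    array_capabilities = {
        "gold-pool":   {"min_iops": 20000, "snapshots": True,  "encryption": True},
        "silver-pool": {"min_iops": 4000,  "snapshots": True,  "encryption": False},
    }

    def satisfies(caps, policy):
        """A pool is compatible if it meets every requirement the VM declares."""
        return (caps["min_iops"] >= policy["min_iops"]
                and (caps["snapshots"] or not policy["snapshots"])
                and (caps["encryption"] or not policy["encryption"]))

    compatible = [name for name, caps in array_capabilities.items()
                  if satisfies(caps, vm_policy)]
    print(compatible)  # ['gold-pool'] - the VM is placed per policy, not per LUN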

In this brief, we call out some storage-related highlights of vSphere 6.0, such as Virtual Volumes, and then take a close look at how they, as well as VMs stored on traditional datastores, have been enhanced and packaged by Dell Storage into the EqualLogic PS Series storage solution. We will show how Dell and VMware have combined forces to deliver an enterprise-class virtual server and storage environment that is highly optimized and directly addresses the performance, availability, data protection and complexity challenges common in today’s business-critical virtualized data centers.

Publish date: 04/28/15
Technology Validation

Making your Virtual Infrastructure Non-Stop: Making availability efficient with Symantec products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for deploying applications and services. Some applications and services are classified as business critical; these require a higher level of resilience and protection to minimize the impact on the business’s operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today’s datacenter. There are various methods that offer different levels of protection to maintain application uptime. These methods range from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, the mechanisms in place to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
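
The distinction matters in practice: a VM can report as powered on while the service inside it is down. The following is a minimal, hypothetical sketch contrasting a VM-level liveness check with an application-level probe; it is illustrative only, not how ApplicationHA or VCS are implemented, and the host name and port are assumptions:

    # Hypothetical illustration of the gap between VM-level and application-level
    # availability checks. Not how Symantec ApplicationHA or VCS actually work.
    import socket

    def vm_is_powered_on(vm_state):
        """VM-level check: the hypervisor reports the VM as running."""
        return vm_state == "poweredOn"

    def app_is_responding(host, port, timeout=2.0):
        """Application-level check: the service inside the VM accepts connections."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # A database VM can pass the first check and fail the second; only the
    # application-level probe reveals whether the service users depend on is up.
    if vm_is_powered_on("poweredOn") and not app_is_responding("db01.example.com", 5432):
        print("VM is up but the application is down - restart the service, not the VM")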

With application and service availability in mind, companies such as Symantec have stepped in to provide availability and resiliency at that level. Focusing on improving how VMware delivers application availability, Symantec has developed a set of solutions to meet the high availability and disaster recovery requirements of business-critical applications. These solutions include Symantec ApplicationHA (developed in partnership with VMware) and Symantec Cluster Server powered by Veritas (VCS). Both of these products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 04/13/15
Report

Charting a Path Toward Streamlined and Automated Data Center Operations

Companies that work with VMware to virtualize their server infrastructures tend to achieve significant benefits, such as increased resource utilization, greater infrastructure availability, and sharply reduced capital expenditures. But with this success come new challenges. As customers scale their environments and begin to harness the power of the cloud, management of their infrastructure and applications can become more complex and time consuming.

The findings in this paper are based on research completed in November 2014. Taneja Group polled more than 300 IT professionals, including a mix of senior managers, architects and administrators in VMware customer organizations worldwide, about the IT priorities they would like to achieve. For each of the top two IT initiatives that respondents identified, we learned about their key operational and business challenges, and then focused on the capabilities, products and features they plan to adopt—both today and in the future—to overcome those challenges.

Companies that use separate point tools to manage the health, capacity and performance of their environments often find these tools lacking, since they are rarely automated, let alone integrated with each other. This forces IT to invest manual effort to fill in the capability gaps. Management becomes largely reactive, with IT lacking early visibility into app/infrastructure health issues, performance bottlenecks and capacity shortfalls. Planning becomes difficult, since managers lack the cross-domain tools, data and analytics needed to model operations and predict the future impact of changes.

Based on the results of our latest research, a subset of VMware customers face one or more of these challenges today. Taneja Group’s research probed VMware customers—from large enterprises to small and midsize companies, across multiple geographies and industries—on their data center operations capabilities and practices. Many companies in our survey population that face operational challenges are now looking for solutions to help them overcome these issues. These companies are hoping to increase operational visibility and control, improve resource utilization, and streamline monitoring and management—while also reducing CAPEX and OPEX costs.

In this paper, you will learn—based on our study findings and VMware customer experience—how you can benefit by following a proven path to streamline and automate data center operations in your own organization.

Publish date: 03/23/15