The pace of IT development is fast and relentless. What was bleeding edge just a few years ago becomes old hat, replaced by the next big thing, and generally no one can predict in detail what that next advancement will be. Since no one wants to replace everything in the data center to accommodate the newest technology, maintaining flexibility is extremely important. Remaining flexible means you can continue to utilize the capital investment you have already made while rolling in the next wave of new products and services with minimal cost and disruption.
Most companies don’t have the luxury of starting from scratch every few years and building out a new IT infrastructure from the ground up. Instead, they have to work with what they’ve got. That means dealing with an existing array of operating systems, hypervisors, servers and storage. It means adding in cloud connections and services to existing architectures, systems and processes. It means moving forward into an ever more virtualized world while still supporting the status quo.
Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptionally high price, which has put enterprise storage out of reach for many use cases and customers. Most recently, Dell introduced a new, small-footprint storage array, the Dell Storage SC Series powered by Compellent technology, that continues to leverage proven Dell Compellent technology on Intel hardware in an all-new form factor. The SC4020 is the densest Compellent product ever: an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units of space. While the Intel-powered SC4020 offers more modest scalability than current Compellent products, this array marks a radical shift in the pricing of Dell's enterprise technology, and it aims to open up Dell Compellent storage technology to an entire market of smaller customers, as well as to large-customer use cases where enterprise storage was previously too expensive.
The din surrounding VMware vSphere Virtual Volumes (VVols) is deafening. It started in 2011, when VMware announced the concept of VVols and the storage industry reacted with enthusiasm, and culminated with its introduction as part of the vSphere 6 release in April 2015. Viewed simply, VVols is an API that enables storage arrays that support the functionality to provision and manage storage at the granularity of a VM, rather than at the level of LUNs, volumes, or mount points, as they do today. Without question, VVols is an incredibly powerful concept that will fundamentally change the interaction between storage and VMs in a way not seen since server virtualization first came to market. No surprise, then, that every storage vendor in the market is feverishly trying to build in VVols support and competing on the superiority of its implementation.
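The shift in granularity that VVols brings can be sketched with a toy model. The classes and names below are purely hypothetical illustrations, not the actual VASA/VVols API: the point is only that when a LUN is the unit of management, array features such as snapshots apply to every VM sharing that LUN, while a VM-granular volume lets the array act on exactly one VM.

```python
# Toy contrast between LUN-granular and VM-granular (VVols-style) storage
# management. All classes here are hypothetical illustrations, not the
# real vSphere or VASA interfaces.

class Lun:
    """Classic model: one LUN/datastore holds disks for many VMs, so an
    array-side feature (snapshot, replication, QoS) hits all of them."""
    def __init__(self, name):
        self.name = name
        self.vm_disks = []          # many VMs share one container

    def snapshot(self):
        # Snapshotting the LUN captures every VM on it, wanted or not.
        return [f"snap:{disk}" for disk in self.vm_disks]

class VirtualVolume:
    """VVols-style model: the array exposes a volume per VM disk, so
    features can be applied to a single VM."""
    def __init__(self, vm, disk):
        self.vm, self.disk = vm, disk

    def snapshot(self):
        return f"snap:{self.vm}/{self.disk}"

lun = Lun("datastore1")
lun.vm_disks += ["vm-a/disk0", "vm-b/disk0", "vm-c/disk0"]
lun_snap = lun.snapshot()            # captures all three VMs at once

vvol = VirtualVolume("vm-b", "disk0")
vm_snap = vvol.snapshot()            # captures only vm-b
```

The toy illustrates why per-VM granularity matters: policies and data services can finally follow the VM instead of whatever else happens to live on the same LUN.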
Yet one storage player, Tintri, has been delivering products with VM-centric features for four years without the benefit of VVols. How could Tintri do this? And what does it mean for them now that VVols are here? To do justice to these questions, we will briefly look at what VVols are and how they work, and then dive into how Tintri has delivered the benefits of VVols for several years. We will also look at what the buyer of Tintri gets today and how Tintri plans to integrate VVols. Read on…
While IT has always had to respond to increasing business demands, competitive requirements are now forcing it to do so with less: less investment in new infrastructure and less staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these opposing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.
The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.
The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.
While the destination is relatively clear, how to get there is key, because a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn't virtualize every server at once (unless one has the luxury of a green-field deployment, with no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage existing infrastructure investments, a move to virtualized storage would be hard to justify. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.
In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.
All-flash arrays are changing the datacenter for the better. No longer do we worry about IOPS bottlenecks at the array: all-flash arrays (AFAs) can deliver a staggering number of IOPS, and AFAs capable of hundreds of thousands of IOPS are not uncommon. The problem now, however, is how to get those IOPS from the array to the servers. We recently had a chance to see how well an AFA using the EMC PowerPath driver works to eliminate this bottleneck, and we were blown away. Most comparisons of datacenter infrastructure show a 10-30% improvement in performance, but the performance improvement that we saw with PowerPath was extraordinary.
Getting bits from an array to a server is easy; very easy, in fact. The trick is getting bits from a server to an array efficiently when many virtual machines (VMs) on multiple physical hosts are transmitting bits over a physical network with a virtual fabric overlay; this is much more difficult. Errors can be introduced and must be dealt with; the most efficient path must be established, re-evaluated, and re-established continually; and any misconfiguration can produce less-than-optimal performance, and in some cases even outages or data loss. To deal with the "pathing," or how I/O travels from the VM to storage, the OS running on the host needs a driver; where multiple paths can be taken from the server to the array, a multipathing driver is used to direct the traffic.
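The basic job of a multipathing driver, rotating I/O across live paths and failing over when a path dies, can be sketched in a few lines. This is a toy illustration of a generic round-robin policy under assumed names, not how PowerPath or VMware's native driver is actually implemented:

```python
import itertools

class Path:
    """One physical route from a host HBA to an array port (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.alive = True

class RoundRobinMultipath:
    """Toy round-robin path selector with dead-path failover.

    Mimics the gist of a generic multipath driver: rotate I/O across
    all live paths, skipping any that have failed. Illustration only;
    real drivers also weigh queue depth, link speed, and array state.
    """
    def __init__(self, paths):
        self.paths = paths
        self._cycle = itertools.cycle(paths)

    def select_path(self):
        # Visit each path at most once per selection attempt.
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path.alive:
                return path
        raise IOError("all paths down: I/O cannot reach the array")

paths = [Path("vmhba1:C0:T0:L1"), Path("vmhba2:C0:T0:L1")]
mp = RoundRobinMultipath(paths)

first = mp.select_path()      # rotates across the live paths
paths[0].alive = False        # simulate a link failure on one path
survivor = mp.select_path()   # all I/O now lands on the surviving path
```

An optimized vendor driver improves on this naive rotation by choosing paths based on real-time load and array-specific knowledge, which is exactly where the performance differences discussed below come from.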
Windows, Linux, VMware, and most other modern operating systems include a basic multipath driver; however, these drivers tend to be generic, are not optimized to extract maximum performance from any particular array, and come with only rudimentary traffic optimization and management functions. In some cases these generic drivers are fine, but in the majority of datacenters the infrastructure is overtaxed and its equipment needs to be used as efficiently as possible. Fortunately, storage companies such as EMC are committed to making their arrays perform as well as possible and spend considerable time and research developing multipathing drivers optimized for their arrays. EMC invited us to take a look at how PowerPath, their optimized "intelligent" multipath driver, performed on an XtremIO flash array connected to a Dell PowerEdge R710 server running ESXi 6.0 while simulating an Oracle workload. We looked at the results of the various tests EMC ran comparing the PowerPath/VE multipath driver against VMware's ESXi native multipathing driver, and we were impressed, very impressed, by the difference that an optimized multipath driver like PowerPath can make in a high-I/O-traffic scenario.
Virtualization is mature and widely adopted in the enterprise market, and convergence/hyperconvergence with virtualization is taking the market by storm. But what about mid-sized and SMB? Are they falling behind?
Many of them are. Generalist IT staff, low virtualization budgets, and small team sizes all militate against complex, high-cost virtualization projects. What this means is that when mid-sized and SMB organizations want to virtualize, they either get sticker shock from high prices and high complexity, or dissatisfaction with cheap, poorly scalable, and unreliable solutions. What they want and need is hyperconvergence: ease of management, lower CapEx and OpEx, and a simplified but highly scalable and available virtualization platform.
This is a tall order but not an impossible one: Scale Computing claims to meet these requirements for this large market segment, and Taneja Group's HC3 Validation Report supports those claims. However, although lab results are vital to knowing the real story, they are only part of it. We also wanted to hear directly from IT about Scale in the real world of the mid-sized and SMB data center.
We undertook a Field Report project in which we spoke at length with eight Scale customers. This report details the top common points we found across those eight environments: exceptional simplicity, excellent support, clear value, painless scalability, and high availability, all at a low price. These key attributes make a hyperconverged platform a reality for SMB and mid-market virtualization customers.