Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.
All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.
Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of reliability, at scale. That's a goal of roughly three seconds of anticipated unavailability per year, less downtime than most data centers themselves achieve.
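The "nines" shorthand converts directly into allowed downtime per year. A quick back-of-the-envelope sketch (figures are illustrative arithmetic, not vendor measurements):

```python
# Convert "nines" of availability into maximum downtime per year.
# 7-9's means 99.99999% availability.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.56 million seconds

def downtime_seconds(nines: int) -> float:
    """Maximum unavailability per year for a given number of nines."""
    unavailability = 10 ** (-nines)
    return SECONDS_PER_YEAR * unavailability

for n in (5, 7):
    print(f"{n} nines -> {downtime_seconds(n):,.2f} s/year")
# 5 nines allows ~315.6 s (about 5.3 minutes) of downtime per year;
# 7 nines allows only ~3.16 s per year.
```

This is why the jump from the oft-touted 5-9's to 7-9's is so demanding: the downtime budget shrinks by a factor of one hundred.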
Data availability is the attribute enterprises need most to maximize the value of their enterprise storage, especially as data volumes grow to ever-larger scale. Yet traditional enterprise storage solutions aren't keeping pace with the growing need for more than the oft-touted 5-9's of storage reliability, instead deferring to layered-on methods like additional replication copies, which drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.
Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to break down: ever-longer disk rebuild times create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities arise from poor (or at best, default) array designs, software issues, and well-intentioned but sometimes fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.
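The rebuild-window problem is simple arithmetic: a failed drive must be rewritten in full, so the vulnerability window scales with capacity. A minimal sketch, with assumed (illustrative, not vendor-specified) drive sizes and a sustained rebuild rate:

```python
# Why RAID rebuild windows grow with drive capacity: the time to
# rewrite a failed drive scales linearly with its size.
# The 100 MB/s rebuild rate below is an assumed illustrative figure.

def rebuild_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to sequentially rewrite one failed drive."""
    total_bytes = capacity_tb * 1e12
    seconds = total_bytes / (rate_mb_s * 1e6)
    return seconds / 3600

for tb in (2, 10):
    print(f"{tb} TB drive -> {rebuild_hours(tb, 100):.1f} h rebuild")
# A 2 TB drive takes ~5.6 hours; a 10 TB drive takes ~27.8 hours --
# more than a full day during which a second failure can mean
# unrecoverable data loss.
```

In practice rebuild rates are further throttled by competing production I/O, so real-world windows are often longer still.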
In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.
All of the trends leading towards the world-wide Internet of Things (IoT) – ubiquitous, embedded computing, mobile, organically distributed nodes, and far-flung networks tying them together – are also coming in full force into the IT data center. These solutions are taking the form of converged and hyperconverged modules of IT infrastructure. Organizations adopting such solutions gain a simpler building-block way to architect and deploy IT, and forward-thinking vendors now have a unique opportunity to profit from subscription services that, while delivering superior customer insight and support, also help build a trusted advisor relationship that promises an ongoing "win-win" scenario for both the client and the vendor.
There are many direct (e.g. revenue-impacting) and indirect (e.g. customer satisfaction) benefits we mention in this report, but the key enabler of this opportunity is establishing an IoT-scale data analysis capability. Specifically, by approaching converged and hyperconverged solutions as IoT "appliances", and harvesting low-level component data on utilization, health, configuration, performance, availability, faults, and other endpoint metrics across the full worldwide deployment of appliances, an IoT vendor can analyze the resulting stream of data to great profit for both the vendor and each individual client. Top-notch analytics can feed support, drive product management, assure sales/account control, inform marketing, and even provide a direct revenue opportunity (e.g. offering a gold level of service to the end customer).
An IoT data stream from a large pool of appliances is almost literally the definition of "big data" – non-stop machine data at large scale with tremendous variety (even within a single converged solution stack) – and operating and maintaining such a big data solution requires a significant amount of data wrangling, data science, and ongoing maintenance to stay current. Unfortunately, this means IT vendors looking to position IoT-oriented solutions may have to invest a large amount of cash, staff, and resources into building out and supporting such analytics. For many vendors, especially those with a varied or complex convergence solution portfolio or those established as channel partners building solutions from third-party reference architectures, these big data costs can be prohibitive. However, failing to provide these services may create significant friction in selling and supporting converged solutions to clients who now expect to manage IT infrastructure as appliances.
In this report, we'll look at the convergence and hyperconvergence appliance trend, and the increasing customer expectations for such solutions. In particular, we'll see how IT appliances in the market need to be treated as complete, commoditized products, as ubiquitous and subject to the same end-user expectations as emerging household IoT solutions. In this context, we'll look at Glassbeam's unique B2B SaaS platform, SCALAR, which converged and hyperconverged IT appliance vendors can adopt immediately to provide an IoT machine data analytics solution. We'll see how Glassbeam can help vendors differentiate among competing solutions, build a trusted client relationship, better manage and support clients, and even open additional direct revenue opportunities.
The din surrounding VMware vSphere Virtual Volumes (VVols) is deafening. It started in 2011, when VMware announced the concept of VVols and the storage industry reacted with enthusiasm, and culminated with their introduction as part of the vSphere 6 release in April 2015. Viewed simply, VVols is an API that enables storage arrays that support it to provision and manage storage at the granularity of a VM, rather than at the level of LUNs, volumes, or mount points, as they do today. Without question, VVols is an incredibly powerful concept that will fundamentally change the interaction between storage and VMs in a way not seen since server virtualization first came to market. No surprise, then, that virtually every storage vendor in the market is feverishly building in VVols support and competing on the superiority of its implementation.
Yet one storage player, Tintri, has been delivering products with VM-centric features for four years without the benefit of VVols. How can this be so? How could Tintri do this? And what does it mean for them now that VVols are here? To do justice to these questions, we will briefly look at what VVols are and how they work, and then dive into how Tintri has delivered the benefits of VVols for several years. We will also look at what the buyer of Tintri gets today and how Tintri plans to integrate VVols. Read on…
While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and less staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC) – where services can be instantiated as needed, changed as workloads require, and retired when the need is gone – is easy to understand.
The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability, and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, thanks to the benefits of server virtualization.
The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.
While the destination is relatively clear, how to get there is critical, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn't virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one's existing infrastructure investments, it is hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.
In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous storage virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.
VMware Virtual Volumes (VVols) is one of the most important technologies affecting how storage interacts with virtual machines. In April and May 2015, Taneja Group surveyed eleven storage vendors to understand how each was implementing VVols in its storage arrays. The survey consisted of 32 questions that explored which storage array features were exported to vSphere 6 and how VMs were provisioned and managed. We were surprised at the level of difference and the variety of methods used to enable VVols. It was also clear from the analysis that the underlying limitations of an array will limit what is achievable with VVols. However, it is important to understand that many other aspects of a storage array matter as well; the VVol implementation is but one major factor. Moreover, VVol implementation is a work in progress, and this survey represents only a first pass.
We categorized these implementations in three levels: Type 1, 2 and 3, with Type 3 delivering the most sophisticated VVol benefits. The definitions of these three types are shown below, as is a summary of findings.
Most storage array vendors participated in our survey, but a few chose not to, often because they already delivered the most important benefit VVols provides, i.e., the ability to provision and manage storage at a VM level rather than at a LUN, volume, or mount point level. In particular, that list included hyperconverged players such as Nutanix and SimpliVity, but also players like Tintri.
Let's face it: today's storage is dumb. Mostly it is a dumping ground for data. As we produce more data, we simply buy more storage and fill it up. We don't know who is using what storage at a given point in time, which applications are hogging storage or have gone rogue, or what and how much sensitive information is stored, moved, or accessed, and by whom. Basically, we are blind to whatever is happening inside that storage array. It should be otherwise: storage should just work, users should see it as an endless, invisible resource, and administrators should be able to unlock the value of the data itself through real-time analytical insight rather than fighting fires just to keep storage running and provisioned.
Storage systems these days are often quoted in petabytes and will eventually move to exabytes and beyond. Businesses are being crushed under the weight of this data sprawl, and a new tsunami of data is coming their way as the Internet of Things fully comes online in the next decade. How are administrators to deal with this ever-increasing appetite to store more data? It is time for a radical new approach to building a storage system: one that is aware of the information stored within it while dramatically reducing the time administrators spend managing the system.
Welcome to the new era of data-aware storage. It could not have come at a better time. Storage growth, as we all know, is out of control. Granted, the cost per GB keeps falling at roughly 40% per year, but capacity keeps growing at roughly 60% per year, so capacity growth all but cancels the price decline and total storage spend never meaningfully shrinks. While cost is certainly an issue, the bigger issue is manageability, and not knowing what we have buried in those mounds of data is a bigger issue still. Instead of being an asset, data becomes a dead weight that keeps getting heavier. If we don't do something about it, we will simply be overwhelmed, if we are not already.
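How those two rates interact is worth a quick back-of-the-envelope check, using the approximate figures above (both are rough industry-trend numbers, not measurements):

```python
# Year-over-year multiplier on total storage spend, given a capacity
# growth rate and a price-per-GB decline rate. Rates below are the
# approximate figures discussed above, not exact measurements.

def spend_factor(capacity_growth: float, price_decline: float) -> float:
    """Next year's spend as a multiple of this year's spend."""
    return (1 + capacity_growth) * (1 - price_decline)

factor = spend_factor(0.60, 0.40)
print(f"{factor:.2f}")  # 0.96 -- 60% growth nearly cancels a 40% price drop
```

The point: even a steep price decline buys almost no relief once capacity growth approaches it, so the storage bill, and with it the management burden, stays stubbornly high year after year.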
The question we ask is: why is it possible to build data-aware storage today when we couldn't yesterday? The answer is simple: flash technology, virtualization, and the availability of "free" CPU cycles make it possible to build storage that can do a lot of heavy lifting from the inside. While this was technically possible yesterday, implementing it would have slowed primary storage to the point of uselessness. So, in the past, we simply let storage store data. But today, we can build in a lot of intelligence without impacting performance or quality of service. We call this new type of storage Data Aware Storage.
When implemented correctly, data-aware storage can provide insights that were not possible before. It can reduce the risk of non-compliance. It can improve governance. It can automate many storage management processes that are manual today. It can provide insight into how well storage is being utilized. It can identify when a dangerous situation is about to occur, whether in compliance, capacity, performance, or SLA terms. You get the point: storage that is inherently smart and knows what type of data it has, how it is growing, who is using it, who is abusing it, and so on.
In this profile, we dive deep into a new technology, called Qumulo Core, the industry's first data-aware scale-out NAS platform. Qumulo Core promises to radically change the scale-out NAS product category by using built-in data awareness to massively scale a distributed file system, while at the same time radically reducing the time needed to administer a system that can hold billions of files. File systems in the past could not scale to this level because their administrative tools would collapse under the weight of the system.