Taneja Group | Software Defined Storage

Items Tagged: Software Defined Storage

news / Blog

Storage Virtualization Meets Software Defined Storage: EMC ViPR 1.0

EMC recently GA'd the first version of ViPR. Many storage folks are still not clear about what ViPR is all about. Is it just storage virtualization repackaged to augment the features of EMC's physical infrastructure solutions this time? Is it a shining new example of Software Defined Storage? Is it unified storage, management, and data services à la private cloud? What is going on here?

  • Premiered: 10/03/13
  • Author: Mike Matchett
Topic(s): EMC, Storage Virtualization, Software Defined Storage, ViPR
news

Forget software-defined storage; we need software-concealed storage

Instead of software-defined storage, says Rich Castagna, we should have software-concealed storage that puts all that plumbing and exceptional functionality under the covers.

  • Premiered: 12/17/13
  • Author: Taneja Group
  • Published: Tech Target: Search Storage
Topic(s): SDS, Software Defined Storage, Storage, Arun Taneja, Mike Matchett
news

EMC ViPR adds support for Hadoop, Storage Resource Management Suite

EMC Corp. today released its first update to its EMC ViPR software-defined storage application, adding support for Hadoop and the EMC Storage Resource Management Suite. EMC also upgraded the Storage Resource Management Suite, which is used to manage all of its storage hardware platforms.

  • Premiered: 01/30/14
  • Author: Taneja Group
  • Published: Tech Target: Search Storage
Topic(s): EMC, Hadoop, Software Defined Storage, ViPR, data
Profiles/Reports

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years, integrated computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being addressed by these systems today include Cloud, Big Data, Virtualization, Database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor seems to be touting one integrated system or another focused on solving a particular workload problem. The promised set of business benefits delivered by these new systems falls into these key areas:

  • Implementation efficiency that accelerates time to realizing value from integrated systems
  • Operational efficiency through optimized workload density and an ideally right-sized set of infrastructure
  • Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together
  • Scale and agility efficiency unlocked through a repeatedly deployable building block approach
  • Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem was designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP's expertise in large-scale build-and-integration processes to herald an entirely new level of agility around speed of ordering and implementation. In this profile, we'll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP's latest advancement of these types of systems.

Publish date: 09/02/14
news / Blog

HP gives a free 1TB StoreVirtual VSA license to all Intel Xeon E5 v3 users

In one of its gutsiest moves yet, HP is giving a free 1TB StoreVirtual VSA license to all Intel Xeon E5 v3 users. Is this a new HP or what?

Resources

Vendor Panel: The New Shape of Software Defined Storage

Come listen in as Taneja Group Sr. Analyst Mike Matchett explores the rapidly expanding world of Software Defined Storage with a panel of the hottest SDS vendors in the market. These panelists will no doubt come out swinging with their best definitions of Software Defined - we already know they don't see eye-to-eye. What does a true SDS solution need to offer, and how are they different from each other? How can SDS change the datacenter landscape? Is SDS a pre-req for the SDDC or hybrid cloud? Do we need SDS just to stay competitive as data storage needs continue to grow? Where should you start deploying SDS, or is it mature enough to even start today? Bring your best SDS questions for these panelists as we search for some Software Defined answers.
Hear from the following:

  • Gridstore: Founder & CTO: Kelly Murphy
  • HP: Director, Product Marketing & Management - SDS: Rob Strechay
  • IBM: Director, Storage and SDE Strategy: Sam Werner
  • Red Hat: Product Marketing Lead, Big Data: Irshad Raihan

  • Premiered: 11/13/14
  • Location: OnDemand
  • Speaker(s): Mike Matchett, Taneja Group; Kelly Murphy, Gridstore; Rob Strechay, HP; Sam Werner, IBM; Irshad Raihan, Red Hat
Topic(s): Vendor Panel, SDS, software-defined storage, Software Defined Storage, Storage, IBM, HP, Gridstore, Red Hat
news

Make this your most modern IT year yet

No one sticks with their New Year's resolution -- where is that gym card anyway? -- but you can pick up seven modern habits and feel good about fit, healthy IT ops.

  • Premiered: 01/13/15
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Big Data, hyperconvergence, Converged Infrastructure, predictive analytics, SDS, Software Defined Storage, software-defined storage, In-Memory, Caching, Infrastructure, Cloud, DR, Disaster Recovery, DRaaS, Disaster Recovery as a Service, IoT, Internet of Things, Virtual storage
news

5 Ways the Sanbolic Acquisition Could Change Citrix

The VDI giant is evolving beyond its core competency. That's a good thing.

  • Premiered: 01/14/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Sanbolic, Citrix, SDS, Software Defined Storage, software-defined storage, VMWare, Nutanix, Scale Computing, Linux, Windows, KVM, Xen, Hyper-V, VSA, vSphere, Microsoft, IBM, Red Hat, HP, VDI, Virtual Desktop Infrastructure, virtual desktop, CloudPlatform, NetScaler
news

Breaking Down VMware's VSAN 6

How does the new, all-flash version compare with VSAN 5.5?

  • Premiered: 02/03/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): All Flash, Flash, SSD, VSAN, VMWare, VMware VSAN, SDS, software-defined, software defined, Software Defined Storage, Hypervisor, Virtualization, Low latency, Capacity, VM, Virtual Machine, VMFS, Storage, Hybrid, Hybrid Array, hybrid flash, vSphere
news

General Purpose Disk Array Buying Guide

The disk array remains the core element of any storage infrastructure. So it’s appropriate that we delve into it in a lot more detail.

  • Premiered: 02/17/15
  • Author: Taneja Group
  • Published: Enterprise Storage Forum
Topic(s): Disk, Storage, VDI, virtual desktop, NAS, Network Attached Storage, VCE, VNX, HDS, EMC, HP, NetApp, Hitachi Data Systems, Hitachi, IBM, Dell, Syncplicity, VSPEX, IBM SVC, SAN Volume Controller, software-defined, Software Defined Storage, SDS, Storwize, Storwize V7000, replication, Automated Tiering, tiering, Virtualization, 3PAR
news

Are all software-defined storage vendors the same?

What does software-defined storage mean? How is it implemented? And why does it matter for data center managers?

  • Premiered: 03/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Storage, SDS, software-defined storage, Datacenter, Software Defined Storage, VMWare, VSAN, HP, StoreVirtual, StoreOnce, VSA, Maxta, Maxta Storage Platform, Tarmin, GridBank, data-defined, DDS, Data Defined Storage, Nexenta, NexentaStor, Big Data, replication, scalability, Cloud, data mining
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems – the integration of compute, networking, and storage – have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the effort and time to custom-build its infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads, such as Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI).

Traditionally these converged infrastructure systems have been built using a three-tier architecture, in which compute, networking, and storage, integrated at the rack level, give businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Emerging onto the scene more recently has been a more modular approach to convergence, which we term hyper-convergence. With hyper-convergence, the three-tier architecture has been collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage with advanced data services all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
Profiles/Reports

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less. Less investment in new infrastructure and less staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC) where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to move is key, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn't virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old into the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one's existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15
Resources

Emerging Technologies in Storage: Disaggregation of Storage Function

Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters because the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates collocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server's flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions, and then intelligently host components in different layers of the infrastructure. With the help of a lively panel of experts we will unpack this topic and explore how their innovative approach to intelligently distributing storage functions can bring about better customer business outcomes.

Moderator:
Jeff Kato, Senior Analyst & Consultant, Taneja Group

Panelists:
Brian Biles, Founder & CEO, Datrium
Kate Davis, Senior Marketing Manager, HPE
Nutanix

  • Premiered: 05/19/16
  • Location: OnDemand
  • Speaker(s): Jeff Kato, Taneja Group; Brian Biles, Datrium; Kate Davis, HPE; Nutanix
Topic(s): Jeff Kato, Datrium, HPE, Nutanix, Storage, hyper scale, hyper-scale, hyperscale, software-defined, software-defined storage, SDS, software defined, Software Defined Storage
news

Hedvig storage update offers multicloud capabilities

Hedvig outlined its vision for a Universal Data Plane spanning public and private clouds, as it announced an updated version of its software-defined storage.

  • Premiered: 09/22/16
  • Author: Taneja Group
  • Published: TechTarget: Search Cloud Storage
Topic(s): Hedvig, Cloud, Jeff Kato, Public Cloud, Private Cloud, software-defined, software-defined storage, SDS, Software Defined Storage, software defined, Universal Data Plane, iSCSI, NFS, Amazon, Amazon S3, OpenStack, OpenStack Swift, Deduplication, Data Deduplication, Compression, Snapshots, Snapshot, tiering, Caching, VMWare, VMware vSphere, vSphere, Docker, Mirantis, scale-out
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now, we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding major problems with HyperConverged "single appliance" adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky “monoculture” that can result from vendor-specific HCI. Their resulting design runs compute-intensive IO tasks scaled-out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient, persistent shared storage capacity. We have come to refer to this optimizing tiered design approach as “Server Powered Storage” (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an “Open Convergence” approach that helps virtualized IT environments transition off of aging storage arrays in an easier, flexible and more natural adoption path than with a fork-lift HyperConvergence migration. In this report we will briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer “open convergence” SPS approach as pioneered by Datrium DVX. In particular, we’ll review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance vs. capacity), VM-centric management, enterprise scalability and mixed workload support while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that becomes expensive, isolating resources and requiring a lot of configuration management (all reasons to virtualize servers), this at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates or flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.
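The mixing described here is sometimes called the "IO blender" effect. The toy sketch below, with hypothetical VM names and made-up request mixes, shows how per-workload identity is lost once the hypervisor interleaves everything into a single stream headed for shared storage.

# Toy illustration of the "IO blender" effect: the hypervisor merges IO from
# many VMs into one stream, so per-workload identity (and priority) is lost
# by the time requests reach shared storage. Names and mixes are illustrative.
import random

vm_workloads = {
    "sql-prod":  ["read"] * 6 + ["write"] * 2,   # latency-sensitive OLTP
    "backup-vm": ["write"] * 8,                  # bulk, low-priority stream
    "vdi-pool":  ["read"] * 4,                   # bursty desktop reads
}

# Each VM issues IO tagged with its own identity...
per_vm_queue = [(vm, op) for vm, ops in vm_workloads.items() for op in ops]

# ...but the hypervisor interleaves everything into a single device queue.
random.shuffle(per_vm_queue)
blended_queue = [op for _vm, op in per_vm_queue]   # VM identity stripped

print("what the array sees:", blended_queue[:8], "...")
# Without hints from the host, the array cannot tell which of these IOs
# belong to the latency-sensitive workload, so it cannot prioritize them.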

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services to serve demanding and increasingly larger scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is in a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn't fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies are often far larger than the gains. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached or prioritized (in flash); it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don't or won't benefit from the expensive cost of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that erodes the benefit of that array-side flash investment.
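To make the latency argument concrete, here is a minimal back-of-envelope sketch. All of the numbers (local flash read time, SAN round trip, controller overhead) are illustrative assumptions, not measurements of any particular product.

# Back-of-envelope IO latency comparison: server-side flash vs. array-side flash.
# All numbers are assumed, illustrative values in microseconds.

SERVER_FLASH_READ_US = 100      # assumed local NVMe/SSD read latency
ARRAY_FLASH_READ_US = 100       # assumed flash media latency inside the array
NETWORK_RTT_US = 300            # assumed SAN round trip (FC/iSCSI) per IO
ARRAY_CONTROLLER_US = 50        # assumed array controller/software overhead

def server_side_read_latency():
    """Read served from flash local to the application server."""
    return SERVER_FLASH_READ_US

def array_side_read_latency():
    """Read that must cross the network to flash inside a shared array."""
    return NETWORK_RTT_US + ARRAY_CONTROLLER_US + ARRAY_FLASH_READ_US

if __name__ == "__main__":
    local = server_side_read_latency()
    remote = array_side_read_latency()
    print(f"server-side flash read : ~{local} us")
    print(f"array-side flash read  : ~{remote} us")
    print(f"network + controller overhead makes the remote read {remote / local:.1f}x slower")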

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash, especially when added to servers directly, came down in price, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage "software" is now often packaged as a more cost-effective "software-defined" solution that can be run or converged directly on servers (although we note most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers such that storage services can be hosted on the same servers as compute workloads (e.g. VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e. no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g. to keep data access local). Additionally, there is often a huge increase in east-west traffic between servers.
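As a rough illustration of why locality tracking matters in a co-hosted design, the following sketch models a cluster that must move a VM's warm cache whenever the VM itself migrates. The class and method names are hypothetical, chosen only to show the idea; they do not correspond to any vendor's API.

# Minimal sketch of locality-aware IO acceleration in a co-hosted SDS cluster.
# When a VM migrates, its cache placement must follow it so reads stay local;
# otherwise IO turns into east-west network traffic. Names are hypothetical.

class ClusterPlacement:
    def __init__(self):
        self.vm_host = {}      # vm_id -> host currently running the VM
        self.cache_host = {}   # vm_id -> host holding that VM's warm cache

    def place_vm(self, vm_id, host):
        self.vm_host[vm_id] = host
        self.cache_host.setdefault(vm_id, host)

    def migrate_vm(self, vm_id, new_host):
        """vMotion-style move: compute moves, but the warm cache does not."""
        self.vm_host[vm_id] = new_host

    def read_path(self, vm_id):
        """Local hit only if the warm cache sits on the same host as the VM."""
        if self.vm_host[vm_id] == self.cache_host[vm_id]:
            return "local cache hit (no network latency)"
        return "remote read over the network (east-west traffic, cold cache)"

    def rebalance(self, vm_id):
        """What a QoS/locality engine must do after every migration."""
        self.cache_host[vm_id] = self.vm_host[vm_id]

cluster = ClusterPlacement()
cluster.place_vm("vm-42", "host-a")
cluster.migrate_vm("vm-42", "host-b")
print(cluster.read_path("vm-42"))   # remote until the cache is rebalanced
cluster.rebalance("vm-42")
print(cluster.read_path("vm-42"))   # local again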

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage all are pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a "full stack" single-vendor lock-in issue (and increased risk due to lowered infrastructure "diversity").

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX – especially as compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), on overbearing east-west network traffic, and in some cases on data availability risks (e.g. when servers lock, crash, or reboot for any reason, the ensuing HCI replication/rebuild can be a highly vulnerable window).
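The licensing point can be shown with a simple, purely hypothetical cost model: every node added for capacity in an HCI cluster also carries a per-node software license, whereas capacity added behind a decoupled design does not. All prices and capacities below are made-up placeholder values, there only to illustrate the shape of the comparison.

# Illustrative cost model: scaling capacity in an HCI cluster vs. scaling
# storage independently. All prices and sizes are assumed placeholder values.

HCI_NODE_CAPACITY_TB = 20
HCI_NODE_HW_COST = 25_000
PER_NODE_SW_LICENSE = 7_000      # hypervisor + HCI software, charged per node
SHARED_SHELF_CAPACITY_TB = 60
SHARED_SHELF_COST = 30_000       # capacity-only expansion, no compute licenses

def hci_capacity_cost(extra_tb):
    """Every extra HCI node adds hardware *and* another software license."""
    nodes = -(-extra_tb // HCI_NODE_CAPACITY_TB)   # ceiling division
    return nodes * (HCI_NODE_HW_COST + PER_NODE_SW_LICENSE)

def decoupled_capacity_cost(extra_tb):
    """Capacity scales on shared shelves without touching compute licensing."""
    shelves = -(-extra_tb // SHARED_SHELF_CAPACITY_TB)
    return shelves * SHARED_SHELF_COST

for tb in (40, 120, 240):
    print(f"{tb:>4} TB growth: HCI ${hci_capacity_cost(tb):,} "
          f"vs decoupled ${decoupled_capacity_cost(tb):,}")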

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO still may need to transit a network, incurring a latency penalty. To help, there are several third-party vendors of IO caching that can be layered into the IO path – integrated with the server or hypervisor driver stack, or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost, and also complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of ultimately degrading performance issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall into a single converged storage solution – one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT's governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments – especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back-end for enterprise quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a 3rd party cache layered over a remote SAN.

In the resulting Datrium "open converged" infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales naturally with each server added, so storage performance grows along with the applications.
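To show the general technique being described here (not Datrium's actual implementation, whose internals this report does not detail), the following is a minimal sketch of server-side deduplication and compression on a write path, keyed by content fingerprints and tracked per VM rather than per LUN.

# Minimal sketch of server-side dedupe + compression on the write path,
# as a generic illustration of the technique only.
import hashlib
import zlib

class WritePath:
    def __init__(self):
        self.block_store = {}     # fingerprint -> compressed block (shared capacity)
        self.vm_block_map = {}    # (vm_id, lba) -> fingerprint (VM-centric metadata)

    def write(self, vm_id, lba, block: bytes):
        fingerprint = hashlib.sha256(block).hexdigest()
        if fingerprint not in self.block_store:          # dedupe: store unique data once
            self.block_store[fingerprint] = zlib.compress(block)
        self.vm_block_map[(vm_id, lba)] = fingerprint

    def read(self, vm_id, lba) -> bytes:
        fingerprint = self.vm_block_map[(vm_id, lba)]
        return zlib.decompress(self.block_store[fingerprint])

wp = WritePath()
wp.write("vm-1", 0, b"A" * 4096)
wp.write("vm-2", 7, b"A" * 4096)        # duplicate data from another VM
assert wp.read("vm-2", 7) == b"A" * 4096
print(f"unique blocks stored: {len(wp.block_store)}")   # 1, thanks to dedupe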

Datrium DVX gets great leverage from a given flash investment by using any "bring-your-own" SSDs, which are far cheaper to add than array-side flash and can be added to specific servers as needed or desired. In fact, most VMs and workloads won't ever read from the shared capacity on the network; it is write-optimized, persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

All data bits must be persisted, fully managed, and protected somewhere at the end of the day; that is just one of IT's major concerns. Traditional arrays, converged or not, just don't perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers invokes some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (that localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that, before investing in expensive all-flash arrays or taking on a full migration to HCI, an Open Convergence option like Datrium DVX be considered as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16
news

Software Defined Storage: Changing Data from ‘State-ful’ to Stateless

Hedvig, the Santa Clara software defined storage (SDS) start-up now in its third year, has announced the infusion of a cool $21.5 million in Series C venture funding as attention increasingly turns to the fragmented SDS market, predicted to surpass $7 billion by 2020.

  • Premiered: 03/04/17
  • Author: Taneja Group
  • Published: Enterprise Tech
Topic(s): Jeff Kato, Hedvig, software-defined, software defined, software-defined storage, Software Defined Storage, SDS, Distributed Storage, Cloud, cloud adoption, Cloud Storage, Storage, complexity, hyperscale, Hybrid Cloud, Public Cloud, API, scalable, scalability, Hypervisor, container, containers, VM, Virtual Machine, Virtualization, Docker, OpenStack, Hyper-V, VMWare, Primary Storage