Taneja Group | virtual+desktop

Items Tagged: virtual+desktop

news

Astute Networks improves ViSX G4 VM storage box with speed, dedupe

Astute Networks Inc. today said it will deliver its next-generation flash storage systems designed to speed up storage performance for virtual machines and virtual desktops.

  • Premiered: 08/20/12
  • Author: Taneja Group
  • Published: TechTarget: SearchVirtualStorage.com
Topic(s): Astute Networks, ViSX, VM, Flash, SSD, Virtual Machine, virtual desktop, VDI, Dedupe
news

Astute Networks' Appliance Eliminates Virtual Machine I/O Performance Barriers...

Leading analyst firm reports that Astute Networks' high-performance flash VM storage appliance eliminates virtual machine I/O performance barriers for virtual server and virtual desktop environments.

  • Premiered: 09/11/12
  • Author: Taneja Group
  • Published: Astute Networks Press
Topic(s): Astute Networks, VM, IO, Virtual Server, virtual desktop, VDI
news

Hitachi NAS Platform adds primary deduplication

Hitachi Data Systems is offering primary deduplication in its Hitachi NAS Platform, a feature that was more than two years in the making.

  • Premiered: 05/03/13
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): HDS, Hitachi, NAS, Deduplication, primary deduplication, HNAS, FPGA, BlueArc, Permabit, Ocarina Networks, IBM, Storwize, NetApp, Engenio, GreenBytes, Virtualization, virtual desktop
news

5 Ways the Sanbolic Acquisition Could Change Citrix

The VDI giant is evolving beyond its core competency. That's a good thing.

  • Premiered: 01/14/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Sanbolic, Citrix, SDS, Software Defined Storage, software-defined storage, VMware, Nutanix, Scale Computing, Linux, Windows, KVM, Xen, Hyper-V, VSA, vSphere, Microsoft, IBM, Red Hat, HP, VDI, Virtual Desktop Infrastructure, virtual desktop, CloudPlatform, NetScaler
news

General Purpose Disk Array Buying Guide

The disk array remains the core element of any storage infrastructure. So it’s appropriate that we delve into it in a lot more detail.

  • Premiered: 02/17/15
  • Author: Taneja Group
  • Published: Enterprise Storage Forum
Topic(s): Disk, Storage, VDI, virtual desktop, NAS, Network Attached Storage, VCE, VNX, HDS, EMC, HP, NetApp, Hitachi Data Systems, Hitachi, IBM, Dell, Syncplicity, VSPEX, IBM SVC, SAN Volume Controller, software-defined, Software Defined Storage, SDS, Storwize, Storwize V7000, replication, Automated Tiering, tiering, Virtualization, 3PAR
news

5 Ways vSphere 6 Will Change the Datacenter

vSphere 6 has more than 650 new features. Here are five that can transform your environment.

  • Premiered: 02/20/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): VMware, vSphere, VMware vSphere, Tom Fenton, vSphere 6, Datacenter, VVOL, Virtualization, Virtual Machine, VM, vGPU, virtual desktop
news

How can I simplify storage allocation for virtual desktops?

Find out the easiest methods of allocating storage for a virtual desktop infrastructure.

  • Premiered: 04/06/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Storage, Virtualization, VM, Virtual Machine, virtual desktop, VDI, Virtual Desktop Infrastructure, VMware, Horizon View, Citrix, XenDesktop, MCS, Machine Creation Services, Clones, Tom Fenton
news

Should I use an all-flash or hybrid array for my virtual desktop environment?

Taneja Group's Tom Fenton weighs the benefits and drawbacks of using hybrid and all-flash arrays for storage in this expert answer.

  • Premiered: 04/08/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): all-flash, All Flash, SSD, Flash, Hybrid Array, Hybrid, hybrid storage, VDI, virtual desktop, Virtual Desktop Infrastructure, Storage, IOPS, HDD
news

Nimboxx broadens Atomic Unit platform to support HA

Nimboxx adds high availability features to its Atomic Unit to support migrating storage from physical to highly available virtual machines.

  • Premiered: 04/10/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Nimboxx, HA, high availability, Virtualization, VM, Virtual Machine, Storage, hyperconvergence, hyper-convergence, MeshOS, virtual desktop, VDI, Virtual Desktop Infrastructure, KVM, Hypervisor, Migration, Nutanix, SimpliVity, Scale Computing, Arun Taneja
news

Are data reduction techniques essential in VDI environments?

Data reduction methods can help shrink your storage footprint and costs. Here are some guidelines for incorporating these approaches into your VDI environment.

  • Premiered: 04/13/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Tom Fenton, TechTarget, Data reduction, VDI, virtual desktop, Virtual Desktop Infrastructure, Storage, Enterprise Storage, Data Storage, Deduplication, data compression, Virtualization
news

Reduxio launches hybrid array with built-in protection

Startup Reduxio Systems launches hybrid storage system with BackDating data recovery to any second in time, inline dedupe, and continuous block tiering.

  • Premiered: 09/04/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Reduxio, Hybrid Array, Protection, Hybrid, Storage, data optimization, Optimization, Data protection, inline deduplication, Deduplication, Flash, SSD, flash hybrid, CDP, Continuous Data Protection, Recovery, Snapshot, Snapshots, Compression, Virtualization, Metadata, Data reduction, VDI, Virtual Desktop Infrastructure, virtual desktop, Mike Matchett, Auto Tiering, Jeff Kato, clone
Profiles/Reports

Edge HyperConvergence for ROBOs: Riverbed SteelFusion Brings IT All Together

Hyperconvergence is one of the hottest IT trends going into 2016. In a recent Taneja Group survey of senior enterprise IT folks, we found that over 25% of organizations are looking to adopt hyperconvergence as their primary data center architecture. Yet the centralized enterprise datacenter may be just the tip of the iceberg when it comes to the vast opportunity for hyperconverged solutions. Wherever remote or branch office (ROBO) requirements demand localized computing, some form of hyperconvergence would seem the ideal way to address the scale, distribution, protection and remote management challenges involved in putting IT infrastructure “out there” remotely and in large numbers.

However, most of today’s popular hyperconverged appliances were designed as data center infrastructure, converging data center IT resources like servers, storage, virtualization and networking into Lego™-like IT building blocks. While these might at first seem ideal for ROBOs (the promise of dropping in “whole” modular appliances precludes any number of onsite integration and maintenance challenges), ROBOs have different and often more challenging requirements than a datacenter. A ROBO does not often come with trained IT staff or a protected datacenter environment. ROBOs are, by definition, located remotely across relatively unreliable networks. And they fan out to thousands (or tens of thousands) of locations.

Certainly any amount of convergence simplifies infrastructure, making it easier to deploy and maintain. But in general, popular hyperconvergence appliances haven’t been designed to be remotely managed en masse, don’t address unreliable networks, and converge storage locally and directly within themselves. Persisting data in the ROBO is a recipe for a myriad of ROBO data protection issues. In ROBO scenarios, the datacenter form of hyperconvergence is not significantly better than simple converged infrastructure (e.g. a pre-configured rack or blades in a box).

We feel Riverbed’s SteelFusion has brought full hyperconvergence benefits to the ROBO edge of the organization. Riverbed has married its world-class WANO technologies, virtualization, and remote storage “projection” to create what we might call “Edge Hyperconvergence”. We see the edge hyperconverged SteelFusion as purposely designed for companies with any number of ROBOs that each require local IT processing.

Publish date: 12/17/15
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were considered the tier-1 apps. Examples of these include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best of breed three-tier architectures where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium sized companies and specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications?  In this paper we will take a closer look at Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V and their own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a unique capability across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
news

Datrium DVX storage takes novel approach for VMware, flash

The Datrium DVX storage system for VMware virtual machines is generally available and drawing interest with its server-powered architecture, flash-boosted performance and low cost.

  • Premiered: 02/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Datrium, DVX, Storage, VMware, Flash, flash storage, SSD, VM, Performance, flash performance, VMware vSphere, vSphere, SAN, Virtual Machine, LUN, Hypervisor, Deduplication, Compression, RAM, SAS, Capacity, virtual desktop, Virtual Desktop Infrastructure, VDI, vCenter, CPU, ESX, all-flash, all flash array, AFA
news

Evaluating hyper-converged architectures: Five key CIO considerations

Simplicity and cost savings are among the benefits of hyper-converged architectures, but the biggest draw for CIOs may be how these systems make IT teams more business-ready.

  • Premiered: 04/27/16
  • Author: Mike Matchett
  • Published: TechTarget: Search CIO
Topic(s): CIO, simplicity, hyper-converged, hyperconverged, hyperconvergence, Storage, CAPEX, Data protection, DP, Cloud, Business Continuity, BC, Disaster Recovery, DR, Hypervisor, software-defined, Networking, converged, convergence, VM, Virtual Machine, Virtualization, VDI, virtual desktop, Virtual Desktop Infrastructure, Datacenter, scale-out, OPEX, Provisioning, WANO
Profiles/Reports

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures that are built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments.  

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data-centers, just as these NetApp customers did.

Publish date: 03/31/16
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now, we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding major problems with HyperConverged “single appliance” adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky “monoculture” that can result from vendor-specific HCI. Their resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimizing tiered design approach as “Server Powered Storage” (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an “Open Convergence” approach that helps virtualized IT environments transition off of aging storage arrays via an easier, more flexible and more natural adoption path than a forklift HyperConvergence migration. In this report we will briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer “open convergence” SPS approach as pioneered by Datrium DVX. In particular, we’ll review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance vs. capacity), VM-centric management, enterprise scalability and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that becomes expensive, isolating resources and requiring a lot of configuration management (all reasons to virtualize servers), this at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates or flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services to serve demanding and increasingly larger scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is in a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn’t fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies are far larger than most of those gains. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached/prioritized (in flash); it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don’t or won’t benefit from the expensive cost of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
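To put rough numbers on that latency point, here is a minimal back-of-the-envelope sketch comparing effective read latency for server-side flash against flash sitting behind a SAN. The figures used (about 100 microseconds for a local SSD read, roughly 500 microseconds of added network and fabric round-trip) are illustrative assumptions only, not measurements of any particular array or network.

# Back-of-the-envelope comparison of local vs. array-side flash read latency.
# All numbers are illustrative assumptions, not vendor measurements.

LOCAL_FLASH_US = 100   # assumed local SSD/NVMe read latency (microseconds)
NETWORK_RTT_US = 500   # assumed SAN fabric + network round-trip overhead
ARRAY_FLASH_US = 100   # assumed flash media latency inside the array

def effective_latency_us(media_us: float, transport_us: float = 0.0) -> float:
    """Effective read latency seen by the application."""
    return media_us + transport_us

local = effective_latency_us(LOCAL_FLASH_US)
remote = effective_latency_us(ARRAY_FLASH_US, NETWORK_RTT_US)

print(f"Server-side flash read : ~{local:.0f} us")
print(f"Array-side flash read  : ~{remote:.0f} us "
      f"({remote / local:.1f}x slower due to the network hop)")

Under these assumed numbers the array-side flash read is several times slower than the local read, which is the core of the argument above: the media is equally fast, but the network hop dominates.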

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash, especially when added to servers directly, came down in price, so-called Software Defined Storage (SDS) options proliferated. Because CPUs have continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage “software” is now often packaged as a more cost-effective “software-defined” solution that can be run or converged directly on servers (although we note most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers so that storage services can be hosted on the same servers as compute workloads (e.g. VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e. no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g. to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.
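To illustrate the data-locality issue in co-hosted storage, here is a conceptual Python sketch under assumed data structures (it is not any vendor’s actual placement or QoS logic): a read is served from a local replica when one exists and is otherwise fetched from a peer node, which is exactly the kind of access that inflates East-West traffic after a VM migrates.

# Conceptual sketch of read locality in co-hosted (SDS/HCI-style) storage.
# Hypothetical data structures; no vendor's actual placement logic is shown.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_blocks: set[str] = field(default_factory=set)  # replicas held locally
    east_west_reads: int = 0                              # reads served over the network

    def read(self, block_id: str, peers: list["Node"]) -> str:
        if block_id in self.local_blocks:
            return f"{block_id}: served locally on {self.name}"
        # VM moved (or data was never local): fetch from whichever peer holds it.
        for peer in peers:
            if block_id in peer.local_blocks:
                self.east_west_reads += 1
                return f"{block_id}: fetched from {peer.name} (East-West hop)"
        raise KeyError(block_id)

node_a = Node("node-a", {"vm1-disk-block-0"})
node_b = Node("node-b")

# While the VM runs on node-a, reads stay local; after a migration to node-b,
# the same reads cross the cluster network until data is re-localized.
print(node_a.read("vm1-disk-block-0", [node_b]))
print(node_b.read("vm1-disk-block-0", [node_a]))
print(f"East-West reads from node-b: {node_b.east_west_reads}")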

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage all are pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a “full stack” single-vendor lock-in issue (and increased risk due to lowered infrastructure “diversity”).

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX, especially compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing “East-West” network traffic, and in some cases unacceptable data availability risks (e.g. when servers lock up, crash or reboot for any reason, an HCI replication/rebuild can be a highly vulnerable window).
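As a rough illustration of the licensing point, the sketch below models growing capacity by adding full HCI nodes (each carrying a compute/software license) versus scaling capacity independently of compute. Every node size, license fee, and per-TB price here is a hypothetical placeholder, not a quote from any vendor.

# Hypothetical cost model: growing capacity by adding full HCI nodes vs.
# adding capacity only. All prices and sizes are illustrative placeholders.

CAPACITY_NEEDED_TB = 100           # additional usable capacity required
HCI_NODE_CAPACITY_TB = 10          # usable capacity per hyperconverged node
HCI_NODE_HW_COST = 25_000          # hardware cost per node
PER_NODE_SW_LICENSE = 7_000        # hypervisor + HCI software license per node
CAPACITY_ONLY_COST_PER_TB = 1_200  # cost per TB when capacity scales independently

# Scaling with HCI: every extra node carries a compute/software license,
# even if the only thing actually needed was capacity.
hci_nodes = -(-CAPACITY_NEEDED_TB // HCI_NODE_CAPACITY_TB)  # ceiling division
hci_cost = hci_nodes * (HCI_NODE_HW_COST + PER_NODE_SW_LICENSE)

# Scaling capacity independently of compute.
independent_cost = CAPACITY_NEEDED_TB * CAPACITY_ONLY_COST_PER_TB

print(f"HCI scale-out   : {hci_nodes} nodes, ${hci_cost:,}")
print(f"Capacity only   : ${independent_cost:,}")
print(f"License overhead: ${hci_nodes * PER_NODE_SW_LICENSE:,}")

With these placeholder figures, the capacity-driven node additions drag along tens of thousands of dollars of compute licenses that deliver no storage value, which is the hidden cost the paragraph above describes.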

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO may still need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path, integrated with the server or hypervisor driver stack or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path adds cost and complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of performance issues that ultimately degrade service.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall into a single converged storage solution: one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT’s governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments, especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back-end for enterprise quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a 3rd party cache layered over a remote SAN.

In the resulting Datrium “open converged” infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales naturally with each server added, so storage performance grows along with the application footprint.
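For readers who want a concrete picture of what inline server-side deduplication and compression do in general (this is a generic sketch, not Datrium’s implementation), the following Python fragment fingerprints fixed-size blocks, stores each unique block only once in compressed form, and keeps a per-disk block map; this is why many identical VDI image blocks collapse to a single stored copy.

# Minimal sketch of inline dedupe + compression on the write path.
# Generic illustration of the technique only; not Datrium's code.
import hashlib
import zlib

BLOCK_SIZE = 4096

block_store: dict[str, bytes] = {}   # fingerprint -> compressed unique block
vdisk_map: dict[int, str] = {}       # logical block number -> fingerprint

def write_block(lbn: int, data: bytes) -> None:
    """Dedupe and compress a single logical block before persisting it."""
    fingerprint = hashlib.sha256(data).hexdigest()
    if fingerprint not in block_store:          # only new content is stored
        block_store[fingerprint] = zlib.compress(data)
    vdisk_map[lbn] = fingerprint

def read_block(lbn: int) -> bytes:
    """Resolve the block map and decompress the stored block."""
    return zlib.decompress(block_store[vdisk_map[lbn]])

# Many VDI clones share identical OS blocks, so repeated writes dedupe away.
golden_image_block = b"A" * BLOCK_SIZE
for lbn in range(100):
    write_block(lbn, golden_image_block)

print(f"Logical blocks written: {len(vdisk_map)}")
print(f"Unique blocks stored  : {len(block_store)}")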

Datrium DVX gets great leverage from a given flash investment by using any “bring-your-own” SSDs, far cheaper to add than array-side flash (and they can be added to specific servers as needed/desired). In fact, most VMs and workloads won’t ever read from the shared capacity on the network; it is write-optimized for persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

As just one of IT’s major concerns, all data bits must be persisted, fully managed and protected somewhere at the end of the day. Traditional arrays, converged or not, just don’t perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers raises some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (which localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that before one invests in expensive all-flash arrays, or takes on a full migration to HCI, an Open Convergence option like Datrium DVX be considered as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and are not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16