Taneja Group | network

Items Tagged: network

news / Blog

InfiniBand as Data Center Communication Virtualization

Recently we posted a new market assessment of InfiniBand and its growing role in enterprise data centers. Here is a more philosophical thought about the optimized design of InfiniBand and its role as data center communication virtualization...

  • Premiered: 07/27/12
  • Author: Mike Matchett
  • Published: Taneja Blog
Topic(s): InfiniBand, convergence, network, Virtualization
Profiles/Reports

Convergence for the Branch Office: Transforming Resiliency and TCO with Riverbed SteelFusion (TVS)

The branch office has long posed a dilemma for the IT organization. For many organizations, branch offices are a critical point of productivity and revenue generation, yet the branch has always come with a tremendous amount of operational overhead and risk. Worse yet, these challenges are often exacerbated because the branch office too often looks like a carryover of outdated IT practices.

More often than not, the branch office is still a highly manual, human-effort-driven administration exercise. Physical equipment too often sits at a remote physical office and requires significant human management and intervention for activities like data protection and recovery, or replacement of failed hardware. Given the remote nature of the branch office, such intervention often comes with significant overhead in the form of telephone support, inefficient over-the-wire system configuration, equipment build-and-ship processes, or even significant travel to remote locations. Moreover, in an attempt to avoid issues, the branch office is often over-provisioned with equipment to reduce the impact of outages, or is designed in such a way as to be overly dependent on services delivered across the Wide Area Network (WAN), which impairs user productivity and simply exchanges the risk of equipment failure for the risk of WAN outage.

While such practices come with significant operational cost, there’s a subtler cost lurking below the surface: any branch office outage carries data consequences. Data protection may be a slower process for the branch office, exposing the branch to greater risk when equipment fails or disaster strikes, and restoring branch office data and productivity after a disaster can be a long, slow process compared to the capabilities of the modern datacenter.

When branch offices are a key part of a business, these routinely accepted practices can make the branch office one of the costliest and riskiest areas of the IT infrastructure. For many enterprises, the branch office has only grown in importance over time, and may generate more revenue and require more responsive and available IT systems than ever before. The branch office clearly requires better agility and efficiency than it receives today.

Riverbed Technology has long proven its mettle in helping enterprises optimize and better enable connectivity and data sharing for distributed work teams. Over the past decade, Riverbed has come to dominate the market for WAN optimization technologies that compress data and optimize the connection between branch or remote offices and the datacenter. But Riverbed rose to this position because its SteelHead appliances do far more than just optimize a connection: Riverbed’s dominance of this market sprang from deep optimization of CIFS/SMB and other collaboration and interaction protocols, by way of intelligent interception and caching of the right data to make the remote experience feel like a local experience. Moreover, Riverbed SteelHead could do this while making the remote connection effectively stateless, eliminating the need to protect or manage data in the branch office.

Almost two years ago, Riverbed announced a continuing evolution of its “location independent computing” focus with the introduction of the SteelFusion family of solutions. The vision behind SteelFusion was to deliver far more performance and capability in branch offices while doing away with the complexity of multiple component parts and scattered data. SteelFusion does this by transforming the branch office into a stateless “projection” of data, applications, and VMs stored in the datacenter. Moreover, SteelFusion does this with a converged solution that combines storage, networking, and compute in one device – the first comprehensive converged infrastructure solution purpose-built for the branch. This converged offering, though, is built on branch office “statelessness” that, as we’ll review, transparently stores data in the datacenter and allows the business to configure, change, protect, and manage the branch office with enterprise tools, while eradicating the risk associated with traditional branch office infrastructure.

SteelFusion today does this by virtualizing VMware ESXi VMs on a stateless appliance that in essence “projects” data from the datacenter to a remote location, while maintaining localized speed of access and resilient availability that can tolerate even severe network outages. Three innovative technology components make up Riverbed’s SteelFusion and allow it to host virtual machines that access their primary data via the datacenter; that data is cached on the SteelFusion appliance while a highly efficient, near-synchronous connection is maintained back to datacenter storage. In turn, SteelFusion makes it possible to run many local applications in a rich, complex branch office while requiring no other servers or devices. Riverbed promises that SteelFusion’s architecture can tolerate outages yet synchronizes data so effectively that it operates as a stateless appliance, enabling branch data to be completely protected by datacenter synchronization and backup, with more up-to-date protection and faster recovery whether a single file or an entire system is lost. In short, this is a promise to comprehensively revolutionize the practice of branch office IT.
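
To make the “projection” concept concrete, here is a minimal sketch of a branch-side write-back block cache. It is entirely our own illustration, not Riverbed SteelFusion code, and all class and method names are hypothetical: reads are served from a local cache, writes are acknowledged locally, and dirty blocks are flushed back to datacenter storage as the WAN allows, so the branch holds no unique state once flushed.

```python
# Illustrative sketch only -- not Riverbed SteelFusion code. It models the idea of a
# branch "projection": reads come from a local cache, writes are acknowledged locally
# and flushed back to authoritative datacenter storage as the WAN allows.
import collections

class DatacenterStore:
    """Stands in for authoritative shared storage in the datacenter."""
    def __init__(self):
        self.blocks = {}
    def read(self, lba):
        return self.blocks.get(lba, b"\x00" * 4096)
    def write(self, lba, data):
        self.blocks[lba] = data

class BranchProjection:
    """Stateless branch-side cache: all persistent state lives in the datacenter."""
    def __init__(self, datacenter, wan_up=lambda: True):
        self.dc = datacenter
        self.wan_up = wan_up                     # callable reporting WAN availability
        self.cache = {}                          # local working set (would be flash/SSD)
        self.dirty = collections.OrderedDict()   # writes awaiting flush to the datacenter

    def read(self, lba):
        if lba not in self.cache:                # miss: fetch from the datacenter copy
            self.cache[lba] = self.dc.read(lba)
        return self.cache[lba]

    def write(self, lba, data):
        self.cache[lba] = data                   # acknowledge locally for LAN-speed writes
        self.dirty[lba] = data                   # remember it must reach the datacenter

    def flush(self):
        """Near-synchronous write-back; tolerates outages by retrying when the WAN returns."""
        while self.dirty and self.wan_up():
            lba, data = self.dirty.popitem(last=False)
            self.dc.write(lba, data)

dc = DatacenterStore()
branch = BranchProjection(dc)
branch.write(7, b"branch data")
branch.flush()                                   # once flushed, the branch holds no unique state
assert dc.read(7) == b"branch data"
```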

In January of 2014, Taneja Group took a deeper look at what Riverbed is doing with SteelFusion. While we’ve provided other written assessments of the use case and value of Riverbed SteelFusion, we also wanted to take a hands-on look at how the technology works, and whether in real-world use it really delivers management effort reductions, availability improvements, and increased IT capabilities, along with consequent reductions in the risks around branch office IT. To do this, we turned to a hands-on lab exercise – what we call a Technology Validation.

What did we find? We found that Riverbed SteelFusion does indeed deliver a transformation of branch office management and capabilities by fundamentally reducing complexity, injecting a number of powerful capabilities (such as enterprise snapshots and access to all data, copies, and tools in the enterprise), and making the branch office resilient, constantly protected, and instantly recoverable. This change in capabilities also translates into a significant impact on time and effort, and we captured a number of metrics throughout our hands-on look at SteelFusion. For the details, we turn to the full report.

Publish date: 04/14/14
news

Violin and Microsoft play a duet to speed up applications

A partnership between Microsoft and Violin Memory will let enterprises tightly tie a new all-flash storage array to their servers, speeding up popular Microsoft applications.

  • Premiered: 04/22/14
  • Author: Taneja Group
  • Published: CIO
Topic(s): Violin Memory, Microsoft, Flash, SSD, Storage, network, server, VDI, Virtualization, SQL Server
Resources

Hyper-converged infrastructure starts to offer greater choice

A hyper-converged infrastructure tightly integrates storage, compute, networking and server virtualization resources in the same box, and now products are starting to offer more points of differentiation.

Arun Taneja, founder and consulting analyst at Taneja Group in Hopkinton, Massachusetts, surveyed the hyper-converged product landscape in this podcast interview. He explained the distinction between hyper-converged and converged systems, updated the list of products that meet his definition of hyper-convergence, discussed the latest choices users will find for hypervisors and hardware, and offered his predictions on the direction hyper-converged storage products could take.

  • Premiered: 06/02/14
  • Location: OnDemand
  • Speaker(s): Arun Taneja
  • Sponsor(s): TechTarget
Topic(s): Arun Taneja, hyper convergence, Converged Infrastructure, Compute, network, Storage, Server Virtualization, Virtualization, WAN Optimization, WANO, SimpliVity, Nutanix, Scale Computing, VMWare, VSAN, LUN
Profiles/Reports

Unified Storage Array Efficiency: HP 3PAR StoreServ 7400c versus EMC VNX 5600 (TVS)

The IT industry is in the middle of a massive transition toward simplification and efficiency in managing on-premise infrastructure at today’s enterprise data centers. In the past few years there has been a rapid onset of technology clearly focused on simplifying and radically changing the economics of traditional enterprise infrastructure. These technologies include Public/Private Clouds, Converged Infrastructure, and Integrated Systems, to name a few. All of these technologies are geared to provide greater resource efficiency and take less time to administer, all at a reduced TCO. However, they all rely on the efficiency and simplicity of the underlying technologies of Compute, Network, and Storage, and the overall solution is often only as good as the weakest link in the chain. The storage tier of the traditional infrastructure stack is often considered the most complex to manage.

This technology validation focuses on measuring efficiency and management simplicity by comparing two industry-leading mid-range external storage arrays configured for the unified storage use case. Unified storage has been a popular approach to storage subsystems: it consolidates both file access and block access within a single external array, sharing the same precious drive capacity across both protocols simultaneously. Businesses value the ability to send server workloads down a high-performance, low-latency block protocol while still taking advantage of the simplicity and ease of sharing that file protocols offer to various clients. In the past, businesses would either set up a separate file server in front of their block array or buy completely separate NAS devices, potentially overbuying storage resources and adding complexity. Unified storage takes care of this by providing one storage device to manage for all business workload needs. In this study we compared storage efficiency and the ease of managing and monitoring an EMC VNX unified array versus an HP 3PAR StoreServ unified array. Our approach was to set up the two arrays side by side and record the actual complexity of managing each array for file and block access, per the documents and guides provided for each product. We also went through the exercise of sizing various arrays via publicly available configuration guides to see what the expected storage density efficiency would be for some typically configured systems.

Our conclusion was nothing short of astonishing. In the case of the EMC VNX2 technology, the approach to unification more closely resembles hardware packaging with a management veneer than what would be expected of a second-generation unified storage system. HP 3PAR StoreServ, on the other hand, in its second generation of unified storage, has transitioned file protocol services from external controllers to completely converged block and file services within the common array controllers. In addition, all data path and control plumbing is completely internal, with no need to wire loopback cables between controllers. HP has also made the investment to create a totally new management paradigm based on the HP OneView management architecture, which radically simplifies the administrative approach to managing infrastructure. After performing this technology validation we can state with confidence that HP 3PAR StoreServ 7400c is 2X easier to provision, 2X easier to monitor, and up to 2X more data-density efficient than a similarly configured EMC VNX 5600.

Publish date: 12/03/14
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems – the integration of compute, networking, and storage – have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort to custom-build infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads, such as Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI).

Traditionally these converged infrastructure systems have been built using a three-tier architecture, where compute, networking, and storage, integrated at the rack level, gave businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Recently, a more modular approach to convergence has emerged, which we term Hyper-Convergence. With hyper-convergence, the three-tier architecture has been collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage (including advanced data services) all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
news / Blog

SOC, NOC, and Roll - AccelOps Converges Security and Network Ops

We are seeing convergence everywhere in IT these days. AccelOps shows how convergence in systems management offers many of the same kinds of value as it does in other areas of IT - leveraged capabilities across formerly siloed practices, simplified tasks, automation that embeds best practices, and ready-to-roll deployment out of the box. ...

  • Premiered: 06/04/15
  • Author: Mike Matchett
Topic(s): AccelOps, NOC, SOC, Performance, Security, network
news

Integrate cloud tiering with on-premises storage

Cloud and on-premises storage are increasingly becoming integrated so cloud is just another tier available to storage administrators.

  • Premiered: 06/02/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Cloud Storage
Topic(s): Cloud, Storage, Mike Matchett, Public Cloud, elasticity, Hybrid Cloud, Performance, hyperconverged, hyperconvergence, Cloud Storage, EFSS, Amazon EBS, AWS, Amazon Web Services, Amazon, EFS, Elastic Block Storage, Block Storage, Elastic File Store, SoftNAS, API, SDS, software-defined storage, OpenStack, Maxta, Nexenta, Qumulo, Tarmin, WANO, WAN Optimization
news

Cohesity aims to converge all secondary storage

Cohesity rolls out early-access appliance that aims to remove the need for multiple storage products by converging secondary storage workflows.

  • Premiered: 06/19/15
  • Author: Taneja Group
  • Published: TechTarget: Search Data Backup
Topic(s): Cohesity, Storage, convergence, converged, DP, Data protection, Nutanix, hyper-converge, hyper-converged, hyper-convergence, Virtualization, Compute, network, Backup, Archiving, Data Center, NFS, HDFS, Hadoop Distributed File System, replication, DR, Disaster Recovery, SSD, Flash, Snapshots, Arun Taneja
news

Assimilate converged IT infrastructure into the data center

With the hype around converged and hyper-converged IT infrastructure, resistance seems futile. While your IT organization will likely be a mix of converged, dedicated and cloud-based resources, convergence is moving into new areas.

  • Premiered: 03/16/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): convergence, Converged Infrastructure, Datacenter, Storage, converged, IT convergence, converged IT, Cloud, Public Cloud, OPEX, VCE, vBlock, Virtualization, software-defined, hyperconverged infrastructure, HCI, Nutanix, SimpliVity, Data protection, DP, Microsoft, StorSimple, Hybrid Cloud, Archive, Backup, BC, Business Continuity, Disaster Recovery, DR, data lake
news

Making Sense of the Internet of Things with Converged Infrastructure

With its flexibility and scalability, converged infrastructure can be a good solution to the influx of IoT data.

  • Premiered: 03/22/16
  • Author: Taneja Group
  • Published: Windows IT Pro
Topic(s): Internet of Things, IoT, converged, Converged Infrastructure, convergence, IT infrastructure, Servers, Storage, network, flexibility, scalability, Data protection, storage architecture, Hadoop, Apache, Spark, Apache Spark, structured data, Mike Matchett
news

Delving into neural networks and deep learning

Deep learning and neural networks will play a big role in the future of everything from data center management to application development. But are these two technologies actually new?

  • Premiered: 06/16/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): deep learning, datacenter management, Data Center, Datacenter, Big Data, Machine Learning, big data analytics, Artificial Intelligence, Compute, neural networks, Cloud, scale-out, scale-out architecture, High Performance Computing, High Performance, HPC, NVIDIA, Mellanox, DataDirect Networks, 4U appliance, Google, Google AlphaGo, network, Mike Matchett
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding the major problems of HyperConverged “single appliance” adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency, and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky “monoculture” that can result from vendor-specific HCI. The resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimized, tiered design approach as “Server Powered Storage” (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an “Open Convergence” approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible, and more natural adoption path than a forklift HyperConvergence migration. In this report we briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer “open convergence” SPS approach as pioneered by Datrium DVX. In particular, we review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance versus capacity), VM-centric management, enterprise scalability, and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that becomes expensive, isolates resources, and requires a lot of configuration management (all reasons to virtualize servers), it at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth, or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates and flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads, hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load-balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.
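
This “IO blender” effect is easy to demonstrate with a toy example. The sketch below is our own illustration, with invented VM names and LBA ranges: each VM issues nicely sequential IO, but once a hypervisor-style queue interleaves the streams, the shared array sees a scattered, anonymous access pattern.

```python
# Toy illustration of the "IO blender": each VM issues sequential IO, but once the
# hypervisor interleaves the streams, the array sees a near-random access pattern
# with no hint of which VM (or how important a workload) each request came from.
import itertools

def vm_stream(vm_name, start_lba, count):
    """A single VM's workload: purely sequential reads from its own region."""
    return [(vm_name, lba) for lba in range(start_lba, start_lba + count)]

streams = [vm_stream("sql-prod", 0, 4),
           vm_stream("vdi-pool", 1000, 4),
           vm_stream("file-svr", 2000, 4)]

# Round-robin interleaving stands in for the hypervisor's shared IO queue.
blended = [io for batch in itertools.zip_longest(*streams) for io in batch if io]

# What the array actually receives: LBAs only -- VM identity is lost at this layer.
print([lba for _, lba in blended])
# e.g. [0, 1000, 2000, 1, 1001, 2001, 2, 1002, 2002, ...]: sequential per VM,
# but scattered (and anonymous) by the time it hits shared storage.
```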

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services to serve demanding and increasingly larger scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management, which tends to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn’t fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies are far larger than most of those gains. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached or prioritized in flash – it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, virtual clusters mostly host mixed workloads, many of which don’t or won’t benefit from the expense of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
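
A quick back-of-the-envelope calculation shows why placement matters. The latency figures below are illustrative assumptions, not measured values for any product; the point is simply that when flash sits behind a network, the wire dominates the total.

```python
# Back-of-the-envelope effective read latency, using illustrative (not measured) numbers.
flash_device_us = 100        # assumed flash media read latency, microseconds
network_rtt_us  = 500        # assumed SAN round trip (fabric + array controller overhead)

server_side_flash = flash_device_us                    # flash local to the application server
array_side_flash  = flash_device_us + network_rtt_us   # same media, but across the network

print(f"server-side flash: ~{server_side_flash} us")   # ~100 us
print(f"array-side flash:  ~{array_side_flash} us")    # ~600 us: most of the latency is the
                                                        # wire, not the flash you paid for
```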

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash came down in price, especially when added to servers directly, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage “software” is now often packaged as a more cost-effective “software-defined” solution that can be run or converged directly on servers (although we note most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases, software-defined storage runs within virtual machines or containers so that storage services can be hosted on the same servers as compute workloads (e.g., VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e., no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration along with workloads as they move across a cluster (e.g., to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking, and (software-defined) storage are all pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g., VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a “full stack” single-vendor lock-in issue (and increased risk due to lowered infrastructure “diversity”).

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX – especially compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts on software licensing (e.g., if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing “East-West” network traffic, and in some cases unacceptable data availability risks (e.g., when servers lock, crash, or reboot for any reason, an HCI replication/rebuild can be a highly vulnerable window).
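
The licensing point is easiest to see with simple arithmetic. The sketch below uses entirely hypothetical prices and node sizes to show how growing capacity on HCI drags compute and hypervisor licensing along with it, whereas a capacity-only expansion of shared storage does not.

```python
# Hypothetical cost model (all prices invented for illustration): growing capacity on HCI
# means adding whole nodes, so hypervisor/compute licenses scale with capacity whether or
# not more compute is actually needed.
def hci_expansion_cost(extra_capacity_tb, tb_per_node=20,
                       hw_per_node=25_000, licenses_per_node=10_000):
    nodes = -(-extra_capacity_tb // tb_per_node)      # ceiling division: whole nodes only
    return nodes, nodes * (hw_per_node + licenses_per_node)

def capacity_only_expansion_cost(extra_capacity_tb, cost_per_tb=400):
    # Adding shelves/drives to shared storage buys capacity without new compute licenses.
    return extra_capacity_tb * cost_per_tb

nodes, hci_cost = hci_expansion_cost(100)
print(f"HCI: {nodes} nodes, ${hci_cost:,} (includes licenses you may not need)")
print(f"Capacity-only expansion: ${capacity_only_expansion_cost(100):,}")
```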

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO may still need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path – integrated with the server or hypervisor driver stack, or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost and also complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of performance-degrading issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall within a single converged storage solution – one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT’s governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments – especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back end for enterprise-quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a third-party cache layered over a remote SAN.

In the resulting Datrium “open converged” infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open, and unlocked architecture, performance scales naturally with each server added, so storage performance grows along with the applications.
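
For readers unfamiliar with the mechanics, inline deduplication and compression work roughly as sketched below. This is a generic, simplified illustration, not Datrium’s implementation: identical blocks are stored once, keyed by a content hash, and each unique block is compressed before it lands on persistent capacity.

```python
# Generic sketch of inline dedupe + compression (not Datrium's code): identical blocks
# are stored once, keyed by a content hash, and each unique block is compressed before
# landing on persistent capacity.
import hashlib, zlib

class DedupeStore:
    def __init__(self):
        self.chunks = {}     # content hash -> compressed bytes (the only persisted copies)
        self.files = {}      # file name -> ordered list of content hashes

    def write(self, name, data, block=4096):
        hashes = []
        for i in range(0, len(data), block):
            blk = data[i:i + block]
            digest = hashlib.sha256(blk).hexdigest()
            if digest not in self.chunks:              # new content: compress and keep it
                self.chunks[digest] = zlib.compress(blk)
            hashes.append(digest)                      # duplicates just add a reference
        self.files[name] = hashes

    def read(self, name):
        return b"".join(zlib.decompress(self.chunks[h]) for h in self.files[name])

store = DedupeStore()
payload = b"A" * 8192                    # two identical 4 KB blocks
store.write("vm1.vmdk", payload)
store.write("vm2.vmdk", payload)         # fully deduplicated against vm1
assert store.read("vm2.vmdk") == payload
print(f"16 KB of logical data stored as {len(store.chunks)} unique compressed block(s)")
```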

Datrium DVX gets great leverage from a given flash investment by using any “bring-your-own” SSDs, which are far cheaper to add than array-side flash (and can be added to specific servers as needed or desired). In fact, most VMs and workloads won’t ever read from the shared capacity on the network – it is write-optimized, persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

Among IT’s major concerns, all data bits must, at the end of the day, be persisted and fully managed and protected somewhere. Traditional arrays, converged or not, just don’t perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers raises some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (that localize data services for performance).

At the big-picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single-vendor offering that fully integrates distributed capabilities into one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that, before investing in expensive all-flash arrays or taking on a full migration to HCI, an Open Convergence option like Datrium DVX be considered as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and are not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16