Taneja Group | CAPEX
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: CAPEX

Profiles/Reports

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of reliability, at scale. That's a goal of less than three seconds of anticipated unavailability per year, better than the availability most data centers themselves can deliver.
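A quick back-of-the-envelope check of that three-second figure (a minimal Python sketch, purely illustrative; the function name is ours and a 365.25-day year is assumed):

    # Rough downtime budget for "N nines" of availability (illustrative only)
    def downtime_seconds_per_year(nines: int) -> float:
        availability = 1 - 10 ** (-nines)
        return (1 - availability) * 365.25 * 24 * 3600

    print(downtime_seconds_per_year(5))  # ~315.6 s, roughly 5.3 minutes (the usual 5-9's)
    print(downtime_seconds_per_year(7))  # ~3.16 s, on the order of the three-second goal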

Data availability is the attribute enterprises need most to maximize the value of their storage, especially as data volumes grow to ever-larger scales. Yet traditional enterprise storage solutions aren't keeping pace with the growing need for more than the oft-touted 5-9's of storage reliability, instead deferring to layered-on methods like additional replication copies, which drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to fall down due to longer and longer disk rebuild times that create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or at best, default) array designs, software issues, and well-intentioned but often fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.
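To make the rebuild-window problem concrete, here is a rough, illustrative estimate (the drive sizes and sustained rebuild rate are assumed numbers, not measurements of any particular array); real rebuilds on busy arrays typically run slower still:

    # Rough single-drive rebuild window estimate (assumed, illustrative numbers)
    def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
        total_bytes = capacity_tb * 1e12
        return total_bytes / (rebuild_mb_per_s * 1e6) / 3600

    print(rebuild_hours(4, 100))   # ~11 hours for a 4 TB drive at a full 100 MB/s
    print(rebuild_hours(10, 100))  # ~28 hours for a 10 TB drive at the same rate

The larger the drive, the longer the array runs in a degraded, vulnerable state before protection is restored.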

In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.

Publish date: 09/29/15
news

Flash storage market remains a tsunami

The flash storage market is poised for rapid growth into enterprise data centers as costs drop and solid-state drive density and capacity expand.

  • Premiered: 10/02/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Solid State Storage
Topic(s): SSD, Flash, Storage, Mike Matchett, Enterprise, Data Center, enterprise data, solid state, Capacity, Storage Acceleration, Storage Acceleration and Performance, Performance, Storage Performance, all-flash, Hybrid, flash hybrid, hybrid storage, IOPS, HDD, QoS, OPEX, CAPEX
news

Can your cluster management tools pass muster?

The right designs and cluster management tools ensure your clusters don't become a cluster, er, failure.

  • Premiered: 11/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): cluster, Cluster Management, Cluster Server, Storage, Cloud, Public Cloud, Private Cloud, Virtual Infrastructure, Virtualization, hyperconvergence, hyper-convergence, software-defined, software-defined storage, SDS, Big Data, scale-up, CAPEX, IT infrastructure, OPEX, Hypervisor, Migration, QoS, Virtual Machine, VM, VMWare, VMware VVOLs, VVOLs, Virtual Volumes, cloud infrastructure, OpenStack
news

Hyperconvergence for ROBOs and the Datacenter

Remote/branch office management is more important, and complicated, than ever. Here are some survival tips.

  • Premiered: 12/17/15
  • Author: Mike Matchett
  • Published: Virtualization Review
Topic(s): ROBO, hyperconvergence, hyperconverged, converged, convergence, Virtualization, Data Center, Datacenter, Compute, cluster, clusters, SAN, Storage, WANO, WAN Optimization, Cloud, cloud gateway, Backup, Scale, VCE, Dell, HP, IBM, SimpliVity, Nutanix, Pivot3, Scale Computing, CAPEX, OPEX, TCO
Profiles/Reports

Array Efficient, VM-Centric Data Protection: HPE Data Protector and 3PAR StoreServ

One of the biggest storage trends we are seeing in our current research here at Taneja Group is that of storage buyers (and operators) looking for more functionality – and at the same time increased simplicity – from their storage infrastructure. For this and many other reasons, including TCO (both CAPEX and OPEX) and improved service delivery, functional “convergence” is currently a big IT theme. In storage we see IT folks wanting to eliminate excessive layers in their complex stacks of hardware and software that were historically needed to accomplish common tasks. Perhaps the biggest, most critical, and unfortunately onerous and unnecessarily complex task that enterprise storage folks have had to face is that of backup and recovery. As a key trusted vendor of both data protection and storage solutions, we note that HPE continues to invest in producing better solutions in this space.

HPE has diligently been working towards integrating data protection functionality natively within their enterprise storage solutions starting with the highly capable tier-1 3PAR StoreServ arrays. This isn’t to say that the storage array now turns into a single autonomous unit, becoming a chokepoint or critical point of failure, but rather that it becomes capable of directly providing key data services to downstream storage clients while being directed and optimized by intelligent management (which often has a system-wide or larger perspective). This approach removes excess layers of 3rd party products and the inefficient indirect data flows traditionally needed to provide, assure, and then accelerate comprehensive data protection schemes. Ultimately this evolution creates a type of “software-defined data protection” in which the controlling backup and recovery software, in this case HPE’s industry-leading Data Protector, directly manages application-centric array-efficient snapshots.

In this report we examine this disruptively simple approach and how HPE extends it to the virtual environment – converging backup capabilities between Data Protector and 3PAR StoreServ to provide hardware-assisted, agentless backup and recovery for virtual machines. With HPE's approach of offloading VM-centric snapshots to the array while continuing to rely on the hypervisor to coordinate the physical resources of virtual machines, virtualized organizations gain on many fronts, including greater backup efficiency, reduced OPEX, greater data protection coverage, immediate and fine-grained recovery, and ultimately a more resilient enterprise. We'll also look at why HPE is in a unique position to offer this kind of "converging" market leadership, with a complete end-to-end solution stack including innovative research and development, sales, support, and professional services.

Publish date: 12/21/15
news

What's the future of data storage in 2016?

Mike Matchett takes a closer look at the future of data storage technology in 2016 based on research from the Taneja Group.

  • Premiered: 01/06/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Storage, Data Storage, software-defined, Flash, SSD, Performance, Density, all-flash, all flash array, AFA, Hybrid, Hybrid Array, hybrid storage, OPEX, Auto-Tiering, Optimization, Capacity, CAPEX, QoS, EMC, Dell, HPE, NetApp, IBM, NAS, 3PAR, StoreOnce, data protector, Oracle, ZDLRA
Profiles/Reports

The Mainstream Adoption of All-Flash Storage: HPE Customers Show How Everyone Can Leverage Flash

Flash storage offers higher performance, lower power consumption, decreased footprint and increased reliability over spinning media. It would be the rare IT shop today that doesn’t have some flash acceleration deployed in performance hot spots. But many IT folks are still on the sidelines watching and waiting for the right time to jump into a bigger adoption of flash-based shared storage.

When will flash costs (CAPEX) drop to make it affordable to switch – and for which workloads does all-flash make sense? How much better does it need to be to overcome the pain and cost (OPEX) of adopting and migrating to a whole new storage solution? How much more complex and costly is it to run a mixed storage environment with some all-flash, some tiered, and some capacity arrays?
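One common way to frame that CAPEX question is to compare effective cost per usable gigabyte once data reduction is factored in; a minimal sketch with assumed prices and reduction ratios (illustrative only, not vendor pricing):

    # Effective $/GB after data reduction (all numbers are assumed and illustrative)
    def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
        return raw_cost_per_gb / reduction_ratio

    # All-flash with 4:1 dedupe/compression vs. capacity disk with little reduction
    print(effective_cost_per_gb(0.50, 4.0))  # 0.125
    print(effective_cost_per_gb(0.10, 1.2))  # ~0.083

On numbers like these the gap between flash and spinning disk narrows quickly, which is why data reduction figures so heavily in all-flash purchase decisions.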

In this insightful field report we’ve interviewed a half dozen real-world IT storage groups who faced those challenges, and with HPE 3PAR StoreServ have been able to easily transition important performance-sensitive workloads to all-flash storage. By staying within the 3PAR StoreServ family for their larger storage needs, they’ve been able to steer a clear, cost-effective and rewarding course to flash performance.

In each interview, we explore how they executed their transition from HDD to hybrid to all-flash under real-world IT initiatives that included consolidation, data center transformation, and performance acceleration. We'll learn about the particular business value each is successfully producing, and we will present some recommendations for others making all-flash storage decisions.

Publish date: 02/24/16
news

Hybrid cloud implementation preparation checklist

Get a complete listing of the resources, use cases and requirements that need to be taken into account before starting a hybrid cloud deployment project.

  • Premiered: 02/29/16
  • Author: Jeff Byrne
  • Published: TechTarget: Search Cloud Storage
Topic(s): Hybrid Cloud, Cloud Storage, Hybrid Cloud Storage, Storage, Disaster Recovery, Disaster Recovery as a Service, DRaaS, SLA, QoS, Storage Performance, Availability, Security, VPN, IaaS, CAPEX, SDS, software-defined storage, Storage Management
news

Evaluating hyper-converged architectures: Five key CIO considerations

Simplicity and cost savings are among the benefits of hyper-converged architectures, but the biggest draw for CIOs may be how these systems make IT teams more business-ready.

  • Premiered: 04/27/16
  • Author: Mike Matchett
  • Published: TechTarget: Search CIO
Topic(s): CIO, simplicity, hyper-converged, hyperconverged, hyperconvergence, Storage, CAPEX, Data protection, DP, Cloud, Business Continuity, BC, Disaster Recovery, DR, Hypervisor, software-defined, Networking, converged, convergence, VM, Virtual Machine, Virtualization, VDI, virtual desktop, Virtual Desktop Infrastructure, Datacenter, scale-out, OPEX, Provisioning, WANO
news

Hyper-convergence infrastructure: CIOs mull hardware savings

Hyper-converged infrastructure may or may not cut hardware costs, but some CIOs and other industry executives suggest an emphasis on sticker price misses the point.

  • Premiered: 04/28/16
  • Author: Taneja Group
  • Published: TechTarget: Search CIO
Topic(s): HCI, hyper-converged, hyperconverged infrastructure, hyperconverged, Storage, CIO, Jeff Kato, Datacenter, Nutanix, CAPEX
Profiles/Reports

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings resulting from workload consolidation.  However, as an increasing number of apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, aggregate IO demands put tremendous pressure on shared storage, server and networking resources, with the strain further exacerbated by the IO blender effect, in which IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity—e.g. by lowering data and transactional throughput and increasing application response time—and may prevent you from meeting performance requirements for your business-critical applications.

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server solutions that combine intelligent caching with high-performance PCIe memory, which are tightly integrated with the virtualization platform, and enable sharing of storage across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization enables a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.

Publish date: 06/14/16
news

Enterprise SSDs: The Case for All-Flash Data Centers

Adding small amounts of flash as cache or dedicated storage is certainly a good way to accelerate a key application or two, but enterprises are increasingly adopting shared all-flash arrays to increase performance for every primary workload in the data center.

  • Premiered: 06/23/16
  • Author: Mike Matchett
  • Published: Enterprise Storage Forum
Topic(s): Flash, SSD, Mike Matchett, Storage, AFA, all-flash, all flash array, ROI, HDD, IOPS, IO performance, flash storage, Datacenter, Data Center, HPE, NetApp, Capacity, simplicity, CAPEX, scalability, scalable, OPEX, resiliency, VDI, Dedupe, Deduplication, Pure Storage, Kaminario, HPE 3PAR, 3PAR
news

Spark speeds up adoption of big data clusters and clouds

Infrastructure that supports big data comes from both the cloud and clusters. Enterprises can mix and match these seven infrastructure choices to meet their needs.

  • Premiered: 07/19/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): Apache Spark, Spark, Mike Matchett, Cloud, cloud cluster, cluster, Big Data, big data analytics, MapReduce, Business Intelligence, BI, MLlib, High Performance, hadoop cluster, HDFS, Hadoop Distributed File System, IBM, Hortonworks, Cloudera, capacity management, Performance Management, API, SAN, storage area networks, CAPEX, DataDirect Networks, HPC, Lustre, Virtualization, VM
Profiles/Reports

Converging Hyperconvergence with Cloud: HyperGrid Rolls Out the Future of IT Today

A recent Taneja Group survey on IT infrastructure shows that hyperconvergence is quickly becoming the preferred datacenter architecture for traditionally oriented datacenters. Today well over half of IT decision-makers want to transition off legacy silo stacks of servers, storage, networking and complicated layers of integration protocols into more seamless, ideally cloud-like pools of easily and dynamically composable resources.

Furthermore, these IT organizations are discovering that in the transition to an on-premise modular, plug-and-play infrastructure they can also readily take advantage of hybrid cloud options and benefits. In fact, looking at it from the cloud side, HCI architectures are also attractive to many kinds of service providers who themselves desire a scalable, low OPEX infrastructure. 

The business and IT benefits of both hyperconvergence and hybrid cloud are undeniable. Who wouldn't want better, faster, and cheaper? The IT dream for years has been to host data and applications on-site as required, but when desired, transparently leverage cloud services – for cost optimization, bursting, DR, global access, mobile and web app support, and more. Here at Taneja Group we've been looking for emerging solutions that demonstrate the next evolution in IT architectures by converging HyperConverged infrastructure with hybrid cloud operations.

Enter HyperGrid, a re-born Gridstore, delivering on just that vision.

Publish date: 08/31/16
news

IT management as a service is coming to a data center near you

IT management as a service uses big data analytics and vendors' expertise to ease the IT administration and optimization process. IT orgs must trust the flow of log and related data into an offsite, multi-tenant cloud.

  • Premiered: 10/18/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): Data Center, Mike Matchett, Big Data, big data analytics, Multi-tenancy, Storage, Cloud, CAPEX, OPEX, Virtualization, Cloud Computing, hyperconverged, Hybrid Cloud, Public Cloud, cluster, cloud cluster, Performance Management, IaaS
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now, we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding major problems with HyperConverged "single appliance" adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky “monoculture” that can result from vendor-specific HCI. Their resulting design runs compute-intensive IO tasks scaled-out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient, persistent shared storage capacity. We have come to refer to this optimizing tiered design approach as “Server Powered Storage” (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an "Open Convergence" approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible and more natural adoption path than a fork-lift HyperConvergence migration. In this report we will briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer "open convergence" SPS approach as pioneered by Datrium DVX. In particular, we'll review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance vs. capacity), VM-centric management, enterprise scalability and mixed workload support while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that becomes expensive, isolating resources and requiring a lot of configuration management (all reasons to virtualize servers), this at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates or flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let's briefly review the three current trends in virtualization infrastructure used to ramp up data services for demanding and increasingly large-scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is in a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn't fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies often outweigh much of that gain. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached/prioritized (in flash) – it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don't or won't benefit from the expensive cost of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
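The latency point is easy to see with rough, order-of-magnitude numbers (assumed figures, not benchmarks of any specific product): if the flash media read itself takes around 0.1 ms, a 0.5 ms fabric-plus-controller round trip means most of what the VM experiences is network, not media.

    # Order-of-magnitude comparison of server-side vs. array-side flash reads
    flash_read_ms = 0.1       # assumed media read latency
    san_overhead_ms = 0.5     # assumed network + array controller round trip
    local_ms = flash_read_ms
    remote_ms = flash_read_ms + san_overhead_ms
    print(f"local {local_ms:.1f} ms vs remote {remote_ms:.1f} ms "
          f"({remote_ms / local_ms:.0f}x higher)")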

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash, especially added to servers directly, came down in price, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be actually built on plain servers running custom storage operating systems. The resulting storage “software” often now is packaged as a more cost-effective “software-defined” solution that can be run or converged directly on servers (although we note most IT shops  prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers such that storage services can be hosted on the same servers as compute workloads (e.g. VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e. no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g. to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage all are pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least among them is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a “full stack” single vendor lock-in issue (and increased risk due to lowered infrastructure “diversity”).

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX – especially as compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing “East-West” network traffic, and in some cases unacceptable data availability risks (e.g. when servers lock/crash/reboot for any reason, an HCI replication/rebuild can be a highly vulnerable window).

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO still may need to transit a network, incurring a latency penalty. To help, there are several third-party vendors of IO caching that can be layered into the IO path – integrated with the server or hypervisor driver stack or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost, and also complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of performance issues that ultimately degrade service.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall into a single converged storage solution – one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT's governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments – especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back-end for enterprise quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a 3rd party cache layered over a remote SAN.

In the resulting Datrium "open converged" infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales with each server added, so storage performance grows naturally alongside the applications.
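As a generic illustration of the inline, fingerprint-based deduplication and compression described above (a sketch of the general technique only, not Datrium's actual code; the 4 KB block size and SHA-256 fingerprint are our assumptions), each write is chunked, hashed, and only previously unseen blocks are compressed and persisted to the shared capacity tier:

    import hashlib
    import zlib

    BLOCK = 4096   # assumed fixed block size
    store = {}     # fingerprint -> compressed block; stands in for shared capacity

    def write(data: bytes) -> int:
        """Dedupe and compress incoming data; return bytes actually persisted."""
        persisted = 0
        for i in range(0, len(data), BLOCK):
            chunk = data[i:i + BLOCK]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in store:              # only new blocks consume back-end capacity
                store[fp] = zlib.compress(chunk)
                persisted += len(store[fp])
        return persisted

    # Four identical 4 KB blocks: only one small compressed block actually lands on disk
    print(write(b"A" * 16384))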

Datrium DVX gets great leverage from a given flash investment by using any "bring-your-own" SSDs, far cheaper to add than array-side flash (and they can be added to specific servers as needed or desired). In fact, most VMs and workloads won't ever read from the shared capacity on the network – it is write-optimized, persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

At the end of the day, all data bits must be persisted, fully managed, and protected somewhere – just one of IT's major concerns. Traditional arrays, converged or not, just don't perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers raises some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (that localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that before investing in expensive all-flash arrays, or taking on a full migration to HCI, one consider an Open Convergence option like Datrium DVX as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16
news

Scale-out software-defined storage market menaces traditional storage

Scale-out software-defined storage is on the rise to the detriment and decline of traditional storage products and arrays.

  • Premiered: 04/05/17
  • Author: Jeff Kato
  • Published: TechTarget: Search Storage
Topic(s): scale-out, software-defined, software-defined storage, SDS, Storage, Cloud, Public Cloud, scale-out SDS, Dell EMC, VNX, HPE, NetApp, FAS, SSD, Flash, RDMA, latency, CAPEX, HCI, hyperconverged, hyperconvergence, hyperconverged infrastructure, Virtualization, software-defined virtualization, OmniStack, data virtualization, Nutanix, Microsoft Azure, VMware VSAN, VSAN
news

Hyper-converged infrastructure products give data centers a boost

Hyper-converged products take your data center and network to new heights by leveraging industry-first infrastructure innovations to improve storage and more.

  • Premiered: 06/07/17
  • Author: Arun Taneja
  • Published: TechTarget: Search Converged Infrastructure
Topic(s): Arun Taneja, hyperconverged infrastructure, HCI, hyperconverged, hyperconvergence, Data Center, Storage, VCE, converged, convergence, Dell EMC, NetApp, FAS, VNX, HPE, Nutanix, SimpliVity, Hedvig, Amazon Web Services, AWS, Microsoft Azure, Microsoft, SwiftStack, Cloud, Public Cloud, Cisco, HDS, Hitachi Data Systems, IBM, Compute
news

Disaggregating network, compute and storage allocation demystified

Explore the ways disaggregation concepts and principles are being applied to create and allocate pools of compute and storage resources to serve applications on demand.

  • Premiered: 08/02/17
  • Author: Arun Taneja
  • Published: Storage Magazine
Topic(s): disaggregated storage, Storage, Compute, Arun Taneja, hyperconverged, HCI, hyperconverged infrastructure, Spark, Hadoop, DriveScale, HPE, Synergy, Nutanix, Pivot3, CAPEX, OPEX, Datrium, Snapshot, Snapshots, Compression, Deduplication, replication, Encryption, RAM, NVRAM, DVX, Big Data, big data cluster, cluster, flexibility