Taneja Group | Datrium
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Datrium

news

Datrium DVX storage takes novel approach for VMware, flash

The Datrium DVX storage system for VMware virtual machines is generally available and drawing interest with its server-powered architecture, flash-boosted performance and low cost.

  • Premiered: 02/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Datrium, DVX, Storage, VMware, Flash, flash storage, SSD, VM, Performance, flash performance, VMware vSphere, vSphere, SAN, Virtual Machine, LUN, Hypervisor, Deduplication, Compression, RAM, SAS, Capacity, virtual desktop, Virtual Desktop Infrastructure, VDI, vCenter, CPU, ESX, all-flash, all flash array, AFA
news

Server Powered Storage: Intelligent Storage Arrays Gain Server Superpowers

At Taneja Group we are seeing a major trend within IT to leverage server and server-side resources to the maximum extent possible.

  • Premiered: 05/05/16
  • Author: Mike Matchett
  • Published: InfoStor
Topic(s): Storage, Virtualization, Cloud, SAP, VMware, NSX, CPU, hyperconverged, hyperconvergence, software-defined, Flash, SSD, SimpliVity, Gridstore, Nutanix, Scale Computing, TCO, ROBO, mobile, Riverbed, SteelFusion, Hybrid, Big Data, Internet of Things, IoT, Deduplication, scale-out, Infinio, Pernix, SanDisk
Resources

Emerging Technologies in Storage: Disaggregation of Storage Function

Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters because the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates collocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server's flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions and then intelligently host those components in different layers of the infrastructure. With the help of a lively panel of experts, we will unpack this topic and explore how this approach to intelligently distributing storage functions can bring about better customer business outcomes.

Moderator:
Jeff Kato, Senior Analyst & Consultant, Taneja Group

Panelists:
Brian Biles, Founder & CEO, Datrium
Kate Davis, Senior Marketing Manager, HPE
Nutanix

  • Premiered: 05/19/16
  • Location: OnDemand
  • Speaker(s): Jeff Kato, Taneja Group; Brian Biles, Datrium; Kate Davis, HPE; Nutanix
Topic(s): Jeff Kato, Datrium, HPE, Nutanix, Storage, hyper scale, hyper-scale, hyperscale, software-defined, software-defined storage, SDS, software defined, Software Defined Storage
news / Blog

Server Side Is Where It's At - Leveraging Server Resources For Performance

If you want performance, especially in IO, you have to bring it to where the compute is happening. We've recently seen Datrium launch a smart "split" array solution in which the speedy (and compute-intensive) bits of the logical array are hosted server-side, with persisted data served from a shared, simplified controller and (almost-JBOD) disk shelf. Now Infinio has announced version 3.0 of its caching solution this week, adding tiered cache support for server-side SSDs and other flash to its historically memory-focused IO acceleration.
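The tiering idea itself is straightforward: serve hot blocks from RAM, spill warm blocks to server-side SSD, and fall back to the backing store only on a double miss. Below is a minimal, hypothetical sketch of that pattern in Python; it is illustrative only and does not reflect Infinio's or Datrium's actual implementations (class and parameter names are made up for the example).

```python
from collections import OrderedDict

class TieredCache:
    """Illustrative two-tier (RAM + SSD) read cache with LRU eviction per tier."""

    def __init__(self, ram_entries, ssd_entries, backend):
        self.ram = OrderedDict()            # fastest tier (server memory)
        self.ssd = OrderedDict()            # second tier (server-side flash)
        self.ram_entries = ram_entries
        self.ssd_entries = ssd_entries
        self.backend = backend              # dict-like stand-in for shared/persistent storage

    def read(self, block_id):
        if block_id in self.ram:                        # RAM hit
            self.ram.move_to_end(block_id)
            return self.ram[block_id]
        if block_id in self.ssd:                        # SSD hit: promote to RAM
            data = self.ssd.pop(block_id)
            self._put_ram(block_id, data)
            return data
        data = self.backend[block_id]                   # double miss: go to the backend
        self._put_ram(block_id, data)
        return data

    def _put_ram(self, block_id, data):
        self.ram[block_id] = data
        self.ram.move_to_end(block_id)
        if len(self.ram) > self.ram_entries:            # evict coldest RAM block down to SSD
            old_id, old_data = self.ram.popitem(last=False)
            self.ssd[old_id] = old_data
            if len(self.ssd) > self.ssd_entries:        # SSD full: drop the coldest block
                self.ssd.popitem(last=False)
```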

  • Premiered: 06/14/16
  • Author: Mike Matchett
Topic(s): Datrium, Infinio, VMware, VSAN, Flash, SSD, Caching
news

Disaggregation marks an evolution in hyper-convergence

Hyper-convergence vendors are pushing forward with products that will offer disaggregation, the latest entry into the data center paradigm.

  • Premiered: 08/03/16
  • Author: Arun Taneja
  • Published: TechTarget: Search Storage
Topic(s): disaggregated storage, Storage, disaggregation, hyperconverged, hyperconvergence, hyper-converged, hyper-convergence, Datacenter, Data Center, converged, convergence, Moore's Law, Hypervisor, software-defined, DataCore, EMC, ScaleIO, HPE, StoreVirtual, StoreVirtual VSA, VSA, VMware, VMware VSAN, Virtual SAN, SAN, Virtualization, cluster, Nutanix, SimpliVity, VxRack
news

When data storage infrastructure really has a brain

Big data analysis and the internet of things are helping produce more intelligent storage infrastructure.

  • Premiered: 09/06/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Big Data, big data analytics, Internet of Things, IoT, storage infrastructure, Storage, Intelligent Storage, CPU, software-defined, software-defined storage, SDS, HPE, StoreVirtual, hyper-converged, hyper-converged architectures, HyperGrid, Nutanix, Pivot3, SimpliVity, Optimization, Datrium, Provisioning, Artificial Intelligence, Cloud, elastic cloud, data processing, Python, Spark, API, REST API
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in the CPU power found in newer generations of servers (supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding the major problems of HyperConverged "single appliance" adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky "monoculture" that can result from vendor-specific HCI. The resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimized tiered design approach as "Server Powered Storage" (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an "Open Convergence" approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible and more natural adoption path than a forklift HyperConvergence migration. In this report we briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer "open convergence" SPS approach as pioneered by Datrium DVX. In particular, we review how Datrium offers benefits including elastic performance, greater efficiency (with independent scaling of performance versus capacity), VM-centric management, enterprise scalability and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


Data Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While that approach becomes expensive at scale, isolates resources and requires a lot of configuration management (all reasons to virtualize servers), it at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and "mix up" where IO actually originates and flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads, hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services for demanding and increasingly large-scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management, which tends to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn't fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latency often outweighs much of that gain. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached or prioritized (in flash); it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don't or won't benefit from the expense of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
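A rough back-of-the-envelope calculation shows why the latency point matters. The figures below are hypothetical assumptions chosen only to illustrate the ratio between local and remote flash access; they are not measurements of any particular array, fabric or server.

```python
# Hypothetical latencies in microseconds -- illustrative assumptions, not benchmarks.
local_flash_read_us = 100          # read served from flash inside the application server
network_round_trip_us = 400        # SAN fabric round trip plus array controller overhead
array_flash_read_us = 100          # the same class of flash media, but behind the array

remote_total_us = network_round_trip_us + array_flash_read_us
print(f"local server-side flash : {local_flash_read_us} us")
print(f"array-side flash via SAN: {remote_total_us} us "
      f"({remote_total_us / local_flash_read_us:.0f}x the local latency)")
```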

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash came down in price, especially when added to servers directly, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage "software" is now often packaged as a more cost-effective "software-defined" solution that can be run or converged directly on servers (although we note most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers such that storage services can be hosted on the same servers as compute workloads (e.g. VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e. no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g. to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage are all pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a "full stack" single-vendor lock-in issue (and increased risk due to lowered infrastructure "diversity").

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX, especially as compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing "East-West" network traffic, and in some cases unacceptable data availability risks (e.g. when servers lock, crash or reboot for any reason, an HCI replication/rebuild can open a highly vulnerable window). A worked example of the licensing effect follows below.
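To make the licensing point concrete, consider a hypothetical expansion that adds nodes purely for capacity. All prices and counts below are made-up assumptions for illustration, not vendor list prices; the point is only that per-node software licensing scales with capacity growth in HCI, whereas a disaggregated design can grow capacity without new compute licenses.

```python
# Hypothetical illustration of the HCI "pay for compute to add capacity" effect.
per_node_hw = 20_000           # assumed cost of a commodity server with drives
per_node_sw_license = 10_000   # assumed per-node hypervisor + HCI software license

extra_capacity_nodes = 4       # nodes added purely to grow capacity

hci_cost = extra_capacity_nodes * (per_node_hw + per_node_sw_license)
disaggregated_cost = extra_capacity_nodes * per_node_hw  # capacity grows without new node licenses

print(f"HCI capacity expansion:        ${hci_cost:,}")
print(f"Disaggregated capacity growth: ${disaggregated_cost:,}")
```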

Open Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO still may need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path, integrated with the server or hypervisor driver stack or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost and complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of ultimately degrading performance issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall into a single converged storage solution: one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT's governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments, one that enables scalability in multiple directions at different times and delivers extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back end for enterprise-quality data protection, yet all the compute-intensive, performance-impacting functionality is "pushed" up into each server to provide local, accelerated IO. Because Datrium's design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a third-party cache layered over a remote SAN.

In the resulting Datrium "open converged" infrastructure stack, all IO is deduped, compressed and locally served server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales with each server added, so storage performance grows naturally with application growth.
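As a way to picture the data path described above, here is a minimal, hypothetical sketch of a server-side write pipeline that fingerprints, deduplicates and compresses blocks locally before persisting unique data to shared capacity. It illustrates the general technique only and is not Datrium's actual DVX implementation; class names and the dict-based "shared pool" are stand-ins invented for the example.

```python
import hashlib
import zlib

class ServerSideWritePath:
    """Illustrative only: dedupe + compress on the application server,
    persist unique compressed blocks to a shared capacity pool."""

    def __init__(self, shared_pool):
        self.local_cache = {}           # server-side flash/RAM: fingerprint -> raw block
        self.shared_pool = shared_pool  # dict-like stand-in for the shared capacity node

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()       # content fingerprint
        if fp not in self.local_cache:
            self.local_cache[fp] = block             # keep a local copy for fast reads
        if fp not in self.shared_pool:               # only unique data crosses the network
            self.shared_pool[fp] = zlib.compress(block)
        return fp                                    # VM metadata references the fingerprint

    def read(self, fp: str) -> bytes:
        if fp in self.local_cache:                   # most reads are served locally
            return self.local_cache[fp]
        block = zlib.decompress(self.shared_pool[fp])
        self.local_cache[fp] = block
        return block
```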

Datrium DVX gets great leverage from a given flash investment by using any "bring-your-own" SSDs, which are far cheaper to add than array-side flash (and can be added to specific servers as needed or desired). In fact, most VMs and workloads won't ever read from the shared capacity on the network; it is write-optimized for persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

As just one of IT's major concerns, all data bits must be persisted, fully managed and protected somewhere at the end of the day. Traditional arrays, converged or not, just don't perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers invokes some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (which localize data services for performance).

At the big-picture level, Server Powered Storage can be seen as similar to (although more cost-effective and performant than) a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single-vendor solution that fully integrates distributed capabilities in one deployable product will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that before investing in expensive all-flash arrays, or taking on a full migration to HCI, an Open Convergence option like Datrium DVX be considered as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16
news

Datrium Introduces Industry-First Blanket Encryption for Private Clouds

Datrium, the leading provider of Open Convergence for cloud builders, today announced Datrium Blanket Encryption, an industry-first software product that combines always-on efficient deduplication and compression technology with high-speed, end-to-end encryption: in use at the host, in flight across the network and at rest on persistent storage.
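Combining data reduction with end-to-end encryption hinges on ordering: data must be fingerprinted, deduplicated and compressed before it is encrypted, because well-encrypted data is effectively incompressible and no longer deduplicates. The sketch below shows that ordering in the abstract; it is a hypothetical illustration of the general idea, not Datrium's actual Blanket Encryption implementation, and it assumes the third-party 'cryptography' Python package is available.

```python
import hashlib
import zlib
from cryptography.fernet import Fernet   # assumes the 'cryptography' package is installed

key = Fernet.generate_key()
cipher = Fernet(key)
store = {}   # stand-in for persistent storage: fingerprint -> encrypted, compressed block

def ingest(block: bytes) -> str:
    """Reduce first, then encrypt: fingerprint -> dedupe -> compress -> encrypt -> persist."""
    fp = hashlib.sha256(block).hexdigest()
    if fp not in store:                       # dedupe on the plaintext fingerprint
        store[fp] = cipher.encrypt(zlib.compress(block))
    return fp

def restore(fp: str) -> bytes:
    return zlib.decompress(cipher.decrypt(store[fp]))

blocks = [b"hello world" * 100, b"hello world" * 100, b"other data" * 100]
refs = [ingest(b) for b in blocks]
assert restore(refs[0]) == blocks[0]
print(f"{len(blocks)} logical blocks stored as {len(store)} unique encrypted blocks")
```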

  • Premiered: 02/28/17
  • Author: Taneja Group
  • Published: PR Newswire
Topic(s): Datrium, convergence, Data reduction, Efficiency, Security, Deduplication, Cloud, Compression, Storage, hyperconverged, Hypervisor, SSD, flash storage, RAM, cloud infrastructure, Private Cloud, Encryption, Arun Taneja
news

Datrium Blanket Encryption goes end-to-end to protect data

Datrium's Blanket Encryption software encrypts deduplicated and compressed data from servers to storage, but doesn't yet support third-party key management products.

  • Premiered: 02/28/17
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Datrium, Deduplication, Compression, Storage, Virtual Machine, VM, DVX, VMware, Flash, SSD, RAM, Mike Matchett, Encryption, Backup, Cloud Storage, Cloud, VSAN, VMware VSAN, Hypervisor, API
news

Scale-out software-defined storage market menaces traditional storage

Scale-out software-defined storage is on the rise to the detriment and decline of traditional storage products and arrays.

  • Premiered: 04/05/17
  • Author: Jeff Kato
  • Published: TechTarget: Search Storage
Topic(s): scale-out, software-defined, software-defined storage, SDS, Storage, Cloud, Public Cloud, scale-out SDS, Dell EMC, VNX, HPE, NetApp, FAS, SSD, Flash, RDMA, latency, CAPEX, HCI, hyperconverged, hyperconvergence, hyperconverged infrastructure, Virtualization, software-defined virtualization, OmniStack, data virtualization, Nutanix, Microsoft Azure, VMware VSAN, VSAN