Taneja Group | CPU
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: CPU

news

Turn to in-memory processing when performance matters

In-memory processing can improve data mining and analysis, and other dynamic data processing uses. When considering in-memory, however, look out for data protection, cost and bottlenecks.

  • Premiered: 05/21/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Memory, Storage, Performance, Database, RAM, CPU, Amazon Web Services, AWS, HP, Dell, Oracle, Hadoop, Microsoft, SQL Server, DRAM, MongoDB, Tokutek
news

SSD controllers may run your applications someday

It's time for enterprise applications and storage to work more closely together, even to the point where SSDs become a pool of computing power, according to Samsung Semiconductor.

  • Premiered: 08/06/14
  • Author: Taneja Group
  • Published: ComputerWorld
Topic(s): SSD, Flash, SSD Controller, CPU, Samsung, Storage, Performance, latency, Arun Taneja
news / Blog

Pivot3 is Emerging from the HyperConverged Shadows

Pivot3 recently announced a partnership deal with Lenovo that will enable a broader adoption of Pivot3’s HyperConverged Infrastructure (HCI) solutions to Lenovo customers. The combined solution will be sold as the “Hyper-Converged ONE” appliance and will be distributed by Arrow ECS.

  • Premiered: 07/20/15
  • Author: Jeff Kato
Topic(s): Pivot3, HCI, hyper convergence, hyperconverged, SAN, Storage, Lenovo, CPU
news

SimpliVity OmniCube sets sights on remote office storage

The SimpliVity OmniCube hyper-converged family gets a little brother aimed at remote offices, along with file-level restores for more granular backups.

  • Premiered: 08/26/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): SimpliVity, OmniCube, Arun Taneja, hyper-converge, hyper-converged, hyper-convergence, hyperconverged, hyperconvergence, Remote Office, Storage, ROBO, Data protection, DP, Virtualization, Backup, Compute, Networking, Virtual Machine, VM, CPU, Capacity, VMWare, KVM, Hypervisor, Microsoft, Hyper-V, all-flash, All Flash, SSD, SQL
news

Memristor technology brings about an analog revolution

Are we ready for memristor-based artificially intelligent infrastructure in the enterprise data center?

  • Premiered: 09/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Memristor, Data Center, Machine Learning, Big Data, Storage, data stream, Performance, Capacity, Compute, convergence, data processing, CPU, NAND, NVRAM, Flash, SSD, HP, Knowm, Neural Processing Unit
news

VMware vSphere 6 release good news for storage admins

VMware's vSphere 6 release shows that the vendor is aiming for a completely software-defined data center with a fully virtualized infrastructure.

  • Premiered: 10/05/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VMware vSphere, vSphere, vSphere 6, software-defined, Software-Defined Data Center, SDDC, Virtualization, virtualized infrastructure, VSAN, VVOLs, VMware VVOLs, Virtual Volumes, VMotion, high availability, Security, scalability, Data protection, replication, VMware PEX, Fault Tolerance, Virtual Machine, VM, Provisioning, Storage Management, SLA, 3D Flash, FT, vCPU, CPU
news

Startup InterModal Data targets Web-scale storage

Startup InterModal Data introduced its Web-scale storage software that runs on commodity hardware, and is designed to scale to thousands of storage and performance nodes.

  • Premiered: 12/08/15
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): InterModal Data, web-scale storage, web-scale, Storage, Storage Performance, Distributed Storage, Capacity, IO performance, Amazon, Google, Facebook, all-flash, hybrid storage, RAM, Flash, SSD, CPU, NFS, iSCSI, storage container, containers, Virtualization, mirror data, Mirroring, scale-out, Jeff Kato
Profiles/Reports

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in this space, and customers must sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was coined. For many years, Pivot3's vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations. The first is its extremely efficient and reliable erasure coding technology, called Scalar Erasure Coding. In contrast, many leading HCI implementations use replication-based redundancy techniques, which are heavy on storage capacity utilization. Scalar Erasure Coding can deliver significant capacity savings depending on the level of drive protection selected. The second innovation is Pivot3's Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node. The third innovation is a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. The HCI software runs on the same CPU complex as business applications, and this additional usage is referred to as the HCI overhead tax. The HCI overhead tax matters because many applications and infrastructure software are licensed on a per-CPU basis. Even with today's ever-increasing core counts per CPU, keeping the HCI overhead tax low can still yield significant cost savings.
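To make the capacity difference concrete, here is a minimal back-of-the-envelope sketch (in Python) comparing the raw capacity needed under replication versus a generic k+m erasure code. The specific parameters of Pivot3's Scalar Erasure Coding are not stated here, so the 8+2 and 8+3 layouts below are illustrative assumptions, not Pivot3's actual configuration.

def raw_capacity_replication(usable_tb, copies):
    """N-way replication stores 'copies' full copies of every block."""
    return usable_tb * copies

def raw_capacity_erasure(usable_tb, data_shards, parity_shards):
    """A k+m erasure code stores k data shards plus m parity shards per stripe."""
    return usable_tb * (data_shards + parity_shards) / data_shards

usable = 100.0  # TB of usable, application-visible capacity

print(f"2-way replication: {raw_capacity_replication(usable, 2):6.1f} TB raw")
print(f"3-way replication: {raw_capacity_replication(usable, 3):6.1f} TB raw")
print(f"8+2 erasure code:  {raw_capacity_erasure(usable, 8, 2):6.1f} TB raw")
print(f"8+3 erasure code:  {raw_capacity_erasure(usable, 8, 3):6.1f} TB raw")

Under these assumptions, 100 TB of usable data needs 300 TB raw under 3-way replication but only 125 TB raw under an 8+2 code, the same order of capacity gap reflected in the raw-capacity comparison below.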

The Pivot3 family of HCI products, delivering high data efficiency with very low overhead, is an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example is a VDI implementation where cost per seat determines success. Other examples are capacity-centric workloads, such as big data or video surveillance, that could benefit from a Pivot3 HCI approach with leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures, using data extracted from the alternative HCI vendors' reference architectures for VDI implementations. Using real-world examples, we demonstrate that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
news

Mobile gaming company plays new Hadoop cluster management card

Chartboost, which operates a platform for mobile games, turned to new cluster management software in an effort to overcome problems in controlling the use of its Hadoop processing resources.

  • Premiered: 01/05/16
  • Author: Taneja Group
  • Published: TechTarget: Search Data Management
Topic(s): Chartboost, mobile, cluster, Cluster Management, Hadoop, processing, data processing, analytics, Big Data, MapReduce, Hive, Spark, Optimization, Cloudera, AWS, Amazon, Cloud, YARN, Pepperdata, Memory, CPU, Application, Concurrent, SLA, service-level agreement, HBase, application performance, application performance management, Mike Matchett
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into virtualized environments were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best of breed three-tier architectures where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium sized companies and specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications?  In this paper we will take a closer look at Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept it plans to make the entire virtualized infrastructure invisible to IT consumers, encompassing all three of the popular hypervisors: VMware, Hyper-V and its own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability unique across converged systems and HCI alike. This Solution Profile focuses on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
news

Datrium DVX storage takes novel approach for VMware, flash

The Datrium DVX storage system for VMware virtual machines is generally available and drawing interest with its server-powered architecture, flash-boosted performance and low cost.

  • Premiered: 02/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Datrium, DVX, Storage, VMWare, Flash, flash storage, SSD, VM, Performance, flash performance, VMware vSphere, vSphere, SAN, Virtual Machine, LUN, Hypervisor, Deduplication, Compression, RAM, SAS, Capacity, virtual desktop, Virtual Desktop Infrastructure, VDI, vCenter, CPU, ESX, all-flash, all flash array, AFA
news

Server Powered Storage: Intelligent Storage Arrays Gain Server Superpowers

At Taneja Group we are seeing a major trend within IT to leverage server and server-side resources to the maximum extent possible.

  • Premiered: 05/05/16
  • Author: Mike Matchett
  • Published: InfoStor
Topic(s): Storage, Virtualization, Cloud, SAP, VMWare, NSX, CPU, hyperconverged, hyperconvergence, software-defined, Flash, SSD, SimpliVity, Gridstore, Nutanix, Scale Computing, TCO, ROBO, mobile, Riverbed, SteelFusion, Hybrid, Big Data, Internet of Things, IoT, Deduplication, scale-out, Infinio, Pernix, SanDisk
Profiles/Reports

The Modern Data-Center: Why Nutanix Customers are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures that are built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure where hypervisor, compute, storage, networking, and data services run on scalable nodes that seamlessly scale across massive virtual environments.  

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data-centers, just as these NetApp customers did.

Publish date: 03/31/16
news

Big data and IoT benefit from machine learning; AI apocalypse not imminent

Enterprises get more from their data and better predictive analytics with capable machine learning, but this AI still isn't good enough at finding meaningful patterns in data.

  • Premiered: 05/19/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): Mike Matchett, Storage, Big Data, IoT, Internet of Things, predictive analytics, Machine Learning, Artificial Intelligence, AI, deep learning, cloud hosting, Cloud, Microsoft, Twitter, Kik, GroupMe, Google, AlphaGo, CPU, predictive modeling, cluster, Microsoft Azure, Azure, AWS, Amazon AWS, Amazon Web Services, Google Cloud Platform
news

When data storage infrastructure really has a brain

Big data analysis and the internet of things are helping produce more intelligent storage infrastructure.

  • Premiered: 09/06/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Big Data, big data analytics, Internet of Things, IoT, storage infrastructure, Storage, Intelligent Storage, CPU, software-defined, software-defined storage, SDS, HPE, StoreVirtual, hyper-converged, hyper-converged architectures, HyperGrid, Nutanix, Pivot3, SimpliVity, Optimization, Datrium, Provisioning, Artificial Intelligence, Cloud, elastic cloud, data processing, Python, Spark, API, REST API
news

Kinetica Unveils GPU-accelerated Database for Analyzing Streaming Data with Enhanced Performance

Kinetica today announced the newest release of its distributed, in-memory database accelerated by GPUs that simultaneously ingests, explores, and visualizes streaming data.

  • Premiered: 09/21/16
  • Author: Taneja Group
  • Published: Business Wire
Topic(s): high availability, Mike Matchett, Kinetica, In-Memory, Security, IoT, Internet of Things, Data Management, OLTP, CPU, GPU, NVIDIA, Data Center, scalability, Apache, Hadoop, Apache Hadoop, Apache Kafka, Apache Spark, Apache NiFi, High Performance, cluster, Big Data, scale-out
news

Smart storage systems smart for business

Mike Matchett explains how data-aware storage combined with application awareness is leading to a new wave of intelligent data storage.

  • Premiered: 10/03/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Mike Matchett, Storage, High Performance, convergence, CPU, software-defined storage, SDS, QoS, Data protection, Archiving, Metadata, object storage, e-discovery, Primary Storage, Dropbox, Evernote, data awareness, data-aware, Security, Lucene, Solr, Distributed Storage, Tarmin, GridBank, StoreAll, HPE, Qumulo, SteelFusion, Riverbed, VM
news

More RAM, new chips may open doors for hyper-converged infrastructure

Greater memory and faster chips will better suit VMware's hyper-converged infrastructure for Tier 1 workloads, the company hopes, as it tries to outdo market-leading Nutanix.

  • Premiered: 10/17/16
  • Author: Taneja Group
  • Published: TechTarget: Search Data Center
Topic(s): HCI, hyperconverged infrastructure, hyperconverged, hyperconvergence, Nutanix, VMWare, remote branch office, Remote Office, Branch Office, VxRail, Arun Taneja, Dell EMC, VMware vSphere, vSphere, Virtual SAN, Virtual storage, virtual storage appliance, Storage, Virtualization, VSA, CPU, IOPS, IBM, EVO:RAIL, SAN, SimpliVity
Profiles/Reports

Datrium's Optimized Platform for Virtualized IT: "Open Convergence" Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever-faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged Infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding the major problems of HyperConverged "single appliance" adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky "monoculture" that can result from vendor-specific HCI. The resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimized, tiered design approach as "Server Powered Storage" (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an "Open Convergence" approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible and more natural adoption path than a forklift HyperConvergence migration. In this report we briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer "Open Convergence" SPS approach as pioneered by Datrium DVX. In particular, we review how Datrium offers benefits ranging from elastic performance, greater efficiency (with independent scaling of performance versus capacity), VM-centric management, enterprise scalability and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While at scale that approach becomes expensive, isolates resources and requires a lot of configuration management (all reasons to virtualize servers), it at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and "mix up" where IO actually originates and flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads, hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load-balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services to serve demanding and increasingly larger scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management, as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn't fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but network latencies are often larger than the gains. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached or prioritized in flash; it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, virtual clusters mostly host mixed workloads, many of which don't or won't benefit from the expense of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that diminishes the benefit of that array-side flash investment.
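As a rough illustration of that latency argument, the sketch below models a single read served from server-side flash versus flash sitting behind a SAN. All latency figures are assumed round numbers for illustration, not measurements of any particular array, fabric, or product.

FLASH_READ_US = 100        # assumed NAND read service time, microseconds
NETWORK_RTT_US = 300       # assumed SAN/IP fabric round trip, microseconds
ARRAY_OVERHEAD_US = 50     # assumed array controller/queuing overhead, microseconds

server_side = FLASH_READ_US
array_side = FLASH_READ_US + NETWORK_RTT_US + ARRAY_OVERHEAD_US

print(f"server-side flash read: ~{server_side} us")
print(f"array-side flash read:  ~{array_side} us")
print(f"remote placement multiplies read latency by ~{array_side / server_side:.1f}x")

Swap in your own fabric and media numbers; the point is simply that the network hop, not the flash media itself, dominates remote-flash read latency.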

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash came down in price, especially when added to servers directly, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage "software" is now often packaged as a more cost-effective "software-defined" solution that can be run or converged directly on servers (although we note that most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers so that storage services can be hosted on the same servers as compute workloads (e.g., VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e., no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g., to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage are all pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g., VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a "full stack" single-vendor lock-in issue (and increased risk due to lowered infrastructure "diversity").

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX, especially compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts: software licensing costs (e.g., if you need to add nodes just for capacity, you will also pay for compute licenses), heavy East-West network traffic, and in some cases unacceptable data availability risks (e.g., when servers lock, crash or reboot for any reason, an HCI replication/rebuild can open a highly vulnerable window).
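The licensing point can be made concrete with a simple, hypothetical cost model: in an HCI cluster every capacity node is also a licensed compute node, so capacity-only growth still buys sockets' worth of software. Every price and node spec below is invented purely for illustration.

NODE_HW_COST = 25_000       # assumed cost of one HCI node (2 sockets, 20 TB raw)
LICENSE_PER_SOCKET = 7_000  # assumed per-socket hypervisor/application licensing
SOCKETS_PER_NODE = 2
TB_PER_NODE = 20

def hci_cost_for_extra_capacity(extra_tb):
    """Growing capacity in HCI adds whole nodes, each carrying compute licenses."""
    nodes = -(-extra_tb // TB_PER_NODE)  # ceiling division
    return nodes * (NODE_HW_COST + SOCKETS_PER_NODE * LICENSE_PER_SOCKET)

def shared_storage_cost_for_extra_capacity(extra_tb, cost_per_tb=600):
    """Growing shared, capacity-only storage carries no compute licensing."""
    return extra_tb * cost_per_tb   # assumed $/TB for capacity-only shared storage

print(f"HCI nodes:       ${hci_cost_for_extra_capacity(100):,.0f} to add 100 TB")
print(f"Shared capacity: ${shared_storage_cost_for_extra_capacity(100):,.0f} to add 100 TB")

The absolute numbers are meaningless, but the shape of the model shows why capacity-driven node growth in HCI drags licensing costs along with it.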

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO still may need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path, integrated with the server or hypervisor driver stack or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost and complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of ultimately degrading performance issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall within a single converged storage solution: one focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT's governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments, especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back-end for enterprise quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a 3rd party cache layered over a remote SAN.

In the resulting Datrium "open converged" infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, performance scales naturally with each server added, so storage performance grows along with the applications.
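As a generic illustration of inline, server-side data reduction (not Datrium's actual implementation), the toy sketch below fingerprints fixed-size blocks, stores duplicates only once, and compresses the unique blocks before they would be persisted to shared capacity.

import hashlib
import zlib

BLOCK_SIZE = 4096
store = {}  # fingerprint -> compressed block (stands in for persisted shared capacity)

def write(data):
    """Split a write into fixed-size blocks, dedupe by SHA-256, compress unique blocks."""
    refs = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:              # only never-before-seen content is stored
            store[fp] = zlib.compress(block)
        refs.append(fp)                  # the block map a virtual disk would keep
    return refs

payload = (b"A" * 8192) + (b"B" * 4096) + (b"A" * 4096)  # the last block repeats earlier data
refs = write(payload)
logical = len(payload)
physical = sum(len(blob) for blob in store.values())
print(f"logical bytes: {logical}, physical bytes: {physical} ({logical / physical:.0f}x reduction)")

Real systems do this inline in the hypervisor IO path with crash-consistent metadata; the sketch only shows why repeated blocks cost almost nothing to store a second time.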

Datrium DVX gets great leverage from a given flash investment by using any "bring-your-own" SSDs, which are far cheaper to add than array-side flash (and can be added to specific servers as needed). In fact, most VMs and workloads won't ever read from the shared capacity on the network; it serves as write-optimized, persistent data protection and can be filled with cost-effective, high-capacity drives.

Taneja Group Opinion

At the end of the day, all data must be persisted, fully managed and protected somewhere; that is just one of IT's major concerns. Traditional arrays, converged or not, just don't perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers raises some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (which localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that, before investing in expensive all-flash arrays or taking on a full migration to HCI, one consider an Open Convergence option like Datrium DVX as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16
Profiles/Reports

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of All-Flash Arrays and Software-defined Scale-out Storage. Is there a better way to design the mainstream dual-controller array that has been a popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE's StoreVirtual Software-Defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach allows HPE to leverage StoreVirtual technology to create an entry array that is more cost-effective than running the same software on a set of commodity x86 servers. Optimizing the cost/performance ratio with ARM technology, instead of the power-hungry processing and memory of x86 computers, yields an attractive SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, and it also has the flexibility to be compatible with a full complement of hyper-converged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote-office deployments. In the future, it will also enable low-cost capacity expansion for HPE's Hyper Converged and Composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services, and supportability were just some of the key attributes we validated with hands-on testing. We found that the SV3200 is an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for existing HPE customers already accustomed to managing multiple HPE products under the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already on the market. The SV3200 supports advanced features such as linear scale-out and multi-site stretch-cluster capability, enabling business continuity techniques rarely found in storage products of this class. HPE has raised the bar for entry arrays, and we strongly recommend that businesses looking at either SDS technology or entry storage consider HPE's SV3200 as a product with the flexibility to provide the best of both. A starting price of under $10,000 makes it very affordable to start using this easy, powerful, and flexible array. Give it a closer look.

Publish date: 04/11/17