Free Reports


Is Object Storage Right For Your Organization?

Is object storage right for your organization? Many companies are asking this question as they seek out storage solutions that support vast unstructured data growth throughout their organizations. Object storage is ideal for large-scale unstructured data storage because it easily scales to several petabytes and beyond by simply adding storage nodes. Object storage also provides high fault tolerance, simplified storage management and hardware independence – core capabilities that are essential to cost-effectively manage large-scale storage environments. Add to this built-in support for geographically distributed environments and it’s easy to see why object storage solutions are the preferred storage approach for multiple use cases such as cloud-native applications, highly scalable file backup, secure enterprise collaboration, active archival, content repositories and, increasingly, cognitive computing workloads such as Big Data analytics.

To help you decide if object storage is right for your company and to help you understand how to apply various storage technologies, we have created a table below that positions object storage relative to block storage and file storage.

As the table shows, there are several factors that differentiate block, file and object storage. An easy way to think about the differences is the following: block storage is necessary for critical applications where storage performance is the key consideration; file storage is well-suited for highly scalable shared file systems; and object storage is ideal when cloud-scale capacity and convenience, as well as reliability and geographically distributed access, are the major storage requirements.
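
To make the object model concrete, below is a minimal sketch of object-style access through an S3-compatible API using Python’s boto3 library. The endpoint, bucket name and credentials are hypothetical placeholders, and the pattern is generic rather than tied to any particular product.

```python
# Minimal sketch of object-style access via an S3-compatible API (boto3).
# Endpoint, bucket and credentials below are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # any S3-compatible object store
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# Objects are written and read by key over HTTP; there is no file system
# hierarchy or LUN to manage, which is what makes scaling out so simple.
s3.put_object(Bucket="backups", Key="2016/12/30/db-dump.gz", Body=b"...payload...")
obj = s3.get_object(Bucket="backups", Key="2016/12/30/db-dump.gz")
print(obj["Body"].read()[:20])
```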

Publish date: 12/30/16

Datrium’s Optimized Platform for Virtualized IT: “Open Convergence” Challenges HyperConvergence

The storage market is truly changing for the better, with new storage architectures finally breaking the rusty chains long imposed on IT by traditional monolithic arrays. Vast increases in CPU power found in newer generations of servers (and supported by ever faster networks) have now freed key storage functionality to run wherever it can best serve applications. This freedom has led to the rise of software-defined storage (SDS) solutions that power modular HyperConverged infrastructure (HCI). At the same time, increasingly affordable flash resources have enabled all-flash array options that promise both OPEX simplification and inherent performance gains. Now we see a further evolution of storage that intelligently converges performance-oriented storage functions on each server while avoiding major problems with HyperConverged “single appliance” adoption.

Given the market demand for better, more efficient storage solutions, especially those capable of large scale, low latency and mixed use, we are seeing a new generation of vendors like Datrium emerge. Datrium studied the key benefits that hyperconvergence previously brought to market, including the leverage of server-side flash for cost-effective IO performance, but wanted to avoid the all-in transition and the risky “monoculture” that can result from vendor-specific HCI. Their resulting design runs compute-intensive IO tasks scaled out on each local application server (similar to parts of SDS), but persists and fully protects data on cost-efficient shared storage capacity. We have come to refer to this optimizing tiered design approach as “Server Powered Storage” (SPS), indicating that it can take advantage of the best of both shared and server-side resources.

Ultimately this results in an “Open Convergence” approach that helps virtualized IT environments transition off aging storage arrays along an easier, more flexible and more natural adoption path than a fork-lift HyperConvergence migration. In this report we will briefly review the challenges and benefits of traditional convergence with SANs, the rise of SDS and HCI appliances, and now this newer “open convergence” SPS approach as pioneered by Datrium DVX. In particular, we’ll review how Datrium offers benefits including elastic performance, greater efficiency (with independent scaling of performance vs. capacity), VM-centric management, enterprise scalability and mixed workload support, while still delivering on enterprise requirements for data resiliency and availability.


DATA Challenges in Virtualized Environments

Virtualized environments present a number of unique challenges for user data. In physical server environments, islands of storage were mapped uniquely to server hosts. While that approach becomes expensive at scale, isolates resources and requires a lot of configuration management (all reasons to virtualize servers), it at least provided directly mapped relationships to follow when troubleshooting, scaling capacity, handling IO growth or addressing performance.

However, in the virtual server environment, the layers of virtual abstraction that help pool and share real resources also obfuscate and “mix up” where IO actually originates or flows, making it difficult to understand who is doing what. Worse, the hypervisor platform aggregates IO from different workloads, hindering optimization and preventing prioritization. Hypervisors also tend to dynamically move virtual machines around a cluster to load balance servers. Fundamentally, server virtualization makes it hard to meet application storage requirements with traditional storage approaches.

Current Virtualization Data Management Landscape

Let’s briefly review the three current trends in virtualization infrastructure used to ramp up data services for demanding and increasingly large-scale clusters:

  • Converged Infrastructure - with hybrid/All-Flash Arrays (AFA)
  • HyperConverged Infrastructure - with Software Defined Storage (SDS)
  • Open Converged Infrastructure - with Server Powered Storage (SPS)

Converged Infrastructure - Hybrid and All-Flash Storage Arrays (AFA)

We first note that converged infrastructure solutions simply pre-package and rack traditional arrays with traditional virtualization cluster hosts. The traditional SAN provides well-proven and trusted enterprise storage. The primary added value of converged solutions is a faster time-to-deploy for a new cluster or application. However, ongoing storage challenges and pain points remain the same as in un-converged clusters (despite claims of converged management, as these tend to just aggregate dashboards into a single view).

The traditional array provides shared storage from which virtual machines draw both images and data, either across Fibre Channel or an IP network (NAS or iSCSI). While many SANs in the hands of an experienced storage admin can be highly configurable, they do require specific expertise to administer. Almost every traditional array has by now become effectively hybrid, capable of hosting various amounts of flash, but if the array isn’t fully engineered for flash it is not going to be an optimal choice for an expensive flash investment. Hybrid arrays can offer good performance for the portion of IO that receives flash acceleration, but the added network latency often outweighs much of that gain. Worse, it is impossible for a remote SAN to know which IO coming from a virtualized host should be cached/prioritized (in flash) – it all looks the same and is blended together by the time it hits the array.

Some organizations deploy even more costly all-flash arrays, which can guarantee array-side performance for all IO and promise to simplify administration overhead. For a single key workload, a dedicated AFA can deliver great performance. However, we note that virtual clusters mostly host mixed workloads, many of which don’t or won’t benefit from the expense of persisting all data on all-flash array storage. Bottom line: from a financial perspective, SAN flash is always more expensive than server-side flash. And by placing flash remotely across a network in the SAN, there is always a relatively large network latency that erodes the benefit of that array-side flash investment.
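
A rough back-of-the-envelope illustration of that latency argument, using assumed order-of-magnitude figures rather than measured vendor data:

```python
# Illustrative latency arithmetic; all figures are order-of-magnitude assumptions.
local_flash_read_us = 100    # assumed read latency of server-side flash
array_flash_read_us = 100    # assumed read latency of flash inside the array
san_round_trip_us = 300      # assumed SAN/network round-trip (fabric + controller queuing)

effective_local = local_flash_read_us
effective_remote = array_flash_read_us + san_round_trip_us

print(f"server-side flash: ~{effective_local} us per read")
print(f"array-side flash:  ~{effective_remote} us per read "
      f"({effective_remote / effective_local:.0f}x slower in this example)")
```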

HyperConverged Infrastructures - Software Defined Storage (SDS)

As faster resources like flash came down in price, especially when added directly to servers, so-called Software Defined Storage (SDS) options proliferated. Because CPU power has continuously grown faster and denser over the years, many traditional arrays came to be built on plain servers running custom storage operating systems. The resulting storage “software” is now often packaged as a more cost-effective “software-defined” solution that can be run or converged directly on servers (although we note most IT shops prefer buying ready-to-run solutions, not software requiring on-site integration).

In most cases software-defined storage runs within virtual machines or containers so that storage services can be hosted on the same servers as compute workloads (e.g. VMware VSAN). An IO-hungry application accessing local storage services can get excellent IO service (i.e. no network latency), but capacity planning and performance tuning in these co-hosted infrastructures can be exceedingly difficult. Acceptable solutions must provide tremendous insight or complex QoS facilities that can dynamically shift IO acceleration with workloads as they move across a cluster (e.g. to keep data access local). Additionally, there is often a huge increase in East-West traffic between servers.

Software Defined Storage enabled a new kind of HyperConverged Infrastructure (HCI). Hyperconvergence vendors produce modular appliances in which a hypervisor (or container management), networking and (software-defined) storage all are pre-integrated to run within the same server. Because of vendor-specific storage, network, and compute integration, HCI solutions can offer uniquely optimized IO paths with plug-and-play scalability for certain types of workloads (e.g. VDI).

For highly virtualized IT shops, HCI simplifies many infrastructure admin responsibilities. But HCI presents new challenges too, not least of which is that migration to HCI requires a complete forklift turnover of all infrastructure. Converting all of your IT infrastructure to a unique vendor appliance creates a “full stack” single-vendor lock-in issue (and increased risk due to lowered infrastructure “diversity”).

As server-side flash is cheaper than other flash deployment options, and servers themselves are commodity resources, HCI does help optimize the total return on infrastructure CAPEX – especially as compared to traditional siloed server and SAN architectures. But because of the locked-down vendor appliance modularity, it can be difficult to scale storage independently from compute when needed (or even just storage performance from storage capacity). Obviously, pre-configured HCI vendor SKUs also preclude using existing hardware or taking advantage of blade-type solutions.

With HCI, every node is also a storage node, which at scale can have big impacts on software licensing (e.g. if you need to add nodes just for capacity, you will also pay for compute licenses), overbearing “East-West” network traffic, and in some cases unacceptable data availability risks (e.g. when servers lock/crash/reboot for any reason, an HCI replication/rebuild can be a highly vulnerable window).
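
The licensing point can be shown with a small back-of-the-envelope sketch; all node counts, capacities and prices below are hypothetical placeholders, not vendor quotes.

```python
# Hypothetical illustration of the HCI licensing effect described above.
# Node counts, capacities and prices are placeholders, not vendor pricing.
import math

required_compute_nodes = 8        # nodes needed for CPU/RAM alone
required_capacity_tb = 400        # raw capacity the cluster must hold
capacity_per_node_tb = 20         # capacity contributed by each HCI node
license_per_node = 7_000          # per-node hypervisor/HCI software license

nodes_for_capacity = math.ceil(required_capacity_tb / capacity_per_node_tb)
total_nodes = max(required_compute_nodes, nodes_for_capacity)

extra_nodes = total_nodes - required_compute_nodes  # bought only for capacity
print(f"nodes added purely for capacity: {extra_nodes}")
print(f"extra software licensing just to get that capacity: ${extra_nodes * license_per_node:,}")
```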

OPEN Converged Infrastructure - Server Powered Storage (SPS)

When it comes to performance, IO still may need to transit a network, incurring a latency penalty. To help, several third-party vendors offer IO caching that can be layered into the IO path – integrated with the server or hypervisor driver stack, or even placed in the network. These caching solutions take advantage of server memory or flash to help accelerate IO. However, layering yet another vendor and product into the IO path incurs additional cost and also complicates end-to-end IO visibility. Multiple layers of caches (VM, hypervisor, server, network, storage) can disguise a multitude of ultimately degrading performance issues.

Ideally, end-to-end IO, from within each local server to shared capacity, should all fall into a single converged storage solution – one that is focused on providing the best IO service by distributing and coordinating storage functionality where it best serves the IO-consuming applications. It should also optimize IT’s governance, cost, and data protection requirements. Some HCI solutions might claim this in total, but only by converging everything into a single vendor appliance. But what if you want an easier solution capable of simply replacing aging arrays in your existing virtualized environments – especially one enabling scalability in multiple directions at different times and delivering extremely low latency while still supporting a complex mix of diverse workloads?

This is where we’d look to a Server Powered Storage (SPS) design. For example, Datrium DVX still protects data with cost-efficient shared data servers on the back-end for enterprise quality data protection, yet all the compute-intensive, performance-impacting functionality is “pushed” up into each server to provide local, accelerated IO. As Datrium’s design leverages each application server instead of requiring dedicated storage controllers, the cost of Datrium compared to traditional arrays is quite favorable, and the performance is even better than (and as scalable as) a 3rd party cache layered over a remote SAN.

In the resulting Datrium “open converged” infrastructure stack, all IO is deduped and compressed (and locally served) server-side to optimize storage resources and IO performance, while management of storage is fully VM-centric (no LUNs to manage). In this distributed, open and unlocked architecture, storage performance scales naturally with each server added, keeping pace with application growth.
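
For readers unfamiliar with inline data reduction, here is a generic sketch of dedupe plus compression via content hashing. It illustrates the general technique only and is not a description of Datrium’s actual implementation; the chunk size and hashing scheme are assumptions.

```python
# Generic inline dedupe + compression sketch (content-addressed chunks).
# Chunk size and scheme are assumptions; this is not any vendor's implementation.
import hashlib
import zlib

CHUNK_SIZE = 4096      # assumed fixed chunk size
chunk_store = {}       # fingerprint -> compressed chunk (stands in for shared capacity)

def write(data: bytes) -> list[str]:
    """Split incoming IO into chunks and store only chunks not seen before."""
    fingerprints = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_store:                    # dedupe: skip known chunks
            chunk_store[fp] = zlib.compress(chunk)   # compress before persisting
        fingerprints.append(fp)
    return fingerprints

def read(fingerprints: list[str]) -> bytes:
    """Reassemble the original data from stored chunks."""
    return b"".join(zlib.decompress(chunk_store[fp]) for fp in fingerprints)

payload = b"hello world" * 10_000
refs = write(payload)
assert read(refs) == payload
print(f"logical chunks: {len(refs)}, unique chunks stored: {len(chunk_store)}")
```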

Datrium DVX gets great leverage from a given flash investment by using any “bring-your-own” SSDs, which are far cheaper to add than array-side flash (and can be added to specific servers as needed/desired). In fact, most VMs and workloads won’t ever read from the shared capacity on the network – it provides write-optimized persistent data protection and can be filled with cost-effective high-capacity drives.

Taneja Group Opinion

At the end of the day, all data must be persisted, fully managed and protected somewhere; that remains one of IT’s major concerns. Traditional arrays, converged or not, just don’t perform well in highly virtualized environments, and using SDS (powering HCI solutions) to farm all that critical data across fungible compute servers raises some serious data protection challenges. It just makes sense to look for a solution that leverages the best aspects of both enterprise arrays (for data protection) and software/hyperconverged solutions (that localize data services for performance).

At the big picture level, Server Powered Storage can be seen as similar (although more cost-effective and performant) to a multi-vendor solution in which IT layers server-side IO acceleration functionality from one vendor over legacy or existing SANs from another vendor. But now we are seeing a convergence (yes, this is an overused word these days, but accurate here) of those IO path layers into a single vendor product. Of course, a single vendor solution that fully integrates distributed capabilities in one deployable solution will perform better and be naturally easier to manage and support (and likely cheaper).

There is no point in writing storage RFPs today that get tangled up in terms like SDS or HCI. Ultimately the right answer for any scenario is to do what is best for applications and application owners while meeting IT responsibilities. For existing virtualization environments, new approaches like Server Powered Storage and Open Convergence offer considerable benefit in terms of performance and cost (both OPEX and CAPEX). We highly recommend that, before investing in expensive all-flash arrays or taking on a full migration to HCI, you consider an Open Convergence option like Datrium DVX as a potentially simpler, more cost-effective, and immediately rewarding solution.


NOTICE: The information and product recommendations made by the TANEJA GROUP are based upon public information and sources and may also include personal opinions both of the TANEJA GROUP and others, all of which we believe to be accurate and reliable. However, as market conditions change and are not within our control, the information and recommendations are made without warranty of any kind. All product names used and mentioned herein are the trademarks of their respective owners. The TANEJA GROUP, Inc. assumes no responsibility or liability for any damages whatsoever (including incidental, consequential or otherwise), caused by your use of, or reliance upon, the information and recommendations presented herein, nor for any inadvertent errors that may appear in this document.

Publish date: 11/23/16

Apache Spark Market Survey: Cloudera Sponsored Research

Apache Spark has quickly grown into one of the major big data ecosystem projects and shows no signs of slowing down. In fact, even though Spark is well connected within the broader Hadoop ecosystem, Spark adoption by itself has enough energy and momentum that it may very well become the center of its own emerging market category. In order to better understand Spark’s growing role in big data, Taneja Group conducted a major Spark market research project. We surveyed nearly seven thousand (6900+) qualified technical and managerial people working with big data from around the world to explore their experiences with and intentions for Spark adoption and deployment, their current perceptions of the Spark marketplace and of the future of Spark itself.

We found that across the broad range of industries, company sizes, and big data maturities represented in the survey, over one-half (54%) of respondents are already actively using Spark. Spark is proving invaluable as 64% of those currently using Spark plan to notably increase their usage within the next 12 months. And new Spark user adoption is clearly growing – 4 out of 10 of those who are already familiar with Spark but not yet using it plan to deploy Spark soon.

The top reported use cases globally for Spark include the expected Data Processing/Engineering/ETL (55%), followed by forward-looking data science applications like Real-Time Stream Processing (44%), Exploratory Data Science (33%), and Machine Learning (33%). The more traditional analytics applications like Customer Intelligence (31%) and BI/DW (29%) were close behind, and illustrate that Spark is capable of supporting many different kinds of organizational big data needs. The main reasons and drivers reported for adopting Spark over other solutions start with Performance (mentioned by 74%), followed by capabilities for Advanced Analytics (49%), Stream Processing (42%) and Ease of Programming (37%).

When it comes to choosing a source for Spark, more than 6 out of 10 Spark users in the survey have considered or evaluated Cloudera, nearly double the 35% that may have looked at the Apache Download or the 33% that considered Hortonworks. Interestingly, almost all (90+%) of those looking at Cloudera Spark adopted it for their most important use case, equating to 57% of those who evaluated Cloudera overall. Organizations cited quality of support (46%) as their most important selection factor, followed by demonstrated commitment to open source (29%), enterprise licensing costs (27%) and the availability of cloud support (also 27%).

Interestingly, while on-premise Spark deployments dominate today (more than 50%), there is a strong interest in transitioning many of those to cloud deployments going forward. Overall Spark deployment in public/private cloud (IaaS or PaaS) is projected to increase significantly from 23% today to 36%, along with a corresponding increase in using Spark SaaS, from 3% to 9%.

The biggest challenge with Spark, similar to what has been previously noted across the broader big data solutions space, is still reported by 6 out of 10 active users to be the big data skills/training gap within their organizations. Similarly, more than one-third mention complexity in learning/integrating Spark as a barrier to adoption. Despite these reservations, we note that compared to many previous big data analytics platforms, Spark today offers a higher—and often already familiar—level of interaction to users through its support of Python, R, SQL, notebooks, and seamless desktop-to-cluster operations, all of which no doubt contribute to its greatly increasing popularity and widespread adoption.
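
As a concrete example of that familiar level of interaction, here is a minimal PySpark sketch showing the same aggregation expressed through both the DataFrame API and plain SQL; the dataset and column names are hypothetical.

```python
# Minimal PySpark sketch: the DataFrame API and SQL side by side (illustrative only).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("survey-demo").getOrCreate()

# Hypothetical in-memory dataset standing in for a real big data source.
events = spark.createDataFrame(
    [("etl", 120), ("streaming", 95), ("ml", 80), ("etl", 150)],
    ["use_case", "duration_ms"],
)

# Same question asked two ways: the DataFrame API...
events.groupBy("use_case").avg("duration_ms").show()

# ...and plain SQL over a temporary view.
events.createOrReplaceTempView("events")
spark.sql("SELECT use_case, AVG(duration_ms) FROM events GROUP BY use_case").show()

spark.stop()
```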

Overall, it’s clear that Spark has gained broad familiarity within the big data world and built significant momentum around adoption and deployment. The data highlights widespread current user success with Spark, validation of its reliability and usefulness to those who are considering adoption, and a growing set of use cases to which Spark can be successfully applied. Other big data solutions can offer some similar and overlapping capabilities (there is always something new just around the corner), but we believe that Spark, having already captured significant mindshare and proven real-world value, will continue to successfully expand on its own vortex of focus and energy for at least the next few years.

Publish date: 11/07/16

For Lowest TCO and Maximum Agility Choose the VMware Cloud Foundation Hybrid SDDC Platform

The race is on at full speed.  What race?  The race to bring public cloud agility and economics to a data center near you. Ever since the first integrated systems came onto the scene in 2010, vendors have been furiously engineering solutions to make on-premises infrastructure as cost effective and as easy to use as the public cloud, while also providing the security, availability, and control that enterprises demand. Fundamentally, two main architectures have evolved within the race to modernize data centers that will create a foundation enabling fully private and hybrid clouds. The first approach uses traditional compute, storage, and networking infrastructure components (traditional 3-tier) overlaid with varying degrees of virtualization and management software. The second more recent approach is to build a fully virtualized data center using industry standard servers and networking and then layer on top of that a full suite of software-based compute, network, and storage virtualization with management software. This approach is often termed a Software-Defined Data Center (SDDC).

The goal of an SDDC is to extend virtualization techniques across the entire data center to enable the abstraction, pooling, and automation of all data center resources. This would allow a business to dynamically reallocate any part of the infrastructure for various workload requirements without forklifting hardware or rewiring. VMware has taken SDDC to a new level with VMware Cloud Foundation.  VMware Cloud Foundation is the only unified SDDC platform for the hybrid cloud, which brings together VMware’s compute, storage, and network virtualization into a natively integrated stack that can be deployed on-premises or run as a service from the public cloud. It establishes a common cloud infrastructure foundation that gives customers a unified and consistent operational model across the private and public cloud.

VMware Cloud Foundation delivers an industry-leading SDDC cloud infrastructure by combining VMware’s highly scalable hyper-converged software (vSphere and VSAN) with the industry-leading network virtualization platform, NSX. VMware Cloud Foundation comes with unique lifecycle management capabilities (SDDC Manager) that eliminate the overhead of system operations of the cloud infrastructure stack by automating day 0 to day 2 processes such as bring-up, configuration, workload provisioning, and patching/upgrades. As a result, customers can significantly shorten application time to market, boost cloud admin productivity, reduce risk, and lower TCO. Customers consume VMware Cloud Foundation software in three ways: factory pre-loaded on integrated systems (VxRack 1000 SDDC); deployed on top of qualified Ready Nodes from HPE, QCT, Fujitsu, and others in the future, with qualified networking; and run as a service from the public cloud through IBM, vCAN partners, vCloud Air, and more to come.

In this comparative study, Taneja Group performed an in-depth analysis of VMware Cloud Foundation deployed on qualified Ready Nodes and qualified networking versus several traditional 3-tier converged infrastructure (CI) integrated systems and traditional 3-tier do-it-yourself (DIY) systems. We analyzed the capabilities and contrasted key functional differences driven by the various architectural approaches. In addition, we evaluated the key CapEx and OpEx TCO cost components.  Taneja Group configured each traditional 3-tier system's hardware capacity to be as close as possible to the VMware Cloud Foundation qualified hardware capacity.  Further, since none of the 3-tier systems had a fully integrated SDDC software stack, Taneja Group added the missing SDDC software, making it as close as possible to the VMware Cloud Foundation software stack.  The quantitative comparative results from the traditional 3-tier DIY and CI systems were averaged together into one scenario because the hardware and software components are very similar. 

Our analysis concluded that both types of solutions are more than capable of handling a variety of virtualized workload requirements. However, VMware Cloud Foundation has demonstrated a new level of ease-of-use due to its modular scale-out architecture, native integration, and automatic lifecycle management, giving it a strong value proposition when building out modern next generation data centers.  The following are the five key attributes that stood out during the analysis:

  • Native Integration of the SDDC:  VMware Cloud Foundation natively integrates vSphere, Virtual SAN (VSAN), and NSX network virtualization.
  • Simplest operational experience: VMware SDDC Manager automates the life-cycle of the SDDC stack including bring up, configuration, workload provisioning, and patches/upgrades.
  • Isolated workload domains: VMware Cloud Foundation provides unique administrator tools to flexibly provision subsets of the infrastructure for multi-tenant isolation and security.
  • Modular linear scalability: VMware Cloud Foundation employs an architecture in which capacity can be scaled by the HCI node, by the rack, or by multiple racks. 
  • Seamless Hybrid Cloud: Deploy VMware Cloud Foundation for private cloud and consume on public clouds to create a seamless hybrid cloud with a consistent operational experience.

Taneja Group’s in-depth analysis indicates that VMware Cloud Foundation will enable enterprises to achieve significant cost savings. Hyper-converged infrastructure of the kind used by many web-scale service providers, combined with natively integrated SDDC software, significantly reduced server, storage, and networking costs. This hardware cost saving more than offset the incremental SDDC software costs needed to deliver the storage and networking capability that is typically provided in hardware from best-of-breed traditional 3-tier components. In this study, we measured the upfront CapEx and 3 years of support costs for the hardware and software components needed to build out a VMware Cloud Foundation private cloud on qualified Ready Nodes. In addition, Taneja Group validated a model that demonstrates the labor and time OpEx savings that can be achieved through the use of integrated end-to-end automatic lifecycle management in the VMware SDDC Manager software.
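
To show the structure of such a comparison, here is a skeletal 3-year TCO calculation in Python; every figure is a hypothetical placeholder and is not intended to reproduce the study’s measured results.

```python
# Skeletal 3-year TCO comparison; all inputs are hypothetical placeholders,
# not measured data, and are not meant to reproduce the study's figures.
def three_year_tco(hw_capex, sw_capex, annual_support, annual_admin_hours, hourly_rate=75):
    """Upfront CapEx plus three years of support and admin labor (OpEx)."""
    return hw_capex + sw_capex + 3 * annual_support + 3 * annual_admin_hours * hourly_rate

traditional_3tier = three_year_tco(hw_capex=1_000_000, sw_capex=400_000,
                                   annual_support=200_000, annual_admin_hours=2_000)
integrated_sddc = three_year_tco(hw_capex=700_000, sw_capex=500_000,
                                 annual_support=190_000, annual_admin_hours=800)

print(f"traditional 3-tier (placeholder inputs): ${traditional_3tier:,}")
print(f"integrated SDDC    (placeholder inputs): ${integrated_sddc:,}")
```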


By investing in VMware Cloud Foundation, businesses can be assured that their data center infrastructure can be easily consumed, scaled, managed, upgraded and enhanced to provide the best private cloud at the lowest cost. Using a pre-engineered modular, scale-out approach to building at web-scale means infrastructure is added in hours, not days, and businesses can be assured that adding infrastructure scales linearly without complexity.  VMware Cloud Foundation is the only platform that provides a natively integrated unified SDDC platform for the hybrid cloud with end-to-end management and with the flexibility to provision a wide variety of workloads at the push of a button.

In summary, VMware Cloud Foundation enables at least five unparalleled capabilities, generates a 45% lower 3-year TCO than the alternative traditional 3-tier approaches, and delivers a tremendous value proposition when building out a modern hybrid SDDC platform. Before blindly going down the traditional infrastructure approach, companies should take a close look at VMware Cloud Foundation, a unified SDDC platform for the hybrid cloud.

Publish date: 10/17/16

IT Cloud Management Market Landscape

In this report, Taneja Group presents an evaluation of the current IT Cloud Management market landscape for enterprise customers. We look at this landscape as an evolution of IT operations management grown up into the cloud era. In addition to increasingly smart and capable operational monitoring and systems management, good cloud management also requires sophisticated capabilities in both automation and orchestration at scale to support end-user provisioning and agility, and detailed financial management services that reveal multi-cloud costs for analysis and chargeback or showback. Our objective is to evaluate cloud management offerings from leading vendors to enable senior business and technology leaders to decide which vendors offer the best overall solution.

In this study, we evaluated vendors with offerings in one or more of the three fundamental areas. Several well-known vendors (VMware, Microsoft, ServiceNow, HPE, IBM and BMC) have solutions in all three areas. Other vendors focus on only one or two areas, and because it’s possible to compose a broader solution from parts, we’ve evaluated popular niche solutions within each area. All companies were required to have solutions that were generally available as of April 2016. To fairly assess the offerings, we looked at a set of differentiating factors in each of the categories that we believe enterprise customers should use to qualify cloud management solutions. As a final step, to facilitate optimal enterprise selection, we also evaluated the full solution vendors at a higher level where we looked at additional value derived from integrations across areas and other important enterprise vendor engagement factors.

Within each of the three areas that we will refer to as Cloud Orchestration, Operations Management, and Financial Management, and at the vendor level for full-suite vendors, we’ve applied categories of factors for scoring as determined by our team of experts, based on customer buying criteria, technical innovation, and market drivers. The overall results of the evaluation revealed that VMware has a strong lead in today’s competitive cloud management landscape.
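
As a generic illustration of this kind of weighted, category-based scoring, consider the sketch below; the category weights and per-vendor scores are hypothetical and do not reflect Taneja Group’s actual model.

```python
# Generic weighted-scoring sketch; category weights and scores are hypothetical.
weights = {
    "customer buying criteria": 0.40,
    "technical innovation": 0.35,
    "market drivers": 0.25,
}

def weighted_score(scores: dict) -> float:
    """Combine per-category scores (0-5 scale) into one weighted result."""
    return sum(weights[category] * scores[category] for category in weights)

vendor_scores = {
    "Vendor A": {"customer buying criteria": 4.5, "technical innovation": 4.0, "market drivers": 4.2},
    "Vendor B": {"customer buying criteria": 3.8, "technical innovation": 4.1, "market drivers": 3.5},
}

for vendor, scores in vendor_scores.items():
    print(vendor, round(weighted_score(scores), 2))
```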

Publish date: 08/29/16

IT Cloud Management Market Landscape - Executive Summary

In this report, Taneja Group presents an evaluation of the current IT Cloud Management market landscape for enterprise customers. We look at this landscape as an evolution of IT operations management grown up into the cloud era. In addition to increasingly smart and capable operational monitoring and systems management, good cloud management also requires sophisticated capabilities in both automation and orchestration at scale to support end-user provisioning and agility, and detailed financial management services that reveal multi-cloud costs for analysis and chargeback or showback. Our objective is to evaluate cloud management offerings from leading vendors to help senior business and technology leaders decide which vendors offer the best solution.

In this study, we evaluated vendors with offerings in one or more of the three fundamental areas. Several well-known vendors (VMware, Microsoft, ServiceNow, HPE, IBM and BMC) have solutions in all three areas. Other vendors focus on only one or two areas, and because it’s possible to compose a broader solution from parts, we’ve evaluated popular niche solutions within each area. All companies were required to have solutions that were generally available as of April 2016. To fairly assess the offerings, we looked at a set of differentiating factors in each of the categories that we believe enterprise customers should use to qualify cloud management solutions. As a final step, to facilitate optimal enterprise selection, we also evaluated the full solution vendors at a higher level where we looked at additional value derived from integrations across areas and other important enterprise vendor engagement factors.

Within each of the three areas that we will refer to as Cloud Orchestration, Operations Management, and Financial Management, and at the vendor level for full-suite vendors, we’ve applied categories of factors for scoring as determined by our team of experts, based on customer buying criteria, technical innovation, and market drivers. The overall results of the evaluation revealed that VMware has a strong lead in today’s competitive cloud management landscape.

Publish date: 08/26/16