
Research Areas

Cloud Management

Includes Cloud Infrastructure Management, encompassing Operations, Automation and Orchestration, and Business/Financial Management; Virtual Infrastructure Management (monitoring, optimization, and performance); Virtualized Datacenter Operations and strategies (automation and cloud computing); and Legacy Infrastructure Management.

This practice covers the technologies and capabilities that impact and enable cloud and on-premises infrastructure management, including operational management, automation and orchestration, and business management and cloud costing. This category also includes management of virtual infrastructure and of traditional, non-virtualized on-premises environments. As on-premises infrastructure and applications transition to the cloud, new management challenges arise around workload mobility and migration, security, and availability.

We track and examine these management challenges in the context of hybrid and multi-cloud environments, and identify opportunities for both vendors and end users to optimize their cloud management platforms and approaches.

Profile

Converged IT Infrastructure’s Place in the Internet of Things

The trends driving the worldwide Internet of Things (IoT), namely ubiquitous embedded computing, mobile and organically distributed nodes, and the far-flung networks tying them together, are also arriving in full force in the IT data center. These solutions are taking the form of converged and hyperconverged modules of IT infrastructure. Organizations adopting such solutions gain a simpler, building-block way to architect and deploy IT, and forward-thinking vendors now have a unique opportunity to profit from subscription services that, while delivering superior customer insight and support, also build a trusted advisor relationship promising an ongoing “win-win” for both the client and the vendor.

There are many direct (e.g. revenue-impacting) and indirect (e.g. customer satisfaction) benefits we mention in this report, but the key enabler of this opportunity is establishing an IoT-scale data analysis capability. Specifically, by approaching converged and hyperconverged solutions as IoT “appliances”, and harvesting low-level component data on utilization, health, configuration, performance, availability, faults, and other endpoint metrics across the entire worldwide deployed base of appliances, an IoT vendor can analyze the resulting data stream to great profit for both the vendor and each individual client. Top-notch analytics can feed support, drive product management, assure sales/account control, inform marketing, and even provide a direct revenue opportunity (e.g. offering a gold level of service to the end customer).
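
To make the mechanics of that opportunity concrete, here is a minimal Python sketch using an entirely hypothetical telemetry schema (the field names, sample values, and z-score threshold are ours, not Glassbeam's) of how fleet-wide appliance metrics might be pooled and outlying appliances flagged for proactive support:

```python
# Hypothetical sketch: pool low-level telemetry from a worldwide fleet of
# converged appliances and flag appliances that deviate from the fleet norm.
# Schema, values, and threshold are illustrative only.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class TelemetrySample:
    appliance_id: str   # which deployed appliance phoned this sample home
    component: str      # e.g. "node1.cpu", "ssd0"
    metric: str         # e.g. "utilization_pct", "latency_ms"
    value: float

def flag_outliers(samples, z_threshold=1.5):
    """Group samples by metric across the install base and flag appliances
    whose values sit far from the fleet average (simple z-score test)."""
    by_metric = defaultdict(list)
    for s in samples:
        by_metric[s.metric].append(s)

    alerts = []
    for metric, group in by_metric.items():
        values = [s.value for s in group]
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:
            continue  # every appliance reports the same value; nothing to flag
        for s in group:
            if abs(s.value - mu) / sigma > z_threshold:
                alerts.append((s.appliance_id, s.component, metric, s.value))
    return alerts

if __name__ == "__main__":
    fleet = [
        TelemetrySample("cust-a1", "node1.cpu", "utilization_pct", 41.0),
        TelemetrySample("cust-b7", "node1.cpu", "utilization_pct", 44.0),
        TelemetrySample("cust-c3", "node1.cpu", "utilization_pct", 43.0),
        TelemetrySample("cust-d9", "node1.cpu", "utilization_pct", 42.0),
        TelemetrySample("cust-e2", "node1.cpu", "utilization_pct", 97.0),  # runs hot
        TelemetrySample("cust-a1", "ssd0", "latency_ms", 1.2),
        TelemetrySample("cust-b7", "ssd0", "latency_ms", 1.1),
        TelemetrySample("cust-c3", "ssd0", "latency_ms", 1.3),
        TelemetrySample("cust-d9", "ssd0", "latency_ms", 1.2),
        TelemetrySample("cust-e2", "ssd0", "latency_ms", 9.8),             # failing disk?
    ]
    for alert in flag_outliers(fleet):
        print("proactive support follow-up:", alert)
```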

An IoT data stream from a large pool of appliances is almost literally the definition of “big data”: non-stop machine data at large scale and with tremendous variety (even within a single converged solution stack). Operating and maintaining such a big data solution requires a significant amount of data wrangling, data science, and ongoing maintenance to stay current. Unfortunately, this means IT vendors looking to position IoT-oriented solutions may have to invest a large amount of cash, staff, and resources in building out and supporting such analytics. For many vendors, especially those with a varied or complex convergence solution portfolio, or channel partners building solutions from third-party reference architectures, these big data costs can be prohibitive. Yet failing to provide these services can create significant friction in selling and supporting converged solutions to clients who now expect to manage IT infrastructure as appliances.

In this report, we’ll look at the convergence and hyperconvergence appliance trend and the rising customer expectations for such solutions. In particular, we’ll see how IT appliances in the market need to be treated as complete, commoditized products, as ubiquitous and subject to the same end-user expectations as emerging household IoT solutions. In this context, we’ll look at Glassbeam’s unique B2B SaaS offering, SCALAR, which converged and hyperconverged IT appliance vendors can adopt immediately to provide an IoT machine data analytics solution. We’ll see how Glassbeam can help vendors differentiate from competing solutions, build trusted client relationships, better manage and support clients, and even create additional direct revenue opportunities.

Publish date: 08/18/15
Profile

IT Can Now Deliver What Their “Consumers” Want: CTERA 5.0 Enables Enterprise Distributed Data

End-user mobility and the rapid growth of data are increasingly pushing file storage and sharing into the cloud. Sharing data through an easy-to-use cloud service keeps globe-hopping users happy and productive, while technologies like cloud storage gateways enable IT to govern on-premises and cloud storage as a single infrastructure. However, the big goals of these two groups often collide like ships at ramming speed. What end users want as consumers is fast, easy, and always on; what IT requires is fully secure, controlled, and ultimately cost-effective. This imbalance creates challenges: concentrate on end-user usability and governance suffers; concentrate on governance and usability diminishes.

How can live files and business data move with users, while IT control and governance follow the data wherever it goes? Globally mobile end users need to efficiently access and share files, while corporate IT needs to govern and secure those files. Both parties look hopefully to the cloud to provide the mobility and scalability necessary for this level of file collaboration and control. They are right about cloud mobility and scalability, but external cloud services by definition cannot provide IT-governed data services, and private cloud solutions to date haven’t been on par with the consumer-grade versions publicly available. End users today demand that IT provide services like enterprise file share and sync (EFSS) that are as good as or better than the free services everyone has on their smartphones and laptops; if IT can’t, end users will do an end-run around IT.

The solution isn’t hard to dream up: create a single strategic control point for enterprise-level data mobility and governance that works across enterprise cloud-like services. Sadly, thinking about something does not make it so. Certainly central control is common for specific computing domains. Storage makers create central management consoles for the systems under their control. Virtualization makers create central management for hundreds and thousands of VMs. Backup makers create central control for data replication across remote sites. And the components specific to sharing and protecting file data, EFSS and cloud gateways, are also common. But what has been missing until now is central control over all those processes in a simple, unified manner, especially when supporting a distributed workforce.

CTERA cloud storage gateways have provided file-based protection and mobility since the company stepped onto the cloud-based management scene. Now a major new release further supports and enables integrated, end-to-end file and data services. Version 5.0 of CTERA’s Enterprise Data Services Platform brings together several key capabilities that help IT easily deliver the distributed services their users want and need while ensuring full governance and control. The platform combines the company’s highly available NAS gateway for core and edge services with advanced EFSS and backup. This seamless integration cements what were separate solutions into a cohesive CTERA platform. In this report we examine CTERA 5.0 and its balanced benefits for both user happiness and productivity and IT governance and control.

Publish date: 05/29/15
Free Reports

Free Report: Galileo’s Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively and many just gather dust for a number of reasons. But, if only because of the ongoing costs of keeping management tools current, it’s only the solutions that get used that are worth having.

When it comes to picking which tool to use day-to-day, it’s not the theory of what it could do, it’s the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the myriad things an admin is responsible for, assuring performance requires the most management solution support. Performance-related tasks include checking on the health of resources that the admin is responsible for, improving utilization, finding lurking or trending issues to attend to in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.
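
As a concrete illustration of one such task, the following minimal Python sketch (the resource, sample values, and 90% threshold are hypothetical, not drawn from Galileo) fits a simple linear trend to recent utilization samples and projects when a resource will hit its ceiling, exactly the kind of lurking issue an admin wants to catch early:

```python
# Hypothetical sketch: project when a resource will hit a utilization ceiling
# by fitting a least-squares line through recent daily samples. The resource,
# samples, and threshold are illustrative, not taken from any product.
from statistics import mean

def days_until_threshold(daily_utilization, threshold_pct=90.0):
    """Return an estimate of the days remaining before utilization crosses
    the threshold, or None if the trend is flat or falling."""
    n = len(daily_utilization)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(daily_utilization)
    slope_num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, daily_utilization))
    slope_den = sum((x - x_bar) ** 2 for x in xs)
    slope = slope_num / slope_den            # utilization % gained per day
    if slope <= 0:
        return None
    intercept = y_bar - slope * x_bar
    today = intercept + slope * (n - 1)      # trend value for the latest day
    if today >= threshold_pct:
        return 0.0
    return (threshold_pct - today) / slope

if __name__ == "__main__":
    # Two weeks of storage-pool utilization creeping up roughly 1% per day.
    pool_utilization = [61, 62, 62, 63, 64, 66, 66, 67, 68, 70, 70, 71, 72, 74]
    remaining = days_until_threshold(pool_utilization)
    print(f"~{remaining:.0f} days until the 90% ceiling at the current growth rate")
```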

Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across so-called IT domains (e.g. servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer with its cloud-hosted collection and analysis helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, easily implements and scales, fosters communication, and focuses on and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.

Publish date: 01/01/15
Profile

What Admins Choose For Performance Management: Galileo’s Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively and many just gather dust for a number of reasons. But, if only because of the ongoing costs of keeping management tools current, it’s only the solutions that get used that are worth having.

When it comes to picking which tool to use day-to-day, it’s not the theory of what it could do, it’s the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the myriad things an admin is responsible for, assuring performance requires the most management solution support. Performance-related tasks include checking on the health of resources that the admin is responsible for, improving utilization, finding lurking or trending issues to attend to in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.

Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across so-called IT domains (e.g. servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer with its cloud-hosted collection and analysis helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, easily implements and scales, fosters communication, and focuses on and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.

Publish date: 10/29/14
Report

Field Report: Nutanix vs. VCE - Web-Scale Vs. Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix. The Taneja Group analyzed the experiences of seven Nutanix Virtual Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership. VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’s advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments.

Publish date: 10/16/14
Technology Validation

Scale Computing HC3 And VMware Virtual SAN Hyperconverged Solutions - Head to Head

Scale Computing was an early proponent of hyperconverged appliances and is one of the innovators in this marketplace. Since the release of Scale Computing’s first hyperconverged appliance, many others have come to embrace the elegance of having storage and compute functionality combined on a single server. Even the virtualization juggernaut VMware has seen the benefits of abstracting, pooling, and running storage and compute on shared commodity hardware. VMware’s current hyperconverged storage initiative, VMware Virtual SAN, seems to be gaining traction in the marketplace. We thought it would be an interesting exercise to compare and contrast Scale Computing’s hyperconverged appliance with a hyperconverged solution built around VMware Virtual SAN. Before we delve into this exercise, however, let’s go over a little background on the topic.

Taneja Group defines hyperconvergence as the integration of multiple previously separate IT domains into one system in order to serve up an entire IT infrastructure from a single device or system. This means that hyperconverged systems contain all IT infrastructure (networking, compute, and storage) while promising to preserve the adaptability of the best traditional IT approaches. Such capability implies an architecture built for seamless and easy scaling over time, in a “grow as needed” fashion.
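
As a purely illustrative sketch of that building-block model (the per-node capacity figures below are hypothetical and not tied to any vendor), aggregate cluster capacity grows simply by adding identical nodes:

```python
# Purely illustrative sketch of the "grow as needed" building-block model:
# each hyperconverged node contributes compute, memory, storage, and networking
# together, so cluster capacity grows by adding nodes. Figures are hypothetical.
from dataclasses import dataclass

@dataclass
class HCNode:
    cores: int = 16          # compute per appliance node
    ram_gb: int = 256
    storage_tb: float = 8.0  # pooled, software-defined storage per node

def cluster_capacity(nodes):
    """Aggregate capacity scales linearly as identical nodes are added."""
    return {
        "nodes": len(nodes),
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
    }

if __name__ == "__main__":
    cluster = [HCNode() for _ in range(3)]   # start small
    print("initial:", cluster_capacity(cluster))
    cluster.append(HCNode())                 # grow as needed: add one more node
    print("after adding a node:", cluster_capacity(cluster))
```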

Scale Computing got its start with scale-out storage appliances and has since morphed these into a hyperconverged appliance, HC3. HC3, the natural evolution of the company’s well-regarded line of scale-out storage appliances, includes both a hypervisor and a virtual infrastructure manager. HC3’s strong suit is its ease of use and affordability. The product has seen tremendous growth and now has over 900 deployments.

VMware got its start with compute virtualization software and is by far the largest virtualization company in the world. VMware has always been a software company and takes pride in its hardware agnosticism. VMware’s first attempt to combine shared direct-attached storage (DAS) and compute on the same server resulted in a product called “VMware vSphere Storage Appliance” (VSA), released in June 2011. VSA had many limitations, never gained much traction in the marketplace, and reached its end of availability (EOA) in June 2014. VMware’s second attempt, VMware Virtual SAN (VSAN), announced at VMworld 2013, shows a lot of promise and seems to be gaining acceptance, with over 300 paying customers using the product. We will be comparing VMware Virtual SAN to Scale Computing’s hyperconverged appliance, HC3, in this paper.

Here we have two companies: Scale Computing, which has transformed from an early innovator in scale-out storage into a company that provides a hyperconverged appliance; and VMware, an early innovator in compute virtualization that has since transformed into a company providing the software needed to create build-your-own hyperconverged systems. We looked deeply into both systems (HC3 and VSAN) and walked both through a series of exercises to see how they compare. We aimed this review at what we consider a sweet spot for these products: small to medium-sized enterprises with limited dedicated IT staff and a limited budget. After spending time with these two solutions, and probing various facets of them, we came up with some strong conclusions about their ability to provide an affordable, easy-to-use, scalable solution for this market.

The observations we have made for both products are based on hands-on testing both in our lab and on-site at Scale Computing’s facility in Indianapolis, Indiana. Although we talk about performance in general terms, we do not, and you should not, construe this to be a benchmarking test. We have, in good faith, verified all conclusions made around any timing issues. Moreover, the numbers that we are using are generalities that we believe are widely known and accepted in the virtualization community.

Publish date: 10/01/14