Trusted Business Advisors, Expert Technology Analysts

Research Areas

Cloud Management

Includes Cloud Infrastructure Management, encompassing Operations, Automation and Orchestration, and Business/Financial Management; Virtual Infrastructure Management (monitoring, optimization and performance); Virtualized Datacenter Operations and strategies (automation and cloud computing); and Legacy Infrastructure Management.

This practice covers all forms of technologies and capabilities that enable and impact cloud and on-premises infrastructure management, including operational management; automation and orchestration; and business management and cloud costing. The category also includes management of virtual infrastructure and of traditional, non-virtualized on-premises environments. As on-premises infrastructure and applications transition to the cloud, new management challenges arise around workload mobility and migration, security, and availability.

We track and examine these management challenges in the context of hybrid and multi-cloud environments, and identify opportunities for both vendors and end users to optimize their cloud management platforms and approaches.

Free Reports

IT Cloud Management Market Landscape - Executive Summary

In this report, Taneja Group presents an evaluation of the current IT Cloud Management market landscape for enterprise customers. We view this landscape as an evolution of IT operations management, grown up into the cloud era. In addition to increasingly smart and capable operational monitoring and systems management, good cloud management requires sophisticated automation and orchestration at scale to support end-user provisioning and agility, plus detailed financial management services that reveal multi-cloud costs for analysis and chargeback or showback.

Our objective is to evaluate cloud management offerings from leading vendors to help senior business and technology leaders decide which vendors offer the best solution. In this study, we evaluated vendors with offerings in one or more of three fundamental areas. Several well-known vendors (VMware, Microsoft, ServiceNow, HPE, IBM and BMC) have solutions in all three areas; other vendors focus on only one or two, and because a broader solution can be composed from parts, we also evaluated popular niche solutions within each area. All companies were required to have solutions generally available as of April 2016.

To fairly assess the offerings, we looked at a set of differentiating factors in each category that we believe enterprise customers should use to qualify cloud management solutions. As a final step, to facilitate optimal enterprise selection, we also evaluated the full-solution vendors at a higher level, considering the additional value derived from integrations across areas and other important enterprise vendor engagement factors.
Within each of the three areas, which we refer to as Cloud Orchestration, Operations Management, and Financial Management, and at the vendor level for full-suite vendors, we applied categories of scoring factors determined by our team of experts based on customer buying criteria, technical innovation, and market drivers. The overall results of the evaluation revealed that VMware has a strong lead in today's competitive cloud management landscape.
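The showback idea mentioned above reduces to aggregating tagged spend across providers so each team sees what it consumed. A minimal sketch, with an entirely hypothetical billing-record schema (real cloud billing exports are far richer):

```python
from collections import defaultdict

# Hypothetical multi-cloud cost records; field names are illustrative,
# not any vendor's actual billing schema.
cost_records = [
    {"provider": "aws",   "team": "web",       "usd": 1200.0},
    {"provider": "aws",   "team": "analytics", "usd": 3400.0},
    {"provider": "azure", "team": "web",       "usd": 800.0},
    {"provider": "azure", "team": "analytics", "usd": 450.0},
]

def showback_by_team(records):
    """Aggregate spend per team across all providers for a showback report."""
    totals = defaultdict(float)
    for r in records:
        totals[r["team"]] += r["usd"]
    return dict(totals)

print(showback_by_team(cost_records))
# {'web': 2000.0, 'analytics': 3850.0}
```

Chargeback differs only in that these totals are actually billed back to the teams rather than merely reported.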

Publish date: 08/26/16
Free Reports

Virtual Instruments WorkloadCentral: Free Cloud-Based Resource for Understanding Workload Behavior

Virtual Instruments, the company created by the combination of the original Virtual Instruments and Load DynamiX, recently made available a free cloud-based service and community called WorkloadCentral. The service is designed to help storage professionals understand workload behavior and improve their knowledge of storage performance. Most users will find valuable insights into storage performance through simple use of this free service. Those who want a deeper understanding of workload behavior over time, need to evaluate different storage products to determine which is right for their specific application environment, or wish to optimize their storage configurations for maximum efficiency can buy the additional Load DynamiX Enterprise products available from the company.
The intent with WorkloadCentral is to create a web-based community that can share information about a variety of application workloads, perform workload analysis and create workload simulations. In an industry where workload sharing has been almost absent, this service will be well received by storage developers and IT users alike.
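At its core, workload analysis of the kind described above means summarizing an I/O trace into characteristics like read/write mix and transfer sizes. A minimal sketch, with an invented trace format (real tools capture far more dimensions, such as randomness and queue depth):

```python
# Each trace entry is (operation, transfer size in bytes); this format
# is hypothetical, purely for illustration.
io_trace = [
    ("read", 4096), ("read", 4096), ("write", 8192),
    ("read", 65536), ("write", 8192), ("read", 4096),
]

def characterize(trace):
    """Summarize an I/O trace into a simple workload profile."""
    reads = [size for op, size in trace if op == "read"]
    writes = [size for op, size in trace if op == "write"]
    return {
        "read_pct": round(100 * len(reads) / len(trace), 1),
        "avg_read_bytes": sum(reads) / len(reads) if reads else 0,
        "avg_write_bytes": sum(writes) / len(writes) if writes else 0,
    }

print(characterize(io_trace))
# {'read_pct': 66.7, 'avg_read_bytes': 19456.0, 'avg_write_bytes': 8192.0}
```

A profile like this is also what makes workload simulation possible: a load generator can replay synthetic I/O that matches the measured mix and sizes against a candidate storage product.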
Read on to understand where WorkloadCentral fits into the overall application and storage performance spectrum...

Publish date: 05/26/16
Profile

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized web 2.0 companies are taking great advantage of foundational big data solutions they themselves created (e.g. Hadoop), most traditional enterprise IT shops are still thinking about how to practically deploy their first business-impacting big data applications, or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may yet be just out of reach for most organizations, especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to optimally manage in production, especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if there weren't a serious high-end skills and expertise gap for most organizations, there still isn't any practical way for additional experts to tweak and tune mixed Hadoop workload environments to meet production performance SLAs.

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to more interactive user queries and "real-time" business process applications. Live performance now matters as much in big data as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.
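The underlying problem of multi-tenant QoS can be sketched simply: when total demand exceeds cluster capacity, throttle the lowest-priority jobs first so SLA-bound workloads keep their share. This is a toy illustration of the general idea only, not a description of Pepperdata's actual mechanism; all names and numbers are invented:

```python
def throttle_decisions(jobs, capacity):
    """Pick which jobs to throttle when a shared cluster is over capacity.

    jobs: list of (name, priority, usage) tuples; higher priority = more
    important. Returns names of jobs to throttle, lowest priority first.
    """
    used = sum(usage for _, _, usage in jobs)
    if used <= capacity:
        return []                      # no contention, nothing to do
    overload = used - capacity
    throttled = []
    # shed load from the lowest-priority jobs until the overload is covered
    for name, _, usage in sorted(jobs, key=lambda j: j[1]):
        if overload <= 0:
            break
        throttled.append(name)
        overload -= usage
    return throttled

jobs = [("etl-batch", 1, 40), ("adhoc-query", 2, 30), ("sla-dashboard", 9, 50)]
print(throttle_decisions(jobs, capacity=100))   # ['etl-batch']
```

A production system must do this continuously and at fine granularity (per task, per node, per resource type), which is precisely what makes the capability hard to bolt on after the fact.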

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what/when for troubleshooting, chargeback and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.


Publish date: 12/17/15
Profile

Converged IT Infrastructure’s Place in the Internet of Things

All of the trends leading toward the world-wide Internet of Things (IoT) – ubiquitous embedded computing, mobile and organically distributed nodes, and far-flung networks tying them together – are also arriving in full force in the IT data center. These solutions are taking the form of converged and hyperconverged modules of IT infrastructure. Organizations adopting such solutions gain a simpler building-block way to architect and deploy IT, and forward-thinking vendors now have a unique opportunity to profit from subscription services that deliver superior customer insight and support while building a trusted-advisor relationship, an ongoing "win-win" for both client and vendor.

There are many direct (e.g. revenue impacting) and indirect (e.g. customer satisfaction) benefits we mention in this report, but the key enabler to this opportunity is in establishing an IoT scale data analysis capability. Specifically, by approaching converged and hyperconverged solutions as an IoT “appliance”, and harvesting low-level component data on utilization, health, configuration, performance, availability, faults, and other end point metrics across the full worldwide customer base deployment of appliances, an IoT vendor can then analyze the resulting stream of data with great profit for both the vendor and each individual client. Top-notch analytics can feed support, drive product management, assure sales/account control, inform marketing, and even provide a revenue opportunity directly (e.g. offering a gold level of service to the end customer). 
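The fleet-wide analytics described above often start with something as simple as flagging appliances whose telemetry deviates sharply from the fleet norm. A minimal sketch using a z-score over one utilization metric; the appliance names, metric, and threshold are all invented for illustration:

```python
import statistics

# Hypothetical per-appliance utilization readings harvested from the
# installed base; one appliance is running unusually hot.
fleet = {
    "appl-001": 0.62, "appl-002": 0.58, "appl-003": 0.61,
    "appl-004": 0.97, "appl-005": 0.60,
}

def outliers(metrics, z_threshold=1.5):
    """Flag appliances whose reading deviates from the fleet mean by
    more than z_threshold population standard deviations."""
    mean = statistics.mean(metrics.values())
    stdev = statistics.pstdev(metrics.values())
    return [k for k, v in metrics.items()
            if stdev and abs(v - mean) / stdev > z_threshold]

print(outliers(fleet))
# ['appl-004']
```

Feeding flags like this into support and account teams is what turns raw machine data into the proactive, "we saw it before you did" engagement the report describes.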

An IoT data stream from a large pool of appliances is almost literally the definition of "big data" – non-stop machine data at large scale with tremendous variety (even within a single converged solution stack) – and operating and maintaining such a big data solution requires a significant amount of data wrangling, data science and ongoing maintenance to stay current. Unfortunately, this means IT vendors looking to position IoT-oriented solutions may have to invest a large amount of cash, staff and resources into building out and supporting such analytics. For many vendors, especially those with a varied or complex convergence solution portfolio, or channel partners building solutions from third-party reference architectures, these big data costs can be prohibitive. However, failing to provide these services may create significant friction in selling and supporting converged solutions to clients who now expect to manage IT infrastructure as appliances.

In this report, we’ll look at the convergence and hyperconvergence appliance trend, and the increasing customer expectations for such solutions. In particular we’ll see how IT appliances in the market need to be treated as complete, commoditized products as ubiquitous and with the same end user expectations as emerging household IoT solutions. In this context, we’ll look at Glassbeam’s unique B2B SaaS SCALAR that converged and hyperconverged IT appliance vendors can immediately adopt to provide an IoT machine data analytic solution. We’ll see how Glassbeam can help differentiate amongst competing solutions, build a trusted client relationship, better manage and support clients, and even provide additional direct revenue opportunities.

Publish date: 08/18/15
Profile

IT Can Now Deliver What Their “Consumers” Want: CTERA 5.0 Enables Enterprise Distributed Data

End-user mobility and the fast growth of data are increasingly pushing file storage and sharing into the cloud. Sharing data through an easy-to-use cloud service keeps globe-hopping users happy and productive, while technologies like cloud storage gateways enable IT to govern on-premises and cloud storage as a single infrastructure. However, the big goals of these two groups often collide like fast ships at ramming speed. What end-users want as consumers is fast, easy, and always-on, while IT requires fully secure, controlled, and ultimately cost-effective. This imbalance creates challenges: concentrate on end-user usability and governance suffers; concentrate on governance and usability diminishes.

How can live files and business data move with users while IT control and governance follow the data wherever it goes? Globally mobile end-users need to efficiently access and share files, while corporate IT needs to govern and secure those files. Both parties look hopefully to the cloud to provide the mobility and scalability necessary for this level of file collaboration and control. They are right about cloud mobility and scalability, but external cloud services cannot, by definition, provide IT-governed data services, and private cloud solutions to date haven't been on par with the publicly available consumer-grade versions. End users today demand that IT provide services like EFSS that are as good as or better than the free services everyone has on their smartphones and laptops; if IT can't, end-users will do an end-run around IT.

The solution isn’t hard to dream up: create a single strategic control point for enterprise-level data mobility and governance that works across enterprise cloud-like services. Sadly, thinking about something does not make it so. Certainly central control is common for specific computing domains. Storage makers create central management consoles for the systems under their control. Virtualization makers create central management for hundreds and thousands of VMs. Backup makers create central control for data replication across remote sites. And the components specific to sharing and protecting file data are common: enterprise file share and sync (EFSS) and cloud gateways. But what has been missing up until now is central control over all those processes in a simple unified manner, especially when supporting a distributed workforce.

CTERA cloud storage gateways have provided file-based protection and mobility since the company stepped onto the cloud-based management scene. Now a major new release further supports and enables integrated end-to-end file and data services. Version 5.0 of CTERA's Enterprise Data Services Platform brings together several key capabilities that help IT easily deliver the distributed services their users want and need while ensuring full governance and control. The platform combines the company's highly available NAS gateway for core and edge services with advanced EFSS and backup, and this seamless integration cements formerly separate solutions into a cohesive CTERA platform. In this report we will examine CTERA 5.0 and its balanced benefits for both user happiness and productivity and IT governance and control.

Publish date: 05/29/15
Free Reports

Free Report: Galileo’s Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively and many just gather dust for a number of reasons. But, if only because of the ongoing costs of keeping management tools current, it’s only the solutions that get used that are worth having.

When it comes to picking which tool to use day-to-day, it's not the theory of what it could do, it's the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the myriad things an admin is responsible for, assuring performance requires the most management-solution support. Performance-related tasks include checking on the health of resources the admin is responsible for, improving utilization, finding lurking or trending issues to attend to in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.
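One of those tasks, spotting a trending issue before it becomes an outage, often amounts to fitting a trend line to utilization samples and extrapolating to exhaustion. A minimal least-squares sketch with invented sample data, illustrating the kind of analysis a performance tool automates:

```python
# Weekly utilization samples (fraction of capacity used); data invented
# for illustration.
weeks = [0, 1, 2, 3, 4, 5]
util  = [0.50, 0.54, 0.57, 0.62, 0.66, 0.70]

def weeks_until_full(x, y, limit=1.0):
    """Fit a least-squares line to (x, y) and estimate when y hits limit.
    Returns None if utilization is flat or shrinking."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    if slope <= 0:
        return None
    intercept = my - slope * mx
    return (limit - intercept) / slope

print(round(weeks_until_full(weeks, util), 1))   # ~12.5 weeks to exhaustion
```

Real tools refine this with seasonality and confidence intervals, but the core capacity-planning question, "when do we run out?", is exactly this extrapolation.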

Admins responsible for infrastructure face huge challenges with these tasks in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across so-called IT domains (e.g. servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detract from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer with its cloud-hosted collection and analysis helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, easily implements and scales, fosters communication, and focuses on and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.

Publish date: 01/01/15