Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Profile

When Comparing Cloud Alternatives, For the Best TCO Leverage VMware Cloud Foundation

In this paper, we examine the relative costs and other advantages of four cloud infrastructure approaches: two based on private or on-premises clouds and two based on public clouds. These public and private approaches can in turn be combined to create a hybrid cloud deployment. The objective is to enable businesses to evaluate which cloud approach makes the most sense for them, based on differences in TCO and other relevant factors.

Public clouds are here to stay, given their large and growing adoption by businesses and consumers alike. Now well over a decade since AWS first launched its infrastructure-as-a-service offerings, public clouds have become a popular deployment choice for both new and legacy business applications. Based on Taneja Group research, nearly every business is now running at least some of its use cases and applications in one or more public clouds. Clouds offer customers greater agility and near-infinite scalability, in addition to a flexible pay-as-you-go consumption model.

However, a large majority of businesses have decided they cannot rely on public clouds alone to satisfy their IT needs. Instead, they see hybrid clouds as a better architectural choice, enabling them to realize all the advantages of a public cloud along with broader use case support and a more flexible deployment model. More than two-thirds of IT professionals who participated in two recent Taneja Group research studies favor hybrid clouds as their long-term architecture.

For the on-premises or private cloud component of a hybrid cloud, the majority of users start with VMware technology and typically follow one of two approaches: a traditional, integrated 3-tier architecture commonly called Converged Infrastructure (CI), or a fully software-defined approach based on Hyperconverged Infrastructure (HCI). The 3-tier CI approach relies on loosely integrated compute, storage and networking resources, while the easiest and most comprehensive software-defined approach is based on VMware Cloud Foundation, a software-defined data center platform. Our analysis demonstrates that the VMware Cloud Foundation approach provides a simpler, more cost-effective way to build on-premises or private cloud infrastructure.

Looking to the public cloud, businesses can choose to move all or just a subset of their on-premises workloads to the public cloud, and either run them there permanently or in hybrid fashion. We have analyzed the relative costs and advantages of two major ways to migrate and run workloads in the public cloud: moving on-premises workloads to a native public cloud infrastructure, such as native Amazon Web Services, Microsoft Azure or Google Cloud Platform; or moving them to a VMware Cloud Foundation-based public cloud, such as VMware Cloud on AWS or VMware Cloud Foundation offered as a service by one of the VMware Cloud Provider Program (VCPP) partners. As we'll see, moving to a native public cloud infrastructure often requires significant upfront refactoring and migration effort, which gives the path to a VMware Cloud Foundation-based public cloud a major cost advantage.

Based on our in-depth cost and qualitative analysis of the two private and two public cloud approaches, we found that clouds based on VMware Cloud Foundation technology offer the lowest TCO over a three-year period. VMware Cloud Foundation-based clouds minimize risk by starting with proven and widely deployed VMware technology on premises, and they enable full application compatibility and workload portability between your on-premises environment and your choice of one or more VMware-compatible public clouds. VMware Cloud Foundation-enabled clouds will help you optimize your path to a hybrid cloud deployment.

Publish date: 05/21/19
Profile

HPE 3PAR Performance Insights: Bringing InfoSight Analytics to the Edge

In an era in which every tech company claims to have an AI offering, HPE InfoSight stands out as the genuine article. HPE InfoSight is a best-in-class AI solution that uses cloud-based machine learning to provide global insights into the status and health of infrastructure, removing much of the management burden and helping customers to solve some of their most challenging IT problems. In particular, HPE InfoSight delivers cross-stack insights into a storage array's health, configuration, capacity and performance based on near-real-time analytics and the knowledge gained from a vast treasure trove of field data collected over many years.

Among a seemingly endless set of over-hyped AI solutions, HPE InfoSight is delivering remarkable results and significantly enriching the customer experience. The solution has reduced support incidents across more than 50,000 connected HPE 3PAR storage systems by 85%, while lowering operating expenses by nearly 80%. With its unmatched track record, HPE InfoSight has become the leader in AI-driven operations for the Hybrid Cloud and an essential asset for HPE 3PAR customers.

Now InfoSight technology is being incorporated into a new solution at the edge that enables HPE 3PAR customers to better understand, anticipate and improve array performance. As we'll see, HPE Performance Insights for 3PAR Storage takes an innovative approach to helping IT managers track storage performance and deliver it when and where it's needed most, based on the power and intelligence of InfoSight's AI and machine learning technologies.

Publish date: 11/30/18
Free Reports / Profile

HPE InfoSight: Cross-stack Analytics

Accurate and action-oriented predictive analytics have long been the Holy Grail of IT management. Predictive analytics that bring together large amounts of real-time data with powerful analytical capabilities have the potential to provide IT managers with real-time, data-driven insights into the health and performance of their overall environment, enabling them to anticipate and remediate looming issues and optimize resource utilization. While these potential benefits have long been understood, it has only been recently that major innovations in cloud, Internet of Things (IoT), data science, and AI/machine learning have paved the way for predictive analytics to become a reality in the data center.

The IoT now enables companies to collect and monitor real-time sensory or operational data at the edge—whether in online financial systems, retail locations, or on the factory floor. This raw data is typically streamed to the cloud, where it can be tabulated and analyzed. Powerful advances in edge-to-cloud networks and global learning capabilities make the cloud an optimal location for the analytics to take place. Informed by data science and increasingly driven by AI and machine learning technologies, these analytics can help IT managers to monitor key system metrics and understand how well specific infrastructure elements—such as servers or storage—are performing.


But analytics that are focused on a single infrastructure element at a time can only go so far. Sure, it is helpful to monitor the health and performance of specific IT resources, such as CPU heartbeat or storage latency, but infrastructure resources do not operate independently or in isolation. Analytics must go beyond one dimension, and take into account how resources such as servers and storage interact with and depend on one another. This is especially critical in virtualized infrastructures, in which the interaction of virtual machines with hosts, networks and storage makes IT management even more challenging. Ideally, using the power of AI, analytics can cross these various layers of the IT stack to reveal the impact of resource interactions and interdependencies among all the layers. This would take analytics to a whole new level, transcending the limits of human intelligence to enable dynamic, multi-dimensional analysis of complex, virtualized IT environments.

Think about the implications of AI-driven, cross-stack analytics for IT management. For example, such a capability has the potential to transform technical support from a reactive, always-playing-catch-up function to a proactive and forward-looking capability. In this scenario, built-in analytics are capable of connecting the dots between infrastructure layers to automatically anticipate, diagnose, and fix technical issues before they become major problems. Cross-layer analytics might also help to improve system performance by predicting looming configuration issues and recommending changes to address them.


One product—HPE InfoSight—is already embracing these possibilities, fast-forwarding to bring AI-driven, cross-layer analytics to virtualized environments today. HPE InfoSight has proven its value by delivering predictive storage analytics to customers for many years, and it is now extending its capabilities across the infrastructure stack. In this piece, we'll explore the key characteristics customers should look for in an analytics solution for virtual infrastructure, then examine the HPE InfoSight architecture and capabilities and how they are helping customers transform IT management in virtualized environments today. Specifically, we will show how one customer uses cross-stack analytics delivered by HPE InfoSight to save tremendous time and money in their HPE 3PAR Storage environment.

Publish date: 06/28/18
Profile

VMware Cloud on AWS: A new approach to Public Cloud offers more value than Azure alternatives

There is no mistaking that cloud adoption is growing at a phenomenal rate. Infrastructure spending on the public and private cloud is growing at double-digit rates, while spending on traditional, non-cloud IT infrastructure continues to decline and within a few short years will represent less than 50% of the entire infrastructure market. On-premises cloud vendors have been innovating furiously over the past several years to simplify IT using software-defined infrastructure, in an effort to give on-premises solutions the agility and simplicity to compete effectively with the scale of the public cloud vendors. We are rapidly approaching a time when we will find an equilibrium point between infrastructure that belongs on-premises and infrastructure that belongs in the public cloud.


To gather data and develop insights regarding plans for public and hybrid cloud use, Taneja Group conducted two primary research studies in the summer of 2017. In each case, we surveyed 350+ IT decision makers and practitioners around the globe, representing a wide range of industries and business sizes, to understand their current and planned use cases and deployments of applications to the public cloud. We found that more than two-thirds of IT practitioners plan to use hybrid clouds as their long-term infrastructure choice, while 16% prefer on-premises clouds only and the remaining 16% want their infrastructure exclusively in the public cloud. Unfortunately, however, we learned that today's hybrid clouds are not delivering on the attributes that are most important to IT buyers, such as end-to-end security, quality of service, and workload mobility, while maintaining IT control.


What if there were a vendor that could overcome all the current hybrid cloud deficiencies and also provide public-cloud infrastructure that is arguably more efficient than leading public cloud alternatives? That would be what we call "having your cake and eating it too." Enter VMware Cloud on AWS. VMware Cloud on AWS has been built on VMware's Cloud Foundation software and can be deployed as a service on AWS with a simple mouse click. The difference now is that the hundreds of thousands of VMware customers who have come to rely on VMware as their key enterprise virtualization provider can instantly get a fully functional hybrid cloud with all the security, control, and features they depend on in their on-premises VMware environments. Customers will also enjoy seamless workload migration from private to public clouds, advanced disaster recovery capability, and—by being on the AWS public cloud—safe and secure access to additional AWS services.

So, what about total solution cost? Can VMware make this cloud service as cost-effective as spinning up IaaS on Microsoft Azure, or as a hybrid cloud consisting of Azure in the public cloud and Azure Stack on-premises? The simple answer is yes, through transparency and efficiency. Transparency, in that when you provision VMware Cloud on AWS, you know exactly what you're getting physically, including the type of server, amount of storage, and so on. The dirty little secret of public cloud instances is that you don't know what the infrastructure is under the covers, and if you provision a vCPU with a certain amount of memory and storage, you pay for that instance no matter how much you use it. With transparency comes the opportunity for efficiency. VMware has long been known for efficiency in operation and provisioning. By combining greater efficiency with infrastructure transparency, VMware can offer customers a solution that is more cost-effective than public cloud alternatives.
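To make the provisioned-versus-used point concrete, here is a minimal arithmetic sketch. All the numbers (hourly rate, utilization) are hypothetical round figures for illustration, not actual cloud pricing, which varies by instance type and region:

```python
# Hypothetical illustration: you are billed for what you provision,
# not for what you use, so low utilization inflates the effective rate.
# All figures below are assumptions for illustration, not vendor pricing.

hourly_rate = 0.50        # assumed $/hour for one provisioned instance
hours_in_month = 24 * 30  # simple 30-day month
utilization = 0.30        # fraction of the instance actually used

monthly_bill = hourly_rate * hours_in_month            # billed on provisioned capacity
effective_rate = monthly_bill / (hours_in_month * utilization)

print(f"Monthly bill: ${monthly_bill:.2f}")            # $360.00 either way
print(f"Effective cost per useful hour: ${effective_rate:.2f}")
```

At 30% utilization, the effective cost per hour of useful work is more than three times the sticker rate, which is why operational efficiency matters as much as the list price.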

Publish date: 12/31/17
Profile

Enterprise Cloud Platform Ideal for Database Apps: Nutanix Hosting Oracle Penetrates Tier 1

Creating an Enterprise Cloud with HyperConverged Infrastructure (HCI) is making terrific sense (and "cents") for a wide range of corporations tired of integrating and managing complex stacks of IT infrastructure. Replacing siloed infrastructure and going far beyond simple pre-converged racks of traditional hardware, HCI greatly simplifies IT, frees valuable staff from integrating and babysitting heterogeneous solutions so they can focus on adding value to the business, and can vastly improve quality of service across the board. Today, we find HCI solutions being deployed as an Enterprise Cloud platform in corporate data centers, even for mission-critical tier-1 database workloads.

However, like public clouds and server virtualization before it, HCI has had to grow and mature. Initially, HCI solutions had to prove themselves in small and medium-size organizations – and on rank-and-file applications. Now, five-plus years of evolution by vendors like Nutanix have matured HCI into a full tier-1 enterprise application platform that presents the best features of public clouds, including ease of management, modular scalability and agile user provisioning. Perhaps the best example of an enterprise mission-critical workload is a business application layered on Oracle Database, and as we'll see in this report, Nutanix now makes an ideal platform for enterprise-grade databases and database-powered applications.

In fact, we find that Nutanix's mature platform can not only host a complete tier-1 application stack (including the database), thanks to its inherently mixed-workload design, but also offers significant advantages because the whole application stack is "convergently" hosted. The resulting opportunity for both IT and the business user is striking. Those feeling tied down to legacy architectures, and those previously interested in the benefits of plain Converged Infrastructure, will now want to evaluate how mature HCI can take them farther, faster.

In the full report, we explore in detail how Nutanix supports and accelerates serious Oracle database-driven applications (e.g., ERP, CRM) at the heart of most businesses and production data centers. In this summary, we review how the Nutanix Enterprise Cloud Platform is also an ideal enterprise data center platform for the whole application stack, consolidating many if not most workloads in the data center.

Publish date: 06/30/17
Profile

The Best All-Flash Array for SAP HANA

These days the world operates in real-time, all the time. Whether making airline reservations or getting the best deal from an online retailer, data is expected to be up to date, with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24x7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology and allows the processing of massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM as opposed to reading it from disk. At the heart of SAP HANA is a database that handles both OLAP and OLTP workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance. Recently, SAP expanded support to best-in-class components through its SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements for HANA TDI environments and evaluated storage alternatives, including the HPE 3PAR StoreServ All Flash. We make a strong case for why all-flash arrays like the HPE 3PAR version are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM is volatile and loses its contents when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance attributes of your shared storage dictate how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes. Greater scalability means more real-time information can be processed. SAP HANA's shared storage workload is write-intensive, demanding low latency for small files and high sequential throughput for large files. However, the overall storage capacity required is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically, you would need 10X the equivalent spinning media drives just to meet the performance requirements, which then leaves you with a massive amount of capacity that cannot be used for other purposes.
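As a back-of-the-envelope illustration of that performance-versus-capacity trade-off, the sketch below sizes a drive pool for a target IOPS level and then looks at how much raw capacity comes along for the ride. All the figures (target IOPS, per-drive IOPS, per-drive capacity) are assumed round numbers for illustration, not measured drive or vendor specifications, so the exact multiple will differ in practice:

```python
# Sketch: size a drive pool for a target IOPS level, then compare the
# raw capacity that ships along with those drives. All figures are
# assumed round numbers for illustration, not vendor specifications.

def drives_for_iops(target_iops, iops_per_drive):
    """Smallest whole number of drives that meets the IOPS target."""
    return -(-target_iops // iops_per_drive)  # ceiling division

TARGET_IOPS = 200_000              # assumed write-intensive HANA workload
HDD_IOPS, HDD_TB = 200, 1.8        # assumed 10K RPM spinning drive
SSD_IOPS, SSD_TB = 20_000, 1.92    # assumed enterprise SSD

hdd_count = drives_for_iops(TARGET_IOPS, HDD_IOPS)
ssd_count = drives_for_iops(TARGET_IOPS, SSD_IOPS)

print(f"HDD pool: {hdd_count} drives, {hdd_count * HDD_TB:,.0f} TB raw")
print(f"SSD pool: {ssd_count} drives, {ssd_count * SSD_TB:,.1f} TB raw")
```

With these assumed numbers, the spinning-disk pool meets the IOPS target only by deploying far more drives, stranding hundreds of terabytes of capacity the workload cannot use, which is the core economic argument for all-flash in performance-bound, capacity-modest workloads like HANA.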

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 

Publish date: 06/07/17