Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Free Reports / Profile

HPE InfoSight: Cross-stack Analytics

Accurate and action-oriented predictive analytics have long been the Holy Grail of IT management. Predictive analytics that bring together large amounts of real-time data with powerful analytical capabilities have the potential to provide IT managers with real-time, data-driven insights into the health and performance of their overall environment, enabling them to anticipate and remediate looming issues and optimize resource utilization. While these potential benefits have long been understood, it has only been recently that major innovations in cloud, Internet of Things (IoT), data science, and AI/machine learning have paved the way for predictive analytics to become a reality in the data center.

The IoT now enables companies to collect and monitor real-time sensory or operational data at the edge—whether in online financial systems, retail locations, or on the factory floor. This raw data is typically streamed to the cloud, where it can be tabulated and analyzed. Powerful advances in edge-to-cloud networks and global learning capabilities make the cloud an optimal location for the analytics to take place. Informed by data science and increasingly driven by AI and machine learning technologies, these analytics can help IT managers to monitor key system metrics and understand how well specific infrastructure elements—such as servers or storage—are performing.


But analytics that are focused on a single infrastructure element at a time can only go so far. Sure, it is helpful to monitor the health and performance of specific IT resources, such as CPU heartbeat or storage latency, but infrastructure resources do not operate independently or in isolation. Analytics must go beyond one dimension, and take into account how resources such as servers and storage interact with and depend on one another. This is especially critical in virtualized infrastructures, in which the interaction of virtual machines with hosts, networks and storage makes IT management even more challenging. Ideally, using the power of AI, analytics can cross these various layers of the IT stack to reveal the impact of resource interactions and interdependencies among all the layers. This would take analytics to a whole new level, transcending the limits of human intelligence to enable dynamic, multi-dimensional analysis of complex, virtualized IT environments.
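
To make the cross-stack idea concrete, here is a minimal, hypothetical sketch in Python (in no way a description of HPE InfoSight’s actual implementation) that correlates a VM’s observed I/O latency with metrics from other layers of the stack to suggest which layer to investigate first. All metric names and values are illustrative assumptions.

# Hypothetical illustration only -- not HPE InfoSight code.
# Correlate observed VM I/O latency with metrics from other stack layers
# (host CPU, network, storage) to suggest which layer to investigate first.
from statistics import correlation  # requires Python 3.10+

# Simulated, time-aligned samples (one value per 5-minute interval).
vm_latency_ms   = [2.1, 2.3, 2.2, 6.8, 7.4, 7.1, 2.4, 2.2]
host_cpu_pct    = [35, 38, 36, 37, 39, 36, 35, 34]
net_util_pct    = [20, 22, 21, 24, 23, 22, 21, 20]
storage_q_depth = [4, 5, 4, 22, 25, 24, 5, 4]

layers = {
    "host CPU": host_cpu_pct,
    "network": net_util_pct,
    "storage": storage_q_depth,
}

# Rank layers by how strongly their metric tracks the VM's latency.
ranked = sorted(
    ((name, correlation(vm_latency_ms, series)) for name, series in layers.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)

for name, corr in ranked:
    print(f"{name:10s} correlation with VM latency: {corr:+.2f}")
# The layer whose metric tracks the latency most closely (here, storage)
# is the first place a cross-stack tool would point an administrator.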

Think about the implications of AI-driven, cross-stack analytics for IT management. For example, such a capability has the potential to transform technical support from a reactive, always-playing-catch-up function to a proactive and forward-looking capability. In this scenario, built-in analytics are capable of connecting the dots between infrastructure layers to automatically anticipate, diagnose, and fix technical issues before they become major problems. Cross-layer analytics might also help to improve system performance by predicting looming configuration issues and recommending changes to address them.


One product—HPE InfoSight—is already embracing these possibilities, fast-forwarding to bring AI-driven, cross-layer analytics to virtualized environments today. HPE InfoSight has proven its value in delivering predictive storage analytics to customers for many years and is now extending its capabilities across the infrastructure stack. In this piece we’ll explore the key characteristics customers should look for in an analytics solution for virtual infrastructure, examine the HPE InfoSight architecture and its capabilities, and show how they are helping customers transform IT management in virtualized environments today. Specifically, we will demonstrate how one customer uses cross-stack analytics delivered by HPE InfoSight to save tremendous time and money in their HPE 3PAR Storage environment.

Publish date: 06/28/18
Profile

VMware Cloud on AWS: A new approach to Public Cloud offers more value than Azure alternatives

There is no mistaking that cloud adoption is growing at a phenomenal rate. Infrastructure spending on the public and private cloud is growing at double-digit rates while spending on traditional, non-cloud, IT infrastructure continues to decline and within a few short years will represent less than 50% of the entire infrastructure market. On-premises cloud vendors have been innovating furiously over the past several years to simplify IT using software-defined infrastructure, in an effort to give on-premises solutions the agility and simplicity to compete effectively with the scale of the public cloud vendors. We are rapidly approaching a time where we will find an equilibrium point between infrastructure that belongs on-premises versus infrastructure that belongs in the public cloud.


To gather data and develop insights regarding plans for public and hybrid cloud use, Taneja Group conducted two primary research studies in the summer of 2017. In each case, we surveyed 350+ IT decision makers and practitioners around the globe, representing a wide range of industries and business sizes, to understand their current and planned use cases and deployments of applications to the public cloud. What we found is that more than two-thirds of IT practitioners plan on using hybrid clouds as their long-term infrastructure choice, while 16% prefer on-premises clouds only and the remaining 16% want their infrastructure exclusively in the public cloud. Unfortunately, we also learned that today’s hybrid clouds are not delivering on the attributes that are most important to IT buyers, such as end-to-end security, quality of service, and workload mobility, while maintaining IT control.


What if there were a vendor that could overcome all the current hybrid cloud deficiencies and also provide public-cloud infrastructure that is arguably more efficient than leading public cloud alternatives? That would be what we call “having your cake and eating it too.” Enter VMware Cloud on AWS. VMware Cloud on AWS is built on VMware’s Cloud Foundation software and can be deployed as a service on AWS with a simple mouse click. The difference now is that the hundreds of thousands of VMware customers that have come to rely on VMware as their key enterprise virtualization provider can instantly get a fully functional hybrid cloud with all the security, control, and features they depend on in their on-premises VMware environments. Also, customers will enjoy seamless workload migration from private to public clouds, advanced disaster recovery capability, and—by being on the AWS public cloud—safe and secure access to additional AWS services.

So, what about total solution cost? Can VMware make this cloud service as cost-effective as spinning up IaaS on Microsoft Azure, or as a hybrid cloud consisting of Azure in the public cloud and Azure Stack on-premises? The simple answer is yes, through transparency and efficiency. Transparency, in that when you provision VMware Cloud on AWS, you actually know what you are getting physically, including the type of server, the amount of storage, and so on. The dirty little secret of public cloud instances is that you don’t know what the infrastructure is under the covers. And if you provision a vCPU with a certain amount of memory and storage, you are going to pay for that instance no matter how much you use it. With transparency comes the opportunity for efficiency. VMware has long been known for efficiency in operation and provisioning. By combining greater efficiency with infrastructure transparency, VMware can offer customers a solution that is more cost-effective than public cloud alternatives.
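
As a rough illustration of the utilization point above, the following sketch (with entirely made-up prices and utilization figures, not published VMware or Azure pricing) computes the effective cost per consumed vCPU-hour when an instance is billed for provisioned capacity regardless of how much of it is actually used.

# Hypothetical numbers for illustration only -- not real VMware or Azure pricing.
def effective_cost_per_used_vcpu_hour(hourly_rate, vcpus, avg_utilization):
    """Cost of each vCPU-hour actually consumed when billing is per provisioned instance."""
    used_vcpu_hours = vcpus * avg_utilization
    return hourly_rate / used_vcpu_hours

# An instance billed at $0.80/hour with 8 provisioned vCPUs (assumed figures).
rate, vcpus = 0.80, 8
for utilization in (0.25, 0.50, 0.90):
    cost = effective_cost_per_used_vcpu_hour(rate, vcpus, utilization)
    print(f"{utilization:.0%} average utilization -> ${cost:.3f} per used vCPU-hour")
# The same instance costs roughly 3.6x more per unit of useful work at 25%
# utilization than at 90%, which is why provisioning efficiency matters.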

Publish date: 12/31/17
Profile

Enterprise Cloud Platform Ideal for Database Apps: Nutanix Hosting Oracle Penetrates Tier 1

Creating an Enterprise Cloud with HyperConverged Infrastructure (HCI) is making terrific sense (and “cents”) for a wide range of corporations tired of integrating and managing complex stacks of IT infrastructure. Replacing siloed infrastructure and going far beyond simple pre-converged racks of traditional hardware, HCI greatly simplifies IT, frees up valuable staff from integration and babysitting heterogeneous solutions to better focus on adding value to the business, and can vastly improve “qualities of service” in all directions. Today, we find HCI solutions being deployed as an Enterprise Cloud platform in corporate data centers even for mission-critical tier-1 database workloads.

However, like public clouds and server virtualization before it, HCI has had to grow and mature. Initially, HCI solutions had to prove themselves in small and midsize organizations – and on rank-and-file applications. Now, five-plus years of evolution by vendors like Nutanix have matured HCI into a full tier-1 enterprise application platform presenting the best features of public clouds, including ease of management, modular scalability, and agile user provisioning. Perhaps the best example of an enterprise mission-critical workload is a business application layered on Oracle Database, and as we’ll see in this report, Nutanix now makes an ideal platform for enterprise-grade databases and database-powered applications.

In fact, we find that Nutanix’s mature platform not only can, by its natural mixed-workload design, host a complete tier-1 application stack (including the database), but also offers significant advantages because the whole application stack is “convergently” hosted. The resulting opportunity for both IT and the business user is striking. Those feeling tied down to legacy architectures, and those previously interested in the benefits of plain converged infrastructure, will want to evaluate how mature HCI can now take them farther, faster.

In the full report, we explore in detail how Nutanix supports and accelerates serious Oracle database-driven applications (e.g. ERP, CRM) at the heart of most businesses and production data centers. In this summary, we will review how Nutanix Enterprise Cloud Platform is also an ideal enterprise data center platform for the whole application stack— consolidating many if not most workloads in the data center.

Publish date: 06/30/17
Profile

The Best All-Flash Array for SAP HANA

These days the world operates in real-time all the time. Whether making airline reservations or getting the best deal from an online retailer, data is expected to be up to date with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24X7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology and allows the processing of massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM as opposed to reading it from disk. At the heart of SAP HANA is a database that operates on both OLAP and OLTP workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance. Recently, SAP has expanded support to best-in-class components through its SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements for HANA TDI environments and evaluated storage alternatives, including the HPE 3PAR StoreServ All Flash. We will make a strong case as to why all-flash arrays like the HPE 3PAR version are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM loses its mind when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance attributes of your shared storage dictate how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes. Greater scalability means more real-time information can be processed. SAP HANA workloads place write-intensive, low-latency demands on shared storage for small files, and sequential throughput demands for large files. However, the overall storage capacity required is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically you would need roughly 10x as many spinning-media drives just to meet the performance requirements, which then leaves you with a massive amount of capacity that cannot be used for other purposes.
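
To illustrate the performance-versus-capacity trade-off described above, here is a simple sizing sketch. The per-drive IOPS and capacity figures are assumptions for illustration only; real HANA TDI sizing is workload-specific.

# Hypothetical sizing arithmetic only -- real HANA TDI sizing depends on the workload.
import math

required_iops     = 80_000   # assumed write-heavy, small-block requirement
required_capacity = 20       # TB usable (assumed)

hdd_iops, hdd_tb = 200, 1.8     # typical 10K SAS spindle (assumed figures)
ssd_iops, ssd_tb = 20_000, 3.8  # typical enterprise SSD (assumed figures)

def drives_needed(iops_per_drive, tb_per_drive):
    # Size by whichever constraint (performance or capacity) needs more drives.
    return max(math.ceil(required_iops / iops_per_drive),
               math.ceil(required_capacity / tb_per_drive))

hdds = drives_needed(hdd_iops, hdd_tb)
ssds = drives_needed(ssd_iops, ssd_tb)
print(f"HDDs needed: {hdds}  (performance-bound, {hdds * hdd_tb:.0f} TB of mostly stranded capacity)")
print(f"SSDs needed: {ssds}  (performance and capacity roughly balanced)")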

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 

Publish date: 06/07/17
Profile

Optimizing VM Storage Performance & Capacity - Tintri Customers Leverage New Predictive Analytics

Today we are seeing big impacts on storage from the huge increase in the scale of an organization’s important data (e.g. Big Data, Internet of Things) and the growing size of virtualization clusters (e.g. ever-growing numbers of VMs, VDI, cloud-building). In addition, virtualization adoption tends to turn IT admins into generalists. IT groups are focusing more on servicing users and applications and no longer want to be just managing infrastructure for infrastructure’s sake. Everything IT does is becoming interpreted, analyzed, and managed in application and business terms, including storage, to optimize the return on the total IT investment. To move forward, an organization’s storage infrastructure not only needs to grow internally smarter, it also needs to become both VM and application aware.

While server virtualization made a lot of things better for the over-taxed IT shop, delivering quality storage services in hypervisor infrastructures with traditional storage created difficult challenges. In response, Tintri pioneered per-VM storage infrastructure. The Tintri VMstore has eliminated multiple points of storage friction and pain. In fact, it is now becoming a mandatory checkbox across the storage market for arrays to claim some kind of VM-centricity. Unfortunately, traditional arrays are mainly focused on checking off rudimentary support for external hypervisor APIs that only serve to re-package the same old storage. The best fit for today’s (and tomorrow’s) virtual storage requirements will only come from the fully engineered, VM-centric and application-aware approach that Tintri has taken.

However, it’s not enough to simply drop in storage that automatically drives best-practice policies and handles today’s needs. We all know change is constant, and key to preparing for both growth and change is having a detailed, properly focused view of today’s large-scale environments, along with smart planning tools that help IT both optimize current resources and make the best IT investment decisions going forward. To meet those larger needs, Tintri has rolled out Tintri Analytics, a SaaS-based offering that applies big data analytical power to the large scale of its customers’ VMstore VM-aware metrics.

In this report we will look briefly at Tintri’s overall “per-VM” storage approach and then take a deeper look at their new Tintri Analytics offering. The new Tintri Analytics management service further optimizes their app-aware VM storage with advanced VM-centric performance and capacity management. With this new service, Tintri is helping customers gain greater visibility, insight, and analysis across large, cloud-scale virtual operations. We’ll see how “big data” enhanced intelligence provides significant value and differentiation, and get a glimpse of the payback that a predictive approach provides both the virtual admin and application owners.
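
As a generic illustration of the kind of predictive, per-VM capacity analysis described above (not Tintri’s actual algorithms), the sketch below fits a simple linear trend to a hypothetical VM’s capacity history and projects when it would cross a threshold. All data points and the threshold are assumptions.

# Generic illustration of trend-based capacity forecasting -- not Tintri Analytics code.
from statistics import linear_regression  # requires Python 3.10+

# Weekly used-capacity samples for one VM, in GB (hypothetical data).
weeks    = list(range(8))
used_gb  = [120, 128, 135, 144, 151, 160, 168, 176]
limit_gb = 250  # capacity threshold for this VM's datastore (assumed)

slope, intercept = linear_regression(weeks, used_gb)

# Project forward: weeks from now until the fitted trend crosses the threshold.
weeks_to_full = (limit_gb - intercept) / slope - weeks[-1]
print(f"Growth rate: {slope:.1f} GB/week")
print(f"Projected weeks until the {limit_gb} GB threshold: {weeks_to_full:.0f}")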

Publish date: 11/04/16
Profile

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, controller-gated RAID devices, and dedicated storage appliances, simply aren’t designed for today’s enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. This insufficiency is due in large part to inefficiency and limited data protection, as well as the limited scalability and inflexibility of these traditional storage solutions.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high priced proprietary storage hardware, and worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge if IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage solutions and scale-out object storage technology that addresses the shortcomings of traditional backup storage.

Software-defined, scale-out storage is attractive for large-scale data backup because these storage solutions offer linear performance and hardware independence – two core capabilities that drive tremendous scalability and enable cost-effective storage solutions. Add to this the high fault tolerance of object storage platforms, and it’s easy to see why software-defined object storage solutions are rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores the benefits of software-defined, scale-out storage. IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%), and the ability to purchase hardware separately from software (32%).

Going a step further, the Scality backup storage solution, built upon the Scality RING platform, offers the rare combination of scalability, durability, and affordability plus the flexibility to handle mixed workloads at petabyte scale. Scality backup storage achieves this by supporting multiple file and object protocols so companies can back up files, objects, and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.
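
To show why object-store data protection can be both highly fault tolerant and space-efficient, here is a simple comparison of raw-capacity overhead for replication versus erasure coding. The parameters are generic illustrations, not Scality RING’s specific defaults.

# Generic overhead arithmetic -- parameters are illustrative, not Scality RING defaults.
def replication_overhead(copies):
    """Raw capacity consumed per unit of usable data with N full copies."""
    return copies

def erasure_coding_overhead(data_chunks, parity_chunks):
    """Raw capacity per unit of usable data with k data + m parity chunks."""
    return (data_chunks + parity_chunks) / data_chunks

print(f"3-way replication: {replication_overhead(3):.2f}x raw capacity, tolerates 2 lost copies")
print(f"EC 9+3:            {erasure_coding_overhead(9, 3):.2f}x raw capacity, tolerates 3 lost chunks")
# With these example parameters, erasure coding tolerates more simultaneous
# failures while using less than half the raw capacity of triple replication.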

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16