Trusted Business Advisors, Expert Technology Analysts

Research Areas

Data Center Systems

Includes HyperConverged, Converged, Disaggregated, and Legacy Infrastructure.

This category focuses on modern, on-premises infrastructure-based architectural approaches at the datacenter level. All aspects of the necessary infrastructure are included such as network, compute and storage. Taneja Group treats these systems as a complete solution for a particular workload whether it be general-purpose IaaS or vertical solutions targeted at specific use cases such as workload consolidation or applications such as SAP. We regularly compare and contrast the various architectural approaches that IT buyers are considering, evaluate their strengths and weaknesses, and discuss which approaches are likely to work best for specific workloads and use cases. We are always looking for shifts in industry thinking or technology adoption that might lead to an evolution of existing data center architectures, and engage with startup and large vendors alike to understand and characterize newly emerging approaches. Where possible, our reports and opinions are backed by primary research, including direct conversations with different classes of IT decision makers and influencers.

Profile

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in this space, and customers must now sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was used. For many years, Pivot3’s vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations. The first is their extremely efficient and reliable erasure coding technology called Scalar Erasure Coding. In contrast, many leading HCI implementations use replication-based redundancy techniques, which are heavy on storage capacity utilization. Scalar Erasure Coding from Pivot3 can deliver significant capacity savings depending on the level of drive protection selected. The second innovation is Pivot3’s Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node. The third innovation has been a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. Because the HCI software runs on the same CPU complex as the business applications, this additional usage is referred to as the HCI overhead tax. The HCI overhead tax matters because many applications and infrastructure software products are licensed on a per-CPU basis. Even with today’s ever-increasing core counts per CPU, keeping the HCI overhead tax low can still yield significant cost savings.
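To make the capacity argument concrete, the short sketch below compares the raw storage needed to hold the same usable data under replication-based redundancy and under erasure coding. The protection levels shown (2x and 3x replication, an 8+2 erasure-coded stripe) are illustrative assumptions, not Pivot3’s published Scalar Erasure Coding parameters.

```python
# Hypothetical comparison of raw capacity needed for the same usable data
# under replication versus erasure coding. Protection levels are assumptions
# chosen only to illustrate the efficiency gap described above.

def replication_raw_capacity(usable_tb: float, copies: int) -> float:
    """Raw capacity when every block is stored `copies` times."""
    return usable_tb * copies

def erasure_coded_raw_capacity(usable_tb: float, data_frags: int, parity_frags: int) -> float:
    """Raw capacity when each stripe holds `data_frags` data and `parity_frags` parity fragments."""
    return usable_tb * (data_frags + parity_frags) / data_frags

usable = 100.0  # TB of usable capacity the workload actually needs

print(f"2x replication : {replication_raw_capacity(usable, 2):6.1f} TB raw")   # 200.0 TB
print(f"3x replication : {replication_raw_capacity(usable, 3):6.1f} TB raw")   # 300.0 TB
print(f"8+2 erasure    : {erasure_coded_raw_capacity(usable, 8, 2):6.1f} TB raw")  # 125.0 TB
```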

The Pivot3 family of HCI products, delivering high data efficiency with very low overhead, is an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example of this is a VDI implementation where cost per seat determines success. Other examples would be capacity-centric workloads such as big data or video surveillance that could benefit from a Pivot3 HCI approach with leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures. We utilized data extracted from the alternative HCI vendors’ reference architectures for VDI implementations. Using real-world examples, we have demonstrated that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
Free Reports

Multiplying the Value of All Existing IT Solutions

Decades of constantly advancing computing solutions have changed the world in tremendous ways, but interestingly, the IT folks running the show have long been stuck with only piecemeal solutions for managing and optimizing all that blazing computing power. Sometimes it seems like IT is a pit crew servicing a modern racing car with nothing but axes and hammers – highly skilled but hampered by their legacy tools.

While that may be a slight exaggeration, there is a serious lack of interoperability, or opportunity to create joint insight, between the highly varied perspectives that individual IT tools produce (even if each is useful for its own purpose). There simply has never been a widely adopted standard for creating, storing or sharing system management data, much less a cross-vendor way to holistically merge heterogeneously collected or produced management data together – even for the benefit of harried and often frustrated IT owners who might own dozens or more differently sourced system management solutions. That is, until now.

OpsDataStore has brought the IT management game to a new level with an easy-to-deploy, centralized, intelligent – and big-data-enabled – management data “service”. It readily sucks in all the lowest-level, fastest-streaming management data from a plethora of tools (several ready to go at GA, but easily extended to any data source), automatically and intelligently relates data from disparate sources into a single unified “agile” model, directly provides fundamental visualization and analysis, and then can serve that unified and related data back out to enlightened and newly comprehensive downstream management workflows. OpsDataStore drops in and serves as the new systems management “nexus” between formerly disparate vendor and domain management solutions.
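As a rough illustration of what relating data from disparate sources involves, the sketch below normalizes metrics from two hypothetical tools and joins them on a shared entity. The record formats, field names and the VM-to-volume mapping are invented for illustration only; they are not OpsDataStore’s actual schema or API.

```python
# A minimal sketch of the normalize-and-relate step a unified management
# data service performs. All record layouts here are hypothetical.

from collections import defaultdict

# Metrics as two different tools might emit them, keyed differently.
hypervisor_metrics = [
    {"vm": "web-01", "ts": 1700000000, "cpu_pct": 71.0},
    {"vm": "db-01",  "ts": 1700000000, "cpu_pct": 48.5},
]
storage_metrics = [
    {"volume": "vol-web-01", "ts": 1700000000, "latency_ms": 2.1},
    {"volume": "vol-db-01",  "ts": 1700000000, "latency_ms": 9.7},
]

# Relationship data: which VM sits on which volume (normally discovered, assumed here).
vm_for_volume = {"vol-web-01": "web-01", "vol-db-01": "db-01"}

# Merge both feeds into one unified record per (VM, timestamp).
unified = defaultdict(dict)
for rec in hypervisor_metrics:
    unified[(rec["vm"], rec["ts"])]["cpu_pct"] = rec["cpu_pct"]
for rec in storage_metrics:
    vm = vm_for_volume[rec["volume"]]
    unified[(vm, rec["ts"])]["latency_ms"] = rec["latency_ms"]

for (vm, ts), metrics in sorted(unified.items()):
    print(vm, ts, metrics)
```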

If you have ever been in IT, you’ve no doubt written scripts, fiddled with logfiles, created massive spreadsheets, or otherwise attempted to stitch together some larger coherent picture by marrying and merging data from two (or 18) different management data sources. The more sources you have, the more the problem (or opportunity) grows – and it grows non-linearly. OpsDataStore promises to completely fill in this gap, enabling IT to automatically multiply the value of their existing management solutions.

Publish date: 12/03/15
Profile

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were considered the tier-1 apps. Examples of these include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that could handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications? In this paper we will take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up to tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step – a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V and their own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a concept unique among converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
Report

Dell Storage Center Achieves Greater than Five Nines Availability at Mid-Range Cost

Dell Storage SC Series achieved a five nines (5 9s) availability rating years ago. Now the SC Series is displaying 5 9s and greater with technologies that are moving availability even farther up the scale. This is a big achievement based on real, measurable field data: the only numbers that really count.

Not every piece of data requires 5 9s capability. However, critical Tier 1 applications do need it. Outage costs vary by industry but easily total millions of dollars per hour in highly regulated and data-intensive industries. Some of the organizations in these verticals are enterprises, but many more are mid-sized businesses with exceptionally mission-critical data stores.

Consider such applications as e-commerce systems. Online customers are notorious for abandoning shopping carts even when the application is running smoothly. Downing an e-commerce system can easily cost millions of dollars in lost sales over a few days or hours, not to mention a loss of reputation. Other mission-critical applications that must be available include OLTP, CRM or even email systems.

Web applications present another HA mission. SaaS providers with sales support or finance software can hardly afford downtime. Streaming sites with subscribers also lose large amounts of future revenue if they go down. Many customers will ask for refunds or cancel their subscriptions and never return.

However, most highly available 5 9s systems have large purchase prices and high ongoing expenses. Many small enterprises and mid-sized businesses cannot afford these high-priced systems or the staff that goes with them. They know they need availability and try to save money and time by buying cheaper systems with 4 9s availability or lower. Their philosophy is that these systems are good enough. And they are good enough for general storage, but not for data whose unavailability quickly spirals up into the millions of dollars. Buying less than 5 9s in this type of environment is a false economy.
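The arithmetic behind the “false economy” argument is straightforward; the sketch below converts availability ratings into expected downtime per year. The $1M-per-hour outage cost is a placeholder assumption for a data-intensive business, not a figure from the report.

```python
# Convert availability ratings into expected downtime per year.
# The outage cost used at the end is a placeholder assumption.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Expected minutes of downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1.0 - availability)

levels = [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]
for label, availability in levels:
    print(f"{label:>11}: ~{downtime_minutes_per_year(availability):6.1f} min/year down")

# At an assumed outage cost of $1M per hour, the ~47 extra minutes per year
# that four nines allows over five nines is roughly $790K of exposure.
extra_minutes = downtime_minutes_per_year(0.9999) - downtime_minutes_per_year(0.99999)
print(f"4 9s vs 5 9s exposure at $1M/hour: ${extra_minutes / 60 * 1_000_000:,.0f}")
```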

Still, even the risk of sub-par availability doesn’t free up the money that a business needs for high-end availability systems. This is where the story gets very interesting. Dell Storage SC Series offers 5 9s and higher availability – and it does it at a mid-range cost. Dell does not sacrifice high-availability architecture to achieve lower CAPEX and OPEX; it also provides dynamic scalability, management simplicity, redundant storage, space-saving snapshots and automatic tiering. Thanks to the architecture behind Dell Storage SC Series, Dell has achieved a unique position in the high availability stakes.

Publish date: 10/19/15
Technology Validation

Scale Computing HC3: A Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware systems, and enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before - tools like increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. 

For certain, much of this complexity exists between the individual physical infrastructures that IT must touch, and the simultaneous duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

Scale Computing, an early pioneer in HyperConverged solutions, has released multiple versions of HC3 appliances, the latest of which includes the 6th generation of Scale’s HyperCore Operating System. Scale Computing continues to push the boundaries of simplicity, value and availability that SMB IT departments everywhere have come to rely on. HC3 is an integration of storage and virtualized compute within a scale-out building block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system that is designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and handle a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond the features found in traditional integrations of discrete compute and storage systems.

While we did touch upon whether Scale's architecture could scale performance as well as capacity, we focused our testing upon how the seamless integration of storage and compute within HC3 tackles key complexity challenges in the traditional virtual infrastructure.

As it turns out, HC3 is a far different system than the traditional compute and storage systems that we've looked at before. HC3's combination of compute and storage takes place within a scale-out paradigm, where adding more resources is simply a matter of adding additional nodes to a cluster. This immediately brings on more storage and compute resources, and makes adapting and growing the IT infrastructure a no-brainer exercise. On top of this adaptability, virtual machines (VMs) can run on any of the nodes, without any complex external networking. This delivers seamless utilization of all datacenter resources, in a dense and power efficient footprint, while significantly enhancing storage performance.

Meanwhile, within an HC3 cluster, these capabilities are all delivered on top of a uniquely robust system architecture that can tolerate any failure - from a disk to an entire cluster node - and guarantee a level of availability seldom seen by mid-sized customers. Moreover, that uniquely robust, clustered, scale-out architecture can also intermix different generations of nodes in a way that will put an end to painful upgrades by reducing them to simply decommissioning old nodes as new ones are introduced.

HC3 combines flexibility, ease of deployment and robustness with the simplest and easiest-to-use management interface we have seen. This makes HC3 a disruptive game changer for SMB and SME businesses. HC3 stands to banish complex IT infrastructure deployment, permanently alter ongoing operational costs, and take application availability to a new level. With those capabilities in focus, single bottom-line observations don’t do HC3 justice. In our assessment, HC3 may take as little as 1/10th the effort to set up and install as traditional infrastructure, 1/4th the effort to configure and deploy a virtual machine (VM) versus doing so using traditional infrastructure, and can banish the planning, performance troubleshooting, and reconfiguration exercises that can consume as much as 25-50% of an IT administrator’s time. HC3 is about delivering on all of these promises simultaneously, and with the additional features we'll discuss, transforming the way SMB/SME IT is done.

Publish date: 09/30/15
Report

Business Continuity Best Practices for SMB

Virtualization’s biggest driver is big savings: slashing expenditures on servers, licenses, management, and energy. Another major benefit is the increased ease of disaster recovery and business continuity (DR/BC) in virtualized environments.

Note that disaster recovery and business continuity are closely aligned but not identical. We define disaster recovery as the process of restoring lost data, applications and systems following a profound data loss event, such as a natural disaster, a deliberate data breach or employee negligence. Business continuity takes DR a step further. BC’s goal is not only to recover the computing environment but also to recover it swiftly and with zero data loss. This is where recovery point objectives (RPO) and recovery time objectives (RTO) enter the picture, with IT assigning differing RPO and RTO targets according to application priority.
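As a simple illustration of assigning RPO and RTO targets by application priority, the sketch below defines an example recovery policy. The tier names and the specific minute values are assumptions chosen for illustration, not recommendations from the paper.

```python
# Example of mapping application priority tiers to RPO/RTO targets.
# Tier names and values are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class RecoveryObjective:
    rpo_minutes: int  # maximum tolerable data loss, measured back from the failure
    rto_minutes: int  # maximum tolerable time to restore service

recovery_policy = {
    "tier-1 (e-commerce, OLTP)": RecoveryObjective(rpo_minutes=0, rto_minutes=15),
    "tier-2 (email, CRM)":       RecoveryObjective(rpo_minutes=60, rto_minutes=240),
    "tier-3 (file shares)":      RecoveryObjective(rpo_minutes=1440, rto_minutes=1440),
}

for app_class, objective in recovery_policy.items():
    print(f"{app_class}: RPO {objective.rpo_minutes} min, RTO {objective.rto_minutes} min")
```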

DR/BC can be difficult to do well in data centers with traditional physical servers, particularly in SMBs with limited IT budgets and generalist IT staff. Many of these servers are siloed with direct-attached storage and individual data protection processes. Mirroring and replication traditionally required one-to-one hardware correspondence and can be expensive, leading to a near-universal reliance on localized backup for data protection. In addition, small IT staffs do not always take the time to perfect their backup processes across disparate servers. Either they do not do it at all – rolling the dice and hoping there won’t be a disaster – or they slap backups on tape or USB drives and stick them on a shelf.

Virtualization can transform this environment into a much more efficient and protected data center. Backing up VMs from a handful of host servers is faster and less resource-intensive than backing up tens or hundreds of physical servers. And with scheduled replication, companies achieve faster backup and much improved recovery objectives.

However, many SMBs avoid virtualization. They cite factors such as cost, unfamiliarity with hypervisors, and added complexity. And they are not wrong: virtualization can introduce complexity, it can be expensive, and it can require familiarity with hypervisors. Virtualization cuts down on physical servers but is resource-intensive, especially as the virtualized environment grows. This means capital costs for high-performance CPUs and storage. SMBs may also have to deal with VM licensing and management costs, administrative burdens, and the challenge of protecting and replicating virtualized data on a strict budget.

For all its complexity and learning curve, is virtualization worth it for SMBs? Definitely. Its benefits far outweigh its problems, particularly its advantages for DR/BC. But for many SMBs, traditional virtualization is often too expensive and complex to warrant the effort. We believe that the answer is HyperConverged Infrastructure: HCI. Among HCI providers, Scale Computing is exceptionally attractive to SMBs. This paper will explain why.

Publish date: 09/30/15