
Free Reports


HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years, integrated computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and pairing them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being addressed by these systems today include cloud, big data, virtualization, database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with expertise across multiple domains, and integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor offers one integrated system or another focused on solving a particular workload problem. The business benefits promised by these new systems fall into these key areas:

· Implementation efficiency that accelerates the time to realizing value from integrated systems

· Operational efficiency through optimized workload density and a right-sized set of infrastructure

· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together

· Scale and agility efficiency unlocked through a building-block approach that can be deployed repeatedly

· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem is designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP’s expertise in large-scale build-and-integration processes to deliver an entirely new level of agility in ordering and implementation speed. In this profile, we examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and we evaluate HP’s latest advancement of these types of systems.

Publish date: 10/16/14

Executive Summary: VCE and Nutanix in the Real World

Taneja Group prepared a Field Report for Nutanix on the real-world customer experience of seven Nutanix hyperconvergence customers and seven VCE convergence customers. We did not cherry-pick customers for dissatisfaction or delight; we were interested in typical customers’ honest reactions.

The same conclusions kept emerging: VCE users see convergence as a benefit over traditional do-it-yourself infrastructure, but an expensive one. Their concerns include high prices, infrastructure and management complexity, costly support contracts, and doubts about the long-term viability of the partnership between EMC, VMware and Cisco. The Nutanix users reported equally valuable hyperconvergence benefits, and in contrast to VCE they also cited simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion is that VCE convergence is an improvement over traditional architecture, but Nutanix hyperconvergence is an evolutionary improvement over VCE. 

Publish date: 09/29/14

HP StoreVirtual VSA and VMware Virtual SAN - A Closer Look

The age of the software-defined datacenter (SDDC) and converged infrastructure is upon us. The benefits of abstracting, pooling and running compute, storage and networking functions together on shared commodity hardware bring unprecedented agility and flexibility to the datacenter while driving actual costs down. The tectonic shift in the datacenter caused by software-defined storage and networking will prove to be at least as great as, and may prove greater than, the shift to virtualized servers during the last decade. While software-defined networking (SDN) is still in its infancy, software-defined storage (SDS) has been developing for quite some time.

LeftHand Networks (now HP StoreVirtual) released its first iSCSI VSA (virtual storage appliance) in 2007, bringing the advantages of software-based storage to small and midsize company environments. LeftHand Networks’ VSA was a virtual machine that hosted a software implementation of LeftHand’s well-regarded iSCSI hardware storage array. Since that time many other vendors have released VSAs, but none have captured the market share of HP’s StoreVirtual VSA. The release of VMware Virtual SAN (VSAN) in March 2014 could change that, as VSAN, with the backing of the virtualization giant, is poised to be a serious contender in the SDS marketplace. Taneja Group thought it would be interesting to take a closer look at how a mature, well-regarded and widely deployed SDS product such as HP StoreVirtual VSA compares to the newest entry in the SDS market: VMware’s VSAN.

The observations we have made for both products are based on hands-on lab testing, but we do not consider this a Technology Validation exercise because we were not able to conduct an apples-to-apples comparison between the offerings, primarily due to the limited hardware compatibility list (HCL) for VMware VSAN. However, the hands-on testing we were able to conduct gave us a very good understanding of both products. Both products surprised us and, more often than not, did not disappoint. In an ideal world without budgetary constraints, both products may have a place in your datacenter, but they are by no means interchangeable. We found that one of the products would be more useful for a variety of datacenter storage needs, including some tier 1 use cases, while the other is more suited today to supporting the needs of some tier 2 and tier 3 applications.

Publish date: 08/21/14

For Lowest Cost and Greatest Agility, Choose Software-Defined Data Center Architectures

The era of the software-defined data center is upon us. The promise of a software-defined strategy is a virtualized data center created from compute, network and storage building blocks. A Software-Defined Data Center (SDDC) moves the provisioning, management, and other advanced features into the software layer so that the entire system delivers improved agility and greater cost savings. This tectonic shift in the data center is as great as the shift to virtualized servers during the last decade and may prove to be greater in the long run.

This approach to IT infrastructure started over a decade ago, when compute virtualization – through the use of hypervisors – turned compute and server platforms into software objects. The same approach to virtualizing resources is now gaining acceptance in networking and storage architectures. Combined with overarching automation software, it lets a business virtualize and manage an entire data center. Abstracting, pooling and running compute, storage and networking functions virtually on shared hardware brings unprecedented agility and flexibility to the data center while driving costs down.

In this paper, Taneja Group takes an in-depth look at the capital expenditure (CapEx) savings that can be achieved by creating a state-of-the-art SDDC based on currently available technology. We performed a comparative cost study of two different environments: one using the latest software solutions from VMware running on industry-standard and white-label hardware components, and the other running a more typical VMware virtualization environment on mostly traditional, feature-rich hardware components, which we describe as the Hardware-Dependent Data Center (HDDC). The CapEx savings we calculated were based on creating brand-new (greenfield) data centers for each scenario (an additional comparison for upgrading an existing data center is included at the end of this white paper).

Our analysis indicates that dramatic cost savings, up to 49%, can be realized by using today’s SDDC capabilities combined with low-cost white-label hardware, compared to a best-in-class HDDC. In addition, just by adopting VMware Virtual SAN and NSX in their current virtualized environment, users can lower CapEx by 32%. By investing in SDDC technology, businesses can be assured that their data center solution can be more easily upgraded and enhanced over the life of the hardware, providing considerable investment protection. Rapidly improving SDDC software capabilities, combined with declining hardware prices, promise to reduce total costs even further as complex embedded hardware features move into a more agile and flexible software environment.
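To illustrate the arithmetic behind such a comparison, the short sketch below computes percentage CapEx savings from two cost totals. The cost figures in it are hypothetical placeholders, not the bill-of-materials data behind the 49% and 32% results reported above.

```python
# Minimal sketch of the CapEx comparison arithmetic.
# All cost figures are hypothetical placeholders, NOT the study's actual data.

def capex_savings_pct(baseline_cost: float, alternative_cost: float) -> float:
    """Percentage saved by the alternative build relative to the baseline build."""
    return (baseline_cost - alternative_cost) / baseline_cost * 100.0

# Hypothetical example: a traditional hardware-dependent data center (HDDC) build
# versus an SDDC build on white-label hardware.
hddc_cost = 1_000_000   # placeholder: servers + SAN arrays + network gear + licenses
sddc_cost = 550_000     # placeholder: white-label hardware + SDDC software licenses

print(f"SDDC vs. HDDC CapEx savings: {capex_savings_pct(hddc_cost, sddc_cost):.0f}%")
```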

Depending on customers’ needs and the choice of deployment model, an SDDC architecture offers a full spectrum of savings. VMware Virtual SAN is software-defined storage that pools inexpensive hard drives and common solid-state drives installed in the virtualization hosts to lower capital expenses and simplify the overall storage architecture. VMware NSX aims to make the same advances for network virtualization by moving security and network functions into a software layer that can run on top of any physical network equipment. The SDDC approach is to “virtualize everything” and add data center automation, enabling a private cloud with connectors to the public cloud if needed.
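As a concrete, deliberately simplified picture of what pooling host-local drives means, the toy model below aggregates the disks in each virtualization host into one shared pool and then accounts for the mirrored copies kept for resiliency. It is our own illustration under a simple mirroring assumption, not VMware’s sizing methodology.

```python
# Toy capacity model for pooled host-local storage in a hyperconverged cluster.
# Simplified illustration only; not VMware's VSAN sizing tool or methodology.

def usable_pool_tb(hosts: int, disks_per_host: int, disk_tb: float,
                   failures_to_tolerate: int = 1) -> float:
    """Aggregate all host-local disks into one pool, then divide by the number
    of data copies kept for resiliency (mirroring keeps failures_to_tolerate + 1)."""
    raw_tb = hosts * disks_per_host * disk_tb
    copies = failures_to_tolerate + 1
    return raw_tb / copies

# Hypothetical cluster: 4 hosts, each with 5 x 2 TB drives, tolerating one failure.
print(f"Usable capacity: {usable_pool_tb(4, 5, 2.0):.1f} TB out of 40.0 TB raw")
```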

Publish date: 08/19/14

Redefining the Economics of Enterprise Storage

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptional price, which has put enterprise storage out of reach for many use cases and customers.

Most recently, Dell introduced a new small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that delivers proven Dell Compellent technology on an Intel-powered platform in an all-new form factor. The SC4020 is the densest Compellent product yet: an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units of space. While the Intel-powered SC4020 has more modest scalability than current Compellent products, it marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open Dell Compellent storage to an entire market of smaller customers as well as to large-customer use cases where enterprise storage was previously too expensive.

Publish date: 05/05/14

Data Defined Storage: Building on the Benefits of Software Defined Storage

At its core, Software Defined Storage decouples storage management from the physical storage system. In practice, Software Defined Storage vendors implement the concept using a variety of technologies: orchestration layers, virtual appliances and server-side products are all on the market now. These solutions are valuable for storage administrators who struggle to manage multiple storage systems in the data center as well as remote data repositories.

What Software Defined Storage does not do is yield more value from the data under its control or address global information governance requirements. Data Defined Storage addresses that gap: it delivers the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits while driving reduced risk and added business value for distributed unstructured data through Data Defined Storage.

Publish date: 03/17/14