
Research Areas

Technology

Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.

All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press community, and the industry at large to understand.

Report

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People input emails, word processing documents and spreadsheets. They created databases and recorded business transactions. Data was stored on tape, workstation hard drives, and over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes got larger and larger. Video and audio, digital imaging, and websites streaming entertainment content to millions of users meant no end to data growth. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people, but machines. Storing and processing machine data offers tremendous opportunities: seismic and weather sensors that may lead to meaningful disaster warnings, social network diagnostics that surface hard evidence of terrorist activity, connected cars that could slash automotive fatalities, and research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store raw machine data and process it is not for the faint of heart. The best solution today is massively scale-out, general purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, linearly scales performance and capacity, and offers data-awareness and real-time analytics using extended metadata.
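
To make the data-awareness point concrete, below is a minimal, hypothetical Python sketch of how real-time analytics over extended metadata can work: per-directory aggregates (file counts and capacity) are updated as files are ingested, so capacity and growth questions can be answered instantly rather than by walking billions of files. The class and method names are illustrative assumptions of ours, not Qumulo's actual API.

```python
# Hypothetical sketch: real-time aggregates over extended metadata.
# Names and structures are illustrative only, not Qumulo's actual API.
from collections import defaultdict
from pathlib import PurePosixPath

class AggregateIndex:
    """Keeps per-directory rollups (file count, bytes) updated on ingest."""

    def __init__(self):
        self.totals = defaultdict(lambda: {"files": 0, "bytes": 0})

    def add_file(self, path: str, size: int) -> None:
        # Update every ancestor directory so queries need no tree walk.
        for parent in PurePosixPath(path).parents:
            agg = self.totals[str(parent)]
            agg["files"] += 1
            agg["bytes"] += size

    def usage(self, directory: str) -> dict:
        return dict(self.totals[directory])

idx = AggregateIndex()
idx.add_file("/instruments/microscope-7/run-0001/frame-000001.tiff", 48_000_000)
idx.add_file("/instruments/microscope-7/run-0001/frame-000002.tiff", 48_000_000)
print(idx.usage("/instruments/microscope-7"))   # {'files': 2, 'bytes': 96000000}
```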

Very few vendors in the world today offer this type of solution. One of them is Qumulo, whose mission is to provide high-volume storage to business and scientific environments that produce massive amounts of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from life sciences, media and entertainment, telco/cable/satellite, higher education and the automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits to Qumulo: massive scalability, high performance, data-awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
Free Reports

For Lowest TCO and Maximum Agility Choose the VMware Cloud Foundation Hybrid SDDC Platform

The race is on at full speed.  What race?  The race to bring public cloud agility and economics to a data center near you. Ever since the first integrated systems came onto the scene in 2010, vendors have been furiously engineering solutions to make on-premises infrastructure as cost-effective and as easy to use as the public cloud, while also providing the security, availability, and control that enterprises demand. Fundamentally, two main architectures have evolved in the race to modernize data centers and create a foundation for fully private and hybrid clouds. The first approach uses traditional compute, storage, and networking infrastructure components (traditional 3-tier) overlaid with varying degrees of virtualization and management software. The second, more recent, approach is to build a fully virtualized data center using industry-standard servers and networking and then layer on top of that a full suite of software-based compute, network, and storage virtualization with management software. This approach is often termed a Software-Defined Data Center (SDDC).

The goal of an SDDC is to extend virtualization techniques across the entire data center to enable the abstraction, pooling, and automation of all data center resources. This would allow a business to dynamically reallocate any part of the infrastructure for various workload requirements without forklifting hardware or rewiring. VMware has taken SDDC to a new level with VMware Cloud Foundation.  VMware Cloud Foundation is the only unified SDDC platform for the hybrid cloud, which brings together VMware’s compute, storage, and network virtualization into a natively integrated stack that can be deployed on-premises or run as a service from the public cloud. It establishes a common cloud infrastructure foundation that gives customers a unified and consistent operational model across the private and public cloud.

VMware Cloud Foundation delivers an industry-leading SDDC cloud infrastructure by combining VMware's highly scalable hyper-converged software (vSphere and VSAN) with the industry-leading network virtualization platform, NSX. VMware Cloud Foundation comes with unique lifecycle management capabilities (SDDC Manager) that eliminate the overhead of operating the cloud infrastructure stack by automating day 0 to day 2 processes such as bring-up, configuration, workload provisioning, and patching/upgrades. As a result, customers can significantly shorten application time to market, boost cloud admin productivity, reduce risk, and lower TCO.  Customers consume VMware Cloud Foundation software in three ways: factory pre-loaded on integrated systems (VxRack 1000 SDDC); deployed on top of qualified Ready Nodes from HPE, QCT, Fujitsu, and others in the future, with qualified networking; and run as a service from the public cloud through IBM, vCAN partners, vCloud Air, and more to come.

In this comparative study, Taneja Group performed an in-depth analysis of VMware Cloud Foundation deployed on qualified Ready Nodes and qualified networking versus several traditional 3-tier converged infrastructure (CI) integrated systems and traditional 3-tier do-it-yourself (DIY) systems. We analyzed the capabilities and contrasted key functional differences driven by the various architectural approaches. In addition, we evaluated the key CapEx and OpEx TCO cost components.  Taneja Group configured each traditional 3-tier system's hardware capacity to be as close as possible to the VMware Cloud Foundation qualified hardware capacity.  Further, since none of the 3-tier systems had a fully integrated SDDC software stack, Taneja Group added the missing SDDC software, making it as close as possible to the VMware Cloud Foundation software stack.  The quantitative comparative results from the traditional 3-tier DIY and CI systems were averaged together into one scenario because the hardware and software components are very similar. 
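
To make the comparison methodology concrete, here is a minimal sketch of the 3-year TCO arithmetic described above: upfront CapEx plus three years of support and operational labor, with the DIY and CI 3-tier results averaged into a single traditional scenario. Every number in the sketch is a placeholder chosen for illustration, not data from the study.

```python
# Minimal sketch of the 3-year TCO arithmetic described above.
# Every figure is a placeholder for illustration, not data from the study.

def three_year_tco(capex, annual_support, annual_ops_labor, years=3):
    """Upfront hardware/software cost plus support and OpEx labor over the term."""
    return capex + years * (annual_support + annual_ops_labor)

# Hypothetical inputs (arbitrary currency units).
vcf = three_year_tco(capex=1_000, annual_support=150, annual_ops_labor=60)
diy = three_year_tco(capex=1_600, annual_support=250, annual_ops_labor=200)
ci  = three_year_tco(capex=1_700, annual_support=260, annual_ops_labor=190)

# DIY and CI results are averaged into a single traditional 3-tier scenario.
traditional = (diy + ci) / 2
savings_pct = 100 * (1 - vcf / traditional)
print(f"3-year TCO savings vs. traditional 3-tier: {savings_pct:.0f}%")
```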

Our analysis concluded that both types of solutions are more than capable of handling a variety of virtualized workload requirements. However, VMware Cloud Foundation has demonstrated a new level of ease-of-use due to its modular scale-out architecture, native integration, and automatic lifecycle management, giving it a strong value proposition when building out modern next generation data centers.  The following are the five key attributes that stood out during the analysis:

  • Native Integration of the SDDC:  VMware Cloud Foundation natively integrates vSphere, Virtual SAN (VSAN), and NSX network virtualization.
  • Simplest operational experience: VMware SDDC Manager automates the lifecycle of the SDDC stack including bring-up, configuration, workload provisioning, and patches/upgrades.
  • Isolated workload domains: VMware Cloud Foundation provides unique administrator tools to flexibly provision subsets of the infrastructure for multi-tenant isolation and security.
  • Modular linear scalability: VMware Cloud Foundation employs an architecture in which capacity can be scaled by the HCI node, by the rack, or by multiple racks. 
  • Seamless Hybrid Cloud: Deploy VMware Cloud Foundation for private cloud and consume on public clouds to create a seamless hybrid cloud with a consistent operational experience.

Taneja Group’s in-depth analysis indicates that VMware Cloud Foundation will enable enterprises to achieve significant cost savings. Hyper-converged infrastructure of the kind used by many web-scale service providers, combined with natively integrated SDDC software, significantly reduced server, storage, and networking costs. This hardware cost saving more than offset the incremental SDDC software costs needed to deliver the storage and networking capability that is typically provided in hardware by best-of-breed traditional 3-tier components. In this study, we measured the upfront CapEx and three years of support costs for the hardware and software components needed to build out a VMware Cloud Foundation private cloud on qualified Ready Nodes. In addition, Taneja Group validated a model that demonstrates the labor and time OpEx savings that can be achieved through the integrated end-to-end automatic lifecycle management in the VMware SDDC Manager software.


By investing in VMware Cloud Foundation, businesses can be assured that their data center infrastructure can be easily consumed, scaled, managed, upgraded and enhanced to provide the best private cloud at the lowest cost. Using a pre-engineered, modular, scale-out approach to building at web-scale means infrastructure is added in hours, not days, and that adding infrastructure scales linearly without complexity. VMware Cloud Foundation is the only platform that provides a natively integrated unified SDDC platform for the hybrid cloud with end-to-end management and the flexibility to provision a wide variety of workloads at the push of a button.

In summary, VMware Cloud Foundation enables at least five unparalleled capabilities, generates a 45% lower 3-year TCO than the alternative traditional 3-tier approaches, and delivers a tremendous value proposition when building out a modern hybrid SDDC platform. Before blindly going down the traditional infrastructure approach, companies should take a close look at VMware Cloud Foundation, a unified SDDC platform for the hybrid cloud.

Publish date: 10/17/16
Profile

Towards the Ultimate Goal of IT Resilience: A Look at the Zerto Cloud Continuity Platform

We live in a digital world where online services, applications and data must always be available. Yet the modern data center remains very susceptible to interruptions. These opposing realities are challenging traditional backup applications and disaster recovery solutions and causing companies to rethink what is needed to ensure 100% uptime of their IT environments.

The need for availability goes well beyond recovering from disasters. Companies must be able to rapidly recover from many real world disruptions such as ransomware, device failures and power outages as well as natural disasters. Add to this the dynamic nature of virtualization and cloud computing, and it’s not hard to see the difficulty of providing continuous availability while managing a highly variable IT environment that is susceptible to trouble.

Some companies feel their backup devices will give them adequate data protection and others believe their disaster recovery solutions will help them restore normal business operations if an incident occurs. Regrettably, far too often these solutions fall short of meeting user expectations because they don’t provide the rapid recovery and agility needed for full business continuance.

Fortunately, there is a way to ensure a consistent experience in an inconsistent world. It’s called IT resilience. IT resilience is the ability to ensure business services are always on, applications are available and data is accessible no matter what human errors, events, failures or disasters occur. And true IT resilience goes a step further to provide continuous data protection (CDP), end-to-end recovery automation irrespective of the makeup of a company’s IT environment and the flexibility to evolve IT strategies and incorporate new technology.
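
To illustrate what continuous data protection adds over periodic backup, the sketch below journals every write with a timestamp so a volume can be rebuilt as of any point in time, not just as of the last backup. This is a generic illustration of the CDP concept under our own simplifying assumptions, not Zerto's replication engine.

```python
# Conceptual sketch of continuous data protection (CDP): journal every write
# so recovery can target any point in time. Generic illustration only,
# not Zerto's actual replication implementation.
from dataclasses import dataclass, field

@dataclass
class JournalEntry:
    timestamp: float          # seconds since epoch
    block: int                # logical block address
    data: bytes

@dataclass
class CdpJournal:
    entries: list = field(default_factory=list)   # writes arrive in time order

    def record_write(self, timestamp, block, data):
        self.entries.append(JournalEntry(timestamp, block, data))

    def image_at(self, point_in_time):
        """Replay the journal up to the chosen timestamp to rebuild the volume."""
        image = {}
        for e in self.entries:
            if e.timestamp > point_in_time:
                break
            image[e.block] = e.data
        return image

journal = CdpJournal()
journal.record_write(100.0, block=7, data=b"v1")
journal.record_write(105.0, block=7, data=b"corrupted-by-ransomware")
# Recover to just before the bad write:
print(journal.image_at(104.9))   # {7: b'v1'}
```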

Intrigued by the promise of IT resilience, companies are seeking data protection solutions that can withstand any disaster to enable a reliable online experience and excellent business performance. In a recent Taneja Group survey, nearly half the companies selected “high availability and resilient infrastructure” as one of their top two IT priorities. In the same survey, 67% of respondents also indicated that unplanned application downtime compromised their ability to satisfy customer needs, meet partner and supplier commitments and close new business.

This strong customer interest in IT resilience has many data protection vendors talking about “resilience.” Unfortunately, many backup and disaster recovery solutions don’t provide continuous data protection plus hardware independence, strong virtualization support and tight cloud integration. This is a tough combination and presents a big challenge for data protection vendors striving to provide enterprise-grade IT resilience.

There is however one data protection vendor that has replication and disaster recovery technologies designed from the ground up for IT resilience. The Zerto Cloud Continuity Platform built on Zerto Virtual Replication offers CDP, failover (for higher availability), end-to-end process automation, heterogeneous hypervisor support and native cloud integration. As a result, IT resilience with continuous availability, rapid recovery and agility is a core strength of the Zerto Cloud Continuity Platform.

This paper will explore the functionality needed to tackle modern data protection requirements. We will also discuss the challenges of traditional backup and disaster recovery solutions, outline the key aspects of IT resilience and provide an overview of the Zerto Cloud Continuity Platform as well as the hypervisor-based replication that Zerto pioneered.

Publish date: 09/30/16
Report

5 9’s Availability in a Lower Cost Dell SC4020 Product? Yes, Really!

Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.

Even though the SC4020 is a lower cost member of the SC family, it meets the 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability, but through space-saving design choices such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000's 6U), the SC4020 still achieves 5 9s using the same strict test measurement criteria.
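
For reference, "5 9s" means 99.999% availability, which leaves a very small annual downtime budget. The quick calculation below shows how small, with three and four nines included for comparison.

```python
# Quick arithmetic: the downtime that 99.999% ("five nines") availability allows.
MINUTES_PER_YEAR = 365.25 * 24 * 60

for label, availability in [("3 nines", 0.999), ("4 nines", 0.9999), ("5 nines", 0.99999)]:
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label} ({availability:.3%}): about {downtime_min:.1f} minutes of downtime per year")

# 5 nines works out to roughly 5.3 minutes of unplanned downtime per year.
```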

Frankly, many vendors choose not to subject their lower cost models to 5 9s testing. A vendor may not have put a lot of development dollars into the lower cost product, in an effort to hold down costs and maintain profitability on a lower-priced system.

Dell didn’t do it this way with the SC4020. Instead of watering it down by stripping features, they architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable, and retains the SC Series enterprise features: high availability and reliability, performance, and centralized management not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.

Publish date: 08/31/16
Profile

Dell SC7000 Series Unified Storage Platform

The challenge for mid-sized businesses is that they have smaller IT staffs and smaller budgets than the enterprise, yet still need high availability, high performance, and robust capacity on their storage systems. Every storage system will deliver parts of the solution, but very few will deliver simplicity, efficiency, performance, availability, and capacity on a low-cost system.

We’re not blaming the storage system makers, since it’s hard to offer a storage system with all of these benefits and still maintain an acceptable profit. It has been difficult for storage manufacturers to design enterprise features into an affordable, mid-range storage system and still yield enough profit to sustain the research and development needed to keep the product viable.

Dell is a master at this game with its Intel-based Storage Center portfolio. The SC series ranges from an entry-level model up to enterprise data center class, with most of the middle of the line devoted to delivering enterprise features for mid-market businesses. A few months ago Taneja Group reviewed and validated high availability features across the economical SC line. Dell is able to deliver those features because the SC operating system (SCOS) and FluidFS software stacks operate across every system in the SC family. Features are developed in such a way that a broad range of products can be deployed with enterprise data services, each with a highly tuned balance of cost versus performance.

Dell’s new SC7000 series carries on with this successful game plan as the first truly unified storage platform for the popular SC line. Starting with the SC7020, this series now unifies block and file data in an extremely efficient and affordable architecture. And like all SC family members, the SC7020 comes with enterprise capabilities including high performance and availability, centralized management, storage efficiencies and more, all at mid-market pricing.

What distinguishes the SC7020, though, is a level of efficiency and affordability that is rare for enterprise-capable systems. Simple and efficient deployment, consistent management across all Storage Center platforms, and investment protection through in-chassis upgrades (the SC series can support multiple types of media within the same enclosure) make the SC7020 an ideal choice for mid-market businesses. Adding auto-tiering (which effectively right-sizes the most frequently used data onto the fastest media tier), built-in compression, and multi-protocol support gives these customers a storage solution that evolves with their business needs.
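
As a rough illustration of the auto-tiering idea (and only that; this is not Dell's actual SCOS tiering algorithm), the sketch below ranks extents by recent access frequency and places the hottest data on the fastest tier until its capacity is exhausted.

```python
# Illustrative sketch of frequency-based auto-tiering: hottest extents go to
# the fastest tier first. Generic example only, not Dell's SCOS algorithm.

def place_extents(extents, tiers):
    """extents: list of (extent_id, size_gb, recent_access_count)
       tiers:   list of (tier_name, capacity_gb), fastest first."""
    placement = {}
    remaining = {name: cap for name, cap in tiers}
    # Hottest data first.
    for extent_id, size_gb, hits in sorted(extents, key=lambda e: e[2], reverse=True):
        for name, _ in tiers:                     # try the fastest tier first
            if remaining[name] >= size_gb:
                placement[extent_id] = name
                remaining[name] -= size_gb
                break
    return placement

extents = [("db-log", 50, 9000), ("vm-boot", 200, 1200), ("archive", 800, 3)]
tiers = [("ssd", 240), ("nl-sas", 4000)]
print(place_extents(extents, tiers))
# {'db-log': 'ssd', 'vm-boot': 'nl-sas', 'archive': 'nl-sas'}
```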

In this Solution Profile, Taneja Group explores how the cost-effective SC7020 delivers enterprise features to the data-intensive mid-market, and how Dell’s approach mitigates tough customer challenges. 

Publish date: 08/30/16
Report

HPE StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydrating, which impacts performance and capacity between the data center, ROBO, DR sites and the cloud. Dedupe must expand its feature set in order to serve next generation backup platforms.
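
To see why moving deduplicated data between silos forces rehydration, consider the minimal hash-based dedupe sketch below: a backup is stored as a recipe of chunk fingerprints that reference a local chunk store, so handing it to a system that does not share that store means expanding every chunk back to full size first. This is a generic illustration, not HPE StoreOnce's on-disk format.

```python
# Minimal sketch of hash-based deduplication and why cross-system moves
# rehydrate. Generic illustration only, not HPE StoreOnce's on-disk format.
import hashlib

class DedupeStore:
    def __init__(self):
        self.chunks = {}                  # fingerprint -> chunk bytes

    def write(self, data, chunk_size=4096):
        """Store data as a recipe of chunk fingerprints; duplicates stored once."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # only new chunks consume space
            recipe.append(fp)
        return recipe

    def rehydrate(self, recipe):
        """Expand a recipe back to full-size data, e.g. to send it to another
        system that does not share this chunk store."""
        return b"".join(self.chunks[fp] for fp in recipe)

store = DedupeStore()
backup = b"A" * 8192 + b"B" * 4096           # two identical 'A' chunks
recipe = store.write(backup)
print(len(store.chunks), "unique chunks stored for", len(recipe), "chunk references")
assert store.rehydrate(recipe) == backup     # full size again on the way out
```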

A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with leveraging hardware and software to increase sales, but storage system-specific dedupe means that data must be rehydrated whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and source and target machines without the need to rehydrate.
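
A federated approach changes that picture. When source and target use the same fingerprinting, only the chunks the target does not already hold need to cross the wire, and nothing is rehydrated in transit. The sketch below is a generic illustration of that idea, not HPE's actual replication protocol.

```python
# Sketch of federated dedupe: source and target share the same fingerprinting,
# so only chunks the target lacks are transferred and data never rehydrates in
# transit. Generic illustration, not HPE's actual wire protocol.
import hashlib

def chunk_fingerprints(data, chunk_size=4096):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

source_store = chunk_fingerprints(b"A" * 8192 + b"B" * 4096)   # deduped at source
target_store = chunk_fingerprints(b"A" * 4096)                 # target already has the 'A' chunk

missing = {fp: c for fp, c in source_store.items() if fp not in target_store}
target_store.update(missing)
print("chunks transferred:", len(missing), "of", len(source_store))   # 1 of 2
```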

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.

Publish date: 06/30/16