Taneja Group | reliability
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: reliability

Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market: approximately three in every five physical servers now run virtualized workloads. After two waves of virtualization, it is safe to assume that a high percentage of business applications run in virtualized environments. The last applications to be virtualized were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that could handle these tier-1 applications was to construct highly tuned infrastructure on best-of-breed three-tier architectures, in which compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter the new product category of enterprise-capable hyperconverged infrastructure (HCI). With HCI, the traditional three-tier architecture is collapsed into a single software-based system purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an industry-standard x86 building block. These modern scale-out hyperconverged systems combine a flash-first, software-defined storage architecture with VM-centric ease of use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest-growing product segments in the market.

HCI products have been especially popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications? In this paper we take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up against tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix recently announced the next step: a vision of the product beyond HCI, in which the entire virtualized infrastructure becomes invisible to IT consumers. This vision encompasses all three of the popular hypervisors: VMware ESXi, Microsoft Hyper-V and Nutanix’s own Acropolis Hypervisor. Nutanix has also enabled app mobility between different hypervisors, a capability unique among converged systems and HCI alike. This Solution Profile focuses on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
Profiles/Reports

The Hyperconverged Data Center: Nutanix Customers Explain Why They Replaced Their EMC SANs

Taneja Group spoke with several Nutanix customers in order to understand why they switched from EMC storage to the Nutanix platform. All of the respondents articulated key architectural benefits of hyperconvergence versus traditional 3-tier solutions. In addition, specific Nutanix features for mission-critical production environments were often cited.

Hyperconverged systems have become a mainstream alternative to traditional 3-tier architecture consisting of separate compute, storage and networking products. Nutanix collapses this complex environment into software-based infrastructure optimized for virtual environments. Hypervisor, compute, storage, networking, and data services run on modular nodes that scale seamlessly across massive virtual environments. Hyperconvergence offers a key value proposition over 3-tier architecture: instead of deploying, managing and integrating separate components – storage, servers, networking, data services, and hypervisors – these components are combined into a modular, high-performance system.

The customers we interviewed operate in very different industries. What they had in common was data centers undergoing fundamental change, typically involving an opportunity to refresh some portion of their 3-tier infrastructure, which enabled them to evaluate hyperconvergence in supporting those changes. The customers interviewed found that Nutanix hyperconvergence delivered benefits in the areas of scalability, simplicity, value, performance, and support. If we could use one phrase to explain why Nutanix is winning over EMC customers in the enterprise market, it would be “Ease of Everything.” Nutanix works, and works consistently, with small and large clusters, in single and multiple datacenters, with specialist or generalist IT support, and across hypervisors.

The five generations of Nutanix products span many years of product innovation. Web-scale architecture has been the key to the Nutanix platform’s enterprise-capable performance, simplicity and scalability. Building technology like this requires years of innovation and focus; it is not an add-on to existing products and architectures.

The modern data center is quickly changing. Extreme data growth and complexity are driving data center directors toward innovative technology that will grow with them. Given the benefits of Nutanix web-scale architecture – and the Ease of Everything – data center directors can confidently adopt Nutanix as their partner in data center transformation just as the following EMC customers did.

Publish date: 03/31/16
Profiles/Reports

The Modern Data Center: Why Nutanix Customers Are Replacing Their NetApp Storage

Several Nutanix customers shared with Taneja Group why they switched from traditional NetApp storage to the hyperconverged Nutanix platform. Each customer talked about the value of hyperconvergence versus a traditional server/networking/storage stack, and the specific benefits of Nutanix in mission-critical production environments.

Hyperconverged systems are a popular alternative to traditional computing architectures built with separate compute, storage, and networking components. Nutanix turns this complex environment into an efficient, software-based infrastructure in which hypervisor, compute, storage, networking, and data services run on modular nodes that scale seamlessly across massive virtual environments.

The customers we spoke with came from very different industries, but all of them faced major technology refreshes for legacy servers and NetApp storage. Each decided that hyperconvergence was the right answer, and each chose the Nutanix hyperconvergence platform for its major benefits including scalability, simplicity, value, performance, and support. The single key achievement running through all these benefits is “Ease of Everything”: ease of scaling, ease of management, ease of realizing value, ease of performance, and ease of upgrades and support. Nutanix simply works across small clusters and large, single and multiple datacenters, specialist or generalist IT, and different hypervisors.

The datacenter is not static. Huge data growth and increasing complexity are motivating IT directors from every industry to invest in scalable hyperconvergence. Given Nutanix benefits across the board, these directors can confidently adopt Nutanix to transform their data centers, just as these NetApp customers did.

Publish date: 03/31/16
Profiles/Reports

Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are only found in mid-size companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs), departments within large organizations, and remote/branch offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. The reality is that companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability and strong security. The only difference is that IT groups managing mid-sized workloads frequently face significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences between various storage boxes to create: 1) a single virtualized storage pool, 2) a common set of data services, and 3) a common interface to manage storage resources. These storage virtualization capabilities benefit the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
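
To make that abstraction concrete, here is a minimal, purely illustrative Python sketch: heterogeneous arrays contribute capacity to one pool, and a common data service (compression, with an assumed 2:1 ratio) is applied no matter which backend ends up holding the data. All names are hypothetical and do not represent any vendor's API.

    class BackendArray:
        """One physical array contributing capacity to the virtual pool."""
        def __init__(self, name, capacity_gb):
            self.name = name
            self.capacity_gb = capacity_gb
            self.used_gb = 0.0

    class VirtualStoragePool:
        """A single pool and common data services over mixed backends."""
        def __init__(self, backends):
            self.backends = backends

        def free_gb(self):
            return sum(b.capacity_gb - b.used_gb for b in self.backends)

        def provision_volume(self, size_gb, compressed=False):
            # The common data service (compression here) is applied uniformly,
            # regardless of which physical array holds the data.
            needed = size_gb * (0.5 if compressed else 1.0)  # assumed 2:1 ratio
            for b in sorted(self.backends, key=lambda b: b.used_gb):
                if b.capacity_gb - b.used_gb >= needed:
                    b.used_gb += needed
                    return f"{size_gb} GB volume placed on {b.name}"
            raise RuntimeError("pool exhausted")

    pool = VirtualStoragePool([BackendArray("array-a", 10_000),
                               BackendArray("array-b", 20_000)])
    print(pool.provision_volume(500, compressed=True))
    print(f"free capacity: {pool.free_gb():,.0f} GB")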

The combination of affordability, manageability and enterprise-grade functionality is the core strength of the IBM Storwize family built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade with IBM SAN Volume Controller (SVC). Simply stated – few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we look at the key requirements for mid-sized storage and evaluate the ability of IBM Storwize with Spectrum Virtualize to tackle mid-sized workload requirements. We also present an overview of the IBM Storwize family and compare the various models in the Storwize portfolio.

Publish date: 06/24/16
news

Conquer unstructured data: Scale-out NAS vs. object storage

Learn how scale-out NAS and object storage systems work, their pros and cons, manageability and how they integrate with your storage infrastructure.

  • Premiered: 08/04/16
  • Author: Jeff Kato
  • Published: TechTarget: Search Storage
Topic(s): scale-out, scale-out NAS, Storage, object storage, storage infrastructure, NAS, cloud object storage, Cloud Storage, Public Cloud, scalability, Metadata, Portable Operating System Interface (POSIX), fsync, real-time data, EMC Isilon, IBM Spectrum Scale, Qumulo, Scality, global scalability, Amazon, reliability, erasure coding, Dropbox
Profiles/Reports

5 9's Availability in a Lower Cost Dell SC4020 Product? Yes, Really!

Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.

Even though the SC4020 is a lower-cost member of the SC family, it meets the 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability, but through space-saving design, such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000’s 6U), the SC4020 still achieves 5 9s using the same strict measurement criteria.
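
For context, 5 9s (99.999% availability) is a demanding target: it allows only about five minutes of unplanned downtime per year. A quick back-of-the-envelope calculation in Python:

    # Downtime budget per year at each availability level (365.25 days/year).
    minutes_per_year = 365.25 * 24 * 60          # 525,960 minutes

    for nines in (3, 4, 5):
        availability = 1 - 10 ** -nines          # e.g. 0.99999 for five nines
        downtime = minutes_per_year * (1 - availability)
        print(f"{nines} nines ({availability:.5f}) -> {downtime:,.2f} min/year")

    # 3 nines -> ~525.96 min/year; 4 nines -> ~52.60; 5 nines -> ~5.26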

Frankly, many vendors choose not to subject their lower-cost models to 5 9s testing. A vendor may not have put many development dollars into a lower-cost product in an effort to reduce cost and maintain profitability on a lower-priced system.

Dell didn’t take that path with the SC4020. Instead of watering it down by stripping features, Dell architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable, yet retains the SC Series enterprise features: high availability and reliability, performance, and centralized management, not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.

Publish date: 08/31/16
news

Taneja Group Recognizes Qumulo Data-Aware Scale-Out NAS As Top Solution for Tackling Machine Data

Qumulo, the leader in data-aware scale-out NAS, today announced that Taneja Group has published a new report titled "How Qumulo Technology Tackles Machine Data Storage Challenges."

  • Premiered: 11/17/16
  • Author: Taneja Group
  • Published: Yahoo! Finance
Topic(s): Qumulo, scale-out, data-aware, NAS, machine data, Storage, Big Data, scalability, High Performance, analytics, reliability
news

Showback the value of your data storage expertise

To demonstrate value, IT must provide an easy-to-understand cost model to its business leaders. This has fostered IT showback projects. Yet showback isn't easy to achieve.

  • Premiered: 01/09/17
  • Author: Mike Matchett
  • Published: TechTarget: Search Cloud Storage
Topic(s): Storage, Data Storage, Virtualization, Data Protection, Cloud, Public Cloud, Automation, convergence, Cloud Computing, IO acceleration, IO latency, Capacity, data resiliency, IO access, bandwidth, application performance, Backup, reliability, Disaster Recovery (DR), data governance, Security, Encryption, hybrid integration, Migration, global distribution
Profiles/Reports

IBM Cloud Object Storage Provides the Scale and Integration Needed for Modern Genomics Infra.

For hospitals and medical research institutes, the ability to interpret genomics data and identify relevant therapies is key to providing better patient care through personalized medicine. Many such organizations are racing forward, analyzing patients’ genomic profiles to match more clinically actionable treatments using artificial intelligence (AI).

These rapid advancements in genomic research and personalized medicine are very exciting, but they are creating enormous data challenges for healthcare and life sciences organizations. High-throughput DNA sequencing machines can now process a human genome in a matter of hours at a cost approaching one thousand dollars. This is a huge drop from a cost of ten million dollars ten years ago, and it means the decline in genome sequencing cost has outpaced Moore’s Law. The result is an explosion in genomic data, driving the need for solutions that can affordably and securely store, access, share, analyze and archive enormous amounts of data in a timely manner.
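
That comparison is easy to check against the figures above: a drop from roughly $10 million to $1,000 over ten years, versus the common reading of Moore’s Law as a cost halving every two years. A quick back-of-the-envelope calculation:

    import math

    # Figures cited above: ~$10M per genome ten years ago, ~$1,000 today.
    years = 10
    cost_then, cost_now = 10_000_000, 1_000
    sequencing_halvings = math.log2(cost_then / cost_now)   # ~13.3 halvings

    # Moore's Law pace: cost per unit of compute halves roughly every 2 years.
    moore_halvings = years / 2                              # 5 halvings

    print(f"sequencing cost halved ~{sequencing_halvings:.1f} times in {years} years")
    print(f"a Moore's Law pace would give only ~{moore_halvings:.0f} halvings")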

Challenges include moving large volumes of genomic data from cost-effective archival storage to low-latency storage for analysis, in order to reduce the time needed to analyze genetic data. Currently, a comprehensive DNA sequence analysis takes days.

Sharing and interpreting vast amounts of unstructured data to find relationships between a patient’s genetic characteristics and potential therapies adds another layer of complexity. Determining connections requires evaluating data across numerous unstructured data sources, such as genomic sequencing data, medical articles, drug information and clinical trial data from multiple sources.

Unfortunately, the traditional file storage within most medical organizations doesn’t meet the needs of modern genomics. These systems can’t accommodate massive amounts of unstructured data and they don’t support both data archival and high-performance compute. They also don’t facilitate broad collaboration. Today, organizations require a new approach to genomics storage, one that enables:

  • Scalable and convenient cloud storage to accommodate rapid unstructured data growth (see the sketch after this list)
  • Seamless integration between affordable unstructured data storage, low latency storage, high performance compute, big data analytics and a cognitive healthcare platform to quickly analyze and find relationships among complex life science data types
  • A multi-tenant hybrid cloud to share and collaborate on sensitive patient data and findings
  • Privacy and protection to support regulatory compliance
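
As a purely illustrative sketch of the first two requirements, the snippet below archives a genomic data file over an S3-compatible object API (IBM Cloud Object Storage exposes one); the endpoint, credentials, bucket and file names are placeholders, not real values:

    import boto3  # any S3-compatible SDK works; boto3 is used here for brevity

    # All connection values below are placeholders.
    cos = boto3.client(
        "s3",
        endpoint_url="https://s3.example-objectstorage.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Archive a (hypothetical) BAM file from a sequencing pipeline; the same
    # object can later be shared with collaborators or staged to low-latency
    # storage for analysis.
    cos.upload_file("sample-genome.bam", "genomics-archive",
                    "cohort-01/sample-genome.bam")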

Publish date: 03/22/17
Profiles/Reports

Cloud Object Storage for the Healthcare Data Blues

The healthcare industry continues to face tremendous cost challenges. The U.S. government estimates national health expenditures in the United States accounted for $3.2 trillion last year – nearly 18% of the country’s total GDP. Many factors drive up the cost of healthcare, such as the cost of new drug development and hospital readmissions. In addition, compelling studies show that medical organizations will need to evolve their IT environments to curb healthcare costs and improve patient care in new ways, such as cloud-based healthcare models aimed at research community collaboration, coordinated care and remote healthcare delivery.

For example, Goldman Sachs recently predicted that the digital revolution can save $300 billion in spending in the healthcare sector by powering new patient options, such as home-based patient monitoring and patient self-management. Moreover, the most significant progress may come from medical organizations transforming their healthcare data infrastructure. Here’s why:

  • Advancements in digital medical imaging have resulted in an explosion of data that sits in picture archiving and communication systems (PACS) and vendor neutral archives (VNAs).
  • Patient care initiatives such as personalized medicine and genomics require storing, sharing and analyzing massive amounts of unstructured data.
  • Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require organizations to have policies for long-term image retention and business continuity.

Unfortunately, traditional file storage approaches aren’t well-suited to manage vast amounts of unstructured data and present several barriers to modernizing healthcare infrastructure. A recent Taneja Group survey found the top three challenges to be:

  • Lack of flexibility: Traditional file storage appliances require dedicated hardware and don’t offer tight integration with collaborative cloud storage environments.
  • Poor utilization: Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale: Traditional storage solutions such as RAID-based arrays are gated by controllers and simply aren’t designed to easily expand to petabyte storage levels.

As a result, healthcare organizations are moving to object storage solutions that offer an architecture inherently designed for web scale storage environments. Specifically, object storage offers healthcare organizations the following advantages:

  • Simplified management, hardware independence and a choice of deployment options – private, public or hybrid cloud – lower operational and hardware storage costs
  • A web-scale storage platform provides scale as needed and enables a pay-as-you-go model
  • Efficient fault tolerance protects against site failures, node failures and multiple disk failures (quantified in the sketch after this list)
  • Built-in security protects against digital and physical breaches
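
The utilization and fault-tolerance points can be quantified with a simple back-of-the-envelope comparison; the 10+4 erasure-coded layout below is a hypothetical example, not any specific product's configuration:

    # Raw capacity needed per usable TB: three full copies vs. a hypothetical
    # 10+4 erasure-coded layout (10 data + 4 parity slices, tolerating the
    # loss of any 4 slices).
    def raw_per_usable(data_slices, parity_slices):
        return (data_slices + parity_slices) / data_slices

    replication = 3.0                      # 3x raw per usable TB
    erasure_10_4 = raw_per_usable(10, 4)   # 1.4x raw per usable TB

    print(f"3-way replication:   {replication:.1f}x raw per usable TB")
    print(f"10+4 erasure coding: {erasure_10_4:.1f}x raw per usable TB")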

Publish date: 03/22/17