Report

Qumulo File Fabric extends high-performance file services to the cloud

The timing for Qumulo to extend its software-defined scalable file services to the cloud could not be better, as public cloud utilization continues to grow at a phenomenal rate. Infrastructure spending on public and private clouds is growing at double-digit rates, while spending on traditional, non-cloud IT infrastructure continues to decline and within a few years will represent less than 50% of the entire infrastructure market. This trend is not surprising and has been widely predicted for several years. What is surprising now is how strong the momentum toward public cloud adoption has become; the open question is where the long-term equilibrium point between public clouds and on-premises infrastructure will settle.

AWS was a pioneer in public cloud storage services when it introduced S3 (Simple Storage Service) over ten years ago. The approach of public cloud vendors has been to offer storage services at cut-rate pricing in what we call the “Hotel California” strategy – once they have your data, it can never leave. Recently, we have been hearing increased grumbling from customers who are very concerned about losing the option to change infrastructure vendors and the resulting reduction in competition. In response, Taneja Group initiated multiple public and hybrid cloud research studies to gain insight into what storage services are needed across heterogeneous cloud infrastructures. What we found is that IT practitioners are not only concerned about data security in the cloud; they are concerned about vendor lock-in created by the lack of data mobility between on-premises and public cloud infrastructures. Another surprising finding is that IT practitioners predominantly want file services across clouds and that object storage such as AWS S3 cannot meet their future cloud storage needs. This is actually not that surprising, as our research showed that many applications that businesses want to move to the cloud (to benefit from a highly dynamic compute environment) still rely on high-performance file access.
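To make the distinction concrete, the sketch below (ours, not from the report) contrasts an object GET against the S3 API with the POSIX-style read that many existing applications assume; the bucket, key and mount path are hypothetical placeholders.

```python
# Minimal illustration: the same 4 KB read expressed as an S3 object GET
# versus a POSIX read against an NFS/SMB-mounted file system.
# Bucket, key, and mount path are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Object access: whole-object or byte-range GET over HTTP; no POSIX semantics
# (locking, rename, in-place partial overwrite).
resp = s3.get_object(
    Bucket="genomics-archive",
    Key="runs/sample-001.bam",
    Range="bytes=0-4095",
)
chunk_via_object_api = resp["Body"].read()

# File access: ordinary open/seek/read against a mounted share, which is what
# many existing applications were written to expect.
with open("/mnt/qf2/runs/sample-001.bam", "rb") as f:
    f.seek(0)
    chunk_via_file_api = f.read(4096)
```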

Enter Qumulo File Fabric (QF2). QF2 is a modern, highly scalable file storage system that runs in the data center and now in the public cloud. Unlike legacy scale-out NAS products, QF2 provides capacity for billions of files, closely matching a scale that previously could be achieved only with object storage solutions, but with the benefit of supporting file access protocols. Qumulo’s modern software-defined, flash-first approach allows it to provide a very high-performance file storage system that can cover a wide variety of workloads. Its built-in, real-time analytics let administrators easily manage data no matter how large the footprint or where it is globally located. Continuous replication enables data to move where and when it’s required depending on business need. Qumulo refers to this unmatched file scalability and performance as universal-scale file storage.

Qumulo, founded in 2012, is rapidly growing its market presence, and we recently validated its very high customer satisfaction and product capability through an extensive interview process with several customers. Qumulo recently extended its go-to-market ecosystem through a partnership with Hewlett Packard Enterprise (HPE). Now, with the launch of QF2 and support for AWS, we expect Qumulo to continue its rapid rise as a leading provider of file services with universal scale. The company is also well positioned to capture a significant share of the emerging multi-cloud storage market. We found that many companies still prefer file access, and there are plenty of reasons why scalable file storage will continue to grow and compete effectively against object-storage-centric architectures.

Publish date: 09/22/17
Report

Companies Improve Data Protection and More with Cohesity

We talked to six companies that have implemented Cohesity DataProtect and/or the Cohesity DataPlatform. When these companies evaluated Cohesity, their highest priority was reducing storage costs and improving data protection. To truly modernize their secondary storage infrastructure, they also recognized the importance of having a scalable, all-in-one solution that could both consolidate and better manage their entire secondary data environment.

Prior to implementing Cohesity, many of the companies we interviewed had significant challenges with the high cost of their secondary storage. Several factors contributed to the high costs, including the need to license multiple products, inadequate storage reduction, the need for professional services and extensive training, difficulty scaling and maintaining systems, and the expense of adding capacity on costly primary storage for lower-performance services such as group file shares.

In addition to lowering storage costs, all the companies we talked to wanted a better data protection solution. Many were struggling with slow backup speeds, long recovery times and cumbersome data archival methods. Solution complexity and high operational overhead were also major issues. To address these issues, companies wanted a unified data protection solution that offered better backup performance, instant data recovery, simplified management, and seamless cloud integration for long-term data retention.

Companies also wanted to improve overall secondary storage management and they shared a common goal of combining secondary storage workloads under one roof. Depending on their environment and their operational needs, their objectives outside of data protection included providing self-service access to copies of production data for on-demand environments (such as test/dev), using secondary storage for file services and leveraging indexing and advanced search and analytics to find out-of-place confidential data and ensure data compliance.

Cohesity customers found that the key to addressing these challenges and needs is Cohesity’s Hyperconverged Secondary Storage. Cohesity pioneered this new category of secondary storage, which is built on a web-scale, distributed file system that scales linearly and provides global data deduplication, automatic indexing, advanced search and analytics, and policy-based management of all secondary storage workloads. These capabilities combine to provide a single system that efficiently stores, manages, and understands all data copies and workflows residing in a secondary storage environment – whether the data is on-premises or in the cloud. With no point products to stitch together, there is less complexity and lower licensing cost.
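As a generic illustration of the deduplication idea – not Cohesity’s actual implementation – the toy sketch below stores each unique chunk exactly once and lets every backup reference it by hash:

```python
# Toy illustration of global deduplication: identical chunks hash to the same
# key, so each unique chunk is stored once no matter how many backups reference it.
import hashlib

CHUNK_SIZE = 64 * 1024
chunk_store = {}          # hash -> chunk bytes (stored once, globally)

def ingest(data: bytes) -> list[str]:
    """Split a backup stream into fixed-size chunks and store only new ones."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # store only if unseen
        refs.append(digest)                     # backups keep references, not copies
    return refs

# Two backups of largely identical data end up sharing most chunks in chunk_store.
backup1 = ingest(b"A" * 200_000)
backup2 = ingest(b"A" * 200_000 + b"B" * 10_000)
```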

It’s a compelling value proposition, and importantly, every company we talked to stated that Cohesity has met or exceeded their expectations and has helped them rapidly evolve their data protection and overall secondary data management. To learn about each customer’s journey, we examined their business needs, their data center environment, their key challenges, the reasons they chose Cohesity, and the value they have derived. Read on to learn more about their experience.

Publish date: 04/28/17
Report

Cloud Object Storage for the Healthcare Data Blues

The healthcare industry continues to face tremendous cost challenges. The U.S. government estimates that national health expenditures in the United States reached $3.2 trillion last year – nearly 18% of the country’s total GDP. Many factors drive up the cost of healthcare, such as the cost of new drug development and hospital readmissions. In addition, there are compelling studies showing that medical organizations will need to evolve their IT environments to curb healthcare costs and improve patient care in new ways, such as cloud-based healthcare models aimed at research community collaboration, coordinated care and remote healthcare delivery.

For example, Goldman Sachs recently predicted that the digital revolution can save $300 billion in healthcare spending by powering new patient options, such as home-based patient monitoring and patient self-management. Moreover, the most significant progress may come from medical organizations transforming their healthcare data infrastructure. Here’s why:

  • Advancements in digital medical imaging have resulted in an explosion of data that sits in picture archiving and communications systems (PACS) and vendor neutral archives (VNAs).
  • Patient care initiatives such as personalized medicine and genomics require storing, sharing and analyzing massive amounts of unstructured data.
  • Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require organizations to have policies for long-term image retention and business continuity.

Unfortunately, traditional file storage approaches aren’t well-suited to manage vast amounts of unstructured data and present several barriers to modernizing healthcare infrastructure. A recent Taneja Group survey found the top three challenges to be:

  • Lack of flexibility: Traditional file storage appliances require dedicated hardware and don’t offer tight integration with collaborative cloud storage environments.
  • Poor utilization: Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale: Traditional storage solutions such as RAID-based arrays are gated by controllers and simply aren’t designed to easily expand to petabyte storage levels.

As a result, healthcare organizations are moving to object storage solutions that offer an architecture inherently designed for web scale storage environments. Specifically, object storage offers healthcare organizations the following advantages:

  • Simplified management, hardware independence and a choice of deployment options – private, public or hybrid cloud – lower operational and hardware storage costs
  • Web-scale storage platform provides scale as needed and enables a pay-as-you-go model
  • Efficient fault tolerance protects against site failures, node failures and multiple disk failures (see the capacity sketch after this list)
  • Built-in security protects against digital and physical breaches
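The sketch below (ours, with illustrative assumptions rather than figures from the report) shows the capacity arithmetic behind that efficient fault tolerance, comparing 3-way replication with a 10+4 erasure code:

```python
# Back-of-the-envelope comparison of raw capacity needed for the same usable
# capacity under replication versus erasure coding. Numbers are illustrative.
def raw_capacity_needed(usable_tb: float, data_shards: int, parity_shards: int) -> float:
    """Raw TB required to present `usable_tb` with an EC(data+parity) layout."""
    overhead = (data_shards + parity_shards) / data_shards
    return usable_tb * overhead

usable = 1000.0  # 1 PB of usable image data

# 3-way replication tolerates the loss of two copies but triples the footprint.
print("3x replication:", usable * 3, "TB raw")

# A 10+4 erasure code tolerates any four simultaneous shard failures
# at 1.4x overhead instead of 3x.
print("EC 10+4:       ", raw_capacity_needed(usable, 10, 4), "TB raw")
```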
Publish date: 03/22/17
Report

IBM Cloud Object Storage Provides the Scale and Integration Needed for Modern Genomics Infrastructure

For hospitals and medical research institutes, the ability to interpret genomics data and identify relevant therapies is key to providing better patient care through personalized medicine. Many such organizations are racing forward, analyzing patients’ genomic profiles and using artificial intelligence (AI) to match them with more clinically actionable treatments.

These rapid advancements in genomic research and personalized medicine are very exciting, but they are creating enormous data challenges for healthcare and life sciences organizations. High-throughput DNA sequencing machines can now process a human genome in a matter of hours at a cost approaching one thousand dollars. This is a huge drop from a cost of ten million dollars ten years ago – roughly a 10,000-fold reduction – and means the decline in genome sequencing cost has outpaced Moore’s Law. The result is an explosion in genomic data – driving the need for solutions that can affordably and securely store, access, share, analyze and archive enormous amounts of data in a timely manner.

Challenges include moving large volumes of genomic data from cost-effective archival storage to low-latency storage for analysis, in order to reduce the time needed to analyze genetic data. Currently, a comprehensive DNA sequence analysis takes days.
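As a hedged illustration of that archive-to-analysis movement – not IBM’s or any vendor’s specific workflow – the sketch below uses the generic S3-style API to stage a cold object and copy it to a low-latency bucket; bucket names, keys and tiers are hypothetical placeholders:

```python
# Stage an archived genomic file for analysis, then place it on low-latency storage.
import boto3

s3 = boto3.client("s3")

# 1. Ask the archive tier to stage a cold object for temporary access.
s3.restore_object(
    Bucket="genomics-archive",
    Key="cohort-17/sample-042.fastq.gz",
    RestoreRequest={"Days": 3, "GlacierJobParameters": {"Tier": "Standard"}},
)

# 2. Once staged, copy it to a low-latency bucket colocated with compute,
#    so the sequence-analysis pipeline reads it without archive retrieval delays.
s3.copy_object(
    Bucket="genomics-hot",
    Key="cohort-17/sample-042.fastq.gz",
    CopySource={"Bucket": "genomics-archive", "Key": "cohort-17/sample-042.fastq.gz"},
    StorageClass="STANDARD",
)
```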

Sharing and interpreting vast amounts of unstructured data to find relationships between a patient’s genetic characteristics and potential therapies adds another layer of complexity. Determining connections requires evaluating data across numerous unstructured data sources, such as genomic sequencing data, medical articles, drug information and clinical trial data from multiple sources.

Unfortunately, the traditional file storage within most medical organizations doesn’t meet the needs of modern genomics. These systems can’t accommodate massive amounts of unstructured data and they don’t support both data archival and high-performance compute. They also don’t facilitate broad collaboration. Today, organizations require a new approach to genomics storage, one that enables:

  • Scalable and convenient cloud storage to accommodate rapid unstructured data growth
  • Seamless integration between affordable unstructured data storage, low-latency storage, high-performance compute, big data analytics and a cognitive healthcare platform to quickly analyze and find relationships among complex life science data types
  • A multi-tenant hybrid cloud to share and collaborate on sensitive patient data and findings
  • Privacy and protection to support regulatory compliance
Publish date: 03/22/17
Report

HPE 3PAR Enables Highly Resilient All-Flash Data Centers: Latest Release Solidifies AFA Leadership

If you are an existing customer of HPE 3PAR, this latest release of 3PAR capabilities will leave you smiling. If you are looking for an All Flash Array (AFA) to transform your data center, now might be the time to take a closer look at HPE 3PAR. Since AFAs first emerged on the scene at the turn of this decade, the products have gone through several waves of innovation to achieve the market acceptance they enjoy today. In the first wave, it was all about raw performance for niche applications. In the second wave, it was about making flash more cost-effective than traditional disk-based arrays to broaden its economic appeal. Now, in the final wave, it is about giving these arrays all the enterprise features and ecosystem support needed to completely replace the legacy Tier 0/1 arrays still in production today.

HPE 3PAR StoreServ is one of the leading AFAs on the market today. HPE 3PAR uses a modern architectural design that includes multi-controller scalability, a highly virtualized data layer with three levels of abstraction, system-wide striping, a highly specialized ASIC and numerous flash innovations. HPE 3PAR engineers pioneered this very efficient architecture well before flash technology became mainstream, and the architecture proved timeless by transitioning seamlessly to all-flash technology. During the same period, other vendors ran into controller-bound architectural bottlenecks with flash, forcing them to reinvent existing products or start from scratch with new architectures.

HPE 3PAR’s timeless architecture means that features introduced years ago are still relevant today, and features introduced today are available to customers who purchased 3PAR arrays in the past. This continuous delivery of features to old and new customers alike provides investment protection unmatched by most vendors in the industry. In this Technology Brief, Taneja Group explores some of the latest developments from HPE that build upon the rich feature set that already exists in the 3PAR architecture. These new features and simplicity enhancements show that HPE continues to put customers’ investment protection first and continues to expand its capabilities around enterprise-grade business continuity and resilience. The combination of the economic value of HPE 3PAR AFAs with years of proven mission-critical features promises to accelerate the much-anticipated final wave: the All-Flash Data Center for Tier 0/1 workloads.

Publish date: 02/17/17
Report

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People input emails, word processing documents and spreadsheets. They created databases and recorded business transactions. Data was stored on tape, on workstation hard drives, and on storage accessed over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes got larger and larger. Video and audio, digital imaging, and websites streaming entertainment content to millions of users drove seemingly endless data growth. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people, but machines. Storing and processing machine data offers tremendous opportunities: Seismic and weather sensors that may lead to meaningful disaster warnings. Social network diagnostics that display hard evidence of terrorist activity. Connected cars that could slash automotive fatalities. Research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store and process raw machine data is not for the faint of heart. The best solution today is massively scale-out, general-purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, scales performance and capacity linearly, and offers data-awareness and real-time analytics using extended metadata.
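As a toy sketch of the idea behind real-time analytics on extended metadata (ours, not Qumulo’s implementation): each directory keeps running aggregates that are updated on every write and rolled up the tree, so subtree questions are answered without a full filesystem walk.

```python
# Each directory node carries aggregate counters; writes propagate them to the
# root, so "how big is this subtree?" never requires scanning billions of files.
class DirNode:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.file_count = 0
        self.total_bytes = 0

    def add_file(self, size: int) -> None:
        node = self
        while node is not None:            # propagate aggregates up to the root
            node.file_count += 1
            node.total_bytes += size
            node = node.parent

root = DirNode("/")
projects = DirNode("projects", parent=root)
projects.add_file(4 * 1024**3)             # a 4 GiB instrument capture
projects.add_file(512 * 1024**2)
print(root.file_count, root.total_bytes)   # instant answer, no tree walk
```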

Very few vendors in the world today offer this kind of solution. One of them is Qumulo. Qumulo’s mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from life sciences, media and entertainment, telco/cable/satellite, higher education and the automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits to Qumulo: massive scalability, high performance, data-awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16