Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Recently Added

Report

Qumulo File Fabric extends high-performance file services to the cloud

The timing for Qumulo to extend its software-defined, scalable file services to the cloud could not be better, as public cloud utilization continues to grow at a phenomenal rate. Infrastructure spending on public and private clouds is growing at double-digit rates, while spending on traditional, non-cloud IT infrastructure continues to decline and within a few years will represent less than 50% of the entire infrastructure market. This trend is not surprising and has been widely predicted for several years. What is surprising now is how strong the momentum toward public cloud adoption has become; the open question is where the long-term equilibrium between public clouds and on-premises infrastructure will settle.

AWS pioneered public cloud storage services when it introduced S3 (Simple Storage Service) over ten years ago. The approach of public cloud vendors has been to offer storage services at cut-rate pricing in what we call the “Hotel California” strategy: once they have your data, it can never leave. Recently, we have heard increased grumbling from customers who are very concerned about losing the option to change infrastructure vendors and about the resulting reduction in competition. In response, Taneja Group initiated multiple public and hybrid cloud research studies to gain insight into which storage services are needed across heterogeneous cloud infrastructures. What we found is that IT practitioners are not only concerned about data security in the cloud; they are also concerned about the vendor lock-in created by the lack of data mobility between on-premises and public cloud infrastructures. Another surprising finding is that IT practitioners predominantly want file services across clouds and that object storage such as AWS S3 cannot meet their future cloud storage needs. This is actually not that surprising, as our research showed that many of the applications businesses want to move to the cloud (to benefit from a highly dynamic compute environment) still rely on high-performance file access.

Enter Qumulo File Fabric (QF2). QF2 is a modern, highly scalable file storage system that runs in the data center and now in the public cloud. Unlike legacy scale-out NAS products, QF2 provides capacity for billions of files, closely matching a scale that previously could be achieved only with object storage solutions, but with the benefit of supporting file access protocols. Qumulo’s modern SDS, flash-first approach allows it to provide a very high-performance file storage system that can cover a wide variety of workloads. Its built-in, real-time analytics let administrators easily manage data no matter how large the footprint or where it is globally located. Continuous replication enables data to move where and when it’s required, depending on business need. Qumulo refers to this unmatched file scalability and performance as universal-scale file storage.

Qumulo, founded in 2012, is rapidly growing its market presence, and we recently validated its very high customer satisfaction and product capability through an extensive interview process with several customers. Qumulo also recently extended its go-to-market ecosystem through a partnership with Hewlett Packard Enterprise (HPE). Now, with the launch of QF2 and support for AWS, we expect Qumulo to continue its rapid rise as a leading provider of file services with universal scale. It is also well positioned to capture a significant share of the emerging multi-cloud storage market. We found that many companies still prefer file access, and there are plenty of reasons why scalable file storage will continue to grow and compete effectively against object-storage-centric architectures.

Publish date: 09/22/17
Technology Validation

Providing Secondary Storage at Cloud-Scale: Cohesity Performance Scales Linearly in 256 Node Test

Are we doomed to drown in our own data? Enterprise storage is growing fast enough under today’s data demands to threaten service levels, challenge IT expertise, and often eat up a majority of new IT spending. And the amount of competitively useful data could grow orders of magnitude faster with new trends in web-scale applications, IoT, and big data. On top of that, assuring full enterprise data protection with traditional, fragmented secondary storage designs means that more than a dozen copies of important data are often inefficiently consuming even more capacity at an alarming rate.

Cohesity, a feature-rich secondary storage data management solution based on a core parallel file system, promises to completely break through traditional secondary storage scaling limitations with its inherently scale-out approach. This is a big claim, so we executed a validation test of Cohesity under massive scaling, pushing its storage cluster to sizes far past what the company has previously publicly tested.

The result is striking (though perhaps not internally surprising given Cohesity’s engineering design goals). We documented linearly accumulating performance across several types of IO, all the way up to our cluster test target of 256 Cohesity storage nodes. Other secondary storage designs can be expected to drop off far earlier, either hitting a hard constraint (e.g., a limited maximum cluster size) or suffering severely diminishing returns in performance.
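To make the distinction concrete, here is a minimal sketch of the two scaling behaviors described above: linear accumulation (each added node contributes its full throughput) versus diminishing returns (each node’s contribution decays as the cluster grows). The per-node throughput and decay factor are purely illustrative assumptions, not Taneja Group’s measured Cohesity results.

```python
# Hypothetical scaling models; all numbers are illustrative only,
# not measured results from the 256-node Cohesity validation test.

def linear_throughput(nodes, per_node=1.0):
    """Ideal scale-out: aggregate throughput grows in lockstep with node count."""
    return nodes * per_node

def diminishing_throughput(nodes, per_node=1.0, efficiency=0.99):
    """Sublinear design: each successive node contributes slightly less,
    modeled here as a geometric decay in per-node contribution."""
    return sum(per_node * (efficiency ** i) for i in range(nodes))

# Compare aggregate throughput (in arbitrary units) at several cluster sizes.
for n in (16, 64, 256):
    print(n, linear_throughput(n), round(diminishing_throughput(n), 1))
```

At 256 nodes the geometric-decay model plateaus well below the linear one, which is the shape of the "severely diminishing returns" curve the report contrasts against Cohesity's measured linear accumulation.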

We also took the opportunity to validate some important storage requirements at scale. For example, we verified that Cohesity ensured global file consistency and full cluster resilience even at the largest scale of deployment. Given the overall test performance validated in this report, Cohesity has certainly demonstrated that it is inherently a web-scale system that can deliver advanced secondary storage functionality at any practical enterprise scale of deployment.

Publish date: 07/31/17
Profile

Enterprise Cloud Platform Ideal for Database Apps: Nutanix Hosting Oracle Penetrates Tier 1

Creating an Enterprise Cloud with hyperconverged infrastructure (HCI) makes terrific sense (and “cents”) for a wide range of corporations tired of integrating and managing complex stacks of IT infrastructure. Replacing siloed infrastructure, and going far beyond simple pre-converged racks of traditional hardware, HCI greatly simplifies IT, frees valuable staff from integrating and babysitting heterogeneous solutions so they can focus on adding value to the business, and can vastly improve “qualities of service” in all directions. Today, we find HCI solutions being deployed as an Enterprise Cloud platform in corporate data centers, even for mission-critical tier-1 database workloads.

However, like public clouds and server virtualization before it, HCI has had to grow and mature. Initially, HCI solutions had to prove themselves in small and medium-sized organizations, and on rank-and-file applications. Now, five-plus years of evolution by vendors like Nutanix have matured HCI into a full tier-1 enterprise application platform that presents the best features of public clouds, including ease of management, modular scalability, and agile user provisioning. Perhaps the best example of enterprise mission-critical workloads is business applications layered on Oracle Database, and as we’ll see in this report, Nutanix now makes an ideal platform for enterprise-grade databases and database-powered applications.

In fact, we find that Nutanix’s mature platform not only can, by its natural mixed-workload design, host a complete tier-1 application stack (including the database), but also offers significant advantages because the whole application stack is “convergently” hosted. The resulting opportunity for both IT and the business user is striking. Those feeling tied down to legacy architectures, and those previously interested in the benefits of plain converged infrastructure, will now want to evaluate how mature HCI can take them farther, faster.

In the full report, we explore in detail how Nutanix supports and accelerates serious Oracle database-driven applications (e.g., ERP, CRM) at the heart of most businesses and production data centers. In this summary, we review how the Nutanix Enterprise Cloud Platform is also an ideal enterprise data center platform for the whole application stack, consolidating many if not most workloads in the data center.

Publish date: 06/30/17
Free Reports

The Easy Cloud For Complete File Data Protection: Igneous Systems Backs Up AND Archives All Your NAS

When we look at all the dollars being spent on endless NAS capacity growth, and the increasingly complex (and mostly unrealistic) file protection schemes that go along with it, we find most enterprises aren’t happy with their status quo. And while cloud storage seems attractive in theory, making big changes to a stable NAS storage architecture can be both costly and risky enough to keep many enterprises stuck endlessly growing their primary filers. Cloud gateways can help offload capacity, yet they add operational complexity and are ultimately only halfway solutions.

What file-dependent enterprises really need is a true hybrid solution that backs on-premises primary NAS with hybrid cloud secondary storage to provide a secure, reliable, elastic, scalable, and, above all, simply seamless solution. Ideally, we would want a drop-in solution that is remotely managed, paid for by subscription, and highly performant, all while automatically backing up (and/or archiving) all existing primary NAS storage. In other words, enterprises don’t want to rip and replace working primary NAS solutions, but they do want to easily offload and extend them with superior cloud-shaped secondary storage.

When it comes to an enterprise’s cloud adoption journey, we recommend that “hybrid cloud” storage services be adopted first to address longstanding challenges with NAS file data protection. While many enterprises have reasonable backup solutions for block storage (and sometimes VM images/disks), reliable data protection for fast-growing file data is much harder due to trends toward bigger data repositories, faster streaming data, global sharing requirements, and increasingly tight SLAs (not to mention shrinking backup windows). Igneous Systems promises to help filer-dependent enterprises keep up with all their file protection challenges with its Igneous Hybrid Storage Cloud, which features integrated enterprise file backup and archive.

Igneous Systems’ storage layer integrates several key technologies: highly efficient, scalable, remotely managed object storage; built-in tiering and file movement, not just on the back end to public clouds but also on the front end from existing primary NAS arrays; remote management as a service to offload IT staff; and all necessary file archive and backup automation.

And Igneous Hybrid Storage Cloud is also a first-class object store with valuable features like built-in global metadata search. Here, however, we focus on how the Igneous Backup and Igneous Archive services are used to close the gaping holes in traditional approaches to NAS backup and archive.

Download the Solution Profile today!

Publish date: 06/20/17