Taneja Group | Qumulo+Core
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Qumulo+Core

Profiles/Reports

Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don't come around very often. Over the last two decades we have seen very few appear: ZFS was introduced in 2004, Isilon's OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason for this: creating a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years we have seen seismic changes in the data center and storage industry. Today's data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today's data center runs virtual machines, and a one-to-one correlation between a server and its storage is now the exception. Databases routinely outgrow the largest single disk drives. Big data applications and social media ingest huge amounts of data, and data must be retained to meet the requirements of government and corporate policy. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASICs in power and performance, and new software development and delivery methodologies such as "agile" have become mainstream. In the past we were concerned with how to manage the underlying storage; now we are concerned with how to manage the huge amount of data we have stored.

What could be accomplished if a new file system were created from the ground up to take advantage of the latest advances in technology and, more importantly, built by an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with Qumulo Core, its data-aware scale-out NAS software, powered by a new file system, QSFS (Qumulo Scalable File System). Qumulo's three co-founders, Peter Godman, Neal Fachan, and Aaron Passey, were the primary inventors of OneFS, and they assembled some of the brightest minds in the storage industry to create a modern file system designed to support the requirements of today's data center, not the data center of decades ago.

From day one, Qumulo embraced an agile software development and release model, which lets it push out fully tested and certified releases every two weeks. New features and bug fixes can be introduced seamlessly as soon as they are ready, not on an arbitrary 6-, 12- or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed for HDDs that could deliver roughly 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Flash is now prevalent in the industry, and commodity flash devices can deliver up to 250,000 IOPS. Traditional file systems were optimized for slower hard drives, not for the lower latency and higher performance of today's solid state drives. Many traditional file systems and storage arrays have devised ways to "bolt on" SSDs to boost performance, but their underlying architecture is still built around the capabilities of yesterday's hard drives rather than today's flash technology.

An explosion in large-capacity scale-out file systems has empowered enterprises to do very interesting things, but it has also created some very interesting problems. Even one of the most basic questions, how much space the files on a file system are consuming, is complicated to answer on first-generation file systems. Other questions that are difficult to answer without awareness of the data itself include who is consuming the most space, and which clients, files or applications are consuming the most bandwidth. Second-generation file systems need to be designed to be data-aware, not just storage-aware.
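
To make the pain concrete, the sketch below (a hypothetical Python illustration with a made-up mount path, not code from Qumulo or any other product) shows the brute-force approach a first-generation file system forces on administrators: answering "how much space is this tree using?" means walking every file, so the cost of the question grows with the size of the file system.

    import os

    def tree_usage_bytes(root: str) -> int:
        """Answer "how much space?" the first-generation way: walk everything."""
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.lstat(os.path.join(dirpath, name)).st_size
                except OSError:
                    pass  # file vanished or permission denied; skip it
        return total

    # At a few thousand files this is instant; at billions of files it can run
    # for hours and hammer the very system it is trying to measure.
    print(tree_usage_bytes("/mnt/archive"))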

In order to reach their performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. An ASIC can speed up certain storage operations, but that benefit comes at a heavy price in both dollars and flexibility: it can take years and millions of dollars to embed new features in an ASIC. With very powerful and relatively inexpensive x86 processors, new features can be introduced quickly in software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel Xeon E5-2600 v3 family offers up to 18 cores) and more advanced features.

When Qumulo approached us to take a look at the world's first data-aware, scale-out, enterprise-grade storage system, we welcomed the opportunity. Qumulo's new storage system is not based on an academic project or designed around an existing storage system; it was designed and built on entirely new code, shaped by what the principals at Qumulo learned in interviews with more than 600 storage professionals. What they came up with after those conversations was a data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would perform in today's data center.

Publish date: 03/17/15
news

Qumulo emerges with data-aware scale-out NAS

Isilon founders launch Qumulo Core, software designed to manage scale-out NAS with real-time analytics.

  • Premiered: 03/16/15
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Qumulo, Qumulo Core, Qumulo Scalable File System, QSFS, Isilon, NAS, scale-out, analytics, hybrid flash, Storage, Hybrid Array, NFS, SMB, IOPS, Arun Taneja
news

Data Storage Startup Provides Real-Time Analytics on Billions of Files

This first look shows how powerful Qumulo Core is.

  • Premiered: 03/17/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Qumulo, Qumulo Core, QSFS, Qumulo Scalable File System, data-aware, scale-out, NAS, Isilon, Storage, scalability, data awareness, ease of use, real-time analytics, analytics, IOPS
Profiles/Reports

TVS: Qumulo Core: Data-Aware Scale-Out NAS Software

Publish date: 03/17/15
news

Storage: The Next Generation

Qumulo demonstrates its new file system, the heart of its software-defined-storage product.

  • Premiered: 03/25/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Storage, Qumulo, Qumulo Core, Qumulo Scalable File System, QSFS, software-defined, data-aware, scale-out, NAS, Enterprise Storage, File System, Datacenter, hybrid storage, Hybrid Array, SDS, software-defined storage, SSD, Flash, HDD, API, VM, Virtual Machine, scalability, analytics
news

Qumulo Core Data-Aware Scale-Out NAS Software Now Available on a Capacity-Optimized 4U Commodity Platform

Qumulo, pioneer of the world's first data-aware scale-out NAS, today announced the Qumulo QC208 hybrid storage appliance, the second hardware product in Qumulo's Q-series portfolio. Qumulo Core, the company's flagship software solution that was introduced to the market last month, is now available on a second platform for capacity-optimized large-scale deployments.

  • Premiered: 04/13/15
  • Author: Taneja Group
  • Published: Virtual-Strategy Magazine
Topic(s): Qumulo, Qumulo Core, Storage, NAS, hybrid storage, Flash
Profiles/Reports

Qumulo Core: Data-Aware Scale-Out NAS Software (Product Profile)

Let's face it: today's storage is dumb. Mostly it is a dumping ground for data. As we produce more data, we simply buy more storage and fill it up. We don't know who is using what storage at a given point in time, which applications are hogging storage or have gone rogue, or what and how much sensitive information is stored, moved, or accessed, and by whom. Basically, we are blind to whatever is happening inside the storage array. Yet storage should just work: users should see it as an endless, invisible resource, while administrators should be able to unlock the value of the data itself through real-time analytical insight rather than fighting fires just to keep storage running and provisioned.

Storage systems these days are often quoted in petabytes and will eventually move to exabytes and beyond. Businesses are being crushed under the weight of this data sprawl, and a new tsunami of data is coming their way as the Internet of Things comes fully online over the next decade. How are administrators to deal with this ever-increasing appetite to store more data? It is time for a radical new approach to building a storage system: one that is aware of the information stored within it while dramatically reducing the time administrators spend managing the system.

Welcome to the new era of data-aware storage. It could not have come at a better time. Storage growth, as we all know, is out of control. Granted, the cost per GB keeps falling at roughly 40% per year, but capacity keeps growing at roughly 60% per year, so the savings are largely consumed by growth (0.60 × 1.60 ≈ 0.96) and total spend barely moves while the amount of data to manage keeps climbing. While cost is certainly an issue, the bigger issue is manageability, and the biggest of all is not knowing what we have buried in those mounds of data. Instead of being an asset, data becomes a dead weight that keeps getting heavier. If we don't do something about it, we will simply be overwhelmed, if we are not already.

The question we ask is: why is it possible to develop data-aware storage today when it wasn't practical yesterday? The answer is simple: flash technology, virtualization, and the availability of "free" CPU cycles make it possible to build storage that can do a lot of heavy lifting from the inside. Had we attempted this in the past, it would have slowed primary storage to the point of being useless, so we simply let storage store data. Today we can build in a great deal of intelligence without impacting performance or quality of service. We call this new type of storage data-aware storage.

When implemented correctly, data-aware storage can provide insights that were not possible before. It can reduce the risk of non-compliance, improve governance, automate many of the storage management processes that are manual today, and show how well the storage is being utilized. It can warn when a dangerous situation is about to occur, whether for compliance, capacity, performance, or an SLA. You get the point: storage that is inherently smart and knows what type of data it holds, how that data is growing, who is using it, who is abusing it, and so on.
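
As one small, hypothetical illustration of that kind of automation (a Python sketch with made-up sample numbers and thresholds, not anything taken from Qumulo Core), a data-aware system could project how many days remain before a file system fills up and raise a warning before the dangerous situation arrives:

    def days_until_full(samples_gb, capacity_gb, interval_days=1.0):
        """Estimate runway from equally spaced capacity samples (oldest first)."""
        if len(samples_gb) < 2:
            return None  # not enough history to estimate a trend
        elapsed_days = (len(samples_gb) - 1) * interval_days
        growth_per_day = (samples_gb[-1] - samples_gb[0]) / elapsed_days
        if growth_per_day <= 0:
            return float("inf")  # flat or shrinking usage, no deadline
        return (capacity_gb - samples_gb[-1]) / growth_per_day

    # Illustrative daily samples: 400, 430, 455, 490 GB used out of 600 GB total.
    runway = days_until_full([400, 430, 455, 490], capacity_gb=600)
    if runway is not None and runway < 30:
        print(f"WARNING: projected to fill in about {runway:.0f} days")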

In this profile we dive deep into a new product, Qumulo Core, the industry's first data-aware scale-out NAS platform. Qumulo Core promises to radically change the scale-out NAS category by using built-in data awareness to massively scale a distributed file system while radically reducing the time needed to administer a system that can hold billions of files. File systems in the past could not scale to this level because the administrative tools would buckle under the weight of the system.

Publish date: 05/14/15
news

Qumulo to Team with Taneja Group on ‘How to Store 10 Billion Files’ Webinar

Qumulo, pioneer of the world’s first data-aware scale-out NAS, today announced that the company will participate in an upcoming webinar hosted by the Taneja Group, which will focus on how to design a 10 billion file storage system and what it means for the modern day datacenter.

  • Premiered: 05/19/15
  • Author: Taneja Group
  • Published: Qumulo
Topic(s): Qumulo, Qumulo Core, NAS, scale-out, Datacenter, Taneja Group, real-time analytics, Storage
news

What’s In A Number?

Well, when that number is 10 billion, I’d say it can mean quite a lot.

  • Premiered: 05/20/15
  • Author: Taneja Group
  • Published: Qumulo
Topic(s): Qumulo, Qumulo Core, High Performance Computing, High Performance, Tom Fenton, Storage, Chris Hoffman, David Bailey
Resources

How to store 10 BILLION files

Join us for a fast-paced and informative 30-minute webinar in which the Taneja Group will talk with David Bailey about how he was able to store over 10 billion files. We will discuss how he designed a 10-billion-file storage system, who is using systems of this size, how he tracks the analytics for a system this large, and what this means for the datacenter. David works for Qumulo, a leader in data-aware storage systems. Data-aware systems have real-time analytics that enable users to instantly obtain information about their data and how it is being used. Attendees will be encouraged to submit questions during the session.

Panelist:

David Bailey, Director, Systems Engineering; Qumulo

  • Premiered: 05/21/15
  • Location: OnDemand
  • Speaker(s): Moderator: Tom Fenton, Taneja Group; David Bailey, Qumulo
Topic(s): Qumulo, NAS, Storage, data-aware, High Performance, Qumulo Core, Tom Fenton
news

Qumulo Core updated, 10 TB helium drives supported

Qumulo's data-aware Core 2.0 supports 10 TB helium hard drives, erasure coding for faster drive rebuilds and analytics to solve capacity bottleneck mysteries.

  • Premiered: 04/12/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Qumulo, Qumulo Core, data-aware, data awareness, NAS, erasure coding, analytics, Capacity, Hybrid, real-time analytics, data analytics, Performance, Block Storage, Data protection, DP, hybrid storage, DataGravity, scale-out, scale-out NAS, scale-out storage, Microsoft, NetApp, FAS, API, NFS, Arun Taneja, object storage, IoT, Internet of Things
Profiles/Reports

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People typed emails, word processing documents, and spreadsheets; they created databases and recorded business transactions. Data was stored on tape, on workstation hard drives, and over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes grew larger and larger: video and audio, digital imaging, websites streaming entertainment content to millions of users, with no end to data growth in sight. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people but machines. Storing and processing machine data offers tremendous opportunities: seismic and weather sensors that may lead to meaningful disaster warnings; social network diagnostics that surface hard evidence of terrorist activity; connected cars that could slash automotive fatalities; research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store and process raw machine data is not for the faint of heart. The best solution today is massively scale-out, general-purpose NAS: a storage system with a single namespace capable of storing billions of differently sized files, that scales performance and capacity linearly, and that offers data awareness and real-time analytics using extended metadata.
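
To show the idea behind that kind of data awareness, here is a minimal sketch (a simplified in-memory tree in Python, emphatically not Qumulo's implementation) of aggregate metadata: each directory carries running totals that are updated as files change, so space-usage questions become a single read instead of a scan of billions of files.

    class DirNode:
        """Directory that keeps subtree-wide aggregates as extended metadata."""

        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.total_bytes = 0   # bytes used by the whole subtree
            self.total_files = 0   # files in the whole subtree

        def add_file(self, size_bytes):
            """Record a new file and push the delta up to every ancestor."""
            node = self
            while node is not None:
                node.total_bytes += size_bytes
                node.total_files += 1
                node = node.parent

    root = DirNode("/")
    projects = DirNode("projects", parent=root)
    projects.add_file(4 * 1024**3)         # a 4 GiB instrument capture
    projects.add_file(512 * 1024**2)       # a 512 MiB log file
    print(root.total_bytes, root.total_files)  # instant answer: 4831838208 2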

There are very few vendors in the world today who offer this kind of solution. One of them is Qumulo, whose mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from the life sciences, media and entertainment, telco/cable/satellite, higher education, and automotive industries. Each deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits: massive scalability, high performance, data awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
news

Qumulo QC update adds flexible file quotas, PB array

Qumulo NAS upgrade allows data to move between quota domains without rewriting the file system. QC360 hardware is a petabyte-scale addition to its QC-Series hybrid disk lineup.

  • Premiered: 02/08/17
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Storage, Qumulo, NAS, scale-out, QSFS, Qumulo Scalable File System, data-aware, HPE, Mike Matchett, NFS, SMB, scale-out storage, Qumulo Core, capacity management, analytics