Taneja Group | scale-out+storage

Items Tagged: scale-out+storage

news

Scale-out NAS design now rivals object storage

Jeff Kato takes a closer look at ideal scale-out NAS design principles and vendors that are emerging with modern scale-out NAS designs.

  • Premiered: 02/12/16
  • Author: Jeff Kato
  • Published: TechTarget: Search Storage
Topic(s): scale-out, NAS, scale-out NAS, Block Storage, File Storage, Storage, Oracle, Fibre Channel, FC, scalability, web-scale, web-scale storage, object storage, High Performance, HPC, High Performance Computing, Flash, SSD, data-aware, Metadata, IOPS, Performance, NetApp, EMC, software-defined, Virtual Machine, VM, Public Cloud, Cloud, hyperscale
news

Scale-out architecture and new data protection capabilities in 2016

What are the next big things for the data center in 2016? Applications will pilot the course to better data protection and demand more resources from scale-out architecture.

  • Premiered: 02/17/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): scale-out, Data protection, DP, scale-out architecture, analysis, Data Center, data lake, Hadoop, Hadoop cluster, cluster, Backup, Talena, HPE, 3PAR, flat backup, Snapshot, Snapshots, StoreOnce, Oracle, Oracle ZDLRA, ZDLRA, Zero Data Loss Recovery Appliance, converged, convergence, Cloud, backup server, Virtualization, Storage, Big Data, Lustre
news

Qumulo Core updated, 10 TB helium drives supported

Qumulo's data-aware Core 2.0 supports 10 TB helium hard drives, erasure coding for faster drive rebuilds and analytics to solve capacity bottleneck mysteries.

  • Premiered: 04/12/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Qumulo, Qumulo Core, data-aware, data awareness, NAS, erasure coding, analytics, Capacity, Hybrid, real-time analytics, data analytics, Performance, Block Storage, Data protection, DP, hybrid storage, DataGravity, scale-out, scale-out NAS, scale-out storage, Microsoft, NetApp, FAS, API, NFS, Arun Taneja, object storage, IoT, Internet of Things
news

Object-file system combo rapidly expands

Object-based storage adoption took longer than expected, so object vendors have added file systems to their products to make them look more familiar to users.

  • Premiered: 04/27/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): object, object storage, Storage, Caringo, Caringo Swarm, FileFly, Windows, NTFS, Network File System, NFS, scale-out, scale-out storage, Scality, all-flash, SSD, AFA, Pure Storage, SAN, elasticity, Data reduction, Encryption, erasure coding, Metadata, metadata engine, Exablox, OneBlox, latency, Arun Taneja, Amplidata, Cleversafe
Profiles/Reports

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, RAID devices gated by controllers, and dedicated storage appliances, simply aren't designed for today's enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. This shortfall is due in large part to inefficiency and limited data protection, as well as the limited scalability and inflexibility of these traditional storage solutions.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high priced proprietary storage hardware, and worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge if IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage solutions and scale-out object storage technology that addresses the shortcomings of traditional backup storage.

Software-defined scale-out storage is attractive for large-scale data backup because these solutions offer linear performance and hardware independence, two core capabilities that drive tremendous scalability and enable cost-effective storage. Add to this the high fault tolerance of object storage platforms, and it's easy to see why software-defined object storage solutions are rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores these benefits: IT professionals indicated that the top benefits of software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%), and the ability to purchase hardware separately from software (32%).

Going a step further, the Scality backup storage solution built on the Scality RING platform offers the rare combination of scalability, durability and affordability, plus the flexibility to handle mixed workloads at petabyte scale. Scality achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.
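As a loose illustration of how a ring-style, consistent-hash design can place objects across nodes and rebalance with minimal data movement when hardware is added: the sketch below is a generic textbook construction, not Scality's actual implementation, and the node names, virtual-node count and hash choice are our assumptions.

```python
# Minimal consistent-hash ring: each node owns many "virtual node"
# points on a hash ring; an object lands on the first virtual node
# clockwise from its own hash.
import bisect
import hashlib

class Ring:
    def __init__(self, nodes, vnodes=64):
        self.ring = []  # sorted list of (hash, node) points on the ring
        for node in nodes:
            for v in range(vnodes):
                self.ring.append((self._h(f"{node}:{v}"), node))
        self.ring.sort()

    @staticmethod
    def _h(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        """Walk clockwise to the first virtual node at or after the key's hash."""
        i = bisect.bisect(self.ring, (self._h(key),))
        return self.ring[i % len(self.ring)][1]
```

When a node joins, the only objects that move are those whose nearest clockwise virtual node now belongs to the newcomer; everything else stays put. That minimal-movement property is what makes migration-free expansion and auto-balancing feasible in ring-based designs.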

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16
news

Qumulo Introduces Qumulo Core Data-Aware Scale-Out File and Object Storage Software On 3rd Party

Qumulo, the leader in data-aware scale-out NAS, today announced the availability of Qumulo Core scale-out file and object storage software on Hewlett Packard Enterprise (HPE) Apollo servers.

  • Premiered: 11/15/16
  • Author: Taneja Group
  • Published: MarketWired
Topic(s): Qumulo, File Storage, object storage, Storage, scale-out, scale-out storage, scale-out NAS, data-aware, HPE, Arun Taneja
news

Qumulo Exhibits Rapid Innovation and Proven Track Record as the Trusted Partner for Large Scale

Qumulo, the leader in data-aware scale-out NAS, showcased continued company momentum, with impressive year-over-year growth, customer traction and continuous product innovation.

  • Premiered: 11/15/16
  • Author: Taneja Group
  • Published: MarketWired
Topic(s): Qumulo, scale-out, scale-out NAS, scale-out storage, data-aware, NAS, HPE, Snapshots, Snapshot, Jeff Kato
news

Qumulo QC update adds flexible file quotas, PB array

Qumulo NAS upgrade allows data to move between quota domains without rewriting the file system. QC360 hardware is a petabyte-scale addition to its QC-Series hybrid disk lineup.

  • Premiered: 02/08/17
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Storage, Qumulo, NAS, scale-out, QSFS, Qumulo Scalable File System, data-aware, HPE, Mike Matchett, NFS, SMB, scale-out storage, Qumulo Core, capacity management, analytics
Profiles/Reports

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of all-flash arrays and software-defined scale-out storage. Can there be a better way to redesign the mainstream dual-controller array that has been a popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE's StoreVirtual Software-defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach allows HPE to leverage StoreVirtual technology to create an entry array that is more cost effective than running the same software on a set of commodity x86 servers. Optimizing the cost/performance ratio with ARM technology, rather than the power-hungry processing and memory of x86 computers, yields an attractive SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, and it has the additional flexibility of being compatible with a full complement of hyper-converged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote office deployments. In the future, it will also enable low-cost capacity expansion for HPE's Hyper Converged and Composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services, and supportability were just some of the key attributes we validated with hands-on testing. What we found was that the SV3200 is an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for those HPE customers already accustomed to managing multiple HPE products through the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already in the market. The SV3200 supports advanced features such as linear scale-out and multi-site stretch cluster capability that enable business continuity techniques rarely found in storage products of this class. In short, HPE has raised the bar for entry arrays, and we recommend that businesses looking at either SDS technology or entry storage give serious consideration to the SV3200, a product with the flexibility to provide the best of both. A starting price under $10,000 makes this easy, powerful, and flexible array very affordable. Give it a closer look.

Publish date: 04/11/17