Items Tagged: REST
The Object Evolution: EMC Object-based storage for active archiving and application development
Cloud-based object architecture offers big benefits for unstructured data: active archiving, global access, fast application development and much lower cost than the compute and data protection overhead of on-premises NAS. EMC has engineered Atmos to deliver these capabilities and more as a massively scalable, distributed cloud-based system. In this Technology in Brief we examine the fast-changing world of web-scale archiving and development, and why object-based storage is well suited to these tasks.
Before putting their data into public cloud storage, IT shops need to think about how they're going to get their data out. Simple data access is typically straightforward, according to Jeff Byrne, senior analyst and consultant at Hopkinton, Mass.-based Taneja Group Inc.
- Premiered: 02/04/13
- Location: OnDemand
- Speaker(s): Jeff Byrne
- Sponsor(s): TechTarget: SearchCloudStorage.com
Here’s a key benefit of that shiny new hyperconverged box you just bought: it’s supposed to speak the cloud’s language.
- Premiered: 09/22/16
- Author: Taneja Group
- Published: The Register
Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup
Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, controller-gated RAID devices and dedicated storage appliances, simply aren't designed for today's petabyte-level enterprise backup, especially when that data lives in geographically distributed environments. The shortfall stems largely from inefficiency and limited data protection, as well as the limited scalability and inflexibility of these traditional storage solutions.
These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.
And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high-priced proprietary storage hardware and, worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge when IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage and scale-out object storage technology that addresses the shortcomings of traditional backup storage.
Software-defined scale-out storage is attractive for large-scale data backup because it offers linear performance and hardware independence, two core capabilities that drive tremendous scalability and enable cost-effective storage. Add the high fault tolerance of object storage platforms, and it's easy to see why software-defined object storage is rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores these benefits: IT professionals cited a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%) and the ability to purchase hardware separately from software (32%) as the top benefits of software-defined, scale-out architecture on industry-standard servers.
Going a step further, the Scality backup storage solution built on the Scality RING platform offers the rare combination of scalability, durability and affordability, plus the flexibility to handle mixed workloads at petabyte scale. Scality achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs; leveraging a scale-out file system that delivers linear performance as capacity grows; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.
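To make the idea of backing up large files into a flat object namespace concrete, here is a minimal sketch of how a backup client might map one large file onto fixed-size object parts, multipart-upload style, so each part can be stored and protected independently. The function and key layout are purely illustrative assumptions, not Scality RING's actual API or key scheme:

```python
import hashlib

def object_keys_for_backup(file_path: str, size_bytes: int,
                           part_size: int = 64 * 1024 * 1024) -> list[str]:
    """Map one backed-up file to a flat list of object keys.

    Large files are split into fixed-size parts so each part can be
    uploaded, stored and protected independently by the object store.
    The key layout here is hypothetical, for illustration only.
    """
    # A short content-independent hash keeps keys unique even if two
    # hosts back up the same path into the same namespace.
    digest = hashlib.sha256(file_path.encode()).hexdigest()[:12]
    n_parts = max(1, -(-size_bytes // part_size))  # ceiling division
    return [f"backup/{digest}/{file_path.lstrip('/')}/part-{i:05d}"
            for i in range(n_parts)]

# Example: a 150 MiB file with 64 MiB parts maps to 3 object keys.
keys = object_keys_for_backup("/var/lib/db/data.ibd", 150 * 1024 * 1024)
```

Because keys live in one flat namespace rather than a hierarchical file system, the store can distribute and rebalance parts across nodes without a migration step, which is the property the auto-balancing claim above depends on.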
In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.
The lines between primary and secondary storage, and between them and technologies such as hyperconvergence, remain blurry. But they are a starting point for further discussion.
- Premiered: 02/10/17
- Author: Arun Taneja
- Published: TechTarget: Search Storage