Taneja Group | REST

Items Tagged: REST

Profiles/Reports

The Object Evolution: EMC Object-based storage for active archiving and application development

Cloud-based object architecture offers big benefits for storing unstructured data: active archiving, global access to data, fast application development, and much lower cost than the high computing and data protection costs of on-premises NAS. EMC has engineered Atmos to provide these capabilities and many more as a massively scalable, distributed cloud-based system. In this Technology in Brief we will examine the fast-changing world of archiving and development on the web, and why object-based storage is the best fit for these demanding tasks.
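For readers less familiar with how applications talk to an object store, the sketch below shows the basic REST access pattern such platforms expose: archive a blob with an HTTP PUT, retrieve it anywhere with a GET. It is a minimal illustration against a generic S3-style path, not the Atmos API itself; the endpoint, bucket, key and token are all hypothetical.

import requests

# Hypothetical S3-style REST endpoint and credentials -- placeholders, not a real API.
ENDPOINT = "https://objects.example.com"
BUCKET = "active-archive"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder auth scheme

def put_object(key: str, payload: bytes) -> None:
    """Archive a blob by PUTting it to a bucket/key path over REST."""
    resp = requests.put(f"{ENDPOINT}/{BUCKET}/{key}", data=payload, headers=HEADERS)
    resp.raise_for_status()

def get_object(key: str) -> bytes:
    """Retrieve an archived blob from anywhere with a plain HTTP GET."""
    resp = requests.get(f"{ENDPOINT}/{BUCKET}/{key}", headers=HEADERS)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    put_object("reports/2012/q4.pdf", b"...archived document bytes...")
    print(len(get_object("reports/2012/q4.pdf")), "bytes retrieved")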

Publish date: 12/20/12
Resources

Cloud data access: Transfer, retrieval not so simple

Before putting their data into public cloud storage, IT shops need to think about how they're going to get their data out. Simple data access is typically straightforward, but large-scale transfer and retrieval are not, according to Jeff Byrne, senior analyst and consultant at Hopkinton, Mass.-based Taneja Group Inc.



  • Premiered: 02/04/13 at OnDemand
  • Location: OnDemand
  • Speaker(s): Jeff Byrne
  • Sponsor(s): TechTarget: SearchCloudStorage.com
Topic(s): Cloud, TechTarget, Jeff Byrne, Storage, gateway, REST
news

Going hyperconverged? Don't forget to burst into the cloud

Here’s a key benefit of that shiny new hyperconverged box you just bought: it’s supposed to speak the cloud’s language.

  • Premiered: 09/22/16
  • Author: Taneja Group
  • Published: The Register
Topic(s): hyperconverged, hyperconvergence, hyperconverged infrastructure, Jeff Kato, Cloud, Private Cloud, Storage, API, Public Cloud, Azure, Amazon AWS, AWS, Hybrid Cloud, Compute, Security, cloud bursting, VDI, Disaster Recovery, DR, Backup, software-defined, software defined, REST API, REST, VM, Virtual Machine, Virtualization, IT infrastructure, DevOps, workflow automation
Profiles/Reports

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, RAID devices gated by controllers and dedicated storage appliances, simply aren't designed for today's enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. This shortfall is due in large part to the inefficiency and limited data protection of these traditional storage solutions, as well as their limited scalability and lack of flexibility.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high-priced proprietary storage hardware and, worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge if IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage solutions and scale-out object storage technology that addresses the shortcomings of traditional backup storage.

Software-defined scale-out storage is attractive for large-scale data backup because these storage solutions offer linear performance and hardware independence – two core capabilities that drive tremendous scalability and enable cost-effective storage solutions. Add to this the high fault tolerance of object storage platforms (illustrated in the brief comparison below), and it's easy to see why software-defined object storage solutions are rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores the benefits of software-defined scale-out storage: IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%), and the ability to purchase hardware separately from software (32%).
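To make the fault-tolerance and cost points concrete, here is a back-of-the-envelope comparison of plain replication against an erasure-coded layout of the kind scale-out object stores typically use. The 9+3 split is purely illustrative and not any vendor's default:

# Illustrative arithmetic only: compares 3-way replication with a hypothetical
# 9+3 erasure-coding layout; generic parameters, not a specific product's settings.

def replication(copies: int = 3):
    overhead = copies                  # raw capacity consumed per usable byte
    tolerated_failures = copies - 1    # disks that can fail before data loss
    return overhead, tolerated_failures

def erasure_coding(data_chunks: int = 9, parity_chunks: int = 3):
    overhead = (data_chunks + parity_chunks) / data_chunks
    tolerated_failures = parity_chunks
    return overhead, tolerated_failures

if __name__ == "__main__":
    print("3x replication:   %.2fx raw capacity, survives %d failures" % replication())
    print("9+3 erasure code: %.2fx raw capacity, survives %d failures" % erasure_coding())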

Going a step further, the Scality backup storage solution, built upon the Scality RING platform, offers the rare combination of scalability, durability and affordability, plus the flexibility to handle mixed workloads at petabyte scale. Scality backup storage achieves this by supporting multiple file and object protocols so companies can back up files, objects and VMs (a brief protocol-level sketch follows this abstract); leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.
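As a quick illustration of what the multi-protocol support described above looks like from the backup application's side, the sketch below pushes a backup artifact to an S3-compatible object endpoint and pulls it back for a restore. The endpoint URL, bucket name and credentials are hypothetical, and the assumption of an S3-compatible interface is ours for illustration; see the paper itself for the protocols Scality RING actually exposes.

import boto3

# Hypothetical endpoint and credentials for an S3-compatible object store;
# substitute the values your own deployment exposes.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
)

BUCKET = "nightly-backups"  # illustrative bucket name

def back_up(local_path: str, key: str) -> None:
    """Stream a local backup artifact (file, dump, VM image) to object storage."""
    s3.upload_file(local_path, BUCKET, key)

def restore(key: str, local_path: str) -> None:
    """Pull an object back down when a restore is needed."""
    s3.download_file(BUCKET, key, local_path)

if __name__ == "__main__":
    back_up("/var/backups/db-2016-10-18.dump", "db/2016/10/18/db.dump")
    restore("db/2016/10/18/db.dump", "/tmp/db.dump")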

Publish date: 10/18/16
news

Hyper-convergence: It's for more than primary data storage

The lines between primary and secondary storage in applications such as hyper-convergence remain blurry, but they are a starting point for further discussion.

news

Use the cloud to enhance the functions of primary storage

Learn some of the best ways to leverage public cloud as a storage tier to complement primary storage and make data centers more efficient.

  • Premiered: 05/04/17
  • Author: Jeff Byrne
  • Published: TechTarget: Search Storage
Topic(s): Storage, Cloud, Primary Storage, Public Cloud, Amazon, Amazon S3, Simple Storage Service, Elastic Block Storage, Amazon EBS, Cloud Security, Security, Availability, Cloud Storage, Backup, Disaster Recovery, DR, Archive, cloud tiering, hybrid storage, IOPS, High Performance, SAN, SAS, latency, SATA, SATA drives, Migration, automated storage tiering, NVDIMM, SSD
news

Object storage use cases coming to a data service near you

Object storage, unlike traditional file and block storage, is well-suited to manage vast amounts of unstructured data.

  • Premiered: 07/06/17
  • Author: Steve Ricketts
  • Published: TechTarget: Search Storage
Topic(s): Steve Ricketts, object storage, Storage, Cloud, Block Storage, unstructured data, REST, Data Storage, web 2.0, data analytics, Amazon, Amazon S3, Cleversafe, IBM Cloud, IBM Cloud Object Storage, IBM, Scality, Western Digital, Private Cloud, Public Cloud, Hybrid Cloud, Backup, Archive, scalability, scalable, data retention, compliance, File Storage, Capacity, storage-as-a-service