Taneja Group | RAID

Items Tagged: RAID

Profiles/Reports

Adaptec Snap Server 650

The Snap Server product line has long been one of the most heralded workhorses of cost-effective NAS. With the most recent release of the Snap Server 650 from Adaptec, we believe the company continues to push the Snap line through a critical transition point in price/performance and features.

Publish date: 08/15/07
Profiles/Reports

Adaptec Snap Server 700i

The Snap Server 700i is an iSCSI storage system combining dedicated iSCSI connectivity, hardware RAID, and the OnTarget operating system, which provides a rich storage management feature set. The series does not replace the existing dual-protocol NAS/iSCSI Snap Servers; rather, it lets Adaptec take a hard charge at the growing market for enterprise-level iSCSI SANs.

Publish date: 09/01/07
news

External storage might make sense for Hadoop

Using Hadoop to drive big data analytics doesn't necessarily mean building clusters of distributed storage -- good old external storage might be a better choice.

  • Premiered: 02/28/14
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Hadoop, Big Data, analytics, SAN, NAS, scale-out, HDFS, MapReduce, DAS, RAID, replication, Sentry, Accumulo, scalability
news

5 Tips For Working With HP's StoreVirtual VSA

Having had the chance to spend some serious time with the latest StoreVirtual VSA (11.5) in Taneja Group's validation lab, I thought I'd share five tips that made my life a little easier, and may make your experience with it more productive.

  • Premiered: 09/10/14
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): HP, StoreVirtual, StoreVirtual VSA, VSA, LeftHand Networks, DAS, Storage, RAID, ESX Server, ESX, Data Center, Hyper-V, KVM, vSphere, iSCSI, LUN, SSD, Flash, Adaptive Optimization, AO, VM, Virtual Machine, SRM
news

Purchase criteria for all-flash storage arrays

All-flash storage arrays share the common trait of being fast, but once you get past the speed, there's still a lot to consider.

  • Premiered: 08/29/14
  • Author: Arun Taneja
  • Published: TechTarget: Search Storage
Topic(s): Arun Taneja, TechTarget, All Flash, all flash array, Flash, SSD, Deduplication, Data Deduplication, Compression, Data protection, RAID
news

Hadoop Storage Options: Time to Ditch DAS?

Hadoop is immensely popular today because it makes big data analysis cheap and simple: you get a cluster of commodity servers and use their processors as compute nodes to do the number crunching, while their internal direct-attached storage (DAS) provides very low-cost storage nodes.
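
To make the tradeoff concrete, the minimal Python sketch below compares the usable capacity of a DAS-based Hadoop cluster under HDFS's default 3x replication with the same number of drives protected by RAID-6 in an external array. The cluster size, drive counts, and RAID group width are hypothetical figures chosen purely for illustration.

```python
# Hypothetical capacity comparison: HDFS triple replication on internal DAS
# versus the same drives in an external array protected with RAID-6.
# All figures are illustrative, not measured values for any product.

def hdfs_usable_tb(nodes, disks_per_node, disk_tb, replication=3):
    """Usable capacity of a DAS-based HDFS cluster after replication."""
    raw = nodes * disks_per_node * disk_tb
    return raw / replication

def raid6_usable_tb(disks, disk_tb, raid_group=10):
    """Usable capacity of RAID-6 groups (2 parity drives per group)."""
    groups = disks // raid_group
    return groups * (raid_group - 2) * disk_tb

if __name__ == "__main__":
    # 20 commodity servers with 12 x 4 TB drives each (240 drives total)
    print("HDFS 3x replication usable: %.0f TB" % hdfs_usable_tb(20, 12, 4))
    print("External RAID-6 usable:     %.0f TB" % raid6_usable_tb(240, 4))
```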

  • Premiered: 02/19/15
  • Author: Taneja Group
  • Published: Infostor
Topic(s): Hadoop, Storage, DAS, Direct attached storage, Compute, SATA, HDFS, Hadoop Distributed File System, data, MapReduce, YARN, Hadoop 2, data lake, data refinery, Enterprise Storage, DR, Disaster Recovery, compliance, Security, Business Continuity, Performance, FC, Fibre Channel, SAN, NAS, Virtualization, Cloud, VM, Virtual Machine, MapR
news

New approaches to scalable storage

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.

  • Premiered: 03/16/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Mike Matchett, TechTarget, Storage, scalable, scalability, analytics, Data Storage, Big Data, Block Storage, File Storage, object storage, scale-out, scale-up, Performance, Capacity, HA, high availability, latency, IOPS, Flash, SSD, File System, Security, NetApp, Data ONTAP, ONTAP, EMC, Isilon, OneFS, Cloud
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems (the integration of compute, networking, and storage) have rapidly become the preferred foundational building block for businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the effort and time to custom-build its infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads such as Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI).

Traditionally, these converged infrastructure systems have been built using a three-tier architecture, where compute, networking, and storage integrated at the rack level give businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Emerging onto the scene more recently is a more modular approach to convergence that we term Hyper-Convergence. With hyper-convergence, the three-tier architecture is collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage with advanced data services all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
news

Data aware storage yields insights into business info

Storage isn't just a bunch of dumb disks anymore. In fact, storage infrastructure is smarter than ever.

  • Premiered: 05/20/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): data-aware, Storage, storage infrastructure, Datacenter, convergence, converged, Flash, SSD, Performance, High Performance, Metadata, Hadoop, HDFS, object storage, Intelligent Storage, QoS, Oracle, Oracle FS1, VMWare, VVOL, VVOLs, VMware VVOLs, RAID, Hypervisor, API, Tintri, Tarmin, GridBank, Lucene, Solr
news

How can storage arrays take advantage of vSphere VVOLs?

Virtual Volumes allow storage features to be provisioned to VMs, but the available feature set depends on the hardware.

  • Premiered: 05/21/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VMware VVOLs, VMware Virtual Volumes, Virtual Volumes, VVOLs, Tom Fenton, VM, Virtual Machine, vSphere, VMware vSphere, Storage, Dell, EqualLogic, RAID, Snapshots, Performance
news

Expert Video: Copy data management methods for storage

On average, how many copies of data get made between production and test/dev? Probably more than you think, according to Mike Matchett.

  • Premiered: 06/29/15
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Data Management, Mike Matchett, Storage, Virtualization, RAID, object storage, DP, Data protection
news

VMware VVOLs could lift external storage systems

VMware VVOLs could be the elixir traditional external storage systems need to gain ground on VM-centric storage and hyper-converged products.

  • Premiered: 08/03/15
  • Author: Jeff Kato
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VVOL, VVOLs, VMware VVOLs, VM-centric, Storage, hyper-converged, Cloud, Virtualization, Virtual Machine, VM, convergence, converged, Dell, EMC, HP, IBM, NetApp, vSphere, vSphere 6, Nutanix, SimpliVity, Tintri, QoS, Performance, high availability, HA, RAID, Capacity
Profiles/Reports

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of reliability, at scale. That's a goal of less than three seconds of anticipated unavailability per year, better than the availability of most data centers themselves.

Data availability is the attribute enterprises need most to maximize the value of their enterprise storage, especially as data volumes grow to ever-larger scales. Yet traditional enterprise storage solutions aren't keeping pace with the growing need for more than the oft-touted 5-9's of storage reliability, instead deferring to layered-on methods such as additional replication copies, which drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to break down: ever-longer disk rebuild times create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or, at best, default) array designs, software issues, and well-intentioned but often fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.

In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.
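
For readers who want to sanity-check the 7-9's figure, the short Python sketch below converts "N nines" of availability into expected downtime per year; seven nines works out to roughly three seconds, versus about five minutes for the oft-touted five nines.

```python
# Convert "N nines" of availability into expected downtime per year.
# A quick check of the 7-9's claim of roughly three seconds of unavailability.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(nines):
    unavailability = 10 ** (-nines)   # e.g. 7 nines -> 1e-7
    return SECONDS_PER_YEAR * unavailability

for n in (5, 6, 7):
    print("%d nines -> %6.2f seconds of downtime per year" % (n, downtime_seconds_per_year(n)))
# 5 nines -> ~315.58 s (about 5.3 minutes); 7 nines -> ~3.16 s
```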

Publish date: 09/29/15
news

Navigate today's hyper-converged market

The hyper-converged market is rife with products from competing vendors, making choosing a hyper-converged system a difficult proposition.

  • Premiered: 11/05/15
  • Author: Jeff Kato
  • Published: TechTarget: Search Virtual Storage
Topic(s): hyper convergence, hyper-converged, Storage, HCI, Dell, EMC, HP, VMWare, Nutanix, Scale Computing, SimpliVity, DataCore, Gridstore, Maxta, Nimboxx, Pivot3, modular, scalability, scalable, VDI, Virtual Desktop Infrastructure, Performance, Virtual Machine, VM, VM-centric, Virtualization, Microsoft, KVM, Hypervisor, Open Source
Profiles/Reports

HyperConverged Infrastructure Powered by Pivot3: Benefits of a More Efficient HCI Architecture

Virtualization has matured and become widely adopted in the enterprise market. HyperConverged Infrastructure (HCI), with virtualization at its core, is taking the market by storm, enabling virtualization for businesses of all sizes. The success of these technologies has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the time and effort required to create custom infrastructure from best-of-breed DIY components.

With HCI, the traditional three-tier architecture has been collapsed into a single system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. The immense success of this approach has led to increased competition in this space, and customers must now sort through the various offerings, analyzing key attributes to determine which are significant.

One of these competing vendors, Pivot3, was founded in 2002 and has been in the HCI market since 2008, well before the term HyperConverged was used. For many years, Pivot3's vSTAC architecture has provided the most efficient scale-out Software-Defined Storage (SDS) system available on the market. This efficiency is attributed to three design innovations. The first is Pivot3's extremely efficient and reliable erasure coding technology, called Scalar Erasure Coding; by contrast, many leading HCI implementations use replication-based redundancy techniques that are heavy on storage capacity utilization. Scalar Erasure Coding can deliver significant capacity savings depending on the level of drive protection selected. The second innovation is Pivot3's Global Hyperconvergence, which creates a cross-cluster virtual SAN, the HyperSAN: in case of appliance failure, a VM migrates to another node and continues operations without the need to divert compute power to copy data over to that node. The third innovation is a reduction in the CPU overhead needed to implement the SDS features and other VM-centric management tasks. The HCI software runs on the same CPU complex as business applications, and this additional usage is referred to as the HCI overhead tax. The HCI overhead tax matters because licensing costs for many applications and infrastructure software are charged on a per-CPU basis. Even with today's ever-increasing core counts per CPU, there can still be significant cost savings from keeping the HCI overhead tax low.
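
As a rough illustration of why erasure coding is lighter on capacity than replication, the Python sketch below compares the raw storage needed to protect the same usable data with full replicas versus a generic (data + parity) erasure code. The report does not disclose Scalar Erasure Coding's actual parameters, so the 8+2 layout and replica count used here are assumptions for illustration only.

```python
# Illustrative raw-capacity comparison: replica-based redundancy vs. a
# generic erasure code. The 8+2 layout is an assumption, not Pivot3's
# actual Scalar Erasure Coding configuration.

def raw_capacity_replication(usable_tb, copies=2):
    """Raw capacity when every block is stored as full replicas."""
    return usable_tb * copies

def raw_capacity_erasure(usable_tb, data=8, parity=2):
    """Raw capacity with a (data + parity) erasure-coded layout."""
    return usable_tb * (data + parity) / data

if __name__ == "__main__":
    usable = 100  # TB of usable VM storage for a hypothetical VDI deployment
    print("2x replication raw capacity:   %.0f TB" % raw_capacity_replication(usable, 2))
    print("8+2 erasure code raw capacity: %.0f TB" % raw_capacity_erasure(usable, 8, 2))
    # 200 TB vs. 125 TB of raw capacity for the same 100 TB of usable data
```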

The Pivot3 family of HCI products delivers high data efficiency with very low overhead, making it an ideal solution for storage-centric business workload environments where storage costs and reliability are critical success factors. One example is a VDI implementation, where cost per seat determines success. Other examples are capacity-centric workloads such as big data or video surveillance that could benefit from a Pivot3 HCI approach with leading storage capacity and reliability. In this paper we compare Pivot3 with other leading HCI architectures, using data extracted from the alternative HCI vendors' reference architectures for VDI implementations. Using real-world examples, we demonstrate that with other solutions, users must purchase up to 136% more raw storage capacity and up to 59% more total CPU cores than are required when using equivalent Pivot3 products. These impressive results can lead to significant cost savings.

Publish date: 12/10/15
news / Blog

Cloud-Enabling the Mainframe - Oracle VSM7 Unlocks Mainframe Storage Options

Today Oracle launches the 7th generation of its enterprise-class virtual tape library, the StorageTek VSM7.

news

Disaggregation marks an evolution in hyper-convergence

Hyper-convergence vendors are pushing forward with products that will offer disaggregation, the latest addition to the data center paradigm.

  • Premiered: 08/03/16
  • Author: Arun Taneja
  • Published: TechTarget: Search Storage
Topic(s): disaggregated storage, Storage, disaggregation, hyperconverged, hyperconvergence, hyper-converged, hyper-convergence, Datacenter, Data Center, converged, convergence, Moore's Law, Hypervisor, software-defined, DataCore, EMC, ScaleIO, HPE, StoreVirtual, StoreVirtual VSA, VSA, VMWare, VMware VSAN, Virtual SAN, SAN, Virtualization, cluster, Nutanix, SimpliVity, VxRack
Profiles/Reports

5 9's Availability in a Lower Cost Dell SC4020 Product? Yes, Really!

Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.

Even though the SC4020 is a lower-cost member of the SC family, it meets 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability, but through space-saving design choices such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000's 6U), the SC4020 still achieves 5 9s using the same strict test measurement criteria.

Frankly, many vendors choose not to subject their lower cost models to 5 9s testing. The vendor may not have put a lot of development dollars into the lower cost product in an effort to reduce cost and maintain profitability on a lower-priced system.

Dell didn’t do it this way with the SC4020. Instead of watering it down by stripping features, they architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable, and retains the SC Series enterprise features: high availability and reliability, performance, centralized management, not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.

Publish date: 08/31/16
Profiles/Reports

Petabyte-Scale Backup Storage Without Compromise: A Look at Scality RING for Enterprise Backup

Traditional backup storage is being challenged by the immense growth of data. These solutions, including tape, controller-gated RAID devices, and dedicated storage appliances, simply aren't designed for today's enterprise backup storage at petabyte levels, especially when that data lives in geographically distributed environments. This insufficiency is due in large part to the inefficiency and limited data protection of these traditional storage solutions, as well as their limited scalability and lack of flexibility.

These constraints can lead to multiple processes and many storage systems to manage. Storage silos develop as a result, creating complexity, increasing operational costs and adding risk. It is not unusual for companies to have 10-20 different storage systems to achieve petabyte storage capacity, which is inefficient from a management point of view. And if companies want to move data from one storage system to another, the migration process can take a lot of time and place even more demand on data center resources.

And the concerns go beyond management complexity. Companies face higher capital costs due to relatively high priced proprietary storage hardware, and worse, limited fault tolerance, which can lead to data loss if a system incurs simultaneous disk failures. Slow access speeds also present a major challenge if IT teams need to restore large amounts of data from tape while maintaining production environments. As a result, midsized companies, large enterprises and service providers that experience these issues have begun to shift to software-defined storage solutions and scale-out object storage technology that addresses the shortcomings of traditional backup storage.
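
To illustrate the fault-tolerance point, the brief Python sketch below compares how many simultaneous disk failures a few common protection schemes can survive against their raw-capacity overhead. The erasure-code widths shown are generic examples, not Scality RING's actual configuration.

```python
# Hypothetical comparison: simultaneous disk failures survived vs. raw-capacity
# overhead for common protection schemes. Widths are generic examples only,
# not Scality RING's actual layout.

def scheme(name, data_frags, parity_frags):
    overhead = (data_frags + parity_frags) / data_frags
    return name, parity_frags, overhead

schemes = [
    scheme("RAID-6 (8+2)", 8, 2),
    scheme("3-way replication", 1, 2),    # one data copy plus two extra copies
    scheme("Erasure code (9+3)", 9, 3),
]

for name, failures, overhead in schemes:
    print("%-20s survives %d disk failures at %.2fx raw capacity" % (name, failures, overhead))
```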

Software-defined scale-out storage is attractive for large-scale data backup because these storage solutions offer linear performance and hardware independence: two core capabilities that drive tremendous scalability and enable cost-effective storage solutions. Add to this the high fault tolerance of object storage platforms, and it's easy to see why software-defined object storage solutions are rapidly becoming the preferred backup storage approach for petabyte-scale data environments. A recent Taneja Group survey underscores the benefits of software-defined scale-out storage. IT professionals indicated that the top benefits of a software-defined, scale-out architecture on industry-standard servers are a high level of flexibility (34%), low cost of deployment (34%), modular scalability (32%), and the ability to purchase hardware separately from software (32%).

Going a step further, the Scality backup storage solution, built upon the Scality RING platform, offers the rare combination of scalability, durability, and affordability plus the flexibility to handle mixed workloads at petabyte scale. Scality backup storage achieves this by supporting multiple file and object protocols so companies can back up files, objects, and VMs; leveraging a scale-out file system that delivers linear performance as system capacity increases; offering advanced data protection for extreme fault tolerance; enabling hardware independence for better price/performance; and providing auto-balancing that enables migration-free hardware upgrades.

In this paper, we will look at the limitations of backup appliances and Network-Attached Storage (NAS) and the key requirements for backup storage at petabyte-scale. We will also study the Scality RING software-defined architecture and provide an overview of the Scality backup storage solution.

Publish date: 10/18/16
Profiles/Reports

The Best All-Flash Array for SAP HANA

These days the world operates in real-time all the time. Whether making airline reservations or getting the best deal from an online retailer, data is expected to be up to date with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24X7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology and allows the processing of massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM as opposed to reading it from a disk. At the heart of SAP HANA is a database that operates on both OLAP and OLTP database workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance. Recently SAP has expanded support to best in class components through their SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements needed for HANA TDI environments and evaluated storage alternatives including the HPE 3PAR StoreServ All Flash. We will make a strong case as to why all-flash arrays like the HPE 3PAR version are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM loses its mind when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance attributes of your shared storage dictate how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes. Greater scalability means more real-time information can be processed. SAP HANA workload shared storage requirements are write-intensive, with low latency for small files and sequential throughput for large files. However, the overall storage capacity required is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically you would need 10X the equivalent spinning media drives just to meet the performance requirements, which then leaves you with a massive amount of capacity that cannot be used for other purposes.
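
The 10X point becomes clearer with a back-of-envelope drive-count calculation, sketched in Python below. The per-drive IOPS and capacity figures and the workload target are assumptions chosen for illustration rather than measured values for any specific array; the takeaway is that spinning media must be bought for performance, stranding capacity, while flash can be sized much closer to the actual capacity need.

```python
import math

# Back-of-envelope drive sizing for a write-intensive, latency-sensitive
# workload. Per-drive performance/capacity figures and the workload target
# are assumptions for illustration, not measurements of any specific array.

def drives_needed(target_iops, target_tb, iops_per_drive, tb_per_drive):
    by_performance = math.ceil(target_iops / iops_per_drive)
    by_capacity = math.ceil(target_tb / tb_per_drive)
    return max(by_performance, by_capacity), by_performance, by_capacity

if __name__ == "__main__":
    target_iops, target_tb = 200_000, 400   # hypothetical HANA TDI requirement
    hdd = drives_needed(target_iops, target_tb, iops_per_drive=200, tb_per_drive=4)
    ssd = drives_needed(target_iops, target_tb, iops_per_drive=20_000, tb_per_drive=4)
    print("HDDs needed: %4d (performance-bound: %d, capacity-bound: %d)" % hdd)
    print("SSDs needed: %4d (performance-bound: %d, capacity-bound: %d)" % ssd)
    # HDD case: 1,000 drives bought purely to meet IOPS, carrying ~4 PB of raw
    # capacity for a 400 TB need. SSD case: 100 drives, sized by capacity
    # rather than performance, roughly 10X fewer spindles.
```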

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 

Publish date: 06/07/17