Taneja Group | latency

Items Tagged: latency

Profiles/Reports

The Cost of Performance

What’s an IO worth to you? Is it worth more than a gigabyte? Less? That’s a hard question for many IT and business professionals to even begin to consider, yet we often see it bandied about. The comparison certainly has merit; it just isn’t easily understood. In this industry article, Taneja Group examines what performance really costs and, with that understanding in mind, looks at two examples of new solutions and what they suggest about a changing way to get cost-effective performance inside the data center walls.
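The IO-versus-gigabyte question can be made concrete by pricing a system two ways: dollars per gigabyte and dollars per IO per second. The sketch below uses entirely hypothetical prices and specs (none of these figures come from the article) just to show how the two metrics pull in opposite directions:

```python
def cost_metrics(price_usd, capacity_gb, iops):
    """Return ($ per GB, $ per IO/s) for a storage system."""
    return price_usd / capacity_gb, price_usd / iops

# Hypothetical list prices and specs, for illustration only:
hdd_gb, hdd_io = cost_metrics(50_000, 100_000, 20_000)   # capacity-optimized array
ssd_gb, ssd_io = cost_metrics(80_000, 20_000, 400_000)   # performance-optimized array

print(f"HDD array: ${hdd_gb:.2f}/GB, ${hdd_io:.4f} per IO/s")
print(f"SSD array: ${ssd_gb:.2f}/GB, ${ssd_io:.4f} per IO/s")
```

Under these made-up numbers the capacity-optimized array wins on $/GB while the performance-optimized array wins on $/IO by an order of magnitude, which is exactly the trade-off the article asks buyers to weigh.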

Publish date: 04/22/11
Profiles/Reports

Riverbed Optimization for the Cloud – Weaving together the connectivity behind the hybrid cloud

There’s little doubt today that cloud is an irresistible force altering the landscape on which IT systems are built and integrated. In fact, the cloud is already present in today’s data centers: the footprint of IT has changed rapidly and is quickly becoming a mix of services drawn from both public and private clouds. To a small or large degree, the potential of this first generation of Internet-scattered public cloud services has opportunistically drawn in many users, even within the most staid and change-resistant institutions. In this Solution Brief, we’ll examine how one vendor – Riverbed – is turning its technology to the integration of these cloud services with traditional infrastructure, an area where it is particularly well positioned to deliver, given its rich history of optimizing connectivity and capabilities across systems scattered around the globe.

Publish date: 07/25/11
news

InfiniBand’s Data Center March

Today’s enterprise data center is challenged with managing growing data, hosting denser computing clusters, and meeting increasing performance demands. As IT architects work to design efficient solutions for Big Data processing, web-scale applications, elastic clouds, and the virtualized hosting of mission-critical applications they are realizing that key infrastructure design “patterns” include scale-out compute and storage clusters, switched fabrics, and low-latency I/O.

  • Premiered: 07/18/12
  • Author: Mike Matchett
  • Published: IBTA BLOG
Topic(s): IBTA, Mike Matchett, InfiniBand, Storage Clusters, fabric, latency
news

Nimble launches all-flash shelf instead of all-flash array

Nimble Storage, looking to break into the enterprise, today launched its highest capacity hybrid flash array and a solid-state drive (SSD) shelf as part of what it calls its Adaptive Flash platform.

  • Premiered: 06/11/14
  • Author: Taneja Group
  • Published: Tech Target: Search Solid State Storage
Topic(s): Nimble Storage, Nimble, SSD, Flash, Adaptive Flash, InfoSight, analytics, latency, all flash array
Profiles/Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to InfiniBand, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
news

SSD controllers may run your applications someday

It's time for enterprise applications and storage to work more closely together, even to the point where SSDs become a pool of computing power, according to Samsung Semiconductor.

  • Premiered: 08/06/14
  • Author: Taneja Group
  • Published: ComputerWorld
Topic(s): SSD, Flash, SSD Controller, CPU, Samsung, Storage, Performance, latency, Arun Taneja
news

Figuring out the real price of flash technology

Sometimes comparing the costs of flash arrays is an apples-to-oranges affair -- interesting, but not very helpful.

  • Premiered: 11/04/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Solid State Storage
Topic(s): Mike Matchett, Flash, SSD, Storage, hybrid flash, Hybrid Array, TCO, latency, bandwidth, Protection, Data protection, Load DynamiX, Violin Memory, Pure Storage, NAND, HP, VDI, Virtual Desktop Infrastructure, Deduplication, Compression, EMC, XtremeIO, StoreServ, IBM, IBM FlashSystem, FlashSystem, Kaminario, Nimble Storage, Tegile
news

New approaches to scalable storage

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.

  • Premiered: 03/16/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Mike Matchett, TechTarget, Storage, scalable, scalability, analytics, Data Storage, Big Data, Block Storage, File Storage, object storage, scale-out, scale-up, Performance, Capacity, HA, high availability, latency, IOPS, Flash, SSD, File System, Security, NetApp, Data ONTAP, ONTAP, EMC, Isilon, OneFS, Cloud
news

Big data analytics applications impact storage systems

Analytics applications for big data have placed extensive demands on storage systems, which Mike Matchett says often requires new or modified storage structures.

  • Premiered: 09/03/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Mike Matchett, Big Data, analytics, Storage, Primary Storage, scalability, Business Intelligence, BI, AWS, Amazon AWS, S3, HPC, High Performance Computing, High Performance, ETL, HP Haven, HP, Hadoop, Vertica, convergence, converged, IOPS, Capacity, latency, scale-out, software-defined, software-defined storage, SDS, YARN, Spark
news

Object-file system combo rapidly expands

Object-based storage adoption took longer than expected, so object vendors have added file systems to their products to make them look more familiar to users.

  • Premiered: 04/27/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): object, object storage, Storage, Caringo, Caringo Swarm, FileFly, Windows, NTFS, Network File System, NFS, scale-out, scale-out storage, Scality, all-flash, SSD, AFA, Pure Storage, SAN, elasticity, Data reduction, Encryption, erasure coding, Metadata, metadata engine, Exablox, OneBlox, latency, Arun Taneja, Amplidata, Cleversafe
Profiles/Reports

High Capacity SSDs are Driving the Shift to the All Flash Data Center

All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they’re expected to approach 50% of total revenue by the end of 2016, roughly a 60% CAGR. This isn’t surprising, really. Even though they’ve historically cost more on a $/GB basis (the gap is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.

The most obvious advantage that SSDs have over HDDs is performance. With no moving parts to slow them down, they can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has allowed storage vendors to implement storage capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.
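The capacity optimization mentioned above can be illustrated with a toy model of content-addressed deduplication: hash each incoming block, store each unique block once, and keep a per-block recipe of hashes for reconstruction. This is a minimal sketch of the general technique, not any vendor's actual AFA implementation:

```python
import hashlib

def dedupe(blocks):
    """Content-addressed dedup: store each unique block once and keep a
    per-logical-block recipe of hashes for later reconstruction."""
    store, recipe = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks cost nothing extra
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reassemble the logical blocks from the recipe."""
    return [store[d] for d in recipe]

logical = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
store, recipe = dedupe(logical)
# 4 logical blocks, only 2 unique -> a 2:1 reduction in this toy case
```

In a real array the hashing and lookups happen inline at wire speed, which is why the raw performance of flash is what made this kind of data reduction practical.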

While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.

When looking at the total cost envelope, and factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now a new generation of high-capacity SSDs is coming, and it’s poised to accelerate the AFA takeover. We believe the flash revolution in storage that started in 2011 will outpace even the most optimistic forecast in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let’s take a look at how and why.
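The "roughly 60% CAGR" figure above can be sanity-checked directly from the revenue-share numbers in the text (under 5% in 2011, about 50% by end of 2016), treating the share percentages as the growing quantity:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Revenue-share figures from the text: <5% in 2011 -> ~50% by end of 2016.
growth = cagr(0.05, 0.50, 5)
print(f"Implied CAGR: {growth:.1%}")
```

A 10x increase over five years works out to just under 59% per year, consistent with the "roughly 60%" claim.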

Publish date: 06/10/16
news

Big Data Storage Solutions: Options Abound

Hadoop, Spark and other big data analysis tools all have one thing in common: they need some form of big data storage to hold the vast quantities of data that they crunch through. The good news is that big data storage options are proliferating.

  • Premiered: 08/09/16
  • Author: Taneja Group
  • Published: InfoStor
Topic(s): Hadoop, Spark, Big Data, big data storage, DAS, Compute, cluster, flexibility, Mike Matchett, Hadoop Distributed File System, HDFS, NFS, MapReduce, API, SAN, NAS, TCO, DDN, EMC, EMC Isilon, Isilon, SDS, software-defined, software-defined storage, ViPR, DriveScale, hScaler, Cisco, HPE, IBM
news

SimpliVity launches all-flash appliance models for hyper-convergence

Far from the first to do so, SimpliVity adds all-flash hyper-converged appliances, but waited until price levels dropped to avoid investing 'too early into the buzz.'

  • Premiered: 08/23/16
  • Author: Taneja Group
  • Published: TechTarget: Search Converged Infrastructure
Topic(s): SimpliVity, all-flash, All Flash, hyper-converged, hyper-converge, hyper-convergence, AFA, Disaster Recovery, DR, Backup, OmniStack, Cisco, Unified Computing System, UCS, Compute, Storage, Virtualization, Lenovo, SSD, Flash, Data Deduplication, Deduplication, Nutanix, VxRack, VxRail, Pivot3, vSTAC, HCI, hyperconverged infrastructure, HyperGrid
news

Tintri OS storage upgrade focuses on cloud, containers for DevOps

Tintri storage moves 'in lockstep' with VMware for cloud, container and DevOps support with a vRealize Orchestrator plug-in and vSphere Integrated Containers support.

  • Premiered: 11/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Tintri, DevOps, VMWare, Cloud, container, containers, vRealize, vSphere, VMware vSphere, Docker, IBM, cloud object storage, object storage, Storage, Public Cloud, S3, Backup, copy data management, copy data, CDM, VM, Virtual Machine, Flocker, Google Kubernetes, Docker Swarm, Apache, Apache Mesos, Snapshot, Snapshots, Mike Matchett
news

Four big data and AI trends to keep an eye on

AI is making a comeback - and it's going to affect your data center soon.

  • Premiered: 11/17/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): AI, Artificial Intelligence, Big Data, Data Center, Datacenter, Machine Learning, Apache, Apache Spark, Spark, Hadoop, MapReduce, latency, In-Memory, big data analytics, Business Intelligence, Python, Dataiku, Cask, ETL, data flow management, Virtualization, Storage, scale-up, scale-out, scalability, GPU, IBM, NVIDIA, Virtual Machine, VM
Profiles/Reports

The Best All-Flash Array for SAP HANA

These days the world operates in real time, all the time. Whether you are making an airline reservation or getting the best deal from an online retailer, you expect data to be up to date, with the best information at your fingertips. Businesses must meet this requirement whether they sell products or services; having real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world’s 24x7 real-time demands cannot wait for legacy ERP and CRM application rewrites, so companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that can operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology and allows the processing of massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM as opposed to reading it from disk. At the heart of SAP HANA is a database that operates on both OLAP and OLTP database workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance. Recently SAP has expanded support to best-in-class components through its SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements for HANA TDI environments and evaluated storage alternatives, including the HPE 3PAR StoreServ All Flash. We will make a strong case as to why all-flash arrays like the HPE 3PAR version are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM loses its mind when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance of your shared storage dictates how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes: greater scalability means more real-time information processed. SAP HANA’s shared storage workload is write-intensive, demanding low latency for small files and sequential throughput for large files. The overall storage capacity required, however, is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically, you would need 10X as many spinning-media drives just to meet the performance requirements, leaving you with a massive amount of capacity that cannot be used for other purposes.

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 
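The performance-versus-capacity sizing argument above can be sketched numerically. The per-device IOPS and capacity figures here are generic assumptions for illustration, not HANA TDI specifications or data from the study; the exact HDD-to-SSD ratio depends entirely on what per-device numbers you assume:

```python
import math

def size_for_performance(target_iops, per_device_iops, device_capacity_gb):
    """Devices needed to hit an IOPS target, plus the raw capacity that
    drive count brings along whether you need it or not."""
    n = math.ceil(target_iops / per_device_iops)
    return n, n * device_capacity_gb

# Illustrative workload: 200K IOPS needed, ~20 TB of actual data.
hdd_n, hdd_cap = size_for_performance(200_000, 200, 4_000)     # assumed HDD: 200 IOPS, 4 TB
ssd_n, ssd_cap = size_for_performance(200_000, 25_000, 1_920)  # assumed SSD: 25K IOPS, 1.92 TB

print(f"HDDs: {hdd_n} drives, {hdd_cap/1000:.0f} TB raw")
print(f"SSDs: {ssd_n} drives, {ssd_cap/1000:.1f} TB raw")
```

With these assumptions the HDD configuration strands vastly more raw capacity than the 20 TB the workload needs, while a handful of SSDs meets the IOPS target; that stranded capacity is exactly the waste the profile describes.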

Publish date: 02/14/17
news

Hyper-convergence: It's for more than primary data storage

The lines between primary and secondary storage and applications such as hyper-convergence remain blurry. But they are a starting point for further discussion.

Profiles/Reports

HPE 3PAR Enables Highly Resilient All-Flash Data Centers: Latest Release Solidifies AFA Leadership

If you are an existing customer of HPE 3PAR, this latest release of 3PAR capabilities will leave you smiling. If you are looking for an All Flash Array (AFA) to transform your data center, now might be the time to take a closer look at HPE 3PAR. Since AFAs first emerged on the scene at the turn of this decade, the products have gone through several waves of innovation to achieve the market acceptance they enjoy today. In the first wave, it was all about raw performance for niche applications. In the second wave, it was about making flash more cost-effective than traditional disk-based arrays to broaden its economic appeal. Now, in the final wave, it is about giving these arrays all the enterprise features and ecosystem support needed to completely replace the legacy Tier 0/1 arrays still in production today.

HPE 3PAR StoreServ is one of the leading AFAs on the market today. HPE 3PAR uses a modern architectural design that includes multi-controller scalability, a highly virtualized data layer with three levels of abstraction, system-wide striping, a highly specialized ASIC and numerous flash innovations. HPE 3PAR engineers pioneered this very efficient architecture well before flash technology became mainstream, and the approach has proven timeless, supporting a seamless transition to all-flash technology. During this same time, other vendors ran into controller-bound architectural bottlenecks with flash, forcing them to reinvent existing products or start from scratch with new architectures.

HPE 3PAR’s timeless architecture means that features introduced years ago are still relevant today, and features introduced today are available to current 3PAR customers who purchased arrays previously. This continuous delivery of features to old and new customers alike provides investment protection unmatched by most vendors in the industry. In this Technology Brief, Taneja Group will explore some of the latest developments from HPE that build upon the rich feature set that already exists in the 3PAR architecture. These new features and simplicity enhancements show that HPE continues to put customers’ investment protection first and continues to expand its capabilities around enterprise-grade business continuity and resilience. The combination of the economic value of HPE 3PAR AFAs with years of proven mission-critical features promises to accelerate the final wave of the much-anticipated All-Flash Data Center for Tier 0/1 workloads.

Publish date: 02/17/17
news

Overcome problems with public cloud storage providers

Security and compliance concerns are chief obstacles to public cloud storage adoption, as IT managers are hesitant to have their critical data reside outside the data center.

  • Premiered: 03/03/17
  • Author: Jeff Byrne
  • Published: TechTarget: Search Cloud Storage
Topic(s): Storage, Cloud, Public Cloud, Cloud Storage, Data Center, scalable, scalability, secondary storage, Backup, Archiving, analytics, cloud adoption, Cloud Security, Security, compliance, SLA, AWS, Amazon, Amazon AWS, Microsoft Azure, Google, Google Cloud, Data protection, API, S3, SSL, IO latency, latency, Deduplication, Compression
Profiles/Reports

Is Object Storage Right For Your Organization?

Is object storage right for your organization? Many companies are asking this question as they seek out storage solutions that support vast unstructured data growth throughout their organizations. Object storage is ideal for large-scale unstructured data storage because it easily scales to several petabytes and beyond by simply adding storage nodes. Object storage also provides high fault tolerance, simplified storage management and hardware independence – core capabilities that are essential to cost-effectively manage large-scale storage environments. Add to this built-in support for geographically distributed environments and it’s easy to see why object storage solutions are the preferred storage approach for multiple use cases such as cloud-native applications, highly scalable file backup, secure enterprise collaboration, active archival, content repositories and increasingly cognitive computing workloads such as Big Data analytics.

To help you decide if object storage is right for your company and to help you understand how to apply various storage technologies, we have created a table below that positions object storage relative to block storage and file storage.

As the table shows, there are several factors that differentiate block, file and object storage. An easy way to think about the differences is the following: block storage is necessary for critical applications where storage performance is the key consideration, file storage is well-suited for highly scalable shared file systems, and object storage is ideal when cloud-scale capacity and convenience, as well as reliability and geographically distributed access, are the major storage requirements.
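The positioning in the paragraph above can be expressed as a toy decision helper. The priority ordering here is one reading of the text, not the paper's actual table, and the function is purely illustrative:

```python
def recommend_storage(low_latency_critical=False,
                      shared_file_system=False,
                      cloud_scale_unstructured=False):
    """Toy chooser following the text's positioning: block for
    performance-critical apps, file for scalable shared file systems,
    object for cloud-scale capacity and distributed access."""
    if low_latency_critical:
        return "block"
    if shared_file_system:
        return "file"
    if cloud_scale_unstructured:
        return "object"
    raise ValueError("state at least one dominant requirement")

print(recommend_storage(low_latency_critical=True))
print(recommend_storage(cloud_scale_unstructured=True))
```

Real deployments often mix all three tiers, so a helper like this is only a starting point for the evaluation the report describes.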

Publish date: 12/30/16