Trusted Business Advisors, Expert Technology Analysts

Research Areas

Primary Storage

Includes SAN storage arrays, NAS, and other purpose-built, on-premises primary storage devices. Also included are key value-added storage technologies such as Flash, NVMe, Storage Class Memory, and other relevant storage acceleration technologies.

In this category, Taneja Group analysts cover storage arrays in all forms: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All key strategic components that make up primary storage systems are covered, soup to nuts. We look at specific storage acceleration technologies in a range of form factors and assess how vendors and users can best take advantage of them to improve performance for specific use cases and workloads. We pay special attention to newly emerging technologies such as Storage Class Memory and assess how they will work and interact with existing infrastructures and impact primary storage capabilities going forward.

Profile

The Best All-Flash Array for SAP HANA

These days the world operates in real-time all the time. Whether making airline reservations or getting the best deal from an online retailer, data is expected to be up to date with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24X7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology to process massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to work on data held in RAM rather than reading it from disk. At the heart of SAP HANA is a database that handles both OLAP and OLTP workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance; recently SAP has expanded support to best-in-class components through its SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements for HANA TDI environments and evaluated storage alternatives including the HPE 3PAR StoreServ All Flash. We make a strong case for why all-flash arrays like the HPE 3PAR StoreServ are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM loses its mind when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance attributes of your shared storage dictate how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes: greater scalability means more real-time information can be processed. SAP HANA's shared storage workload is write intensive, demanding low latency for small files and high sequential throughput for large files. However, the overall capacity requirement is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically you would need roughly 10X as many spinning-media drives just to meet the performance requirements, which leaves you with a massive amount of excess capacity that cannot be used for other purposes.
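To see why performance-bound sizing favors flash, consider a rough back-of-the-envelope drive-count calculation. The workload targets and per-drive figures below are illustrative assumptions only (not numbers from the study); the point is that HDD configurations end up sized by IOPS while SSD configurations end up sized by capacity.

```python
import math

# Back-of-the-envelope drive-count sizing. All figures below are
# illustrative assumptions, not measurements from the Taneja Group study.
target_write_iops = 200_000      # assumed HANA log/savepoint write load
target_capacity_tb = 40          # assumed usable capacity requirement

drives = {
    # name: (sustained random-write IOPS per drive, usable TB per drive)
    "15K HDD": (300, 1.8),
    "SSD":     (30_000, 3.84),
}

for name, (iops, tb) in drives.items():
    for_performance = math.ceil(target_write_iops / iops)
    for_capacity = math.ceil(target_capacity_tb / tb)
    needed = max(for_performance, for_capacity)
    print(f"{name}: {for_performance} drives for IOPS, "
          f"{for_capacity} for capacity -> {needed} total")

# With these assumptions the HDD build is sized entirely by performance
# (hundreds of spindles, most of their capacity sitting idle), while the
# SSD build is sized by capacity with a small number of drives.
```

The exact ratio varies by workload and drive generation, but the pattern is the one the profile describes: spinning media must be massively over-provisioned on capacity just to reach the required performance.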

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 

Publish date: 06/07/17
Technology Validation

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of All Flash Arrays and Software-defined Scale-out Storage. Is there a better way to design the mainstream dual-controller array that has been the popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE’s StoreVirtual Software-defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach allows HPE to leverage StoreVirtual technology to create an entry array that is more cost effective than running the same software on a set of commodity x86 servers. Optimizing the cost/performance ratio with ARM technology, instead of the power-hungry processors and memory of x86 servers, yields an SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, and it has the additional flexibility of being compatible with a full complement of hyper-converged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote office deployments. In the future, it will also enable low-cost capacity expansion for HPE’s Hyper Converged and Composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services, and supportability were just some of the key attributes we validated with hands-on testing. What we found is that the SV3200 is an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for HPE customers already accustomed to managing multiple HPE products under the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already in the market. The SV3200 supports advanced features such as linear scale-out and multi-site stretch clustering that enable business continuity techniques rarely found in storage products of this class. HPE has raised the bar for entry arrays, and we recommend that businesses looking at either SDS technology or entry storage strongly consider HPE’s SV3200 as a product with the flexibility to provide the best of both. A starting price under $10,000 makes it very affordable to start using this easy, powerful, and flexible array. Give it a closer look.

Publish date: 04/11/17
Report

HPE 3PAR Enables Highly Resilient All-Flash Data Centers: Latest Release Solidifies AFA Leadership

If you are an existing customer of HPE 3PAR, this latest release of 3PAR capabilities will leave you smiling. If you are looking for an All Flash Array (AFA) to transform your data center, now might be the time to take a closer look at HPE 3PAR. Since AFAs first emerged on the scene at the turn of this decade, the products have gone through several waves of innovation to achieve the market acceptance they enjoy today. In the first wave, it was all about raw performance for niche applications. In the second wave, it was about making flash more cost effective than traditional disk-based arrays to broaden its economic appeal. Now, in the final wave, it is about giving these arrays all the enterprise features and ecosystem support needed to completely replace the legacy Tier 0/1 arrays still in production today.

HPE 3PAR StoreServ is one of the leading AFAs on the market today. HPE 3PAR uses a modern architectural design that includes multi-controller scalability, a highly virtualized data layer with three levels of abstraction, system-wide striping, a highly specialized ASIC, and numerous flash innovations. HPE 3PAR engineers pioneered this very efficient architecture well before flash technology became mainstream, and proved the approach timeless by transitioning seamlessly to all-flash technology. During the same period, other vendors ran into controller-bound architectural bottlenecks with flash, forcing them to reinvent existing products or start from scratch with new architectures.

HPE 3PAR’s timeless architecture means that features introduced years ago remain relevant today, and features introduced today are available to customers who purchased their arrays years earlier. This continuous delivery of features to old and new customers alike provides investment protection unmatched by most vendors in the industry. In this Technology Brief, Taneja Group explores some of the latest developments from HPE that build upon the rich feature set already present in the 3PAR architecture. These new features and simplicity enhancements show that HPE continues to put customers’ investment protection first and continues to expand its capabilities around enterprise-grade business continuity and resilience. The combination of the economic value of HPE 3PAR AFAs and years of proven mission-critical features promises to accelerate the final wave of the much-anticipated All-Flash Data Center for Tier 0/1 workloads.

Publish date: 02/17/17
Profile

Optimizing VM Storage Performance & Capacity - Tintri Customers Leverage New Predictive Analytics

Today we are seeing big impacts on storage from the huge increase in the scale of an organization’s important data (e.g., Big Data, the Internet of Things) and the growing size of virtualization clusters (e.g., ever-growing numbers of VMs, VDI, cloud-building). In addition, virtualization adoption tends to turn IT admins into generalists. IT groups are focusing more on servicing users and applications and no longer want to be managing infrastructure for infrastructure’s sake. Everything IT does, including storage, is increasingly interpreted, analyzed, and managed in application and business terms to optimize the return on the total IT investment. To move forward, an organization’s storage infrastructure not only needs to grow internally smarter, it also needs to become both VM and application aware.

While server virtualization made a lot of things better for the over-taxed IT shop, delivering quality storage services in hypervisor infrastructures with traditional storage created difficult challenges. In response, Tintri pioneered per-VM storage infrastructure. The Tintri VMstore has eliminated multiple points of storage friction and pain. In fact, some kind of VM-centricity is now becoming a mandatory checkbox for every array on the storage market. Unfortunately, traditional arrays are mainly focused on checking off rudimentary support for external hypervisor APIs that only serve to re-package the same old storage. The best fit for today’s (and tomorrow’s) virtual storage requirements will come only from fully engineered VM-centric, application-aware storage of the kind Tintri has built.

However, it’s not enough to simply drop in storage that automatically applies best-practice policies and handles today’s needs. We all know change is constant, and key to preparing for both growth and change is having a detailed, properly focused view of today’s large-scale environments, along with smart planning tools that help IT both optimize current resources and make the best IT investment decisions going forward. To meet those larger needs, Tintri has rolled out Tintri Analytics, a SaaS-based offering that applies big data analytics to the large volume of VM-aware metrics collected from its customers’ VMstores.

In this report we look briefly at Tintri’s overall “per-VM” storage approach and then take a deeper look at the new Tintri Analytics offering. The Tintri Analytics management service further optimizes their app-aware VM storage with advanced VM-centric performance and capacity management. With this new service, Tintri is helping its customers gain greater visibility, insight, and analysis over large, cloud-scale virtual operations. We’ll see how “big data” enhanced intelligence provides significant value and differentiation, and get a glimpse of the payback that a predictive approach provides to both virtual admins and application owners.
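To make “predictive” concrete, here is a deliberately simplified sketch of the kind of projection such a service can make: fit a growth trend to historical capacity samples and estimate when a datastore will fill. The data and the linear model are illustrative assumptions, not Tintri Analytics’ actual metrics or algorithms.

```python
from datetime import date, timedelta

# Illustrative only: a toy linear-trend capacity forecast with made-up data.
samples = [(0, 10.0), (30, 11.5), (60, 13.1), (90, 14.4), (120, 16.0)]  # (day, TiB used)
capacity_tib = 40.0

# Ordinary least-squares fit: used = a + b * day
n = len(samples)
sum_x = sum(x for x, _ in samples)
sum_y = sum(y for _, y in samples)
sum_xy = sum(x * y for x, y in samples)
sum_xx = sum(x * x for x, _ in samples)
b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)  # growth in TiB/day
a = (sum_y - b * sum_x) / n

last_day = samples[-1][0]
days_remaining = (capacity_tib - a) / b - last_day
print(f"Growth rate: {b:.3f} TiB/day; projected full in ~{days_remaining:.0f} days "
      f"(around {date.today() + timedelta(days=days_remaining)})")
```

A real service would of course work per VM, account for seasonality and provisioning changes, and recommend placement, but the basic payback is the same: admins learn about capacity and performance exhaustion before it happens rather than after.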

Publish date: 11/04/16
Report

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People input emails, word processing documents and spreadsheets. They created databases and recorded business transactions. Data was stored on tape, workstation hard drives, and over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes got larger and larger: video and audio, digital imaging, websites streaming entertainment content to millions of users, and no end to data growth in sight. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people, but machines. Storing and processing machine data offers tremendous opportunities: seismic and weather sensors that may lead to meaningful disaster warnings, social network diagnostics that surface hard evidence of terrorist activity, connected cars that could slash automotive fatalities, and research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store raw machine data and process it is not for the faint of heart. The best solution today is massively scale-out, general purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, linearly scales performance and capacity, and offers data-awareness and real-time analytics using extended metadata.
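One way to make sense of “real-time analytics using extended metadata” is the general technique of rolling file statistics up the directory tree as data changes, so that capacity or file-count questions about any directory are answered from pre-computed aggregates instead of a full treewalk. The sketch below illustrates that general idea only; the names and structure are assumptions for illustration, not Qumulo’s implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DirNode:
    """A directory that keeps pre-aggregated rollups of its whole subtree."""
    name: str
    parent: Optional["DirNode"] = None
    total_bytes: int = 0                      # bytes stored under this directory
    file_count: int = 0                       # files stored under this directory
    children: dict = field(default_factory=dict)

    def subdir(self, name: str) -> "DirNode":
        if name not in self.children:
            self.children[name] = DirNode(name, parent=self)
        return self.children[name]

    def add_file(self, size_bytes: int) -> None:
        # Push the delta up the tree so every ancestor's rollup stays current.
        node: Optional["DirNode"] = self
        while node is not None:
            node.total_bytes += size_bytes
            node.file_count += 1
            node = node.parent

root = DirNode("/")
root.subdir("genomics").add_file(4 * 2**30)   # 4 GiB sequencer output
root.subdir("video").add_file(12 * 2**30)     # 12 GiB raw footage

# Analytics queries read the rollups directly -- no treewalk required.
print(root.total_bytes, root.file_count)              # whole namespace
print(root.children["genomics"].total_bytes)          # a single project
```

At the scale of billions of files, the difference between reading a rollup and walking the tree is the difference between real-time answers and reports that take hours to generate.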

Only a very few vendors in the world today offer this kind of solution. One of them is Qumulo, whose mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from life sciences, media and entertainment, telco/cable/satellite, higher education and the automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits to Qumulo: massive scalability, high performance, data-awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
Profile

FlashSoft 4 for vSphere 6: Acceleration Technology Tailor-Made for VMware Environments

For all the gains server virtualization has brought in compute utilization, flexibility and efficiency, it has created an equally weighty set of challenges on the storage side, particularly in traditional storage environments. As servers become more consolidated, virtualized workloads must increasingly contend for scarce storage and IO resources, preventing them from consistently meeting throughput and response time objectives. On top of that, there is often no way to ensure that the most critical apps or virtual machines can gain priority access to data storage as needed, even in lightly consolidated environments. With a majority (70+%) of all workloads now running virtualized, it can be tough to achieve strong and predictable app performance with traditional shared storage.

To address these challenges, many VMware customers are now turning to server-side acceleration solutions, in which the flash storage resource can be placed closer to the application. But server-side acceleration is not a panacea. While some point solutions have been adapted to work in virtualized infrastructures, they generally lack enterprise features, and are often not well integrated with vSphere and the vCenter management platform. Such offerings are at best band-aid treatments, and at worst second-class citizens in the virtual infrastructure, proving difficult to scale, deploy and manage. To provide true enterprise value, a solution should seamlessly deliver performance to your critical VMware workloads, but without compromising availability, workload portability, or ease of deployment and management.

This is where FlashSoft 4 for VMware vSphere 6 comes in. FlashSoft is an intelligent, software-defined caching solution that accelerates your critical VMware workloads as an integrated vSphere data service, while still allowing you to take full advantage of all the vSphere enterprise capabilities you use today.
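For readers unfamiliar with server-side acceleration, the sketch below reduces the idea to its essentials: hot blocks are kept on local flash in front of slower shared storage, repeat reads are served locally, and writes pass through to the backend so nothing is lost if the cache disappears. This is a generic write-through/LRU illustration under assumed names, not FlashSoft’s actual design or API.

```python
from collections import OrderedDict

class WriteThroughCache:
    """Minimal host-side read cache with LRU eviction (illustrative only)."""

    def __init__(self, backend, capacity_blocks):
        self.backend = backend            # dict-like stand-in for shared storage
        self.capacity = capacity_blocks   # how many blocks fit on local flash
        self.cache = OrderedDict()        # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # cache hit: served from local flash
            return self.cache[block_id]
        data = self.backend[block_id]          # cache miss: fetch from shared storage
        self._insert(block_id, data)
        return data

    def write(self, block_id, data):
        self.backend[block_id] = data          # write-through: backend is always current
        self._insert(block_id, data)

    def _insert(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block

# Usage: repeat reads of the same blocks are absorbed by the cache.
shared_array = {i: f"block-{i}" for i in range(1000)}
cache = WriteThroughCache(shared_array, capacity_blocks=64)
cache.read(7); cache.read(7)   # second read is a cache hit
```

The engineering that distinguishes an enterprise product lies in everything around this core loop: cache coherence across vMotion, integration with vSphere data services, and management at cluster scale, which is exactly where the paper focuses.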

In this paper we examine the technology underlying the FlashSoft 4 for vSphere 6 solution, describe the features and capabilities it enables, and articulate the benefits customers can expect to realize upon deploying the solution.

Publish date: 08/31/16