
Research Areas

Technology

Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.

All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in developing these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.

Profile

Acaveo Smart Information Server: Bringing Dark Data into Light

In 2009, storage accounted for about 20% of the components in a fully burdened computing infrastructure. By 2015, storage has surged to 40% of the infrastructure (and counting) as companies pour in more and more data. Most of this data is hard-to-manage unstructured data, which typically represents 75%-80% of corporate data. This burdened IT infrastructure has two broad and serious consequences: it increases capital and operating expenses, and it cripples unstructured data management. Capital and operating expenses scale up sharply with the swelling storage tide. Today’s storage costs alone include buying and deploying storage for file shares, email, and ECM systems like SharePoint. Additional services such as third-party file sharing and cloud-based storage add to cost and complexity.

And growing storage and complexity make managing unstructured data extraordinarily difficult. A digital world is delivering more data to more applications than ever before. IT’s inability to visualize and act upon widely distributed data impacts retention, compliance, value, and security. In fact, this visibility (or invisibility) problem is so prevalent that it has gained its own name: dark data. Dark data plagues IT with hard-to-answer questions: What data is on those repositories? How old is it? What application does it belong to? Which users can access it?

IT may be able to answer those questions on a single storage system with file management tools. But across a massive storage infrastructure that includes the cloud? No. Instead, IT must do what it can: tier aging data, safely delete it when possible, and try to keep up with application storage demands across the map. The status quo is not going to get any better in the face of data growth. Data is growing at 55% and higher per year in the enterprise. The energy ramifications alone of storing that much data are sobering. Data growth is reaching the point where it overruns the storage budget’s capacity to pay for it. And managing that data for cost control and business processes is harder still.
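To see how fast that compounds, here is a quick back-of-the-envelope calculation in Python; the 100 TB starting point is purely illustrative:

```python
import math

annual_growth = 0.55          # 55% per year, per the estimate above
capacity_tb = 100.0           # hypothetical starting footprint, in TB

# Years for the data footprint to double at this rate:
doubling_years = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time: {doubling_years:.1f} years")   # ~1.6 years

# Footprint after five years of compounding:
for year in range(1, 6):
    capacity_tb *= 1 + annual_growth
    print(f"Year {year}: {capacity_tb:,.0f} TB")      # ~895 TB by year 5
```

At that pace the footprint roughly doubles every year and a half, which is why a budget sized for linear growth falls behind so quickly.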

Conventional wisdom would have IT simply move data to the cloud. But conventional wisdom is mistaken. The problem is not how to store all of that data – IT can solve that problem with a cloud subscription. The problem is that once stored, IT lacks the tools to intelligently manage that data where it resides.

This is where highly scalable, unstructured file management comes into the picture: the ability to find, classify, and act upon files spread throughout the storage universe. In this Product Profile we’ll present Acaveo, a file management platform that discovers and acts on data-in-place, and federates classification and search activities across the enterprise storage infrastructure. The result is highly intelligent and highly scalable file management that cuts cost and adds value to business processes across the enterprise. 
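Acaveo’s own implementation is not public in this Profile, but the basic data-in-place pattern it describes – walk each repository, classify from metadata, never move the file – can be sketched in a few lines of Python. The paths, threshold, and age-based classifier below are all hypothetical:

```python
import os
import time

# Hypothetical repository roots standing in for file shares, ECM exports, etc.
REPOSITORIES = ["/mnt/fileshare1", "/mnt/sharepoint_export"]

STALE_AFTER_DAYS = 365 * 3    # illustrative retention threshold

def classify(path: str) -> str:
    """Toy classifier: tag files by age using in-place metadata only."""
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    return "stale" if age_days > STALE_AFTER_DAYS else "active"

def scan(repos):
    """Walk each repository and yield (path, class) without moving any data."""
    for root in repos:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                yield path, classify(path)

for path, label in scan(REPOSITORIES):
    print(label, path)
```

A real federated platform would classify on content and ACLs as well as age, and would index results centrally rather than print them, but the data stays where it lives in both cases.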

Publish date: 02/27/15
Profile

Enterprise Flash - Scalable, Smart, and Economical

There is a serious re-hosting effort going on in data center storage as flash-filled systems replace large arrays of older spinning disks for tier 1 apps. Naturally, as costs drop and the performance advantages of flash-accelerated IO services become irresistible, these systems begin pulling in a widening circle of applications with varying QoS needs. Yet this extension leads to a wasteful tug-of-war between high-end flash-only systems that can’t effectively serve a wide variety of application workloads and so-called hybrid solutions, originally architected for HDDs, that are often challenged to provide the highest performance required by those tier 1 applications.

In its purest form, all-flash storage could theoretically someday drop in price enough to outright replace all other storage tiers, even at the largest capacities, although that is certainly not true today. Here at Taneja Group we think storage tiering will always offer a better way to deliver varying levels of QoS, balancing the latest performance advances appropriately against the most efficient capacities. In any case, the best enterprise storage solutions today need to offer a range of storage tiers, often even when catering to a single application’s varying storage needs.

There are many entrants in the flash storage market, with the big vendors now rolling out enterprise solutions upgraded for flash. Unfortunately many of these systems are shallow retreads of older architectures, perhaps souped-up a bit to better handle some hybrid flash acceleration but not able to take full advantage of it. Or they are new dedicated flash-only point products with big price tags, immature or minimal data services, and limited ability to scale out or serve a wider set of data center QoS needs.

Oracle saw an opportunity for a new type of cost-effective flash-speed storage system that could meet the varied QoS needs of multiple enterprise data center applications – in other words, to take flash storage into the mainstream of the data center. Oracle decided they had enough storage chops (from Exadata, ZFS, Pillar, Sun, etc.) to design and build a “flash-first” enterprise system intended to take full advantage of flash as a performance tier, while also naturally incorporating other storage tiers, including slower “capacity” flash, performance HDD, and capacity HDD. Tiering by itself isn’t new – all the hybrid solutions do it, and other vendor solutions were designed for tiering – but Oracle built the FS1 Flash Storage System from the fast flash tier down, not by adding flash to a slower, existing HDD-based architecture and working “upwards.” This required designing intelligent automated management that takes advantage of flash for performance while leveraging HDD to balance out cost. The new architecture has internal communication links dedicated to flash media, with separate IO paths for HDDs, unlike traditional hybrids that may rely solely on their older, standard HDD-era architectures that can internally constrain high-performance flash access.
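Oracle does not disclose FS1’s placement algorithm here, but the general heat-based auto-tiering idea can be sketched as follows. The tier names, heat formula, and demo extents are all illustrative, not Oracle’s:

```python
from dataclasses import dataclass

# Tiers ordered fastest to slowest; names and thresholds are illustrative.
TIERS = ["perf_flash", "capacity_flash", "perf_hdd", "capacity_hdd"]

@dataclass
class Extent:
    id: int
    reads_per_hour: float
    writes_per_hour: float
    priority: int            # 0 = highest business priority (admin policy)

def heat(e: Extent) -> float:
    """Toy heat score: recent IO, weighted up for high business priority."""
    return (e.reads_per_hour + 2 * e.writes_per_hour) / (1 + e.priority)

def place(extents, per_tier_slots: int):
    """Greedy placement: hottest extents land on the fastest tier until it
    fills, then spill to the next tier. Assumes total capacity suffices."""
    placement, tier_idx, used = {}, 0, 0
    for e in sorted(extents, key=heat, reverse=True):
        if used == per_tier_slots and tier_idx < len(TIERS) - 1:
            tier_idx, used = tier_idx + 1, 0
        placement[e.id] = TIERS[tier_idx]
        used += 1
    return placement

demo = [Extent(1, 900, 50, 0), Extent(2, 10, 1, 2), Extent(3, 400, 200, 1)]
print(place(demo, per_tier_slots=1))
# {1: 'perf_flash', 3: 'capacity_flash', 2: 'perf_hdd'}
```

Folding business priority into the heat score, as this toy does, is the essence of QoS-aware tiering: two extents with identical IO rates can land on different tiers because the business values them differently.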

Oracle FS1 is a highly engineered SAN storage system with key capabilities that set it apart from other all-flash storage systems: built-in QoS management that incorporates business priorities and best-practices provisioning; application-aware storage alignment – for Oracle Database naturally, but also for a growing body of other key enterprise applications (such as Oracle JD Edwards, PeopleSoft, Siebel, MS Exchange/SQL Server, and SAP); and a “service provider” capability to carve out multi-tenant virtual storage “domains” while online, enforced at the hardware partitioning level for top data security isolation.

In this report, we’ll dive in and examine some of the great new capabilities of the Oracle FS1. We’ll look at what really sets it apart from the competition in terms of its QoS, auto-tiering, co-engineering with Oracle Database and applications, delivered performance, capacity scaling and optimization, enterprise availability, and OPEX-reducing features, all at a competitive price point that will challenge the rest of the increasingly flash-centric market.

Publish date: 02/02/15
Report

EMC PowerPath: Optimized IO Multipathing for All Flash Arrays

All-flash arrays are changing the datacenter for the better. No longer do we worry about IOPS bottlenecks at the array: all-flash arrays (AFAs) can deliver a staggering number of IOPS, and AFAs able to deliver hundreds of thousands of IOPS are not uncommon. The problem now, however, is how to get those IOPS from the array to the servers. We recently had a chance to see how well an AFA using the EMC PowerPath driver works to eliminate this bottleneck – and we were blown away. Most comparisons of datacenter infrastructure show a 10-30% improvement in performance; the performance improvement that we saw with PowerPath was extraordinary.

Getting bits from an array to a server is easy – very easy, in fact. The trick is getting the bits from a server to an array efficiently when many virtual machines (VMs) on multiple physical hosts are transmitting bits over a physical network with a virtual fabric overlay; this is much more difficult. Errors can be introduced and must be dealt with; the most efficient path must be established, re-evaluated, and re-established continually; and any misconfiguration can produce less-than-optimal performance – in some cases causing outages or even data loss. To deal with this “pathing,” or how the I/O travels from the VM to storage, the OS running on the host needs a driver; where multiple paths can be taken from the server to the array, a multipathing driver is used to direct the traffic.
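PowerPath’s internals are proprietary, but one common multipathing policy – least queue depth – illustrates the decision such a driver makes on every IO. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Path:
    """One physical route from a host HBA port to an array port."""
    name: str
    alive: bool = True
    outstanding_ios: int = 0   # IOs dispatched but not yet completed

class Multipather:
    def __init__(self, paths):
        self.paths = paths

    def pick(self) -> Path:
        """Least-queue-depth policy: route the next IO down the live path
        with the fewest outstanding IOs."""
        live = [p for p in self.paths if p.alive]
        if not live:
            raise RuntimeError("no live paths to the array")
        return min(live, key=lambda p: p.outstanding_ios)

mp = Multipather([Path("fc0"),
                  Path("fc1", outstanding_ios=4),
                  Path("fc2", alive=False)])
print(mp.pick().name)   # -> "fc0": least loaded of the live paths
```

An optimized vendor driver layers array-specific knowledge, error handling, and load feedback on top of this basic selection loop, which is where the performance gap over generic drivers comes from.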

Windows, Linux, VMware, and most other modern operating systems include a basic multipath driver; however, these drivers tend to be generic, are not code-optimized to extract the maximum performance from an array, and come with only rudimentary traffic optimization and management functions. In some cases these generic drivers are fine, but in the majority of datacenters the infrastructure is overtaxed and its equipment needs to be used as efficiently as possible. Fortunately, storage companies such as EMC are committed to making their arrays perform as well as possible and spend considerable time and research developing multipathing drivers optimized for their arrays. EMC invited us to take a look at how PowerPath, their optimized “intelligent” multipath driver, performed on an XtremIO flash array connected to a Dell PowerEdge R710 server running ESX 5.5 while simulating an Oracle workload. We looked at the results of the various tests EMC ran comparing the PowerPath/VE multipath driver against VMware’s ESXi Native Multipath driver, and we were impressed – very impressed – by the difference that an optimized multipath driver like PowerPath can make in a high-IO-traffic scenario.

Publish date: 01/01/15
Profile

Dell XC Web-Scale Hyperconverged Series: A Solution for your Most Dynamic Virtualized Environments

Over the past few years, to reduce cost and improve time-to-value, converged infrastructure systems – the integration of compute, networking, and storage – have been readily adopted by large enterprise users. The success of these systems results from deploying purpose-built, integrated converged infrastructure optimized for the most common IT workloads, such as Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI). Traditionally these converged infrastructure systems have been built on a three-tier architecture, where compute, networking, and storage, while integrated in the same rack, still consist of best-in-breed standalone devices. These systems work well in stable, predictable environments; however, when a virtualization environment is dynamic with unpredictable growth, traditional three-tier architectures often lack the simplicity, scalability, and flexibility needed to operate in such an environment.

Enter HyperConvergence, where the three-tier architecture is collapsed into a single system purpose-built for virtualization from the ground up: virtualization, compute, and storage, along with advanced features such as deduplication, compression, and data protection, are all integrated into an x86 industry-standard building-block node. These devices are built on scale-out architectures with a 100% VM-centric management paradigm. The simplicity, scalability, and flexibility of this architecture make it a perfect fit for dynamic virtualized environments.

Dell XC Web-scale Converged Appliances powered by Nutanix software are delivered as a series of HyperConverged models that are extremely flexible and scalable. In this solution brief we will examine what constitutes a dynamic virtualized environment and how the Dell XC Web-scale Appliance series fits into such an environment. We can confidently state that by implementing Dell’s XC flexible range of Web-scale appliances, businesses can deploy solutions across a broad spectrum of virtualized workloads where flexibility, scalability and simplicity are critical requirements. Dell is an ideal partner to deliver Nutanix software because of its global reach, streamlined operations and enterprise systems solutions expertise. The company is well positioned to bring HyperConverged platforms to the masses and introduce the technology to a new set of customers previously unreached.

Publish date: 12/16/14
Technology Validation

Unified Storage Array Efficiency: HP 3PAR StoreServ 7400c versus EMC VNX 5600 (TVS)

The IT industry is in the middle of a massive transition toward simplification and efficiency in managing on-premise infrastructure at today’s enterprise data centers. In the past few years there has been a rampant onset of technology clearly focused on simplifying and radically changing the economics of traditional enterprise infrastructure. These technologies include Public/Private Clouds, Converged Infrastructure, and Integrated Systems, to name a few. All of these technologies are geared to provide more efficient use of resources and take less time to administer, all at a reduced TCO. However, they all rely on the efficiency and simplicity of the underlying compute, network, and storage technologies, and the overall solution is often only as good as the weakest link in the chain. The storage tier of the traditional infrastructure stack is often considered the most complex to manage.

This technology validation focuses on measuring efficiency and management simplicity by comparing two industry-leading mid-range external storage arrays configured for the unified storage use case. Unified storage has been a popular approach to storage subsystems: it consolidates both file access and block access within a single external array, sharing the same precious drive capacity resources across both protocols simultaneously. Businesses value the ability to send server workloads down a high-performance, low-latency block protocol while still taking advantage of the simplicity and ease of sharing of file protocols to various clients. In the past, businesses would have either set up a separate file server in front of their block array or bought completely separate NAS devices, thus possibly overbuying storage resources and adding complexity. Unified storage takes care of this by providing one storage device to manage for all business workload needs. In this study we compared the attributes of storage efficiency and ease of managing and monitoring an EMC VNX unified array versus an HP 3PAR StoreServ unified array. Our approach was to set up the two arrays side-by-side and record the actual complexity of managing each array for file and block access, per the documents and guides provided for each product. We also went through the exercise of sizing various arrays via publicly available configuration guides to see what the expected storage density efficiency would be for some typically configured systems.

Our conclusion was nothing short of astonishing. In the case of the EMC VNX2 technology, the approach to unification more closely resembles a hardware-packaging and management veneer than what would be expected of a second-generation unified storage system. HP 3PAR StoreServ, on the other hand, in its second generation of unified storage, has transitioned the file protocol services from external controllers to completely converged block and file services within the common array controllers. In addition, all the data path and control plumbing is completely internal, with no need to wire loop-back cables between controllers. HP has also invested in a totally new management paradigm based on the HP OneView management architecture, which radically simplifies the administrative approach to managing infrastructure. After performing this technology validation we can state with confidence that the HP 3PAR StoreServ 7400c is 2X easier to provision, 2X easier to monitor, and up to 2X more data-density efficient than a similarly configured EMC VNX 5600.

Publish date: 12/03/14
Profile

OneCloud Software: DR For the Masses

With the advent of server virtualization, many adopters erroneously think that disaster recovery (DR) is a problem of the past. They cite the ability of hypervisors to replace the two most common yet imperfect DR choices: 1) infrastructure replication to a secondary replica site – fast to restore from but very expensive; or 2) tape backup with off-site long-term storage – economical but slow to recover from.

The reality is that while server virtualization has certainly helped the industry get closer to simpler and less expensive DR products, DR remains one of IT’s major challenges. This is especially true for applications that fall somewhere between the most mission-critical, where RTOs and RPOs of a few seconds are needed (and cost is often no object), and those for which RTOs and RPOs of a day or two are adequate. Today, DR products available for these “intermediate” applications are few and far between, especially when the overall cost of DR is considered.
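As a rough way to see where an application lands on that spectrum, consider a back-of-the-envelope estimate of RPO and RTO for a generic replication-based DR scheme; all figures below are hypothetical:

```python
# Back-of-the-envelope RPO/RTO estimates for a replication-based DR scheme.
# All inputs are hypothetical and vendor-neutral.

replication_interval_min = 15     # how often changes are shipped off-site
dataset_gb = 2_000                # protected data set size
restore_bandwidth_gbph = 500      # GB/hour achievable at the recovery target

# Worst-case RPO: you can lose up to one full replication interval of changes.
rpo_minutes = replication_interval_min

# Crude RTO: time to rehydrate the data set plus fixed failover overhead.
failover_overhead_hours = 1.0
rto_hours = dataset_gb / restore_bandwidth_gbph + failover_overhead_hours

print(f"RPO ~ {rpo_minutes} minutes, RTO ~ {rto_hours:.1f} hours")
# RPO ~ 15 minutes, RTO ~ 5.0 hours -> an "intermediate" application profile
```

Minutes of RPO and a few hours of RTO sit squarely in the intermediate band this Profile describes: far better than tape, without the cost of a dedicated replica site.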

The missing piece so far has been a cost-effective DR solution with excellent RTO and RPO for the majority of business applications – without requiring a secondary site. OneCloud steps into the gap by replacing that expensive site with the hyper-scale public cloud. This Profile will discuss how OneCloud works to extend the primary data center onto the cloud, and how this impacts the ease and speed of VM recovery.

Publish date: 11/19/14