Trusted Business Advisors, Expert Technology Analysts

Research Areas

Infrastructure Management

Includes Security, SRM, Cloud, ICM, SaaS, Business Intelligence, Data Warehouse, Database Appliances, NFM, Storage Management.

This section covers the full range of technologies that impact IT infrastructure management. Taneja Group analysts focus in particular on the interplay between server virtualization and storage, both with and without storage virtualization, and study the impact on the performance, security and management of the IT infrastructure. The section also includes all aspects of storage management (SRM, SMI-S) and the role of cross-correlation engines in the overall performance of an application. Storage virtualization technologies (in-band, out-of-band, and split-path architectures, or SPAID) are covered in detail. Data security, both for data in flight and at rest, and enterprise-level key management issues are covered, along with the players that make up these ecosystems.

As databases grow larger and more complex, they present issues in security, performance and management. Taneja Group analysts cover the vendors and technologies that harness archiving to reduce the size of active databases. We also cover the specialized database appliances that have become popular of late. All data protection issues surrounding databases are covered in detail, and we write extensively on this topic for the benefit of the IT user.

Profile

Software-Driven Mid-Range Storage: Customer Value and the Software-driven IBM Storwize V5000

Whether a customer is making a first foray into external storage technology or buying a 100th storage array, there is little doubt in most customers' minds that storage can be hard. Specialized technology, significant cost and the critical nature of stored data combine to make storage one of the riskiest endeavors most IT practitioners will undertake.

Over the past two years, the storage market has exploded with offerings that provide more storage system choices than ever before. In part, this is due to the recent and rapid introduction of technologies like flash storage that have enabled new companies to bring to market fairly competent storage systems with significantly less engineering effort.

There is little doubt that the resulting competition and choice are a boon to the customer: they can drive down prices and compel vendors to innovate and deliver new features more aggressively. But new technologies can also leave lingering surprises, especially for customers trying to build a long-term, lasting storage strategy. Moreover, storage technology is changing in multiple dimensions: there is a revolutionary shift toward software-defined capabilities, while media, controller architectures, virtual infrastructure integrations and workload patterns are all changing at the same time. In the midst of such change it is more important than ever to be attentive to what really matters, and in a changing market what matters is not always clear. In our view, the storage practitioner's evaluation must broaden into a careful balancing act that weighs both new capabilities, such as agility and cost-optimizing software-defined functionality, and the foundational storage underpinnings that are too easy to take for granted. In this product profile, we've turned our sights on a recent product introduction from IBM, the Storwize V5000, to consider how IBM is integrating a broad swath of new capabilities while building them on a field-proven and deeply architected storage foundation.

Publish date: 04/01/14
Free Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to Infiniband, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.
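
To make the scorecard idea concrete, the sketch below shows one way such a weighted evaluation could be expressed. The criteria, weights and 1-5 ratings are purely illustrative assumptions on our part, not survey data or vendor benchmarks.

# Illustrative only: a hypothetical buyer's scorecard of the kind described above.
# Weights and 1-5 ratings are made-up placeholders, not measured or surveyed data.

criteria_weights = {
    "demonstrated reliability": 0.40,
    "consistent low latency": 0.35,
    "Tier 1 track record": 0.25,
    # Note: raw line rate is deliberately absent; it doesn't make the cut.
}

ratings = {
    "Fibre Channel": {"demonstrated reliability": 5, "consistent low latency": 5,
                      "Tier 1 track record": 5},
    "Alternative block protocol": {"demonstrated reliability": 4, "consistent low latency": 3,
                                   "Tier 1 track record": 3},
}

for protocol, score in ratings.items():
    total = sum(criteria_weights[c] * score[c] for c in criteria_weights)
    print(f"{protocol}: weighted score {total:.2f} out of 5.00")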

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
Profile

Software Storage Solutions for Virtualization

Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are felt most acutely in the virtual infrastructure. The challenges spring from the physicality of storage – while the virtual infrastructure has made IT more agile and adaptable than ever before, storage still depends upon bits persistently stored on a physical device somewhere within the data center.

For practitioners who have experienced the pain caused by this – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the term also holds tremendous potential to change the way IT is done by tackling this one last vestige of the traditional, inflexible IT infrastructure.

The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that can make storage remarkably close to software-defined. These solutions allow administrators to easily and rapidly deploy storage controllers within the virtual infrastructure, and to equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure runs (in the cloud or on premises). Such solutions can make comprehensive storage functionality available in places where it could never be had before, allow for higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.
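
As a rough illustration of the stranded-capacity point above, the sketch below (our own hypothetical figures, not vendor data) totals the unused direct-attached capacity sitting in individual servers that a VSA-style software layer could expose as one shared pool.

# Hypothetical figures for illustration: per-server local disk that is largely idle
# today, but usable as a single logical pool once a VSA-style controller manages it.

servers = {
    "host-01": {"local_tb": 4.0, "used_tb": 0.8},
    "host-02": {"local_tb": 4.0, "used_tb": 1.2},
    "host-03": {"local_tb": 4.0, "used_tb": 0.5},
}

total_tb = sum(s["local_tb"] for s in servers.values())
used_tb = sum(s["used_tb"] for s in servers.values())

print(f"Aggregate local capacity: {total_tb:.1f} TB")
print(f"Used while stranded per server: {used_tb:.1f} TB ({used_tb / total_tb:.0%})")
print(f"Capacity a shared, VSA-managed pool could reclaim: {total_tb - used_tb:.1f} TB")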

The 2012-2013 calendar years brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor – VMware – introduced a similar software-based, scale-out storage solution for the virtual infrastructure: VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, VSAN has an architecture very similar to the latest generation of HP’s own StoreVirtual VSA. Both products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement means HP is no longer the sole major (Fortune 500) vendor with a primary storage VSA approach, which only adds validation for other vendors that have long offered VSA-based solutions, such as FalconStor, Nexenta, and StorMagic.

We’ve turned to a high-level assessment of five market leaders that today offer VSA or software storage in the virtual infrastructure, assessing these solutions with an eye toward how they fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment it is clear that each solution has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.

Publish date: 01/03/14
Profile

Storage Infrastructure Performance Validation

Unacceptably poor performance can be a career killer, so IT generally “over-provisions” infrastructure as a rule. But how much is this approach really costing us? Today, the biggest line item in IT infrastructure spending is storage. Even with data growth and new performance demands increasing, “safe” estimates still lead us to overprovision by 50% or more, which results in billions of dollars of wasted storage spending. A more important problem is that we may not even be provisioning the right infrastructure for our application workload requirements, taking serious risks with every new investment.
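
A quick back-of-the-envelope calculation shows how fast that waste adds up; the capacity, utilization and cost figures below are assumptions chosen for illustration, not sourced numbers.

# Back-of-the-envelope sketch of the overprovisioning cost argument.
# All inputs are illustrative assumptions, not sourced figures.

provisioned_tb = 1000      # assumed provisioned capacity
overprovision_rate = 0.50  # "safe" sizing that leaves roughly half unused
cost_per_tb_usd = 2000     # assumed fully loaded cost per TB

unused_tb = provisioned_tb * overprovision_rate
wasted_spend = unused_tb * cost_per_tb_usd

print(f"Unused capacity: {unused_tb:.0f} TB")
print(f"Capital tied up in unused storage: ${wasted_spend:,.0f}")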

Equally vital is knowing when to upgrade or refresh. Looking forward, how can anyone know when their current infrastructure will hit its inevitable “wall”? In day-to-day operations, every time a change is made to storage infrastructure, the application or the network, that change could be introducing a deeply rooted problem that might only show up under production pressure. Why do enterprises seem to proceed blindly, willingly rolling the dice when it comes to performance? Here at Taneja Group, we see an obvious correlation between risk of failure and lack of knowledge about how infrastructure responds to each application workload.

Unfortunately, enterprises too often rely on vendor benchmarks produced under ideal conditions with carefully crafted workloads that don’t reflect the real target environment. Or they might choose readily scalable systems so that in times of trouble they can always buy and deploy more resources, although this can be highly disruptive and expensive when buying on short notice. They might architect for large virtual and cloud environments in an attempt to average out utilization and pool excess capacity for peak demand, but still without knowing how performance will degrade at the upper reaches of VM density. In contrast, we believe that IT managers must evolve from a perspective of assuming performance to one of assuring performance.

Typical testing approaches usually involve generating workloads with heavily scripted servers used as load generators. This is an expensive, unreliable, brute-force approach, only trotted out when sufficient staff, time and money are available to execute a large-scale performance evaluation. But Load DynamiX has changed that equation for storage, evolving workload modeling and performance load testing into a cost-efficient and practical continuous process. We think that Load DynamiX’s solution supports the adoption of a new best practice of proactively managing infrastructure from a position of knowledge, called Infrastructure Performance Validation (IPV).

In this report we will look at Load DynamiX’s workload modeling software and storage performance validation appliances, and walk through how IT can use them to establish effective IPV practices across the entire IT infrastructure lifecycle. We’ll examine why existing approaches to storage performance evaluation fall short and why we believe that successful storage deployments require a detailed understanding of application workload behavior. We’ll briefly review Load DynamiX’s solution to see how it addresses these challenges and uniquely enables broad adoption of IPV to the benefit of both the business and IT. We’ll look at how Load DynamiX generates accurate workload models for storage testing, a key IPV capability, and how limit testing and “what if” scenarios can be run, analyzed, and communicated for high impact. Finally, we’ll look at a range of validation scenarios, and how Load DynamiX can be leveraged to reduce risk, assure performance, and lower IT costs.
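
To illustrate the kind of reasoning IPV enables, the sketch below is our own simplified example, not Load DynamiX software or its modeling language: it expresses two workload profiles and asks a basic what-if question against an assumed, load-test-validated IOPS ceiling.

# A simplified, hypothetical sketch of workload modeling and what-if analysis.
# It is not Load DynamiX code; profiles and the validated ceiling are assumptions.

from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    name: str
    iops: int            # steady-state I/O operations per second
    read_pct: float      # fraction of I/O that is reads
    block_size_kb: int   # typical I/O size

    def throughput_mbps(self) -> float:
        # Rough bandwidth implied by the profile.
        return self.iops * self.block_size_kb / 1024

oltp = WorkloadProfile("OLTP database", iops=20_000, read_pct=0.7, block_size_kb=8)
vdi = WorkloadProfile("VDI desktops", iops=50_000, read_pct=0.8, block_size_kb=4)

for w in (oltp, vdi):
    print(f"{w.name}: {w.iops:,} IOPS at {w.block_size_kb} KB ~ {w.throughput_mbps():.0f} MB/s")

validated_ceiling_iops = 80_000  # assumed limit found through load testing

for growth in (1.0, 1.5, 2.0):  # what if demand grows by 50%, then doubles?
    demand = (oltp.iops + vdi.iops) * growth
    headroom = validated_ceiling_iops - demand
    status = "OK" if headroom > 0 else "over the validated limit"
    print(f"growth x{growth}: demand {demand:,.0f} IOPS, headroom {headroom:,.0f} ({status})")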

Publish date: 12/02/13
Free Reports

Market Landscape Abstract: Enterprise Hadoop Infrastructure for Big Data IT

Hadoop is coming to enterprise IT in a big way. The competitive advantage that can be gained from analyzing big data is just too “big” to ignore. And the amount of data available to crunch is only growing bigger, whether from new sensors, the capture of “data exhaust” from people, systems and processes, or simply longer retention of available raw or low-level details. It’s clear that enterprise IT practitioners everywhere will soon have to operate scale-out computing platforms in the production data center, and as the first and most mature solution on the scene, Hadoop is the likely choice. The good news is that there is now a plethora of Hadoop infrastructure options to fit almost every practical big data need – the challenge for IT is to implement the best solutions for its business clients’ needs.

As originally designed, Apache Hadoop had a relatively narrow application: certain kinds of batch-mode parallel algorithms applied over unstructured (or, depending on your definition, semi-structured) data. But because of its widely available open source nature, commodity architecture approach, and ability to extract new kinds of value from previously discarded or ignored data sets, the Hadoop ecosystem is rapidly evolving and expanding. With recent capabilities like YARN that open up the main execution platform to applications beyond batch MapReduce, the integration of structured data analysis, real-time streaming and query support, and the rollout of virtualized enterprise hosting options, Hadoop is quickly becoming a mainstream data processing platform.
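
For readers less familiar with that original batch model, the sketch below shows a classic word-count job written in the Hadoop Streaming style, where the mapper and reducer are plain programs that read stdin and emit tab-separated key/value pairs. It is a generic illustration of the programming model, not code from any particular distribution.

# A minimal word-count job in the Hadoop Streaming style: the mapper and reducer are
# ordinary programs reading stdin and writing tab-separated key/value pairs to stdout.
# The framework sorts mapper output by key before it reaches the reducer.

import sys
from itertools import groupby

def mapper(lines):
    for line in lines:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer(lines):
    # Input arrives sorted by key, so consecutive lines share the same word.
    parsed = (line.rstrip("\n").split("\t", 1) for line in lines)
    for word, group in groupby(parsed, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    role = sys.argv[1] if len(sys.argv) > 1 else "map"
    (mapper if role == "map" else reducer)(sys.stdin)

Outside a cluster, the same logic can be exercised with an ordinary shell pipeline, for example: cat input.txt | python wordcount.py map | sort | python wordcount.py reduce (assuming the sketch is saved as the hypothetical file wordcount.py).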

There has been much talk that deriving top value from big data efforts requires rare and potentially expensive data scientists to drive them. On the other hand, an abundance of higher-level analytical tools and pre-packaged applications is emerging to support existing business analysts and users with familiar tools and interfaces. While completely new companies have been founded on the exciting information and operational intelligence gained from exploiting big data, we expect wider adoption by existing organizations that augment traditional lines of business with new insight and revenue-enhancing opportunities. In addition, a Hadoop infrastructure serves as a great data capture and ETL base for extracting more structured data to feed downstream workflows, including traditional BI/DW solutions. No matter how you slice it, big data is becoming a common enterprise workload, and enterprise IT infrastructure teams will need to deploy, manage, and provide Hadoop services to their businesses.

Publish date: 10/01/13
Report

Astute Networked Flash ViSX: Application Performance Achieved Cost Effectively

In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.

When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.

At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.

Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, yet one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to the server, virtual server, storage or application layers. Customers can invest in ViSX before their performance pain points get too big, or before they've gone down the road of breaking their infrastructure with a tactical solution.
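
For a rough feel of why a flash tier in the data path matters, the simple model below blends an assumed flash service time with an assumed back-end array service time at several hit rates. The numbers are illustrative assumptions, not Astute Networks specifications or measurements.

# Illustrative latency blend: reads served from a flash tier in the iSCSI data path
# versus reads that fall through to the back-end array. All figures are assumptions.

flash_latency_ms = 0.2   # assumed service time from the flash appliance
array_latency_ms = 8.0   # assumed service time from the existing iSCSI array

for hit_rate in (0.0, 0.70, 0.90, 0.98):
    effective_ms = hit_rate * flash_latency_ms + (1 - hit_rate) * array_latency_ms
    print(f"hit rate {hit_rate:.0%}: effective read latency ~{effective_ms:.2f} ms")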

Publish date: 08/31/13