Taneja Group | SAN Volume Controller

Items Tagged: SAN Volume Controller


Optimization For Real-Time Storage: IBM’s SAN Volume Controller

Storage virtualization has come a long way in the past seven years. After a false start in 2001, fraught with inflated expectations and product deficiencies, the category fell into disrepute. Several vendors disappeared, many others repositioned themselves to focus on the small and medium business (SMB) space, and still others reinvented themselves with completely different products. Only one company stayed true to the promise of virtualization from the very beginning: IBM.

IBM’s SAN Volume Controller (SVC) product launched in July 2003. The company took SVC and nurtured the market, even though many in the industry no longer wanted to say the V-word. IBM persisted, fundamentally because customers could see the potential of storage virtualization and could count on IBM to support them through the early learning cycles.

The payoff for IBM has been huge. IBM has now shipped more than 12,000 SVC engines operating behind more than 5,000 SVC systems. SVC is a mature, enterprise-proven product that has demonstrated ROI to its customers. IBM has shown that SVC and its in-band architecture can scale to handle the largest, most stringent enterprise SAN environments. In doing so, IBM has led the market where others have only slowly followed.

The value of storage virtualization is unquestioned. It helps rein in storage capital expenses (CAPEX) and operational expenses (OPEX) that are otherwise running amok. It provides a forum for performing storage management in a consistent fashion even while the underlying physical storage is heterogeneous and possesses its own idiosyncrasies. In our view, it is also a key building block for the next-generation data center that will focus on delivering a variety of services. IBM knew that and held steady. We believe the payoff to date is a shadow of what is to come, as IBM ties storage virtualization to other efforts, such as server blades and server virtualization.

More recently, IBM has taken another significant step in defining what value means for SVC and virtualization: the introduction of Real-time Compression, integrated into the high-performance controllers of SVC as an in-line technology that can be used against production data. In this Product Brief, we’ll take a look at SVC and its historical differentiators, and at what compression for real-time, primary storage means for SVC customers – the value is nothing short of tremendous.

Publish date: 06/25/12

Making Better Storage for a Virtual World: IBM SAN Volume Controller

Server virtualization has deeply penetrated IT and now hosts well over half of all server instances, but storage virtualization has been slower to catch on. Yet the main constraint on further server virtualization adoption stems from poorly aligned storage. Perhaps the storage world simply moves more slowly because of the “weight” of data, but if so it will also pick up more momentum. Here at Taneja Group we think that, due to virtualization pressure and the desire for cloud (and now software-defined data center) infrastructures, proven storage virtualization is next on everybody’s radar. This is good news for IBM and the IBM SAN Volume Controller (SVC). SVC, first launched in 2003, not only put a firm stake in the ground for what block storage virtualization could be; over the following decade it has continued to evolve into what we regard as the gold standard for block storage virtualization.

In the face of ever-growing data, new processing paradigms, and aggressively evolving applications, storage virtualization provides an ideally adaptive approach by creating optimal logical storage services out of otherwise disparate and inflexible physical storage arrays. Like server virtualization, storage virtualization helps tackle difficult IT challenges in guaranteeing performance at scale, optimizing capacity utilization, taming complexity, increasing availability, and assuring data protection and DR across the enterprise - all while earning significant cost and efficiency benefits.

In fact, robust storage virtualization is becoming as necessary as server virtualization. Dynamic architectures require virtualizing all resources – compute, network, and storage. Experience with cloud implementations shows that server and storage virtualization are both necessary and complementary, and lead towards the next-generation data center built on end-to-end consistency, high automation, and “software defined” principles.

While many IT storage strategists have pursued storage consolidation and adopted tiering practices to tame some growth challenges, they should all now look to storage virtualization to achieve higher levels of flexibility, agility, and resilience. However, storage virtualization adoption has lagged behind server virtualization. This is where IBM brings to the table a tremendously proven solution that has succeeded in more than 10,000 shipments to customers of almost every size and storage mix, coupled with world-class support and services to guarantee success.

In this profile, we’ll briefly define storage virtualization and the key benefits it brings to the modern data center, and consider what it means when IBM says it is “Making Storage Better”. In that light we’ll look in more depth at IBM SVC, its architecture, and the key product features that have helped establish it as the market-leading block storage virtualization solution.

Publish date: 06/28/13

IBM FlashSystem V840: Transforming the Traditional Datacenter

Within the past few months IBM announced a new member of its FlashSystem family of all-flash storage platforms – the IBM FlashSystem V840. FlashSystem V840 adds a rich set of storage virtualization features to the baseline FlashSystem 840 model. V840 combines two venerable technology heritages: the hardware hails from the long lineage of Texas Memory Systems flash storage arrays, and the storage services feature set for FlashSystem V840 is inherited from the IBM storage virtualization software that powers the SAN Volume Controller (SVC). One was created to deliver the highest performance out of flash technology and the other was a forerunner of what is being termed software defined storage. Together, these two technology streams represent decades of successful customer deployments in a wide variety of enterprise environments.

It is easy to be impressed with the performance and the tight integration of SVC functionality built into the FlashSystem V840. It is also easy to appreciate the wide variety of storage services built on top of SVC that are now an integral part of FlashSystem V840. But we believe the real impact of FlashSystem V840 is understood when one considers how this product affects the cost of flash appliances, and more generally how this new cost profile will undoubtedly affect traditional data center architecture and deployment strategies. This Solution Profile will discuss how IBM FlashSystem V840 combines software-defined storage with the extreme performance of flash, and why the cost profile of this new product – essentially equivalent to current high-performance disk storage – will have a major positive impact on data center storage architecture and the businesses that these data centers support.

Publish date: 09/16/14

Can you compare software-defined storage architecture to hyper-converged systems?

Hyper-converged systems and software-defined storage provide similar storage management features, but the technologies are architected very differently.

  • Premiered: 09/30/14
  • Author: Arun Taneja
  • Published: Tech Target: Search Virtual Storage
Topic(s): Arun Taneja, TechTarget, hyperconverged, SDS, software-defined, software-defined storage, EMC VMAX, EMC, VMAX, HP, 3PAR, HDS, Hitachi, Hitachi Data Systems, IBM, IBM DS8000, NetApp, High Performance, ViPR, SAN Volume Controller, SAN, Maxta, MxSP, Gridstore, HCS, Deduplication, VMware, VSAN, Virtual SAN, EVO

General Purpose Disk Array Buying Guide

The disk array remains the core element of any storage infrastructure. So it’s appropriate that we delve into it in a lot more detail.

  • Premiered: 02/17/15
  • Author: Taneja Group
  • Published: Enterprise Storage Forum
Topic(s): Disk, Storage, VDI, virtual desktop, NAS, Network Attached Storage, VCE, VNX, HDS, EMC, HP, NetApp, Hitachi Data Systems, Hitachi, IBM, Dell, Syncplicity, VSPEX, IBM SVC, SAN Volume Controller, software-defined, Software Defined Storage, SDS, Storwize, Storwize V7000, replication, Automated Tiering, tiering, Virtualization, 3PAR

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these opposing forces, the motivation for the Software Defined Data Center (SDDC) – where services can be instantiated as needed, changed as workloads require, and retired when the need is gone – is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is key, since a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment and no existing infrastructure and workloads to worry about), one must be cognizant of the need for a prioritized migration from the old to the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage, or SDS.

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15

Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission-critical applications, are only to be found in mid-size companies. The reality is that mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs), departments within large organizations, and remote/branch offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. The reality is that companies and groups of every size have business-critical applications, and these workloads require enterprise-grade storage solutions that offer high performance, reliability, and strong security. The only difference is that IT groups managing mid-sized workloads frequently have significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed that mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to the current environment. Storage virtualization is key because it abstracts away the differences between various storage boxes to create (1) a single virtualized storage pool, (2) a common set of data services, and (3) a common interface to manage storage resources. These storage virtualization capabilities are beneficial to the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
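The three capabilities above can be illustrated with a minimal, purely conceptual sketch. The class and method names below are invented for illustration and do not correspond to any actual SVC or Spectrum Virtualize API; the point is only that hosts see one pool and one set of services, while placement on heterogeneous backends is hidden.

```python
# Conceptual model of block storage virtualization (all names invented):
# heterogeneous arrays contribute capacity to one pool, and common data
# services apply uniformly regardless of which array holds a volume.

class Backend:
    """A physical array contributing capacity to the virtual pool."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """(1) single pool, (2) common services, (3) one management point."""
    def __init__(self, backends):
        self.backends = backends
        self.volumes = {}  # volume name -> (backend, size_gb)

    def total_free_gb(self):
        # One aggregate view of capacity across disparate arrays.
        return sum(b.free_gb for b in self.backends)

    def create_volume(self, name, size_gb):
        # Place the volume on the backend with the most free space;
        # the host sees only the logical volume, never the array.
        backend = max(self.backends, key=lambda b: b.free_gb)
        if backend.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        backend.free_gb -= size_gb
        self.volumes[name] = (backend, size_gb)

    def snapshot(self, name):
        # A common data service applied the same way no matter
        # which vendor's hardware actually stores the volume.
        backend, size_gb = self.volumes[name]
        return f"snapshot of {name} ({size_gb} GB on {backend.name})"

pool = VirtualPool([Backend("array-a", 100), Backend("array-b", 300)])
pool.create_volume("db01", 50)
print(pool.snapshot("db01"))
print(pool.total_free_gb())
```

In a real virtualization layer the placement logic, of course, involves tiering, striping, and performance policy rather than a simple free-space comparison; the sketch only shows why a single pool and uniform services simplify management of mixed hardware.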

The combination of affordability, manageability, and enterprise-grade functionality is the core strength of the IBM Storwize family, built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade in IBM SAN Volume Controller (SVC). Simply stated, few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we will look at the key requirements for mid-sized storage and evaluate the ability of IBM Storwize with Spectrum Virtualize to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and provide a comparison of the various models in the Storwize portfolio.

Publish date: 06/24/16