Taneja Group | SVC
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: SVC


IBM peppers many storage products with Real-time Compression

You may recall that three years ago IBM picked up a small Israeli company named Storwize. That company sold an inline compression engine that sat in front of a NAS box and compressed/decompressed file data traversing the network. Its claim to fame was that it was 100% transparent to the application making the NFS call and, while it was indeed inline, it had zero impact on application performance. In fact, since the data sitting on the storage box was roughly one-third its original size (an average compression ratio of 3:1), reads from disk were actually faster. And because the Storwize box itself added only marginal latency while decompressing the data, overall performance improved. Until then, compression engines, which had been around for three decades or more, had always delivered compression at the expense of performance. That was simply the price one paid. Storwize broke that barrier for the first time.
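The arithmetic behind that claim is worth making explicit. The sketch below uses a deliberately simple sequential model (fetch the compressed bytes, then decompress them); the throughput figures are hypothetical round numbers for illustration, not measurements of any Storwize product:

```python
def read_time_seconds(logical_mb, disk_mbps, ratio=1.0, decomp_mbps=None):
    """Toy model of a read: time to pull the (possibly compressed) bytes
    off disk, plus time to decompress them back to logical size.
    ratio is the compression ratio (3.0 means data stored at 1/3 size)."""
    physical_mb = logical_mb / ratio          # bytes actually read from disk
    t = physical_mb / disk_mbps               # disk transfer time
    if decomp_mbps:
        t += logical_mb / decomp_mbps         # inline decompression overhead
    return t

# Hypothetical numbers: 300 MB logical read, 100 MB/s disk, 3:1 compression,
# decompression engine sustaining 1000 MB/s.
plain = read_time_seconds(300, disk_mbps=100)                               # 3.0 s
compressed = read_time_seconds(300, disk_mbps=100, ratio=3.0, decomp_mbps=1000)  # 1.3 s
```

Even with the decompression step counted fully in series, moving a third as many bytes across the disk interface wins: 1.3 seconds versus 3.0 in this toy case. A real engine pipelines the two stages, so the advantage only grows.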

  • Premiered: 07/19/12
  • Author: Arun Taneja
  • Published: Taneja Blog
Topic(s): Real-time Compression, RTC, SVC, V7000, IBM Storwize, NAS, NFS

Optimization For Real-Time Storage: IBM’s SAN Volume Controller

Storage virtualization has come a long way in the past seven years. After a false start in 2001, fraught with inflated expectations and product deficiencies, the category fell into disrepute. Several vendors disappeared, many others repositioned themselves to focus on the Small and Medium Business (SMB) space, and yet others reinvented themselves with completely different products. Only one company stayed true to the promise of virtualization from the very beginning: IBM.

IBM’s SAN Volume Controller (SVC) launched in July 2003. The company nurtured the market for it, even though many in the industry no longer wanted to say the V-word. IBM persisted, fundamentally because customers could see the potential of storage virtualization and could count on IBM to support them through the early learning cycles.

The payoff for IBM is huge. IBM has now shipped more than 12,000 SVC engines operating in more than 5,000 SVC systems. SVC is a mature, enterprise-proven product that has delivered proven ROI to its customers. IBM has shown that SVC and its in-band architecture can scale to handle the largest, most stringent enterprise SAN environments. By doing so, IBM has led the market where others have only slowly followed.

The value of storage virtualization is unquestioned. It helps rein in storage capital expenses (CAPEX) and operational expenses (OPEX) that are otherwise running amok. It provides a framework for performing storage management in a consistent fashion even while the underlying physical storage is heterogeneous and possesses its own idiosyncrasies. In our view, it is also a key building block for the next-generation data center that will focus on delivering a variety of services. IBM knew that and held steady. We believe the payoff until now is a shadow of what is to come, as IBM ties storage virtualization to other efforts, such as server blades and server virtualization.

More recently, IBM has taken one more consequential step in defining the value of SVC and virtualization. That step is the introduction of Real-time Compression, integrated into the high-performance controllers of SVC as an inline technology that can be used on production data. In this Product Brief, we’ll take a look at SVC and its historical differentiators, and at what compression for real-time, primary storage means for SVC customers – the value is no less than tremendous.

Publish date: 06/25/12

Making Better Storage for a Virtual World: IBM SAN Volume Controller

Server virtualization has deeply penetrated IT and now hosts well over half of all server instances, but storage virtualization has been slower to catch on. Yet the main constraint on further server virtualization adoption stems from poorly aligned storage. Perhaps the storage world just moves slower due to the “weight” of data, but if so it will also pick up more momentum. Here at Taneja Group we think that, due to virtualization pressure and the desire for cloud (and now software defined data center) infrastructures, proven storage virtualization is next on everybody’s radar. This is good news for IBM and the IBM SAN Volume Controller (SVC). SVC, first launched in 2003, put a firm stake in the ground as to what block storage virtualization could be; over the following decade it has continued to evolve into what we regard as the gold standard for block storage virtualization.

In the face of ever-growing data, new processing paradigms, and aggressively evolving applications, storage virtualization provides an ideally adaptive approach by creating optimal logical storage services out of otherwise disparate and inflexible physical storage arrays. Like server virtualization, storage virtualization helps tackle difficult IT challenges in guaranteeing performance at scale, optimizing capacity utilization, taming complexity, increasing availability, and assuring data protection and DR across the enterprise - all while earning significant cost and efficiency benefits.

In fact, robust storage virtualization is becoming as necessary as server virtualization. Dynamic architectures require virtualizing all resources – compute, network, and storage. Experience with cloud implementations shows that server and storage virtualization are both necessary and complementary, and lead towards the next generation data center built on end-to-end consistency, high automation, and “software defined” principles.

While many IT storage strategists have pursued storage consolidation and adopted tiering practices to tame some growth challenges, they should all now look to storage virtualization to achieve higher levels of flexibility, agility, and resilience. However, storage virtualization adoption has lagged behind server virtualization. This is where IBM brings to the table a tremendously proven solution that has succeeded in more than 10,000 shipments to customers of almost every size and storage mix, coupled with world-class support and services to guarantee success.

In this profile, we’ll briefly define storage virtualization and the key benefits it brings to the modern data center, and consider what it means when IBM says it is “Making Storage Better”. In that light we’ll look in more depth at IBM SVC, its architecture, and the key product features that have helped establish it as the market-leading block storage virtualization solution.

Publish date: 06/28/13

Journey Towards Software Defined Data Center (SDDC)

While it has always been the case that IT must respond to increasing business demands, competitive requirements are forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services… quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these opposing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation… and does so, seemingly paradoxically, with substantial cost savings. The initial steps to the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already deployed broadly and hosts between half and two-thirds of all server instances simply means that existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, through the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, as a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment, with no existing infrastructure and workloads to worry about), one must be cognizant of the need for prioritized migration from the old environment to the new. And finally, the cost required to move into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage one’s existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage (SDS).

In this Technology Brief we will first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the case of the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.

Publish date: 06/17/15

Primary Data primes the pump for data virtualization software

Startup Primary Data prepares GA launch of DataSphere data virtualization software, which is designed to make data available any time on any storage.

  • Premiered: 08/27/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Primary Data, data virtualization, Virtualization, DataSphere, VMworld 2015, EMC, Isilon, NetApp, FAS, Intel, NVMe, Amazon Simple Storage Service (S3), dataspace, Metadata, metadata engine, Primary Storage, SMB, Flash, SSD, server, Glacier, OpenStack Swift, Microsoft Azure

Kaminario K2 array uses 3D TLC NAND flash

Kaminario unveils 5.5 version of its K2 all-flash array with 3D TLC NAND flash, claims of sub-$1 per GB pricing and support for asynchronous replication.

  • Premiered: 08/20/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Kaminario, K2, NAND, Flash, SSD, replication, all-flash array (AFA), Compression, Deduplication, inline deduplication, Data reduction, Samsung, Database, Data Center, DRAM, SVC, Dell, Pure Storage, controller, SATA, Disaster Recovery (DR), Availability, Jeff Kato

IBM Back in the Growth Game with Cleversafe Acquisition

On October 5, 2015, IBM announced the acquisition of Cleversafe, a provider of object-based storage software.


Got mid-sized workloads? Storwize family to the rescue

Myths prevail in the IT industry just as they do in every other facet of life. One common myth is that mid-sized workloads, exemplified by smaller versions of mission critical applications, are only to be found in mid-size companies. The reality is mid-sized workloads exist in businesses of all sizes. Another common fallacy is that small and mid-size enterprises (SMEs) or departments within large organizations and Remote/Branch Offices (ROBOs) have lesser storage requirements than their larger enterprise counterparts. The reality is companies and groups of every size have business-critical applications and these workloads require enterprise-grade storage solutions that offer high-performance, reliability and strong security. The only difference is IT groups managing mid-sized workloads frequently have significant budget constraints. This is a tough combination and presents a big challenge for storage vendors striving to satisfy mid-sized workload needs.

A recent survey conducted by Taneja Group showed mid-size and enterprise needs for high-performance storage were best met by highly virtualized systems that minimize disruption to their current environment. Storage virtualization is key because it abstracts away all the differences of various storage boxes to create: 1) a single virtualized storage pool, 2) a common set of data services, and 3) a common interface to manage storage resources. These storage virtualization capabilities are beneficial to the overall enterprise storage market, and they are especially attractive to mid-sized storage customers because storage virtualization is the core underlying capability that drives efficiency and affordability.
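The three capabilities above can be sketched in a few lines of code. This is a conceptual illustration only: the class names and the greedy extent-placement policy are our own illustrative assumptions, not SVC's or Spectrum Virtualize's actual allocation algorithm:

```python
class BackendArray:
    """One physical array contributing capacity to the pool (name hypothetical)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualPool:
    """Aggregates heterogeneous arrays into one pool (capability 1) and
    exposes a single allocation/management interface (capabilities 2 and 3)."""
    def __init__(self):
        self.arrays = []
        self.volumes = {}   # volume name -> list of (array name, GB) extents

    def add_array(self, array):
        self.arrays.append(array)

    def free_gb(self):
        return sum(a.free_gb for a in self.arrays)

    def create_volume(self, name, size_gb):
        if size_gb > self.free_gb():
            raise ValueError("pool exhausted")
        extents, remaining = [], size_gb
        for a in self.arrays:            # illustrative greedy placement
            take = min(a.free_gb, remaining)
            if take:
                a.free_gb -= take
                extents.append((a.name, take))
                remaining -= take
            if not remaining:
                break
        self.volumes[name] = extents
        return extents

# Two dissimilar arrays pooled together; a 120 GB volume spans them both,
# invisibly to the host that consumes it.
pool = VirtualPool()
pool.add_array(BackendArray("tier1_array", 100))
pool.add_array(BackendArray("tier2_array", 50))
extents = pool.create_volume("vol0", 120)
```

The point of the sketch is the consumer's view: the host asks one interface for 120 GB and never learns that the capacity came from two idiosyncratic boxes, which is exactly the abstraction that makes common data services and common management possible.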

The combination of affordability, manageability and enterprise-grade functionality is the core strength of the IBM Storwize family built upon IBM Spectrum Virtualize, the quintessential virtualization software that has been hardened for over a decade with IBM SAN Volume Controller (SVC). Simply stated – few enterprise storage solutions match IBM Storwize’s ability to deliver enterprise-grade functionality at such a low cost. From storage virtualization and auto-tiering to real-time data compression and proven reliability, Storwize with Spectrum Virtualize offers an end-to-end storage footprint and centralized management that delivers highly efficient storage for mid-sized workloads, regardless of whether they exist in small or large companies.

In this paper, we will look at the key requirements for mid-sized storage and evaluate the ability of IBM Storwize with Spectrum Virtualize to tackle mid-sized workload requirements. We will also present an overview of the IBM Storwize family and provide a comparison of the various models in the Storwize portfolio.

Publish date: 06/24/16