Profiles/Reports

Profile

Software Storage Solutions for Virtualization

Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are felt most acutely in the virtual infrastructure. The challenges spring from the physicality of storage – while the virtual infrastructure has made IT far more agile and adaptable than ever before, storage still depends upon digital bits that are permanently stored on a physical device somewhere within the data center.

For practitioners who have experienced the pain this causes – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the concept also holds tremendous potential to change the way IT is done by tackling this last vestige of the traditional, inflexible IT infrastructure.

The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that can make storage remarkably close to software-defined. These solutions allow administrators to easily and rapidly deploy storage controllers within the virtual infrastructure, and to equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure runs (in the cloud or on premises). Such solutions can make comprehensive storage functionality available in places where it was never available before, allow for higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.
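
To make the stranded-local-disk point concrete, here is a minimal, hypothetical sketch of how a VSA-style controller can be thought of as aggregating per-host direct-attached disks into one logical pool; the host names, capacities, and placement policy are invented for illustration and are not drawn from any particular vendor’s product.

```python
# Minimal, hypothetical sketch: aggregating "stranded" direct-attached disks
# from several virtualization hosts into one logical pool, the way a VSA-style
# controller presents capacity to the virtual infrastructure. Host names,
# capacities, and the placement policy are illustrative only.

from dataclasses import dataclass

@dataclass
class LocalDisk:
    host: str
    capacity_gb: int
    used_gb: int = 0

class VirtualStoragePool:
    """Aggregates per-host local disks into one logical capacity pool."""

    def __init__(self, disks):
        self.disks = disks

    @property
    def total_gb(self):
        return sum(d.capacity_gb for d in self.disks)

    @property
    def free_gb(self):
        return sum(d.capacity_gb - d.used_gb for d in self.disks)

    def provision(self, size_gb):
        """Place a new volume on the least-utilized disks first."""
        remaining = size_gb
        for disk in sorted(self.disks, key=lambda d: d.used_gb / d.capacity_gb):
            take = min(remaining, disk.capacity_gb - disk.used_gb)
            disk.used_gb += take
            remaining -= take
            if remaining == 0:
                return True
        return False  # pool exhausted

pool = VirtualStoragePool([
    LocalDisk("esx-host-01", 1200),
    LocalDisk("esx-host-02", 1200),
    LocalDisk("esx-host-03", 800),
])
pool.provision(500)
print(f"pool: {pool.free_gb} GB free of {pool.total_gb} GB")
```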

The 2012-2013 calendar years brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor – VMware – introduced a similar software-based, scale-out storage solution for the virtual infrastructure – VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, VSAN has an architecture very similar to the latest generation of HP’s own StoreVirtual VSA. Both of these products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement meant that HP was no longer the sole major (Fortune 500) vendor with a primary storage VSA approach. This only lends further validation to other vendors who have long offered VSA-based solutions, such as FalconStor, Nexenta, and StorMagic.

We’ve turned to a high-level assessment of five market leaders who today offer VSA or software storage in the virtual infrastructure. We’ve assessed these solutions here with an eye toward how they fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment, it is clear that each solution has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.

Publish date: 01/03/14
Profile

Storage Infrastructure Performance Validation

Unacceptably poor performance can be a career killer, and so IT generally “over-provisions” infrastructure as a rule. But how much is this approach really costing us? Today, the biggest line item in IT infrastructure spending is storage. Even with data growth and new performance demands increasing, “safe” estimates suggest we still overprovision by 50% or more, which results in billions of dollars of wasted storage spending. A more important problem is that we may not even be provisioning the right infrastructure for our application workload requirements, taking serious risks with every new investment.

Equally vital is knowing when to upgrade or refresh. Looking forward, how can anyone know when their current infrastructure will hit its inevitable “wall”? In day-to-day operations, every time a change is made to the storage infrastructure, the application, or the network, that change could be introducing a deeply rooted problem that might only show up under production pressure. Why do enterprises seem to proceed blindly, willingly rolling the dice when it comes to performance? Here at Taneja Group, we see an obvious correlation between risk of failure and lack of knowledge about how infrastructure responds to each application workload.

Unfortunately, enterprises too often rely on vendor benchmarks produced under ideal conditions with carefully crafted workloads that don’t reflect the real target environment. Or they might choose readily scalable systems so that in times of trouble they can always buy and deploy more resources, although this can be highly disruptive and expensive when buying on short notice. They might architect for large virtual and cloud environments in an attempt to average out utilization and pool excess capacity for peak demand, but still without knowing how performance will degrade at the upper reaches of VM density. In contrast, we believe that IT managers must evolve from a perspective of assuming performance to one of assuring performance.

Typical testing approaches usually involve generating workloads with heavily scripted servers used as load generators. This is an expensive, unreliable, brute-force approach, only trotted out when sufficient staff, time, and money are available to execute a large-scale performance evaluation. But Load DynamiX has changed that equation for storage, evolving workload modeling and performance load testing into a cost-efficient and practical continuous process. We think that Load DynamiX’s solution supports the adoption of a new best practice of proactively managing infrastructure from a position of knowledge, called Infrastructure Performance Validation (IPV).

In this report we will look at Load DynamiX’s workload modeling software and storage performance validation appliances, and walk through how IT can use them to establish effective IPV practices across the entire IT infrastructure lifecycle. We’ll examine why existing approaches to storage performance evaluation fall short and why we believe that successful storage deployments require a detailed understanding of application workload behavior. We’ll briefly review Load DynamiX’s solution to see how it addresses these challenges and uniquely enables broad adoption of IPV to the benefit of both the business and IT. We’ll look at how Load DynamiX generates accurate workload models for storage testing, a key IPV capability, and how limit testing and “what if” scenarios can be run, analyzed, and communicated for high impact. Finally, we’ll look at a range of validation scenarios, and how Load DynamiX can be leveraged to reduce risk, assure performance, and lower IT costs.
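
As a generic illustration of what workload modeling and limit testing involve – not Load DynamiX’s actual interface or algorithms – the sketch below captures an application’s I/O mix and replays it at increasing intensity until a latency target is missed; every number in it is hypothetical.

```python
# Generic illustration of workload modeling and limit testing, not Load DynamiX's
# actual interface: capture an application's I/O mix, then replay it at rising
# intensity to find how much headroom exists before a latency target is missed.
# All numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class WorkloadModel:
    read_pct: float        # fraction of operations that are reads
    block_size_kb: int     # dominant I/O size
    iops_baseline: int     # observed production IOPS

def limit_test(model, measure_latency_ms, target_ms=5.0, step=0.25, max_scale=8.0):
    """Scale the modeled load until measured latency exceeds the target."""
    scale = 1.0
    while scale <= max_scale:
        offered_iops = int(model.iops_baseline * scale)
        if measure_latency_ms(offered_iops, model) > target_ms:
            passed = scale - step
            return passed, int(model.iops_baseline * passed)
        scale += step
    return max_scale, int(model.iops_baseline * max_scale)

# Stand-in for a real measurement against a storage target.
def toy_latency(iops, model):
    return 1.0 + (iops / 20000.0) ** 3   # toy response curve, in milliseconds

oltp = WorkloadModel(read_pct=0.7, block_size_kb=8, iops_baseline=12000)
headroom, iops_at_limit = limit_test(oltp, toy_latency)
print(f"~{headroom:.2f}x headroom before the 5 ms target is missed ({iops_at_limit} IOPS)")
```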

Publish date: 12/02/13
Profile

Hybrid Cloud Storage from Microsoft: Leveraging Windows Azure and StorSimple

Cloud computing does some things very well. It delivers applications and upgrades. It runs analysis on cloud-based big data. It connects distributed groups sharing communications and files. It provides a great environment for developing web applications and running test/dev processes.

But public cloud storage is a different story. The cloud does deliver long-term, cost-effective storage for inactive backup and archives. Once the backup and archive data streams are scheduled and running, they can use relatively low bandwidth as long as they are deduplicated on-site before transport. (And as long as they do not have to be rehydrated pre-upload, which is another story.) This alone helps to save on-premises storage capacity and can replace off-site tape vaulting.

But cloud storage users want more. They want the cost and agility advantages of the public cloud without incurring the huge expense of building one. They want to keep using the public cloud for cost-effective backup and archive, but they also want to use it for more active – i.e. primary – data. This is especially true for workloads with rapidly growing data sets that quickly age, such as collaboration and file shares. Some of this data needs to reside locally, but the majority can be moved, or tiered, to public cloud storage.

What does the cloud need in order to meet this enterprise wish list? Above all, it needs to make public cloud storage an integral part of the on-premises primary storage architecture. This requires intelligent and automated storage tiering, high performance for baseline uploads and continual snapshots, no geographical lock-in, and a central storage management console that integrates cloud and on-premises storage.

Hybrid cloud storage, or HCS, meets this challenge. HCS turns the public cloud into a true active storage tier for less active production data that is not ready to be put out to backup pasture. Hybrid cloud storage integrates on-premises storage with public cloud storage services: not as another backup target but as integrated storage infrastructure. The storage system uses both the on-premises array and scalable cloud storage resources for primary data, expanding that data and data protection to a cost-effective cloud storage tier.
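
At the heart of this integration is the automated tiering decision. The following deliberately simplified sketch shows an age- and activity-based policy of the kind a hybrid cloud storage controller automates; the 30-day threshold and the pinning flag are hypothetical and do not describe StorSimple’s internal logic.

```python
# Deliberately simplified, hypothetical sketch of an age/activity-based tiering
# decision of the kind a hybrid cloud storage controller automates. The 30-day
# threshold and the pinning flag are illustrative, not StorSimple internals.

from datetime import datetime, timedelta

LOCAL_RETENTION = timedelta(days=30)   # keep recently touched data on the array

def choose_tier(last_access: datetime, pinned_local: bool = False) -> str:
    """Return 'local' for hot or pinned data, 'cloud' for cold data."""
    if pinned_local:
        return "local"
    age = datetime.utcnow() - last_access
    return "local" if age <= LOCAL_RETENTION else "cloud"

# A file-share chunk untouched for 90 days is tiered to the cloud,
# while yesterday's collaboration data stays on premises.
print(choose_tier(datetime.utcnow() - timedelta(days=90)))   # cloud
print(choose_tier(datetime.utcnow() - timedelta(days=1)))    # local
```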

Microsoft’s innovative and broad set of technology enables a true, integrated solution for hybrid cloud storage for business and government organizations – not just a heterogeneous combination of private cloud and public cloud storage offerings. Comprising StorSimple cloud-integrated storage and the Windows Azure Storage service, HCS from Microsoft serves demanding enterprise storage environments well, enabling customers to realize huge data management efficiencies in their Microsoft applications and Windows and VMware environments.

This paper will discuss how the Microsoft solution for hybrid cloud storage, consisting of Windows Azure and StorSimple, is different from traditional storage, best practices for leveraging it, and the real world results from multiple customer deployment examples.

Publish date: 08/31/13
Profile

Why VM Density Matters: HP Innovation Delivers Validated “2x” Advantage

Server virtualization brings a vast array of benefits ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) compared to legacy physical infrastructures. Since we have all experienced the economic imperative to minimize TCO, it’s easy to understand why virtualization has been driven across significant percentages of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.

But the further challenge for IT investors now is to choose the best virtualization solution to get the “biggest bang for the buck”. Unfortunately, the traditional cost-per-infrastructure metrics (server CPU, storage GB, etc.) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized and cloud-enabled world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. Therefore a key comparison metric between virtualization solutions is VM density. All other things being equal (e.g. applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides a better value.
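
A short worked example makes the arithmetic behind this metric concrete; the dollar figures, host counts, and VM densities below are hypothetical.

```python
# Worked example of the cost-per-application arithmetic. The dollar figures,
# host counts, and VM densities are hypothetical; only the math is the point.

def cost_per_vm(infrastructure_cost: float, vms_per_host: int, hosts: int) -> float:
    return infrastructure_cost / (vms_per_host * hosts)

# Two solutions at the same total price but with different validated VM density.
solution_a = cost_per_vm(infrastructure_cost=500_000, vms_per_host=20, hosts=10)
solution_b = cost_per_vm(infrastructure_cost=500_000, vms_per_host=40, hosts=10)

print(f"Solution A: ${solution_a:,.0f} per VM")   # $2,500 per VM
print(f"Solution B: ${solution_b:,.0f} per VM")   # $1,250 per VM: 2x density halves cost per application
```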

As virtualization deployments grow to include active production workloads, they greatly stress and challenge traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and condenses IO-intensive activities (e.g. migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. Therefore it’s critically important when selecting storage for virtual environments to get past general marketing and focus on validated claims of proven VM density.

Publish date: 07/31/13
Profile

Dell AppAssure 5: Unified Platform for Business Resiliency

Backup applications with large user bases have been vendor cash cows because their customers are reluctant to change such deeply embedded products. As long as the backup worked, it was out of sight and out of mind.

But the field is rapidly changing.

The push to virtualize applications saw traditional backup foundering. Traditional backup in the virtual arena suffered from heavy operational overhead at the server, application host, network, and storage levels. The growing number of VMs and the growing volume of virtualized data had a serious impact on storage resources. For example, each VMDK file represented an entire VM file system image, typically at least 2GB in size. These file sizes created issues for bandwidth, monitoring, and storage resources.
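
Some back-of-the-envelope arithmetic, using a hypothetical VM count, shows how quickly full-image backups of even minimum-size VMDKs strain a backup window.

```python
# Back-of-the-envelope arithmetic (hypothetical VM count) showing why full-image
# VMDK backups strained bandwidth: even at the ~2GB minimum image size noted
# above, the data moved per full backup pass adds up quickly.

vm_count = 500                       # hypothetical environment
min_vmdk_gb = 2                      # minimum VMDK image size cited in the text
backup_window_hours = 8

data_moved_gb = vm_count * min_vmdk_gb
required_throughput_mb_s = (data_moved_gb * 1024) / (backup_window_hours * 3600)

print(f"{data_moved_gb} GB per full pass, needing ~{required_throughput_mb_s:.0f} MB/s "
      f"sustained to finish within {backup_window_hours} hours")
```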

In response, some vendors developed innovative virtual backup products. They made virtual backup much more resource-efficient and easily manageable. Increased performance shrank backup window requirements, provided effective RPO and RTO, simplified the backup process and improved recovery integrity.  These tools changed the virtual data protection landscape for the better.

However, many of these startups offered limited solutions that supported only a single type of hypervisor and a handful of physical machines. This left virtual and physical networks essentially siloed – not to mention the problem of multiple point products creating even more silos within both environments. Managing cross-domain data protection using a variety of point products became inefficient and costly for IT.

Traditional backup makers also scrambled to add virtualization backup support and succeeded to a point, but only to a point. Their backup code bases were written well before the mass appearance of the cloud and virtualization, and retrofitting existing applications only went so far toward providing scalability and integration. They were also unable to solve a problem that has plagued IT since the early days of backup tape – restore assurance. It has always been risky to find out after the fact that the backup you depended on is not usable for recovery. With data sets doubling every 18 months, the risk of data loss has risen significantly.

More modern backup solves some of these problems but causes new ones. Modern backup offers automated scheduling, manual operations, policy setting, multiple types of backup targets, replication schemes, application optimization, and more. These are useful features, but they are also costly and resource-hungry: roughly 30% of storage costs go to IT operations alone. Another problem with these new features is their complexity. It is difficult to optimize and monitor the data protection environment, leading to a conservative estimate that about 20% of backup or recovery jobs fail.

In addition, most data protection products offer average-to-poor awareness of and integration with their backup tape and disk targets. This makes it difficult to set and test Recovery Time Objectives (RTO) and Recovery Point Objectives (RPO) for business applications. The last thing that IT wants is to cripple application recovery, but it is challenging to set meaningful RTO and RPO targets across multiple environments and applications, and extremely difficult to test them.

Even newer VM backup products are inadequate for modern enterprise data centers running critical applications across physical and virtual layers. Combine this with complex, mixed IT environments, and the result is a very serious challenge for IT professionals charged with protecting data and application productivity.

What we are seeing now is next-generation data protection that protects both virtual and physical environments in one flexible platform. Dell AppAssure is a leading pioneer in this promising field. AppAssure is rewriting the data protection book, moving from limited point products to a highly agile data center protection platform with continual backup, instantaneous restore, backup assurance, and a host of additional benefits.

Publish date: 06/27/13
Profile

Nimble Storage InfoSight: Transforming the Storage Lifecycle Experience with Deep-Data Analytics

The job of a storage administrator can sometimes be a difficult and lonely one. Administrators must handle a broad set of responsibilities, encompassing all aspects of managing their arrays and keeping up with user demands. And yet, flat IT budgets mean administrators are spread thin, with limited time to manage storage through its lifecycle, let alone improve and optimize storage practices and services.

In most organizations, the storage lifecycle is managed manually, as a complex and disjointed set of activities. Maintenance and support tend to be highly reactive, forcing administrators to play “catch up” each time a storage problem occurs. Monitoring and reporting rely on complex tools and large amounts of data that are difficult to interpret and act upon. Forecasting and planning are more art than science, leading administrators to overprovision to be on the safe side. These various lifecycle activities are seldom connected and inherently inefficient, and fail to provide administrators with the insight they need to anticipate issues and develop best practices. This, in turn, can put system availability and performance at risk, while reducing IT productivity.

Fortunately, one innovative vendor – Nimble Storage – has developed a powerful, data sciences-driven approach that promises to transform the storage lifecycle experience. Based on deep data collection, intelligent and predictive analytics, and automation built on storage and application expertise, Nimble InfoSight streamlines the storage lifecycle, providing administrators with the insights needed to optimize their arrays while also increasing their productivity. InfoSight collects and analyzes over 30 million data points each day from every installed Nimble Storage array worldwide, and then makes the resulting intelligence available both to Nimble engineers and customers. InfoSight’s automated analysis helps to proactively anticipate and prevent technical problems, significantly reducing the support burden on administrators. InfoSight also provides administrators with an intuitive, dashboard-driven portal into the performance, capacity utilization, and data protection of their arrays, enabling them to monitor array operations across multiple sites and to better plan for future needs. By streamlining and informing key activities across the storage lifecycle, InfoSight simplifies and enhances day-to-day administrative tasks such as support, monitoring, and forecasting, while enabling administrators to focus on more important initiatives.
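
As a generic illustration of the kind of proactive, trend-based analysis this telemetry enables – not a description of InfoSight’s actual analytics – the sketch below projects a recent latency trend forward to warn before an array crosses a threshold; all samples and thresholds are hypothetical.

```python
# Generic illustration of proactive, trend-based telemetry analysis (not
# InfoSight's actual analytics): project a recent latency trend forward and warn
# before a threshold is crossed. All samples and thresholds are hypothetical.

from statistics import mean

def hours_until_breach(samples_ms, threshold_ms=10.0, horizon_hours=24):
    """Fit a simple linear trend to hourly latency samples and return the
    projected hours until the threshold is crossed, or None if none expected."""
    xs = list(range(len(samples_ms)))
    slope = (mean(x * y for x, y in zip(xs, samples_ms)) - mean(xs) * mean(samples_ms)) / \
            (mean(x * x for x in xs) - mean(xs) ** 2)
    if slope <= 0:
        return None
    eta = (threshold_ms - samples_ms[-1]) / slope
    return eta if 0 <= eta <= horizon_hours else None

latency_trend_ms = [4.1, 4.3, 4.8, 5.2, 5.9, 6.5]   # hypothetical hourly averages
eta = hours_until_breach(latency_trend_ms)
if eta is not None:
    print(f"Warning: latency projected to cross 10 ms in ~{eta:.0f} hours")
```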

To put a human face on InfoSight intelligence, Nimble Storage has also unveiled a new user community. The community allows users to connect and share ideas and resources via discussion forums, knowledge bases, and social media channels. The Nimble community will enable the company’s large and loyal customer base to write about and share their experiences and insights with each other, as well as with prospective users. Together, InfoSight and the Nimble community will give storage administrators unprecedented access to anonymized installed-base data and peers’ expertise, enabling them to stay on top of their game and get more out of their arrays.

In this profile, we’ll examine the challenges administrators typically face on a day-to-day basis, and then take a closer look at InfoSight capabilities, and how they address these issues. We’ll then learn how two Nimble customers have benefited from InfoSight in several important ways. Finally, we’ll briefly examine the Nimble community, and discuss how these two initiatives together are empowering administrators through a combination of shared user data and insights.

Publish date: 04/15/13