Cloud computing does some things very well. It delivers applications and upgrades. It runs analysis on cloud-based big data. It connects distributed groups sharing communications and files. It provides a great environment for developing web applications and running test/dev processes.
But public cloud storage is a different story. The cloud does deliver long-term, cost-effective storage for inactive backup and archive data. Once the backup and archive data streams are scheduled and running, they can use relatively low bandwidth as long as the data is deduplicated on-site before transport. (And as long as it does not have to be rehydrated before upload, which is another story.) This alone helps save on-premises storage capacity and can replace off-site tape vaulting.
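To see why on-site deduplication keeps transfer bandwidth low, consider a minimal sketch of source-side deduplication (an illustration only, not any particular vendor's implementation): data is split into chunks, each chunk is fingerprinted, and only chunks the cloud has not already stored are uploaded. The `cloud_index` set and `upload_chunk` callable here are hypothetical placeholders for the cloud service's chunk catalog and transfer call.

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB fixed-size chunks (illustrative)

def backup_file(path, cloud_index, upload_chunk):
    """Upload only the chunks the cloud has not stored yet.

    cloud_index  -- set of chunk fingerprints already held in cloud storage
    upload_chunk -- callable(fingerprint, data) that transfers one chunk
    """
    manifest = []                      # ordered fingerprints rebuild the file
    sent = skipped = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            fp = hashlib.sha256(data).hexdigest()
            manifest.append(fp)
            if fp in cloud_index:      # duplicate chunk: no upload needed
                skipped += 1
            else:
                upload_chunk(fp, data)
                cloud_index.add(fp)
                sent += 1
    return manifest, sent, skipped
```

In steady state most chunks are already present in the cloud, so nightly transfers shrink to roughly the changed data plus metadata.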
But cloud storage users want more. They want the cost and agility advantages of the public cloud without incurring the huge expense of building one. They want to keep using the public cloud for cost-effective backup and archive, but they also want to use it for more active – i.e., primary – data. This is especially true for workloads with rapidly growing data sets that age quickly, such as collaboration and file shares. Some of this data needs to reside locally, but the majority can be moved, or tiered, to public cloud storage.
What does the cloud need in order to serve this enterprise wish list? Above all, it needs to make public cloud storage an integral part of the on-premises primary storage architecture. That requires intelligent and automated storage tiering, high performance for baseline uploads and continual snapshots, no geographical lock-in, and a central storage management console that integrates cloud and on-premises storage.
Hybrid cloud storage, or HCS, meets this challenge. HCS turns the public cloud into a true active storage tier for less active production data that is not yet ready to be put out to backup pasture. Hybrid cloud storage integrates on-premises storage with public cloud storage services: not as another backup target, but as integrated storage infrastructure. The storage system uses both the on-premises array and scalable cloud storage resources for primary data, extending both the data and its protection to a cost-effective cloud storage tier.
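As a simplified illustration of how such a tier can be driven (a sketch under an assumed access-age policy, not StorSimple's actual algorithm), a cloud-integrated system can track when each data item was last accessed and mark cold items for migration to the cloud tier while keeping the hot working set on the local array:

```python
from datetime import datetime, timedelta

COLD_AFTER = timedelta(days=30)   # illustrative threshold, tunable per policy

def plan_tiering(items, now=None):
    """Split items into a local (hot) set and a cloud (cold) set.

    items -- iterable of (name, last_access: datetime, size_bytes)
    """
    now = now or datetime.utcnow()
    keep_local, move_to_cloud = [], []
    for name, last_access, size in items:
        if now - last_access > COLD_AFTER:
            move_to_cloud.append((name, size))   # candidate for the cloud tier
        else:
            keep_local.append((name, size))      # stays on the on-premises array
    return keep_local, move_to_cloud
```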
Microsoft’s innovative and broad set of technologies enables a true, integrated solution for hybrid cloud storage for business and government organizations – not just a heterogeneous combination of private cloud and public cloud storage offerings. Composed of StorSimple cloud-integrated storage and the Windows Azure Storage service, HCS from Microsoft serves demanding enterprise storage environments well, enabling customers to realize significant data management efficiencies in their Microsoft applications and Windows and VMware environments.
This paper discusses how the Microsoft solution for hybrid cloud storage, consisting of Windows Azure and StorSimple, differs from traditional storage, presents best practices for leveraging it, and shares real-world results from multiple customer deployments.
Server virtualization brings a vast array of benefits, ranging from direct cost savings to indirect improvements in business agility and client satisfaction. But for the IT investment decision-maker, it’s those measurable “hard” costs that matter most. Fortunately, virtualized environments deliver a quantifiably lower Total Cost of Ownership (TCO) than legacy physical infrastructures. Since every organization faces the economic imperative to minimize TCO, it’s easy to understand why virtualization has spread across such a large share of modern data centers. Virtualization today is a proven, cost-effective, and nearly ubiquitous IT solution.
But the further challenge for IT investors is to choose the virtualization solution that delivers the “biggest bang for the buck”. Unfortunately, the traditional cost-per-infrastructure metrics (cost per server CPU, per storage GB, and so on) used to judge physical hardware are not sufficient buying criteria in a virtual environment. In a virtualized, cloud-oriented world, cost per application comes closer to capturing the true value of a virtual infrastructure investment. For example, the more virtual machines (VMs) that can be hosted within the same size investment, the lower the cost per application. A key comparison metric between virtualization solutions is therefore VM density. All other things being equal (e.g., applications, choice of hypervisor, specific allocations and policies), an infrastructure supporting a higher VM density provides better value.
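A back-of-the-envelope calculation, using purely hypothetical figures, shows why VM density rather than raw hardware cost drives cost per application:

```python
def cost_per_vm(infrastructure_cost, vm_density, hosts):
    """Cost per application, approximated here as cost per hosted VM."""
    total_vms = vm_density * hosts
    return infrastructure_cost / total_vms

# Two hypothetical solutions with the same $500,000 infrastructure spend
# across 10 hosts; only the achievable VM density differs.
print(cost_per_vm(500_000, 20, 10))   # 20 VMs/host -> $2,500 per VM
print(cost_per_vm(500_000, 30, 10))   # 30 VMs/host -> ~$1,667 per VM
```

With the same spend, the denser solution hosts 50% more VMs and cuts the per-application cost by a third.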
As virtualization deployments grow to include active production workloads, they greatly stress traditional IT infrastructure. The virtualization hypervisor “blends up” client IO workloads and concentrates IO-intensive activities (e.g., migration, snapshots, backups), with the result that the underlying storage often presents the biggest constraint on effective VM density. It is therefore critically important, when selecting storage for virtual environments, to get past general marketing claims and focus on validated, proven VM density.
Backup applications with large user bases have been vendor cash cows because their customers are reluctant to change such deeply embedded products. As long as the backup worked, it was out of sight and out of mind.
But the field is rapidly changing.
The push to virtualize applications left traditional backup foundering. In the virtual arena it suffered from heavy operational overhead at the server, application host, network, and storage levels. The growing number of VMs and volume of virtualized data had a serious impact on storage resources. For example, each VMDK file represented an entire VM file system image, typically at least 2GB in size. These file sizes created problems for bandwidth, monitoring, and storage resources.
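Some rough arithmetic (hypothetical numbers only) shows how quickly full-image backups outgrow the network and the backup window compared with moving only changed blocks:

```python
VMS = 500              # hypothetical VM count
IMAGE_GB = 40          # average VM image size in GB (assumption)
DAILY_CHANGE = 0.03    # ~3% of blocks change per day (assumption)

full_image_backup = VMS * IMAGE_GB                   # 20,000 GB moved per night
changed_blocks_only = VMS * IMAGE_GB * DAILY_CHANGE  # 600 GB moved per night

print(full_image_backup, changed_blocks_only)
```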
In response, some vendors developed innovative virtual backup products that made virtual backup much more resource-efficient and easier to manage. Increased performance shrank backup windows, delivered effective recovery point and recovery time objectives (RPO and RTO), simplified the backup process, and improved recovery integrity. These tools changed the virtual data protection landscape for the better.
However, many of these startups offered limited solutions that supported only a single hypervisor and a handful of physical machines. This left virtual and physical environments essentially siloed – not to mention the problem of multiple point products creating even more silos within both environments. Managing cross-domain data protection with a variety of point products became inefficient and costly for IT.
Traditional backup makers also scrambled to add virtualization backup support and succeeded to a point, but only to a point. Their backup code bases were written well before the mass adoption of the cloud and virtualization, and retrofitting existing applications only went so far toward providing scalability and integration. Nor could they solve a problem that has plagued IT since the early days of backup tape: restore assurance. It has always been risky to find out after the fact that the backup you depended on is not usable for recovery, and with data sets doubling every 18 months, the risk of data loss has risen significantly.
More modern backup solves some of these problems but creates new ones. Modern backup offers automated scheduling, manual operations, policy setting, multiple types of backup targets, replication schemes, application optimization, and more. These are useful features, but they are also costly and resource-hungry: roughly 30% of storage costs go to IT operations alone. Another problem with these new features is their complexity. It is difficult to optimize and monitor the data protection environment, and by conservative estimates about 20% of backup or recovery jobs fail.
In addition, most data protection products offer average-to-poor awareness of, and integration with, their tape and disk backup targets. This makes it difficult to set and test RTOs and RPOs for business applications. The last thing IT wants is to cripple application recovery, but it is challenging to set meaningful RTO and RPO targets across multiple environments and applications, and extremely difficult to test them.
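One reason testing is so hard is that the achieved recovery point has to be measured per application against its target. Here is a minimal sketch of such a check, using invented application names and timestamps rather than any product's API:

```python
from datetime import datetime, timedelta

def rpo_violations(apps, now=None):
    """Flag applications whose last good recovery point is older than their RPO.

    apps -- dict of app name -> (rpo: timedelta, last_good_backup: datetime)
    """
    now = now or datetime.utcnow()
    return [name for name, (rpo, last_good) in apps.items()
            if now - last_good > rpo]

# Hypothetical example: two applications, one out of compliance.
now = datetime(2013, 6, 1, 12, 0)
apps = {
    "ERP":        (timedelta(hours=1), datetime(2013, 6, 1, 11, 45)),
    "FileShares": (timedelta(hours=4), datetime(2013, 5, 31, 20, 0)),
}
print(rpo_violations(apps, now))   # ['FileShares']
```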
Even newer VM backup products are inadequate for modern enterprise data centers that run critical applications across physical and virtual layers. Combined with complex, mixed IT environments, this presents a very serious challenge for IT professionals charged with protecting data and application productivity.
What we are seeing now is next generation data protection that protects both virtual and physical environments in one flexible platform. Dell AppAssure is a leading pioneer in this promising field. AppAssure is rewriting the data protection playbook, moving from limited point products to a highly agile data center protection platform with continual backup, instantaneous restore, backup assurance, and a host of additional benefits.
The job of a storage administrator can sometimes be a difficult and lonely one. Administrators must handle a broad set of responsibilities, encompassing all aspects of managing their arrays and keeping up with user demands. And yet, flat IT budgets mean administrators are spread thin, with limited time to manage storage through its lifecycle, let alone improve and optimize storage practices and services.
In most organizations, the storage lifecycle is managed manually, as a complex and disjointed set of activities. Maintenance and support tend to be highly reactive, forcing administrators to play “catch up” each time a storage problem occurs. Monitoring and reporting rely on complex tools and large amounts of data that are difficult to interpret and act upon. Forecasting and planning are more art than science, leading administrators to overprovision to be on the safe side. These various lifecycle activities are seldom connected and inherently inefficient, and fail to provide administrators with the insight they need to anticipate issues and develop best practices. This, in turn, can put system availability and performance at risk, while reducing IT productivity.
Fortunately, one innovative vendor, Nimble Storage, has developed a powerful, data science-driven approach that promises to transform the storage lifecycle experience. Based on deep data collection, intelligent and predictive analytics, and automation built on storage and application expertise, Nimble InfoSight streamlines the storage lifecycle, providing administrators with the insights needed to optimize their arrays while also increasing their productivity.

InfoSight collects and analyzes more than 30 million data points each day from every installed Nimble Storage array worldwide, and then makes the resulting intelligence available to both Nimble engineers and customers. InfoSight’s automated analysis helps to anticipate and prevent technical problems proactively, significantly reducing the support burden on administrators. InfoSight also gives administrators an intuitive, dashboard-driven portal into the performance, capacity utilization, and data protection of their arrays, enabling them to monitor array operations across multiple sites and to plan better for future needs. By streamlining and informing key activities across the storage lifecycle, InfoSight simplifies and enhances day-to-day administrative tasks such as support, monitoring, and forecasting, while freeing administrators to focus on more important initiatives.
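As a simplified illustration of the kind of forecasting this telemetry makes possible (a sketch only, not InfoSight's actual analytics), fitting a linear trend to daily capacity samples yields a rough estimate of how many days remain before an array fills up:

```python
def days_until_full(daily_used_gb, capacity_gb):
    """Fit a least-squares line to daily usage samples and project when
    usage reaches capacity. Returns None if usage is flat or shrinking."""
    n = len(daily_used_gb)
    if n < 2:
        return None
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(daily_used_gb) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, daily_used_gb)) / denom
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    return (capacity_gb - intercept) / slope - (n - 1)

# Hypothetical 7-day trend on a 10 TB array growing ~50 GB/day.
samples = [8000, 8050, 8100, 8150, 8200, 8250, 8300]
print(round(days_until_full(samples, 10_000)))   # roughly 34 days
```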
To put a human face on InfoSight intelligence, Nimble Storage has also unveiled a new user community. The community allows users to connect and share ideas and resources via discussion forums, knowledge bases, and social media channels. It will enable the company’s large and loyal customer base to write about and share their experiences and insights with each other, as well as with prospective users. Together, InfoSight and the Nimble community give storage administrators unprecedented access to anonymized installed base data and peer expertise, enabling them to stay on top of their game and get more out of their arrays.
In this profile, we’ll examine the challenges administrators typically face on a day-to-day basis, and then take a closer look at InfoSight capabilities, and how they address these issues. We’ll then learn how two Nimble customers have benefited from InfoSight in several important ways. Finally, we’ll briefly examine the Nimble community, and discuss how these two initiatives together are empowering administrators through a combination of shared user data and insights.
In many ways, Information Technology (IT) has become the centerpiece of business operations across the globe. This dynamic is both an opportunity and a threat to IT organizations. On one hand, IT has a very important seat at the table as businesses decide where to invest or deploy new offerings and services. On the other hand, IT organizations now become responsible for ensuring that these business services, and the data that drives them, are always available.
To ensure availability, IT must have a comprehensive business continuity plan in place, especially for critical operations that the business requires. However, business critical services are no longer just a matter of managing a single application or workload running on a solitary server. Instead, business critical services are often sets of interwoven components made up of multiple physical and virtual servers that depend upon one another. Seldom does a business critical application stand alone, or act with complete independence from other systems in the data center.
This complexity introduces challenges and compromises that the business is ill prepared to understand or recognize. Often, continuity issues are not recognized until it is too late. Many systems had a more manageable approach to continuity in the purely physical world. Now, with the agility that virtualization introduces, viewing, controlling, and protecting the complete business service becomes a larger challenge, especially when that service is made up of multiple physical and virtual components. Given that business critical applications run across both physical and virtual infrastructure, IT needs a better capability for viewing and protecting the entire service being delivered to the business.
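To make the dependency problem concrete, a business service can be modeled as a dependency graph and its components brought back in an order that respects those dependencies. The sketch below uses an invented "order entry" service; real continuity planning must also account for data consistency, networking, and recovery-time targets.

```python
from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Hypothetical "order entry" business service: each component lists what it
# depends on. Some run on physical servers, some are VMs; the graph does not care.
order_entry = {
    "web_frontend_vm":   {"app_server_vm"},
    "app_server_vm":     {"database_physical", "auth_service_vm"},
    "auth_service_vm":   {"database_physical"},
    "database_physical": set(),
}

# Recovery (or failover) must restore dependencies before dependents.
print(list(TopologicalSorter(order_entry).static_order()))
# ['database_physical', 'auth_service_vm', 'app_server_vm', 'web_frontend_vm']
```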
In this solution brief, we’ll look at what a business service is composed of, and at the challenges and options for business continuity across disparate physical and virtual infrastructure.
The answer for these organizations is IBM SmartCloud Storage Access, which lets them turn on-premises storage systems, including IBM Scale Out Network Attached Storage (SONAS) and IBM Storwize V7000 Unified Storage, into powerful private clouds. The cloud-based storage services created by IBM SmartCloud Storage Access combine the scale-out and unified features of the underlying storage systems into highly flexible and manageable cloud-based storage.