In this paper, we’ll briefly review the challenges to assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome these challenges to deploy appropriate end-to-end infrastructure and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leader in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. This new solution is unique both in the non-linear predictive modeling it leverages to produce application-specific performance KPIs and in its comprehensive span of visibility and management, which extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.
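To make the “non-linear” point concrete: the relationship between resource utilization and response time is famously non-linear, which is why simple linear trend lines mislead capacity planners. The Python sketch below plots the classic open-queueing approximation R = S / (1 − U). It is a textbook illustration of the kind of curve a predictive model must capture, not a depiction of TeamQuest’s proprietary algorithms.

```python
# Classic non-linear utilization/response-time curve (open queueing
# approximation). Illustrative only -- not TeamQuest's actual model.

def response_time(service_time_ms: float, utilization: float) -> float:
    """Approximate response time R = S / (1 - U) for a single resource."""
    if not 0.0 <= utilization < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

# A disk with a 5 ms service time: response time grows slowly at first,
# then "hockey-sticks" as utilization approaches saturation.
for u in (0.10, 0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:4.0%} -> response time {response_time(5.0, u):7.1f} ms")
```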
Cloud-based object architecture offers big benefits for storing unstructured data: active archiving, global access to data, fast application development, and much lower cost than the high computing and data protection costs of on-premises NAS. EMC has engineered Atmos to provide these capabilities and many more as a massively scalable, distributed cloud-based system. In this Technology in Brief we will examine the fast-changing world of archiving and development on the web, and why object-based storage is the best fit for these monumental tasks.
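To see why developers find object storage so productive, consider the minimal sketch below: objects are written and read with plain HTTP verbs against URL-addressable keys, so any authorized application anywhere in the world uses the same simple interface. The endpoint and key are hypothetical placeholders for illustration, not the actual Atmos REST API.

```python
# Minimal sketch of the object-storage development model: unstructured
# data is stored and fetched with simple HTTP verbs rather than through
# a filesystem mount. The endpoint and key below are hypothetical
# placeholders, not the actual EMC Atmos REST API.
import requests

ENDPOINT = "https://objects.example.com"    # hypothetical object store
KEY = "archive/2012/report-001.pdf"         # URL-addressable object key

# PUT: create (or replace) an object under the key.
resp = requests.put(f"{ENDPOINT}/{KEY}",
                    data=b"...unstructured content...",
                    headers={"Content-Type": "application/pdf"})
resp.raise_for_status()

# GET: any authorized client, anywhere, retrieves the same URL.
blob = requests.get(f"{ENDPOINT}/{KEY}")
print(len(blob.content), "bytes retrieved")
```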
Why do so many virtualization implementations stall when it comes to mission-critical applications? Why do so many important applications still run on dedicated hardware? In one word: performance. Virtualization technologies have proven incredibly powerful in helping IT deliver agile, “idealized” services while efficiently sharing expensive physical resources. But mission-critical applications bring above-average requirements for performance service quality that can greatly challenge virtualized hosting.
Maintaining good performance (as well as availability and other service qualities) requires solid systems management. Hypervisor management solutions like VMware’s vCenter Operations Management Suite give virtualization administrators a significant advantage by centralizing and simplifying many traditionally disparate management tasks, including fundamental performance monitoring for system health and component utilization. Yet when it comes to assuring performance for mission-critical applications like transactional databases and email, the kinds of apps that depend heavily on resources from multiple IT domains, purely hypervisor-centric solutions can fall short. Solving complex cross-domain performance issues like resource contention and virtual-physical competition, and assuring sufficient headroom for good performance, can require both deeper and wider analysis capabilities.
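As a concrete illustration of the headroom problem: “free” capacity is not the same as capacity that can be consumed while still meeting a response-time target, because latency climbs non-linearly with utilization. The Python sketch below inverts the simple queueing relationship R = S / (1 − U) to estimate usable headroom against a latency ceiling; the service time and SLA values are illustrative, not drawn from any vendor’s tool.

```python
# Minimal sketch: usable headroom is the utilization you can add before a
# response-time ceiling is breached, not simply (100% - current utilization).
# Values are illustrative, not taken from any vendor's product.

def max_utilization(service_time_ms: float, latency_ceiling_ms: float) -> float:
    """Invert R = S / (1 - U) to find the highest utilization meeting the SLA."""
    return max(0.0, 1.0 - service_time_ms / latency_ceiling_ms)

current_util = 0.60                     # measured utilization
u_max = max_utilization(5.0, 25.0)      # 5 ms service time, 25 ms SLA ceiling
print(f"raw headroom:    {1.0 - current_util:.0%}")
print(f"usable headroom: {max(0.0, u_max - current_util):.0%}")
```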
In this paper we’ll first review a high-level management perspective on performance and capacity to explore what it takes to support mission-critical application performance service levels. We’ll examine the management strengths of the best-known hypervisor management solution, VMware’s vCenter Operations Suite, to understand the scope and limitations of its performance and capacity management capabilities. Next, we will look at how the uniquely cross-domain (storage and server, virtual and physical) model-based performance management capabilities of NetApp’s OnCommand Balance complement a solution like vCenter Operations. The resulting combination helps the virtualization admin and/or storage admin become more proactive and ultimately elevate performance management enough to reliably virtualize mission-critical applications.
If you are evaluating big data storage solutions for your enterprise or mid-sized company, Taneja Group has identified five strategic questions that you should ask your vendors during the evaluation process. In this Technology in Depth, we’ll review these five questions and look at one specific solution in the market, DataDirect Networks’ (DDN) GRIDScaler, and how it democratizes Big Data.
Over the past few years, server virtualization has rapidly emerged as the de facto standard for today’s data center. But the path has not been an easy one, as server virtualization has brought with it a near upheaval in traditional infrastructure integrations.
From network utilization to data backup, almost no domain of the infrastructure has been untouched, but by far the deepest challenges have revolved around storage. It may well be that no single infrastructure layer has ever posed as great a challenge to a single IT initiative as storage has posed to virtualization.
After experiencing wide-reaching initial rewards, IT managers have aggressively expanded their virtualization initiatives, and in turn the virtual infrastructure has grown faster than any infrastructure technology ever before deployed. But with rapid growth, demands on storage have quickly exceeded levels any business could have anticipated, requiring performance, capacity, control, adaptability, and resiliency like never before. For organizations facing these scale-driven virtualization storage challenges, it quickly becomes clear that storage cannot be delivered in the same old way; it must be delivered in a more utility-like fashion than ever before.
What do we mean by utility-like? Storage must be highly efficient; more easily presented, scaled, and managed; and delivered with consistent, acceptable performance and reliability.
In the face of these challenges, storage has advanced by leaps and bounds, but differences still remain between products and vendors. This is not a matter of performance, or even purely of interoperability, but rather of suitability over time in the face of growing and constantly changing virtual infrastructures; these changes don’t solely revolve around the number and types of workloads, but also include a constantly evolving virtualization layer. A choice is still routinely made today, typically at the time of storage system acquisition, between iSCSI, Fibre Channel (FC), and NFS. While the customer often sees this as a choice between block and file, there are substantial differences between these block and file architectures, and even between iSCSI and FC, that will define the process of presenting and using storage and determine the customer’s efficiency and scale as they move forward with virtualization. Even minor differences will have far-reaching effects and ultimately determine whether an infrastructure can ever be operated with utility-like efficiency.
In this Technology in Depth report, Taneja Group set out to evaluate these protocol choices and determine which best fits the requirements of the virtual infrastructure. We built our criteria with the expectation that storage is about much more than just performance, interoperability, or up-front ease of use: factors that are too often bandied about by vendors who conduct their own assessments while using their own alternative offerings as proxies for the competition. Instead, we defined a set of criteria that we believe are determinative in how customer infrastructure can deliver, adapt, and last over the long term.
We summarize these characteristics as five key criteria. They are:
• Efficiency – in capacity and performance
• Presentation and Consumption
• Storage Control and Visibility
• Scalable and Autonomic Adaptation
• Resiliency
These are not inconsequential criteria, as a key challenge before the business is effectively realizing its intended virtualization gains as the infrastructure scales. Moreover, our evaluation is not a matter of performance or interoperability, as the protocols themselves earn comparable marks there. Rather, our assessment is a broader consideration of storage architecture suitability over time in the face of a growing and constantly changing virtual infrastructure. As we’ll discuss, mismatched storage can create a number of inefficiencies that defeat virtualization gains and create significant problems for the virtual infrastructure at scale, and these criteria highlight how well each storage protocol choice aligns with the intended goals of virtualization.
What did we find? Block storage solutions carry significant advantages today. Key capabilities such as VMware API integrations, along with approaches to scaling, performance, and resiliency, make a difference. While NAS/NFS may offer advantages in initial deployment, architectural and scalability characteristics suggest this is a near-term advantage that does not hold up in the long run. Meanwhile, between block-based solutions, we see the differences today surfacing mostly at scale. At mid-sized scale, iSCSI may have a serious cost advantage, while “converged” form factors may let the mid-sized business or enterprise scale with ease well into the future. But for businesses facing serious IO pressure, or looking to build an infrastructure for long-term use that can serve an unexpected multitude of needs, FC storage systems deliver utility-like storage with a level of resiliency that likely won’t be matched without the FC SAN.
Five nines availability is not restricted to the enterprise. Businesses of all sizes, from cost-conscious mid-sized companies to large enterprises, frequently need five nines availability, and at the same time they require cost-effective CAPEX and OPEX. Dell has already stepped into this market with the five nines Compellent Storage Center. Dell’s test results are impressive and illustrate how seriously the company takes the need for cost-effective yet highly available storage for businesses.
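For a sense of scale, “five nines” (99.999%) availability allows only about five minutes of downtime per year. The quick calculation below makes the targets concrete.

```python
# Allowed downtime implied by an availability target ("the nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    downtime_min = (1.0 - availability) * MINUTES_PER_YEAR
    print(f"{nines} nines ({availability:.3%}): "
          f"{downtime_min:6.1f} minutes of downtime per year")
```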