Items Tagged: Performance
I've been watching virtual environment performance optimization closely, and two young vendors have shown me interesting tools lately.
Over at InfoWorld, Dave Marshall and I have been talking about the current stall in virtualization uptake and how we can break through to the next wave of virtualization.
WysDM 4 - Data Protection Compliance and Best Practices
Compliance issues surround data protection practices in enterprises today, yet few organizations have proactively adapted their data protection practices to address these needs. In this product profile, we examine the data protection implications of different regulations and look at how organizations overlook their data protection practices when it comes to compliance. As one solution, WysDM 4 can provide a best-practices data protection management framework and serve as a platform for holistically managing data protection and ensuring that it is compliant, performant, and optimized.
Maximizing Desktop Virtualization Success with VDI-optimized Dell EqualLogic Hybrid Arrays
VDI has often promised more than it delivered, due to stubborn complexity, performance, and cost challenges. Chief among these challenges have been the high up-front capital costs and subsequent inefficiencies of the storage platforms deployed to support it. Building on deep integration with VMware’s vSphere 4.1 platform via the vStorage APIs, Dell has set its sights squarely on VDI and aims to break down both the cost-of-acquisition and TCO (CapEx and OpEx) barriers that have plagued VDI ROI in the past. Dell has added intelligent workload tiering and new hybrid SSD/SAS arrays to its EqualLogic PS Series family, and we analyze a recent performance benchmark to evaluate in detail how Dell’s innovations reduce complexity, improve performance, and lower the cost of VDI.
The Kaminario K2: Transforming the Costs and Capabilities of Storage Performance
In this product profile, we’ll take a look at what we think is required to deliver a true enterprise-class storage foundation for improving application performance, and one vendor's distinctly different approach to delivering a high performance storage solution. That solution, the Kaminario K2, delivers a compelling approach for those enterprises in need of faster storage for greater application performance.
The Cost of Performance
What’s an IO worth to you? Is it worth more than a gigabyte? Less? That’s a hard question for many IT and business professionals to begin to consider, yet it is one we often see bandied about. The question certainly has merit; it just isn’t easily answered. In this industry article, Taneja Group takes a look at just how big the cost of performance is, and with that understanding in mind, we’ll look at two examples of new solutions and what they suggest about a changing way to get cost-effective performance inside the data center walls.
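The IO-versus-gigabyte question above can be made concrete with simple unit-cost arithmetic. The sketch below compares cost per GB against cost per IOPS for two hypothetical arrays; all prices and specs are illustrative assumptions, not figures from the article.

```python
# Illustrative capacity-cost vs. performance-cost comparison.
# All prices and specs below are made-up assumptions, not vendor figures.

def cost_metrics(price_usd, capacity_gb, iops):
    """Return (cost per GB, cost per IOPS) for a storage system."""
    return price_usd / capacity_gb, price_usd / iops

# A disk-based array: cheap capacity, expensive IOPS.
disk_per_gb, disk_per_iops = cost_metrics(
    price_usd=50_000, capacity_gb=100_000, iops=10_000)

# A flash-based array: expensive capacity, cheap IOPS.
flash_per_gb, flash_per_iops = cost_metrics(
    price_usd=80_000, capacity_gb=10_000, iops=400_000)

print(f"disk:  ${disk_per_gb:.2f}/GB, ${disk_per_iops:.2f}/IOPS")
print(f"flash: ${flash_per_gb:.2f}/GB, ${flash_per_iops:.2f}/IOPS")
```

With these assumed numbers, the disk array costs $0.50/GB but $5.00 per IOPS, while the flash array flips the ratio: $8.00/GB but $0.20 per IOPS. Which metric dominates your bill depends entirely on whether your workload is capacity-bound or performance-bound.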
Maximizing Database Performance With Dell EqualLogic Hybrid Arrays
Today’s combination of rapidly-accelerating demand for data and rapidly-consolidating datacenter infrastructure makes choosing the right storage for each of your business applications more important—and more difficult—than ever. In our view, it’s time more of this burden is taken on by the SAN itself. In other words, it’s time for more SAN intelligence. The intelligent SAN should optimize all available storage resources—automatically. In this profile, we explore how dynamic, multi-tiered OLTP workloads test the limits of traditional manual storage tiering strategies, and further strengthen the case for automated tiering on the SAN itself. We then review Dell’s internal benchmark test results and speak to Carnival Cruise Lines, an EqualLogic customer, in order to evaluate how Dell’s hybrid SSD/SAS arrays are delivering higher performance and lower overhead both in the lab and in the field.
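The automated-tiering idea described above can be sketched in a few lines: track how often each block is accessed, then keep the hottest blocks on the fast (SSD) tier and the rest on the capacity (SAS) tier. This is a toy illustration of the general technique; real arrays do this continuously and at much finer granularity, and the names here are illustrative, not Dell's implementation.

```python
# Toy sketch of automated storage tiering: promote the most-accessed
# blocks to the SSD tier. Illustrative only, not any vendor's algorithm.
from collections import Counter

class TieringSketch:
    def __init__(self, ssd_slots):
        self.ssd_slots = ssd_slots   # how many blocks fit on the SSD tier
        self.heat = Counter()        # access count per block id

    def record_io(self, block_id):
        self.heat[block_id] += 1

    def rebalance(self):
        """Return (ssd_blocks, sas_blocks) after promoting the hottest blocks."""
        hottest = [b for b, _ in self.heat.most_common(self.ssd_slots)]
        ssd = set(hottest)
        sas = set(self.heat) - ssd
        return ssd, sas

tiers = TieringSketch(ssd_slots=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    tiers.record_io(block)
ssd, sas = tiers.rebalance()
print(ssd, sas)  # blocks 'a' and 'b' are promoted; 'c' stays on SAS
```

The point the profile makes is that this bookkeeping happens inside the array, automatically and continuously, instead of relying on an administrator to guess which LUNs deserve flash.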
The case for Intelligent Storage (Dell)
In the past decade, a volatile business climate and a dynamic technology landscape have combined to raise the pressures on the enterprise datacenter, and especially on the storage infrastructure that underlies it. The need to adapt to such constantly-shifting demands and technology developments has sorely tested the limits of existing networked storage solutions. The virtualization mega-trend has dramatically changed the way information is sized, controlled, and protected. But traditional networked storage solutions are too often rigid, complex, and inefficient. Taneja Group has identified a way forward. We have collected the essential elements of the storage solutions needed for today’s new IT realities under the term “intelligent storage.” In this profile, we define what we mean by storage intelligence, whether that storage is file or block, and whether its architecture is SAN, NAS, or unified. We then examine how Dell is delivering this intelligence with its EqualLogic storage line. Wherever you are in your datacenter evolution, we think it’s time to examine whether your storage has the intelligence to carry you to your end goals.
NexGen – Storage Control for the Virtual Data Center
In this Taneja Group Product Profile, we examine the challenges facing the data center architect when dealing with consolidating, ever-denser, next generation workloads. Clearly, the most difficult challenges show up in the storage layer. With this in mind, a new generation of storage array providers is coming to market, aiming to scale and provide more performance in a denser footprint than ever before. But it takes more than just throwing IO at the problem, and NexGen has a unique approach that is poised to go further than its predecessors in solving problems around enterprise storage.
The HP Storage Portfolio – Building the Foundation for the Virtualized Infrastructure
Over the past couple of years, HP has executed an impressive number of storage acquisitions, and is systematically innovating around each of three key technologies – HP 3PAR, P4000, and their deduplicating StoreOnce. Perhaps nowhere are the synergies more apparent than in the virtual infrastructure. In this Opinion, we’ll turn a critical eye toward these synergies, and render the Taneja Group perspective on whether HP is on the right path.
Selecting Exchange Storage Infrastructure – 3 Critical Questions for the Selection of Microsoft Exchange Storage Infrastructure (IBM)
Building ideal Microsoft Exchange storage infrastructures has always been an exercise in uncertainty and complexity. Uncertainty in terms of unknown future growth in both capacity and performance demands, and complexity because Microsoft Exchange seems to sprout more storage-demanding features and capabilities with each new release. Worse yet, Microsoft itself can sometimes exacerbate the situation, by seemingly injecting Exchange with more storage features while speaking in veiled terms about the usefulness of external storage versus direct attached storage. To be certain, the demands from Exchange mandate something better than direct attached disk (DAS). In fact, Exchange demands more than run-of-the-mill networked storage (NAS or SAN). In this solution profile, we’ll examine the fundamental pressures found in Exchange environments, and take a look at why it takes more than just capacity, and more than just performance.
Dell Compellent: Fluid Storage for a Virtualized World
The enterprise datacenter was a very different place just a few years ago. Over the last decade, several macro trends have converged: rapid server consolidation enabled by virtualization, dramatic data proliferation and the rise of “big data,” solid-state drive technology advances, and an increasingly mobile and demanding workforce. In short, IT continues to consolidate, while business becomes more distributed. This tension drives the search for greater efficiency now at the heart of every IT decision. And nowhere is this pressure felt more acutely than in the storage layer. Virtualized and consolidated workloads create new types of storage I/O contention, which are costly to troubleshoot and repair. Storage costs continue to rise because capacity planning is harder in today’s dynamic business environment. Over time, performance limitations, wasted capacity, and complex operations eat into the bottom line and increase lifetime storage TCO. These realities drive the need for more intelligence in the storage layer. In this technology brief, we explore the ways in which Dell Compellent’s Storage Center is delivering such intelligence today.
Dell EqualLogic FS7500: Unified Storage Simplifies File Sharing And Accelerates Virtualization
With the introduction of the FS7500 NAS appliance for the EqualLogic PS Series, Dell customers now have a unified storage option to further reduce management overhead and improve efficiency. All too often, companies have been forced to deploy different storage platforms for different needs: NAS for file-based applications and user file shares, and SAN for block-based applications and high-performance virtualized workloads. The FS7500 changes the game. Your unified storage solution should let you easily scale your file shares to handle today’s tremendous growth in unstructured data. It should also accelerate and simplify your virtualization efforts by giving you the freedom to choose the best storage protocol for each virtual workload based on your unique application requirements, skill sets, and existing storage investments. In this technology brief, we explore how Dell’s customers can benefit from the addition of scale-out NAS to the leading scale-out iSCSI SAN storage family.
Virtual desktops offer some attractive benefits, but storage systems that aren’t up to the task can make it hard to realize those benefits.
NexGen's ioControl 2.1 embeds performance improvements, service level management and reporting capabilities into VMware vCenter.
An effective QoS implementation helps tune data storage to meet the specific needs of applications. New tools that offer more automation are emerging to help.
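One common building block behind storage QoS of the kind mentioned above is a token bucket that caps IOPS per application while allowing short bursts. The sketch below illustrates the general technique only; it is not NexGen's ioControl implementation, and the rate and burst values are assumptions.

```python
# Generic token-bucket IOPS limiter, a common storage-QoS building block.
# Rate/burst values are illustrative assumptions, not any product's defaults.

class TokenBucket:
    def __init__(self, iops_limit, burst, now=0.0):
        self.rate = iops_limit   # tokens replenished per second
        self.capacity = burst    # max tokens that can accumulate
        self.tokens = burst      # start with a full burst allowance
        self.last = now

    def allow_io(self, now):
        """Refill based on elapsed time; admit the I/O if a token is available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(iops_limit=100, burst=5)
# Six back-to-back I/Os at the same instant: the burst allowance admits
# the first five, then the sixth is throttled until tokens refill.
admitted = [bucket.allow_io(now=0.0) for _ in range(6)]
print(admitted)
```

A service-level policy then becomes a matter of assigning each application its own bucket, which is how per-app IOPS guarantees and ceilings can coexist on shared storage.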
Increasing Virtualization Velocity with NetApp OnCommand Balance
Why do so many virtualization implementations stall out when it comes to mission-critical applications? Why do so many important applications still run on dedicated hardware? In one word – performance. Virtualization technologies have proven incredibly powerful in helping IT deliver agile “idealized” services, and doing so by efficiently sharing expensive physical resources. But mission-critical applications bring above-average requirements for performance service quality that can greatly challenge virtualized hosting.
Maintaining good performance (as well as availability and other service qualities) requires solid systems management. Hypervisor management solutions like VMware’s vCenter Operations Management Suite provide a significant advantage to virtualization administrators by centralizing and simplifying many traditionally disparate management tasks, including fundamental performance monitoring for system health and component utilization. Yet when it comes to assuring performance for mission-critical applications like transactional databases and email – the kinds of apps that depend heavily on resources from multiple IT domains – straight hypervisor-centric solutions can fall short. Solving complex cross-domain performance issues like resource contention, virtual-physical competition, and assuring sufficient “good performance” headroom can require both deeper and wider analysis capabilities.
In this paper we’ll first review a high-level management perspective of performance and capacity to explore what it takes to support mission-critical application performance service levels. We’ll examine the management strengths of the best-known hypervisor management solution – VMware’s vCenter Operations Suite – to understand the scope and limitations of its performance and capacity management capabilities. Next, we will look at how the uniquely cross-domain (storage and server, virtual and physical) model-based performance management capabilities of NetApp’s OnCommand Balance complement a solution like vCenter Operations. The resulting combination helps the virtualization admin and/or storage admin become more proactive and ultimately elevate performance management enough to reliably virtualize mission-critical applications.
Optimizing Performance Across Systems and Storage: Best Practices with TeamQuest
In this paper, we’ll briefly review the challenges to assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome these challenges to deploy appropriate end-to-end infrastructure and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leading vendor in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. This new solution is unique in both its non-linear predictive modeling, leveraged to produce application-specific performance KPIs, and its comprehensive span of visibility and management that extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.
One of the hardest challenges for an IT provider today is to guarantee a specific level of response-time performance to applications. Performance is absolutely mission-critical for many business applications, which has often led to expensively over-provisioned and dedicated infrastructures. Unfortunately, broad technology evolutions like virtualization and cloud architectures that are driving great efficiency and agility gains across wide swaths of the data center can actually make it harder to deliver consistent performance to critical applications.
Just this morning I gave a brief presentation at VMware Partner Exchange on what VM density is, and how storage can make a fundamental difference in achieving higher VM density.