With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services that include “extreme” applications like massive voice and image processing or complex financial analysis modeling that can push storage systems to their limits. Examples of high-visibility, market-impacting solutions include applications based on large-scale image pattern recognition and financial risk management based on high-speed decision-making.
These ground-breaking solutions, made up of very different activities but with similar data storage challenges, create incredible new lines of business representing significant revenue potential. Every day here at Taneja Group we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems, the kind that most enterprise data centers (and cloud providers) have racks and racks of, simply can’t handle the performance requirements.
There are already great enterprise storage solutions for applications that need raw throughput, high capacity, parallel access, low latency, or high availability, and perhaps even two or three of those at a time. But when an “extreme” application needs all of those capabilities at once, only supercomputing-class storage in the form of parallel file systems provides a functional solution. The problem is that most commercial enterprises simply can’t afford, or risk, basing a line of business on an expensive research project.
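To make the workload concrete, here is a minimal, hypothetical sketch (in Python, for a POSIX system; the file name, stripe size, and worker count are our own illustrative choices) of the striped, concurrent single-file access pattern that parallel file systems are built to sustain at scale, and that conventional clustered or scale-out file systems can struggle to service when all of the requirements above apply at once:

```python
# Hypothetical illustration: the striped, concurrent access pattern that
# parallel file systems are designed to serve. File name, stripe size, and
# worker count are made up for the example.
import os
from multiprocessing import Pool

PATH = "shared_dataset.bin"   # one file, shared by all writers
STRIPE = 4 * 1024 * 1024      # 4 MiB stripe per worker

def write_stripe(rank: int) -> int:
    """Each worker writes its own non-overlapping byte range."""
    fd = os.open(PATH, os.O_WRONLY)
    try:
        data = bytes([rank % 256]) * STRIPE
        # pwrite targets an explicit offset, so workers never contend
        # over a shared file pointer.
        return os.pwrite(fd, data, rank * STRIPE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    workers = 8
    # Pre-size the file so every offset is valid before writers start.
    with open(PATH, "wb") as f:
        f.truncate(workers * STRIPE)
    with Pool(workers) as pool:
        written = pool.map(write_stripe, range(workers))
    print(f"wrote {sum(written)} bytes across {workers} concurrent writers")
```

On a laptop this runs trivially; the point is that an extreme application issues this pattern from hundreds of nodes at once, which is exactly where single-server file systems run out of headroom.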
The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door to revolutionary service creation, enabling mainstream enterprise data centers to exploit new extreme applications.
Object storage has long been pigeon-holed as a necessary overhead expense for long-term archive storage, a data purgatory one step before tape or deletion. We have seen many IT shops view object storage as something exotic they must implement to meet government regulations rather than as a competitive strategic asset that can help their businesses make money.
Normally when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways like accelerating the performance of market-competitive applications or efficiently consolidating data centers. Maybe they are even starting to analyze big data to find better ways to run the business. There are far more opportunities to be sure, but these kinds of “money-making” initiatives have been mainly associated with “file” and “block” types of storage – the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects. But that’s about to change. If you’ve intentionally dismissed or just overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms, and developing new revenue streams.
Object storage has been evolving from its historical role as a second-tier data dumping ground into a value-building primary storage platform for content and collaboration. And the latest high-performance cloud storage solutions could transform the whole nature of enterprise data storage. To really exploit this new generation of object storage, it is important not only to understand what it is and how it has evolved, but also to start thinking about how to harness its emerging capabilities to build net new business.
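For readers who have only ever touched file or block storage, the following minimal sketch shows the flat, key-addressed, metadata-rich access model that defines object storage. The endpoint, bucket, keys, and credentials are placeholders, and the boto3 S3 client is just one common way to talk to an S3-compatible store, not a reference to any particular vendor’s API:

```python
# A minimal sketch of object storage access through an S3-compatible API.
# Endpoint, bucket, credentials, and file names below are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write: one call stores the payload together with searchable metadata,
# which is what lets object stores double as content platforms.
s3.put_object(
    Bucket="media-archive",
    Key="2013/q1/earnings-call.mp3",
    Body=open("earnings-call.mp3", "rb"),
    Metadata={"department": "finance", "retention": "7y"},
)

# Read: objects are addressed by key in a flat namespace, not by a path
# through a directory hierarchy.
obj = s3.get_object(Bucket="media-archive", Key="2013/q1/earnings-call.mp3")
audio = obj["Body"].read()
```

The absence of a hierarchy and the per-object metadata are what make this model scale globally and suit collaboration and content workloads, rather than being inherently “archive-only.”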
Reliable storage is the lifeblood of every data-driven business, and operational storage capabilities like non-disruptive scalability, continuous data protection, capacity optimization, and disaster recovery are not just desired, but required. Yet enterprise-class storage features have long been out of reach of organizations that don't have enterprise-sized budgets, storage experts, and large data centers. Instead, they make do with low-end disk arrays or even just a box of disks patched together with a minimal amount of data protection in the form of manual backups. The problem is that disks fail, organizations change, and data continues to grow. Organizations that pile up disks under the desktop are taking big risks of significant business failure, while those that pay up for traditional arrays or even cloud storage incur significant cost and management overhead.
Having to step up to deliver these advanced storage requirements challenges growing organizations with big adoption hurdles, not least of which is cost, both OPEX and CAPEX. Far too many organizations struggle along with high-risk storage, or feel forced to pour significant energy, cost, and staff time into acquiring, deploying, and operating high-touch storage arrays with layers of complex add-on software. Even larger enterprises with expert storage gurus and big data centers can feel the weight of managing complex SANs for departmental, ROBO, and other practical rubber-meets-road storage scenarios. What’s really needed is a new approach to storage: an affordable, expandable array solution with advanced storage capabilities baked in. Ideally it should be simpler to operate than setting up a file system on raw drives, and it should be available at a justifiable cost for even small data-driven businesses.
In this solution brief we are going to look at what SMB and departmental storage buyers should both require and expect from storage solutions to meet their business goals, and how traditional mid-market storage based on old technologies can fall short. We will then introduce Exablox’s new OneBlox storage array to highlight how purposefully designing storage from the ground up can lead to a simple but powerful hardware design and software architecture that features built-in high availability, easy scalability, and great data protection. Along the way we’ll see how two real-world OneBlox users experience its benefits, cost effectiveness, and ease of management in their live deployments.
Unfortunately, too many data centers rely solely on native multipathing tools and tolerate the consequent, often unrecognized compromises in efficiency, management, and resiliency. Yet only one storage juggernaut has ever delivered a truly differentiated and distinguished approach to multipathing: EMC. EMC’s PowerPath offers multipathing for VMware, Windows, Linux, AIX, Solaris, and HP-UX. Based on a secret sauce of patented, path-optimizing algorithms, the PowerPath/VE version of the PowerPath software family optimizes I/O and enhances its management beneath VMware vSphere and the virtual data center.
However, even with VMware there is no such thing as virtualizing the complexity out of the virtualized data center, especially when it comes to storage. I/O characteristics become much more challenging to manage when multiple VMs run on a single host. Woven together into a data center of many hosts, I/O becomes even more complex, suddenly dependent upon both a physical fabric and a virtual fabric, either of which may introduce errors or less-than-optimal performance. This is what makes the idea of “pathing” – how I/O travels from VM to storage – critically important in the virtualized data center. Moving I/O across the lowest-latency paths, and avoiding outages introduced by misconfiguration or error, will ultimately determine whether the virtual data center holds up under pressure. Moreover, the secondary capabilities introduced by innovative pathing technologies can add even more value in complex infrastructures by enhancing management and increasing the efficiency of storage interaction.
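PowerPath’s algorithms are patented and proprietary, so we won’t reproduce them here; the hypothetical Python sketch below only illustrates the generic idea behind adaptive multipathing: track recent latency per path, steer each I/O down the best live path, and fail over when a path dies. The path names and timings are invented:

```python
# Generic, illustrative multipath selection -- not PowerPath's algorithm.
import random
import time

class MultipathSelector:
    def __init__(self, paths, alpha=0.2):
        self.alpha = alpha                      # EWMA smoothing factor
        self.latency = {p: 0.0 for p in paths}  # smoothed latency per path
        self.alive = {p: True for p in paths}

    def pick(self):
        live = [p for p in self.alive if self.alive[p]]
        if not live:
            raise IOError("all paths dead")
        return min(live, key=lambda p: self.latency[p])

    def record(self, path, elapsed):
        # Exponentially weighted moving average of observed service time.
        self.latency[path] = (1 - self.alpha) * self.latency[path] \
            + self.alpha * elapsed

    def fail(self, path):
        self.alive[path] = False  # failover: stop routing I/O here

# Usage with two hypothetical HBA-to-array paths:
sel = MultipathSelector(["hba0:spA", "hba1:spB"])
for _ in range(100):
    path = sel.pick()
    t0 = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.003))   # stand-in for the actual I/O
    sel.record(path, time.perf_counter() - t0)
print("preferred path:", sel.pick())
```

Even this toy version shows why adaptive pathing beats static round-robin: load automatically drifts away from congested or degraded paths without operator intervention.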
Virtualization technology has had a transformational impact on computing domains, providing deep consolidation for servers, network fabrics, and the storage pools behind individual servers. Today, the range of capabilities has made it a natural development to weave virtualized computing into virtualized data centers. VMware is the market leader in this area, and vSphere is a leading product for virtualizing modern data centers.
This Product in Depth will detail the challenges around virtualizing the data center and how EMC’s PowerPath/VE creates unique value for VMware virtual data centers.
Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.
Running large virtualized data centers with hundreds or even thousands of servers, petabytes of data, and a large distributed storage network requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI an invaluable partner in helping to manage the performance of the IT infrastructure supporting their mission-critical applications.
VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning, and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance and correlates data from multiple layers, enabling proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their uses, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.
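VirtualWisdom’s instrumentation and analytics are its own; the toy sketch below (all numbers invented) simply illustrates the cross-layer correlation idea: lining up per-interval latency samples from the VM layer and the SAN fabric layer makes it possible to say which layer is driving a service-level breach before users feel it:

```python
# Illustrative cross-layer correlation -- not VirtualWisdom's mechanics.
# All latency samples and the SLA threshold are invented for the example.
vm_latency_ms  = [4, 5, 4, 22, 25, 5, 4]   # guest-visible I/O latency per interval
san_latency_ms = [2, 2, 2, 19, 21, 2, 2]   # fabric exchange completion time
SLA_MS = 10

for i, (vm, san) in enumerate(zip(vm_latency_ms, san_latency_ms)):
    if vm > SLA_MS:
        # If the fabric accounts for most of the guest-visible delay,
        # the storage network, not the host, is the likely culprit.
        layer = "storage fabric" if san > 0.7 * vm else "host/hypervisor"
        print(f"interval {i}: {vm} ms breach, dominant layer: {layer}")
```

A single-layer tool sees only one of these two series and can describe a symptom; correlating both is what turns monitoring into attribution.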
Most of the representative companies were using storage array-specific or fabric device monitoring tools, but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert on, and remediate the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:
1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.
The consensus among these customers is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that its return on investment is unparalleled.
In this paper, we’ll briefly review the challenges of assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome those challenges, deploy appropriate end-to-end infrastructure, and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leading vendor in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. This new solution is unique in both the non-linear predictive modeling it leverages to produce application-specific performance KPIs and its comprehensive span of visibility and management, which extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.
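TeamQuest’s actual models are proprietary, but the textbook open-queueing approximation R = S / (1 - U) is enough to show why any useful predictive model must be non-linear: response time stays nearly flat at moderate utilization, then climbs steeply as a resource approaches saturation. The service time and utilization values below are invented for illustration:

```python
# Textbook open-queueing approximation -- not TeamQuest's proprietary model.
# R = S / (1 - U): response time R from service time S and utilization U.
def predicted_response_ms(service_ms: float, utilization: float) -> float:
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_ms / (1 - utilization)

service_ms = 2.0  # hypothetical per-I/O service time at an array port
for u in (0.50, 0.70, 0.85, 0.95):
    print(f"util {u:.0%}: predicted response "
          f"{predicted_response_ms(service_ms, u):.1f} ms")
# util 50%: 4.0 ms ... util 95%: 40.0 ms -- a 10x response-time jump from
# less than doubling the offered load.
```

This hockey-stick behavior is exactly why linear trending tools mispredict performance near saturation, and why non-linear models matter for capacity planning.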