Includes Security, SRM, Cloud, ICM, SaaS, Business Intelligence, Data Warehouse, Database Appliances, NFM, Storage Management.
This section covers all forms of technologies that impact IT infrastructure management. Taneja Group analysts focus particularly on the interplay between server virtualization and storage, with and without storage virtualization, and study its impact on the performance, security and management of the IT infrastructure. This section also includes all aspects of storage management (SRM, SMI-S) and the role of cross-correlation engines in the overall performance of an application. Storage virtualization technologies (in-band, out-of-band, and split-path architectures, or SPAID) are all covered in detail. Data security, whether for data in flight or at rest, and enterprise-level key management issues are covered, along with the players that make up these ecosystems.
As databases continue to grow larger and more complex, they present issues in terms of security, performance and management. Taneja Group analysts cover the vendors and technologies that harness the power of archiving to reduce the size of active databases. We also cover the specialized database appliances that have come into vogue lately. All data protection issues surrounding databases are covered in detail as well. We write extensively on this topic for the benefit of the IT user.
Object storage has long been pigeon-holed as a necessary overhead expense for long-term archive storage, a data purgatory one step before tape or deletion. In our experience, we have seen many IT shops view object storage more as something exotic they have to implement to meet government regulations rather than as a competitive strategic asset that can help their businesses make money.
Normally when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways, such as accelerating the performance of market-competitive applications or efficiently consolidating data centers. Maybe they are even starting to analyze big data to find better ways to run the business. There are far more opportunities to be sure, but these kinds of “money-making” initiatives have mainly been associated with “file” and “block” types of storage – the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects. But that’s about to change. If you’ve intentionally dismissed or simply overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms and developing new revenue streams.
Object storage has been evolving from its historical role as a second-tier data dumping ground into a value-building primary storage platform for content and collaboration. And the latest high-performance cloud storage solutions could transform the whole nature of enterprise data storage. To really exploit this new generation of object storage, it is important to understand not only what it is and how it has evolved, but also how to harness its emerging capabilities to build net new business.
Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.
Running large virtualized data centers with hundreds or even thousands of servers, petabytes of data and a large distributed storage network requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI an invaluable partner in helping to manage the performance of the IT infrastructure supporting their mission-critical applications.
VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance—and correlates data from multiple layers—for proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their usefulness, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.
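To make the idea of correlating data across layers concrete, the sketch below shows one simple way such correlation can work in principle: comparing per-layer latency series against an application's response time to see which layer tracks the slowdown. This is a minimal conceptual illustration only, not a description of how VirtualWisdom is implemented; all metric names and numbers are hypothetical.

# Illustrative sketch: correlate per-layer latency samples with application
# response time to flag the layer most likely contributing to a slowdown.
# Conceptual example only; metric names and data are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
samples = 500  # e.g., one sample per second

# Hypothetical per-layer latency series (milliseconds)
san_latency = rng.gamma(shape=2.0, scale=1.5, size=samples)          # fabric/storage layer
vm_cpu_wait = rng.gamma(shape=2.0, scale=0.8, size=samples)          # hypervisor layer
app_response = 5.0 + 3.0 * san_latency + rng.normal(0, 1, samples)   # app tracks SAN here

layers = {"SAN latency": san_latency, "VM CPU wait": vm_cpu_wait}
for name, series in layers.items():
    corr = np.corrcoef(series, app_response)[0, 1]
    print(f"{name}: correlation with app response = {corr:.2f}")
# A strong correlation flags the layer worth investigating first.

In practice a platform of this kind works on live instrumentation rather than synthetic series, but the same principle of lining up metrics from different layers against application behavior is what allows problems to be traced through the stack.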
Most of the companies represented were using storage array-specific or fabric device monitoring tools but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert on and remediate the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:
1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.
The consensus among these customers is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that the return on investment is unparalleled.
For more information, check out the press release issued by Virtual Instruments.
You can also download this report directly from Virtual Instruments.
In this paper, we’ll briefly review the challenges to assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome these challenges to deploy appropriate end-to-end infrastructure and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leading vendor in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. The new solution is unique both in the non-linear predictive modeling it leverages to produce application-specific performance KPIs and in its comprehensive span of visibility and management, which extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.
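To see why non-linear modeling matters for performance prediction, consider the textbook queueing behavior sketched below: response time grows gently at moderate utilization and then spikes as the resource approaches saturation. This is a generic M/M/1-style illustration of the general idea, not TeamQuest's actual modeling engine, and the service-time and utilization figures are hypothetical.

# Minimal sketch of non-linear response-time behavior under load.
# Uses the textbook M/M/1 approximation: response time = service time / (1 - utilization).
# Illustrative only; numbers are hypothetical.
def predicted_response_time_ms(service_time_ms: float, utilization: float) -> float:
    """Estimate response time as utilization approaches saturation."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_ms / (1.0 - utilization)

service_time_ms = 2.0  # hypothetical average storage service time
for util in (0.50, 0.70, 0.85, 0.95):
    rt = predicted_response_time_ms(service_time_ms, util)
    print(f"utilization {util:.0%} -> predicted response time {rt:.1f} ms")
# Response time roughly doubles from 50% to 70% utilization, then climbs
# steeply near saturation; a straight-line trend would badly under-predict it.

The practical point is that linear trending of historical utilization tells you little about when service levels will actually break; a model that captures this kind of curve can.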
The past few years have seen virtualization rapidly move into the mainstream of the data center. Today, virtualization is often the de facto standard in the data center for deploying any application or service. This includes the important operational and business systems that are the lifeblood of the business.
For mission-critical systems, customers necessarily demand a broader level of services than is common in the test and development environments where virtualization often gains its foothold in the data center. It goes almost without saying that topmost in customers’ minds are issues of availability.
Availability is a spectrum of technology that offers businesses many different levels of protection – from general recoverability to uninterruptible applications. At the most fundamental level are mechanisms that protect the data and the server beneath applications. While in the past these mechanisms have often been delivered by hardware and secondary storage systems, VMware has steadily advanced the capabilities of its vSphere virtualization offering, which now includes a long list of features – vMotion, Storage vMotion, vSphere Replication, VMware vCenter Site Recovery Manager, vSphere High Availability, and vSphere Fault Tolerance. While VMware is clearly serious about the mission-critical enterprise, each of these offerings has retained a VMware-specific orientation toward protecting the “compute instance”.
The challenge is that protecting a compute instance does not go far enough. It is the application that matters, and detecting VM failures may fall short of detecting and mitigating application failures.
With this in mind, Symantec has steadily advanced a range of solutions for enhancing availability protection in the virtual infrastructure. Today this includes ApplicationHA – developed in partnership with VMware – and its gold-standard offering of Veritas Cluster Server (VCS) enhanced for the virtual infrastructure. We recently took a hands-on look at how these solutions enhance virtual availability in a lab exercise, conducted remotely from Taneja Group Labs in Phoenix, AZ. Our conclusion: VCS is the only HA/DR solution for VMware that can monitor and recover applications while remaining fully compatible with typical vSphere management practices such as vMotion, Distributed Resource Scheduler and Site Recovery Manager, and it can make a serious difference in the availability of important applications.
Taneja Group and InfoStor jointly ran a survey asking IT managers about their big data experiences and roadmaps. We concluded that there is a great deal of uncertainty around big data: what it is, how to manage it, and whether it even belongs in the IT domain rather than with specialized application administrators.
Storing and managing large volumes of data certainly involves IT. However, “big data” is its own class: large data sets that are subjected to ongoing analytics and/or massive re-use. Some big data is structured into databases; most of it is unstructured. Big data operations continuously act upon large and growing volumes of data, which generates fast and frequent data movement between servers, networks and storage. Big data analytics in particular need fast, large feedback loops for decision-making as specialized software tools analyze and reshape data into a variety of views, reports and derived data sets.
IT is rarely involved at the analytics administration level, but it is very involved at the storage level. Big data needs both high capacity and high performance, which requires storage with high-capacity disk and the ability to process storage I/O very quickly. It must also be highly available, since big data by definition is active and important data. And it should be cost-effective as well, though it will not be inexpensive.
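The capacity-versus-performance point can be made with simple back-of-the-envelope arithmetic, as in the sketch below. All workload figures and per-drive specifications here are hypothetical assumptions, chosen only to show that a big data environment must be sized for whichever dimension (capacity or IOPS) demands more drives.

# Back-of-the-envelope sizing sketch: capacity vs. performance.
# All targets and per-drive specs below are hypothetical assumptions.
required_capacity_tb = 500        # hypothetical usable capacity target
required_iops = 40_000            # hypothetical sustained IOPS target

drive_capacity_tb = 4             # assumed per-drive usable capacity
drive_iops = 150                  # assumed per-drive random IOPS (nearline disk)

drives_for_capacity = -(-required_capacity_tb // drive_capacity_tb)  # ceiling division
drives_for_iops = -(-required_iops // drive_iops)

print(f"Drives needed for capacity alone: {drives_for_capacity}")
print(f"Drives needed for IOPS alone:     {drives_for_iops}")
print(f"Provision at least:               {max(drives_for_capacity, drives_for_iops)} drives")

With these assumed numbers, performance rather than raw capacity dictates the drive count, which is exactly why active big data cannot be treated as a cheap capacity-only tier.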
[Taneja Group discusses scale-out storage as a best practice solution to big data analytics in our report: “Big Data, Big Storage: Scale-Out NAS for Big Data Environments.” (http://bit.ly/UGCVjm)]
Big data means different things to different people. A database administrator might insist that big data is large databases; a 100-server SharePoint administrator might classify content blobs as big data; a storage administrator in a hospital radiology lab may define big data as digitized x-rays for 100,000 patients a year. In fact, they are all right: each administrator’s data is large and active, and must be kept protected and highly available to applications. In other words, big data.
It is the business units’ responsibility to decide how to use and analyze this data; it is IT’s job to store the data in a way that provides the required service levels of availability and performance. IT frequently turns to NAS to do this, citing its familiarity, file-based architecture and general ease of use. However, traditional NAS’s very simplicity can limit its usability in the face of big data growth and capacity needs. Given fast data growth and more active data than ever before, this model soon disintegrates into poorly managed storage sprawl and forced data migrations in the name of balancing workloads.
There are several storage choices for big data depending on your big data environment: projected growth, data types, performance, capacity and scalability. One excellent option for many big data storage environments is scale-out NAS. This report will briefly discuss scale-out and suggest important questions to ask when researching vendors.