Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.
All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.
Choosing a cloud vendor in today's market can be a challenge. This market profile from Taneja Group provides an objective review of current cloud infrastructure-as-a-service (IaaS) and management, platform-as-a-service (PaaS), end user computing (EUC), and public cloud IaaS offerings from eleven leading vendors and includes key takeaways for IT leaders to consider when selecting a vendor.
Attached is the executive summary only. To get access to the FULL Taneja Group Landscape Report, you can download it from VMware directly.
In this paper, we’ll briefly review the challenges to assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome these challenges to deploy appropriate end-to-end infrastructure and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leading vendor in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. This new solution is unique in both its non-linear predictive modeling, leveraged to produce application-specific performance KPIs, and its comprehensive span of visibility and management that extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.
The answer for these organizations is IBM SmartCloud Storage Access, which lets organizations turn on-premises storage systems, including IBM Scale Out Network Attached Storage (SONAS) and IBM Storwize V7000 Unified Storage, into powerful private clouds. The cloud-based storage services created by IBM SmartCloud Storage Access combine the scale-out and unified features of the underlying storage systems into highly flexible and manageable cloud-based storage.
This report is also available through the IBM website here.
The past few years have seen virtualization rapidly move into the mainstream of the data center. Today, virtualization is often the de facto standard in the data center for deployment of any application or service. This includes important operational and business systems that are the lifeblood of the business.
For mission-critical systems, customers necessarily demand a broader level of services than is common among the test and development environments where virtualization often gains its foothold in the data center. It goes almost without saying that topmost in customers’ minds are issues of availability.
Availability is a spectrum of technology that offers businesses many different levels of protection – from general recoverability to uninterruptible applications. At the most fundamental level are mechanisms that protect the data and the server beneath applications. While in the past these mechanisms have often been hardware and secondary storage systems, VMware has steadily advanced the capabilities of its vSphere virtualization offering, which now includes a long list of features – vMotion, Storage vMotion, vSphere Replication, VMware vCenter Site Recovery Manager, vSphere High Availability, and vSphere Fault Tolerance. While VMware is clearly serious about the mission-critical enterprise, each of these offerings has retained a VMware-specific orientation toward protecting the “compute instance”.
The challenge is that protecting a compute instance does not go far enough. It is the application that matters, and detecting VM failures may fall short of detecting and mitigating application failures.
With this in mind, Symantec has steadily advanced a range of solutions for enhancing availability protection in the virtual infrastructure. Today this includes ApplicationHA – developed in partnership with VMware – and their gold-standard offering of Veritas Cluster Server (VCS), enhanced for the virtual infrastructure. We recently examined how these solutions enhance virtual availability in a hands-on lab exercise, conducted remotely from Taneja Group Labs in Phoenix, AZ. Our conclusion: VCS is the only HA/DR solution that can monitor and recover applications on VMware while remaining fully compatible with typical vSphere management practices such as vMotion, Distributed Resource Scheduler, and Site Recovery Manager, and it can make a serious difference in the availability of important applications.
Microsoft officially acquired StorSimple on November 15, 2012. StorSimple was a relative startup that had been shipping products for about 18 months. Why did Microsoft buy StorSimple? What is the strategy behind the purchase? Where will Microsoft take this newly acquired technology? These are some of the questions we are being asked at present. Here is our view....
Taneja Group and InfoStor jointly ran a survey asking IT managers about their big data experiences and roadmaps. We concluded that there is a great deal of uncertainty around big data: what it is, how to manage it, and whether it even belongs in the IT domain rather than with specialized application administrators.
Storing and managing large volumes of data certainly involves IT. However, “big data” is its own class: large data sets that are subjected to ongoing analytics and/or massive re-use. Some big data is structured into databases; most of it is unstructured. Big data operations continuously act upon large and growing volumes of data, which generates fast and frequent data movement between servers, networks, and storage. Big data analytics in particular needs fast and large feedback loops for decision-making, as specialized software tools analyze and reshape data into a variety of views, reports, and derivative data sets.
IT is rarely involved at the analytics administration level, but it is very involved at the storage level. Big data needs both high capacity and high performance, which requires storage with high-capacity disk and the ability to process storage IO very quickly. It must also be highly available, since big data is by definition active and important data. And it should be cost-effective as well, though it will not be inexpensive.
[Taneja Group discusses scale-out storage as a best practice solution to big data analytics in our report: “Big Data, Big Storage: Scale-Out NAS for Big Data Environments.” (http://bit.ly/UGCVjm)]