Taneja Group | Applications
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Applications

Profiles/Reports

BakBone Introduces NetVault: TrueCDP – Integrated, Continuous Data Protection

In this profile we review the challenge of fast and granular data recoverability, the role of continuous data protection (CDP), and the advantages of BakBone’s NetVault: TrueCDP for fast and consistent file recovery in environments running the BakBone suite of data protection products.

Publish date: 09/01/07
Profiles/Reports

Asempra BCS - Continuous Availability

The reality today is that too many data protection platforms targeted at the small and medium enterprise (SME) market fail to deliver basic capabilities with the ease of use, speed, and flexibility needed for holistic, effective data protection. What the SME requires is a new approach to data protection that is easy to use and ensures that both the application and its data remain available. We’ve identified this approach to data protection as Continuous Availability.

Publish date: 10/07/07
news

Moving applications to the cloud guide

The four-part cloud tips series by Arun Taneja, founder and president of Taneja Group, concludes with an explanation of which applications are best suited for the cloud. When moving applications to the cloud, what are the basic ground rules for determining which applications won’t work and which ones might from a performance standpoint?

  • Premiered: 05/10/12
  • Author: Arun Taneja
  • Published: TechTarget: SearchCloudStorage
Topic(s): Cloud, Applications, Security, DR, Big Data
Profiles/Reports

VI - Top Six Physical Layer Best Practices: Maintaining Fiber Optics for the High Speed Data Center

Whether it’s handling more data, accelerating mission-critical applications, or ultimately delivering superior customer satisfaction, businesses are requiring IT to go faster, farther, and at ever-larger scales. In response, vendors keep evolving newer generations of higher-performance technology. It’s an IT arms race full of uncertainty, but one thing is inevitable – the interconnections that tie it all together, the core data center networks, will be driven faster and faster.

Unfortunately, many data center owners are under the impression that their current “certified” fiber cabling plant is inherently future-proofed and will readily handle tomorrow’s networking speeds. This is especially true for the high-speed, critical SANs at the heart of the data center. For example, most of today’s fiber plants supporting protocols like 2Gb or 4Gb Fibre Channel (FC) simply do not meet the required physical layer specifications to support upgrades to 8Gb or 16Gb FC. And faster speeds like 20Gb FC are on the horizon.
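
To make the constraint concrete, the sketch below runs a simple optical link-loss budget check: total channel loss (fiber attenuation plus connector and splice insertion loss) is compared against the tighter loss allowance of each successive FC speed. The per-component losses and per-speed allowances here are illustrative placeholders, not actual TIA or Fibre Channel specification figures.

```python
# Illustrative link-loss budget check; all dB figures are placeholder
# assumptions, not published spec values.

def channel_loss_db(length_m, connectors, splices,
                    fiber_db_per_km=3.5, db_per_connector=0.5, db_per_splice=0.3):
    """Fiber attenuation plus connector and splice insertion losses (dB)."""
    return (length_m / 1000.0) * fiber_db_per_km \
        + connectors * db_per_connector \
        + splices * db_per_splice

# Hypothetical per-speed channel insertion-loss allowances (dB); the point
# is that the allowance shrinks as the link speed rises.
LOSS_ALLOWANCE_DB = {"4GFC": 2.0, "8GFC": 1.7, "16GFC": 1.5}

link = channel_loss_db(length_m=120, connectors=3, splices=0)
for speed, allowance in LOSS_ALLOWANCE_DB.items():
    verdict = "OK" if link <= allowance else "exceeds budget"
    print(f"{speed}: channel loss {link:.2f} dB vs allowance {allowance:.2f} dB -> {verdict}")
```

A plant that passes comfortably at 4Gb can fail the same arithmetic at 8Gb or 16Gb without a single cable being touched, which is exactly the trap described above.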

It is not just the plant design that’s a looming problem. Fiber cabling has always deserved special handling but is often robust enough to withstand a certain amount of dirt and mistreatment at today’s speeds. While lack of good cable hygiene and maintenance can and does cause significant problems today, at higher networking speeds the tolerance for dust, bends, and other optical impairments is much smaller. Careless practices need to evolve to a whole new level of best practice now, or future network upgrades are doomed.

In this paper we’ll consider the tighter requirements of higher-speed protocols and examine the critical reasons why standard fiber cabling designs may not be “up to speed”. We’ll introduce some redesign considerations and also look at how an improperly maintained plant can easily degrade or defeat higher-speed network protocols, including real-world experiences drawn from field experts in SAN troubleshooting at Virtual Instruments. Along the way we will recommend the top six physical layer best practices we see as necessary for designing and maintaining fiber to handle whatever comes roaring down the technology highway.

Publish date: 07/31/12
news / Blog

Move to the Cloud: A Taneja Group eBook

Cloud storage can be a beast to wrangle. Deciding which applications to move into the cloud, understanding how to select and deal with a cloud storage provider, deciding on cloud storage solutions – none of these are easy. Is it worth it?

  • Premiered: 09/13/12
  • Author: Taneja Group
Topic(s): Cloud Storage, Applications, eBook
Profiles/Reports

Increasing Virtualization Velocity with NetApp OnCommand Balance

Why do so many virtualization implementations stall out when it comes to mission-critical applications? Why do so many important applications still run on dedicated hardware? In one word – performance. Virtualization technologies have proven incredibly powerful in helping IT deliver agile “idealized” services, and doing so by efficiently sharing expensive physical resources. But mission-critical applications bring above-average requirements for performance service quality that can greatly challenge virtualized hosting.

Maintaining good performance (as well as availability and other service qualities) requires solid systems management. Hypervisor management solutions like VMware’s vCenter Operations Management Suite provide a significant advantage to virtualization administrators by centralizing and simplifying many traditionally disparate management tasks, including fundamental performance monitoring for system health and component utilization. Yet when it comes to assuring performance for mission-critical applications like transactional databases and email – the kinds of apps that depend heavily on resources from multiple IT domains – straight hypervisor-centric solutions can fall short. Solving complex cross-domain performance issues like resource contention and virtual-physical competition, and assuring sufficient headroom for good performance, can require both deeper and wider analysis capabilities.
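
As a rough illustration of what “headroom” means in this context, the sketch below compares peak observed utilization in several resource domains against a safety ceiling and reports how much spare capacity remains in each. This is a minimal sketch with hypothetical sample data, not how OnCommand Balance or vCenter Operations actually models performance.

```python
# Minimal headroom sketch: compare peak utilization per resource domain
# against a safety ceiling. Sample data and the 80% ceiling are hypothetical.

def headroom(samples, ceiling=0.80):
    """Spare capacity remaining below the ceiling, as a fraction of the ceiling."""
    peak = max(samples)
    return max(0.0, (ceiling - peak) / ceiling)

# Utilization samples (fraction of capacity) gathered per domain.
domains = {
    "host CPU":         [0.42, 0.55, 0.61, 0.58, 0.72, 0.66],
    "datastore IOPS":   [0.70, 0.74, 0.79, 0.81, 0.77, 0.83],
    "array controller": [0.35, 0.38, 0.41, 0.44, 0.40, 0.39],
}

for name, samples in domains.items():
    print(f"{name:16s} headroom: {headroom(samples):5.1%}")

# The domain with the least headroom (here, datastore IOPS) is the likely
# bottleneck for a mission-critical workload, even if the hypervisor-level
# CPU view still looks comfortable.
```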

In this paper we’ll first review a high-level management perspective of performance and capacity to explore what it takes to support mission-critical application performance service levels. We’ll examine the management strengths of the best-known hypervisor management solution – VMware’s vCenter Operations Suite – to understand the scope and limitations of its performance and capacity management capabilities. Next, we will look at how the uniquely cross-domain (storage and server, virtual and physical) model-based performance management capabilities of NetApp’s OnCommand Balance complement a solution like vCenter Operations. The resulting combination helps the virtualization admin and/or storage admin become more proactive and ultimately elevate performance management enough to reliably virtualize mission-critical applications.

Publish date: 10/29/12
Profiles/Reports

Hybrid Cloud Storage from Microsoft: Leveraging Windows Azure and StorSimple

Cloud computing does some things very well. It delivers applications and upgrades. It runs analysis on cloud-based big data. It connects distributed groups sharing communications and files. It provides a great environment for developing web applications and running test/dev processes.

But public cloud storage is a different story. The cloud does deliver long-term, cost-effective storage for inactive backup and archives. Once the backup and archive data streams are scheduled and running, they can use relatively low bandwidth as long as the data is deduplicated on-site before transport. (And as long as it does not have to be rehydrated pre-upload, which is another story.) This alone helps to save on-premises storage capacity and can replace off-site tape vaulting.
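
The bandwidth point is easy to see with a toy example: if data is chunked and hashed on-site, only chunks whose content has not already been stored need to cross the WAN on subsequent backups. The sketch below is a generic illustration of that idea, not the deduplication logic of any particular product.

```python
# Generic on-site deduplication sketch: only previously unseen chunks are
# counted as WAN traffic. Chunk size and data sizes are arbitrary examples.

import hashlib
import os

CHUNK_SIZE = 4 * 1024  # 4 KiB fixed-size chunks, for simplicity

def backup(data: bytes, already_stored: set) -> int:
    """Return the number of bytes that would actually be uploaded."""
    sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in already_stored:   # new content: must be transported
            already_stored.add(digest)
            sent += len(chunk)
    return sent

stored = set()
monday = os.urandom(1_000_000)            # initial full data set
tuesday = monday + os.urandom(50_000)     # same data plus a small change

print("Monday upload: ", backup(monday, stored), "bytes")
print("Tuesday upload:", backup(tuesday, stored), "bytes")  # roughly just the new data
```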

But cloud storage users want more. They want the cost and agility advantages of the public cloud without incurring the huge expense of building one. They want to keep using the public cloud for cost-effective backup and archive, but they also want to use it for more active – i.e. primary – data. This is especially true for workloads with rapidly growing data sets that age quickly, such as collaboration and file shares. Some of this data needs to reside locally, but the majority can be moved, or tiered, to public cloud storage.

What does the cloud need to work for this enterprise wish list? Above all it needs to make public cloud storage an integral part of the on-premises primary storage architecture. This requires intelligent and automated storage tiering, high performance for baseline uploads and continual snapshots, no geographical lock-in, and a central storage management console that integrates cloud and on-premises storage.

Hybrid cloud storage, or HCS, meets this challenge. HCS turns the public cloud into a true active storage tier for less active production data that is not ready to be put out to backup pasture. Hybrid cloud storage integrates on-premises storage with public cloud storage services: not as another backup target but as integrated storage infrastructure. The storage system uses both the on-premises array and scalable cloud storage resources for primary data, expanding that data and data protection to a cost-effective cloud storage tier.
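
One way to picture this kind of tiering is a simple age-of-access policy: recently touched data stays on the on-premises tier, while data that has gone cold moves to the cloud tier. The sketch below is a minimal illustration of that policy with made-up data sets; StorSimple’s actual tiering is block-level and heat-map driven, so treat this only as a conceptual model.

```python
# Conceptual tiering sketch using a last-access-age rule. Data sets, sizes,
# and the 30-day threshold are hypothetical examples.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataSet:
    name: str
    size_gb: float
    last_access: datetime

def place(ds: DataSet, now: datetime, cold_after: timedelta = timedelta(days=30)) -> str:
    """Keep recently accessed data local; tier cold data to cloud storage."""
    return "on-premises tier" if now - ds.last_access < cold_after else "cloud tier"

now = datetime(2013, 8, 31)
working_set = [
    DataSet("active project file share", 200, now - timedelta(days=2)),
    DataSet("last quarter's collaboration site", 800, now - timedelta(days=95)),
    DataSet("completed engineering archive", 3000, now - timedelta(days=400)),
]

for ds in working_set:
    print(f"{ds.name:35s} {ds.size_gb:6.0f} GB -> {place(ds, now)}")
```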

Microsoft’s innovative and broad set of technology enables a true, integrated solution for hybrid cloud storage for business and government organizations – not just a heterogeneous combination of private cloud and public cloud storage offerings. Composed of StorSimple cloud-integrated storage and the Windows Azure Storage service, HCS from Microsoft serves the demanding enterprise storage environment well, enabling customers to realize huge data management efficiencies in their Microsoft applications and Windows and VMware environments.

This paper will discuss how the Microsoft solution for hybrid cloud storage, consisting of Windows Azure and StorSimple, differs from traditional storage, lay out best practices for leveraging it, and present real-world results from multiple customer deployments.

Publish date: 08/31/13
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that could handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, where compute, storage, and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.
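
The “building block” idea can be made concrete with a trivial model: because every node contributes the same bundle of compute, memory, and storage, cluster resources grow linearly as nodes are added. The node sizing below is purely hypothetical and is not meant to reflect any actual Nutanix configuration.

```python
# Toy model of scale-out HCI growth: cluster totals are just the sum of
# identical x86 building blocks. Node specs are hypothetical examples.

from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    cores: int = 24
    ram_gb: int = 256
    flash_tb: float = 1.6
    hdd_tb: float = 8.0

def cluster_totals(node_count: int, node: Node = Node()) -> dict:
    return {
        "cores": node_count * node.cores,
        "ram_gb": node_count * node.ram_gb,
        "flash_tb": round(node_count * node.flash_tb, 1),
        "hdd_tb": round(node_count * node.hdd_tb, 1),
    }

for n in (3, 8, 16):   # grow the cluster a few nodes at a time
    print(f"{n:2d} nodes -> {cluster_totals(n)}")
```

The contrast with a three-tier design is that compute and storage no longer have to be sized and upgraded as separate silos.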

HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications? In this paper we will take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up against tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step – a vision of the product beyond HCI. With this concept they plan to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V, and Nutanix’s own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability unique across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15