Taneja Group | SLA

Items Tagged: SLA

news

Compliance & The Cloud

Know Industry Regulations, Establish SLAs & Protect Your Data

  • Premiered: 03/04/13
  • Author: Taneja Group
  • Published: PCToday.com
Topic(s): Cloud, compliance, Data protection, Christine Taylor, SLA, Storage
news

Cloud insurance: How much protection does it provide?

You have insurance for your house, your car and maybe your boat. Why not buy insurance for your cloud storage?

  • Premiered: 08/22/13
  • Author: Taneja Group
  • Published: Tech Target: Search Cloud Storage
Topic(s): Cloud Storage, Cloud, Storage, Cloud Security, SLA
news / Blog

Backup Balancing Act: Growing Pains with Data Backup and the Cost of Migration

Backup these days is, as I view it, an incredibly interesting topic. Data protection has always been a challenging operation for the business. It is one of the most physical of IT systems, requiring tremendous amounts of data movement and storage - moving and storing the same bits over and over again, with such demands for speed and capacity that at many points in the history of IT the task has seemed like mission impossible.

  • Premiered: 04/03/14
  • Author: Taneja Group
Topic(s): Data protection, Storage, Cloud, Backup, Disaster Recovery, DR, Riverbed, Tintri, VMWare, VSAN, SSD, Flash, Archive, Disk, SLA, Tape
news

In the Cloud Era, The Era of Convergence Is Upon Us

The era of IT infrastructure convergence is upon us. Every major vendor has some type of offering in this category. Startups and smaller players are also "talking" convergence. But what exactly is convergence, and why are all the vendors so interested in being included in this category? Below we explain the history of convergence, what it is and what it is not, what benefits accrue from such systems, who the players are, and who is leading the pack in true convergence.

  • Premiered: 06/10/14
  • Author: Arun Taneja
  • Published: Virtualization Review
Topic(s): Arun Taneja, Virtualization Review, Virtualization, IT infrastructure, Converged Infrastructure, convergence, Networking, HDD, WANO, WAN Optimization, Data Deduplication, Deduplication, Hybrid Array, Hybrid, Cloud Computing, Cloud Storage, Storage, Cloud, Hadoop, Storage Virtualization, Compression, RTO, RPO, DR, Disaster Recovery, Remote Office, ROBO, Compute, hyperconvergence, Server Virtualization
news

Six questions to ensure smooth retrieval of cloud-based data

Analyst Arun Taneja explains why the first step in moving data to the cloud is to determine how easy it will be to extract it.

  • Premiered: 12/18/14
  • Author: Arun Taneja
  • Published: Tech Target: Search Cloud Storage
Topic(s): Cloud, Arun Taneja, data, Storage, SLA, Amazon S3, Amazon EC2, Amazon, VMWare, ESX, Unitrends, HotLink, RiverMeadow, Encryption, AWS, Amazon Web Services, Public Cloud
Profiles/Reports

Making your Virtual Infrastructure Non-Stop: Making availability efficient with Symantec products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for deploying applications and services. Some applications and services are classified as business critical. These business-critical applications require a higher level of resilience and protection to minimize the impact on a business's operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today's datacenter. Various methods offer different levels of protection to maintain application uptime, ranging from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, mechanisms to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
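
The distinction is easy to see in monitoring terms: a hypervisor-level heartbeat only proves the VM is powered on and reachable, while an application-level probe verifies the service itself still answers. Below is a minimal illustrative sketch of that gap (the ports, probe protocol, and thresholds are hypothetical; this is not how Symantec's products are implemented):

```python
import socket
import time

def vm_heartbeat(host: str) -> bool:
    """VM-level liveness: is the guest reachable at all? (stand-in for a
    hypervisor heartbeat; port 22 is an arbitrary choice here)."""
    try:
        with socket.create_connection((host, 22), timeout=2):
            return True
    except OSError:
        return False

def app_health(host: str, port: int) -> bool:
    """Application-level health: does the service accept a connection and
    answer a probe? (the probe protocol is hypothetical)."""
    try:
        with socket.create_connection((host, port), timeout=2) as conn:
            conn.sendall(b"PING\r\n")
            return bool(conn.recv(64))
    except OSError:
        return False

def monitor(host: str, app_port: int, max_failures: int = 3) -> None:
    failures = 0
    while True:
        if vm_heartbeat(host) and not app_health(host, app_port):
            # The VM is up but the application is down: exactly the gap
            # that VM-centric HA misses. Escalate to app-level remediation.
            failures += 1
            if failures >= max_failures:
                print("restart application / fail over the service")
                failures = 0
        else:
            failures = 0
        time.sleep(10)
```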

With application and service availability in mind, companies such as Symantec have stepped in to provide availability and resiliency at the application layer. Focusing on improving how VMware delivers application availability, Symantec has developed a set of solutions to meet the high availability and disaster recovery requirements of business-critical applications. These solutions include Symantec ApplicationHA (developed in partnership with VMware) and Symantec Cluster Server powered by Veritas (VCS). Both products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 04/13/15
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems - the integration of compute, networking, and storage - have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the effort and time to custom-build its infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads, such as Private Cloud, Big Data, Virtualization, Database and Desktop Virtualization (VDI).

Traditionally, these converged infrastructure systems have been built on a three-tier architecture, where compute, networking and storage, integrated at the rack level, gave businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. Recently, a more modular approach to convergence has emerged, which we term hyper-convergence. With hyper-convergence, the three-tier architecture is collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage with advanced data services all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
news

In the Cloud Era, The Era of Convergence Is Upon Us

What exactly is convergence and what is making vendors scramble to get included in this category?

  • Premiered: 06/10/15
  • Author: Arun Taneja
  • Published: Virtualization Review
Topic(s): IT infrastructure, convergence, Storage, HDD, WANO, WAN Optimization, Storage Virtualization, Virtualization, Cloud, Data Deduplication, Deduplication, Compression, Hybrid Array, Hybrid, Backup, Hadoop, Cloud Storage, Hybrid Cloud Storage, Hybrid Cloud, Capacity, Disaster Recovery, DR, RTO, RPO, hyperconvergence, hyperconverged, VCE, VMWare, Cisco, EMC
Profiles/Reports

DP Designed for Flash - Better Together: HPE 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now triggering IT and business to rethink many practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you take into account that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.
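
For concreteness, RPO bounds how much data loss is tolerable (the age of the newest recovery point at failure time), while RTO bounds how long restoration may take. A toy compliance check makes the two objectives explicit (all thresholds below are hypothetical, chosen to reflect the tighter expectations of flash-hosted tier-1 workloads):

```python
from datetime import datetime, timedelta

# Hypothetical SLA targets for a tier-1 application on an all-flash array.
RPO = timedelta(minutes=15)   # maximum tolerable data loss
RTO = timedelta(minutes=30)   # maximum tolerable downtime

def rpo_compliant(last_recovery_point: datetime, now: datetime) -> bool:
    """Worst-case data loss if a failure happened right now."""
    return now - last_recovery_point <= RPO

def rto_compliant(measured_restore: timedelta) -> bool:
    """Did the most recent restore drill finish within the RTO?"""
    return measured_restore <= RTO

now = datetime.now()
print(rpo_compliant(now - timedelta(minutes=5), now))   # True: snapshot 5 min ago
print(rto_compliant(timedelta(minutes=42)))             # False: drill took too long
```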

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HPE 3PAR StoreServ Storage, HPE StoreOnce System backup appliances, and HPE Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 06/06/16
Profiles/Reports

Converged IT Infrastructure's Place in the Internet of Things

All of the trends leading toward the worldwide Internet of Things (IoT) - ubiquitous embedded computing, mobile, organically distributed nodes, and far-flung networks tying them together - are also arriving in full force in the IT data center. These solutions take the form of converged and hyperconverged modules of IT infrastructure. Organizations adopting such solutions gain a simpler building-block way to architect and deploy IT, and forward-thinking vendors now have a unique opportunity to profit from subscription services that, while delivering superior customer insight and support, also help build a trusted-advisor relationship that promises an ongoing “win-win” scenario for both the client and the vendor.

There are many direct (e.g., revenue-impacting) and indirect (e.g., customer satisfaction) benefits mentioned in this report, but the key enabler of this opportunity is establishing an IoT-scale data analysis capability. Specifically, by approaching converged and hyperconverged solutions as IoT “appliances,” and harvesting low-level component data on utilization, health, configuration, performance, availability, faults, and other endpoint metrics across the full worldwide deployment of appliances, a vendor can analyze the resulting data stream with great profit for both the vendor and each individual client. Top-notch analytics can feed support, drive product management, assure sales/account control, inform marketing, and even provide a direct revenue opportunity (e.g., offering a gold level of service to the end customer).
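
Concretely, the kind of per-appliance record such a pipeline would harvest, and the fleet-wide roll-up it would feed, might look like the following sketch (a hypothetical schema and field names; Glassbeam's actual data model is not documented here):

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ApplianceTelemetry:
    """One sample from one converged appliance (hypothetical schema)."""
    appliance_id: str
    cpu_util: float                              # 0.0 - 1.0
    disk_health: float                           # 0.0 - 1.0 composite score
    faults: list[str] = field(default_factory=list)

def fleet_summary(samples: list[ApplianceTelemetry]) -> dict:
    """Fleet-wide roll-up of the sort that feeds support and product management."""
    return {
        "avg_cpu": mean(s.cpu_util for s in samples),
        "faulty": [s.appliance_id for s in samples if s.faults],
    }

samples = [
    ApplianceTelemetry("cs-0001", 0.62, 0.99),
    ApplianceTelemetry("cs-0002", 0.91, 0.73, ["disk_predictive_failure"]),
]
print(fleet_summary(samples))
```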

An IoT data stream from a large pool of appliances is almost literally the definition of “big data” - non-stop machine data at large scale with tremendous variety (even within a single converged solution stack) - and operating and maintaining such a big data solution requires a significant amount of data wrangling, data science, and ongoing maintenance to stay current. Unfortunately, this means IT vendors looking to position IoT-oriented solutions may have to invest heavily in cash, staff, and resources to build out and support such analytics. For many vendors, especially those with a varied or complex convergence solution portfolio, or channel partners building solutions from third-party reference architectures, these big data costs can be prohibitive. However, failing to provide these services may create significant friction in selling and supporting converged solutions to clients who now expect to manage IT infrastructure as appliances.

In this report, we’ll look at the convergence and hyperconvergence appliance trend and the rising customer expectations for such solutions. In particular, we’ll see how IT appliances need to be treated as complete, commoditized products, as ubiquitous and subject to the same end-user expectations as emerging household IoT solutions. In this context, we’ll look at Glassbeam’s unique B2B SaaS platform, SCALAR, which converged and hyperconverged IT appliance vendors can immediately adopt to provide an IoT machine data analytics solution. We’ll see how Glassbeam can help vendors differentiate among competing solutions, build trusted client relationships, better manage and support clients, and even generate additional direct revenue.

Publish date: 08/18/15
news / Blog

New Startup Formation Data Systems is Attempting to Redefine Enterprise Storage

Earlier this month I was briefed by a new startup called Formation Data Systems. Formation Data Systems is led by Mark Lewis, a former EMC executive VP, who has assembled a strong team of executives from industry-leading companies. The company's vision is to redefine the enterprise storage market by fundamentally shifting to a software-defined, hyper-scale storage platform called FormationOne.

Profiles/Reports

Making Your Virtual Infrastructure Non-Stop with Veritas Products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth. They are the lifeblood of a business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for deploying applications and services. Some applications and services are classified as business critical. These business-critical applications require a higher level of resilience and protection to minimize the impact on a business's operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today's datacenter. Various methods offer different levels of protection to maintain application uptime, ranging from minimizing downtime at the application level to virtual machine (VM) recovery to physical system recovery. Prior to virtualization, mechanisms to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and created business continuity features in vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.

With application and service availability in mind, companies such as Veritas have stepped in to provide availability and resiliency at the application layer. Focusing on improving how VMware delivers application availability, Veritas Technologies LLC has developed a set of solutions to meet the high availability and disaster recovery requirements of business-critical applications. These solutions include Veritas ApplicationHA (developed in partnership with VMware) and Veritas InfoScale Availability (formerly Veritas Cluster Server). Both products have been enhanced to work in a VMware-based virtual infrastructure environment.

Publish date: 09/04/15
Profiles/Reports

Unitrends Enterprise Backup 9.0: Simple and Powerful Data Protection for the Whole Data Center

Backup and recovery, replication, recovery assurance: all are more crucial than ever in light of massively growing data. But complexity has grown right alongside expanding data. Data centers and their managers strain under the burdens of legacy physical data protection, fast-growing virtual data requirements, backup decisions across local, remote and cloud sites, and the need for specialist IT staff to administer complex data protection processes.

In response, Unitrends has launched a compelling new version of Unitrends Enterprise Backup (UEB): Release 9.0. Its completely revamped user interface and experience significantly reduce management overhead and let even new users easily perform sophisticated functions from the redesigned dashboard. And its key capabilities are second to none for modern data protection in physical and virtual environments.

One of UEB 9.0's differentiating strengths (indeed, of the entire Unitrends product line) is that, in today's increasingly virtualized world, it still offers deep support for physical as well as virtual environments. This is more important than it might at first appear. There is a huge installed base of legacy equipment, much of which has still not been moved into a virtual environment; yet it all needs to be protected. Within this legacy base, many mission-critical applications still run on physical servers and remain high-priority protection targets. In these environments, many admins are forced to purchase specialized tools for protecting virtual environments separately from physical ones, or to use point backup products for specific applications. Both options carry extra costs: buying multiple applications that do essentially the same thing, and hiring multiple people trained to use them.

This is why no matter how virtualized an environment is, if there is even one critical application that is still physical, admins need to strongly consider a solution that protects both. This gives the data center maximum protection with lower operating costs, since they no longer need multiple data protection packages and trained staff to run them.

This is where Unitrends steps in. With its rich capabilities and intuitive interface, UEB 9.0 protects data throughout the data center and does not require IT specialists. This Product in Depth assesses Unitrends Enterprise Backup 9.0, the latest version of Unitrends' flagship data protection platform. We put the new user interface through its paces to see just how intuitive it is, what information it provides, and how many clicks it takes to perform basic operations. We also did a deep dive into the functionality of the backup engine itself, some of which is carried over from earlier versions and some of which is new in 9.0.

Publish date: 09/17/15
Profiles/Reports

Free Report - Better Together: HP 3PAR StoreServ Storage and StoreOnce System Opinion

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now triggering IT and business to rethink many practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you take into account that, at raw cost-per-capacity levels, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you should protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays but even more so when you consider all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate? Or is there a better way to utilize these expensive assets and yet achieve far superior results? The short answer is yes to both.

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HP 3PAR StoreServ Storage, HP StoreOnce System backup appliances, and HP StoreOnce Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 09/25/15
news

VMware vSphere 6 release good news for storage admins

VMware's vSphere 6 release shows that the vendor is aiming for a completely software-defined data center with a fully virtualized infrastructure.

  • Premiered: 10/05/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VMware vSphere, vSphere, vSphere 6, software-defined, Software-Defined Data Center, SDDC, Virtualization, virtualized infrastructure, VSAN, VVOLs, VMware VVOLs, Virtual Volumes, VMotion, high availability, Security, scalability, Data protection, replication, VMware PEX, Fault Tolerance, Virtual Machine, VM, Provisioning, Storage Management, SLA, 3D Flash, FT, vCPU, CPU
Profiles/Reports

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized Web 2.0 companies are taking great advantage of foundational big data solutions they created themselves (e.g., Hadoop), most traditional enterprise IT shops are still thinking about how to practically deploy their first business-impacting big data applications - or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may yet be just out of reach for most organizations - especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to manage optimally in production - especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if there weren't a real skills and expertise gap for most shops, there still isn't any practical way for additional experts to tweak and tune mixed Hadoop workload environments to meet production performance SLAs.
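
Hadoop does expose coarse, after-the-fact resource accounting through the YARN ResourceManager REST API, which is enough to see who is consuming what but not to enforce anything. The rough sketch below polls that API and flags resource-heavy applications (the ResourceManager address and budget thresholds are placeholders; field names follow the YARN REST API and should be verified against your Hadoop version):

```python
import requests

RM = "http://resourcemanager.example.com:8088"   # placeholder address

def running_apps():
    """Fetch currently running applications from the ResourceManager."""
    r = requests.get(f"{RM}/ws/v1/cluster/apps",
                     params={"states": "RUNNING"}, timeout=5)
    r.raise_for_status()
    return (r.json().get("apps") or {}).get("app", [])

def flag_hogs(mb_limit=256_000, vcore_limit=64):
    """Report apps exceeding an arbitrary resource budget. This is
    visibility only -- nothing here throttles the offending job."""
    for app in running_apps():
        if app["allocatedMB"] > mb_limit or app["allocatedVCores"] > vcore_limit:
            print(f'{app["id"]} ({app["queue"]}): '
                  f'{app["allocatedMB"]} MB, {app["allocatedVCores"]} vcores')

if __name__ == "__main__":
    flag_hogs()
```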

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to interactive user queries and “real-time” business process applications. Live performance matters as much in big data now as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what/when for troubleshooting, chargeback and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.


Publish date: 12/17/15
news

Concurrent app management tools work on Hadoop and Spark

If Hadoop and Spark are to sneak into the enterprise, they will need to be manageable. With Driven, Concurrent Inc. takes a stab at the problem.

  • Premiered: 12/09/15
  • Author: Taneja Group
  • Published: TechTarget: Search Data Management
Topic(s): Hadoop, Spark, Driven, Concurrent, manageability, Big Data, Performance, Performance Management, Mike Matchett, Hive, MapReduce, SLA, service level agreement, software, high-fidelity, HiFi, cluster, Pepperdata, Oracle, IBM, CA
news

Mobile gaming company plays new Hadoop cluster management card

Chartboost, which operates a platform for mobile games, turned to new cluster management software in an effort to overcome problems in controlling the use of its Hadoop processing resources.

  • Premiered: 01/05/16
  • Author: Taneja Group
  • Published: TechTarget: Search Data Management
Topic(s): Chartboost, mobile, cluster, Cluster Management, Hadoop, processing, data processing, analytics, Big Data, MapReduce, Hive, Spark, Optimization, Cloudera, AWS, Amazon, Cloud, YARN, Pepperdata, Memory, CPU, Application, Concurrent, SLA, service-level agreement, HBase, application performance, application performance management, Mike Matchett
Profiles/Reports

Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps

Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were considered the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.

The initial approach to building virtualized environments that can handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, where compute, storage and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.

Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.

HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturing, are these products ready to tackle enterprise tier-1 applications? In this paper we take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up against tier-1 application workload requirements.

Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept it plans to make the entire virtualized infrastructure invisible to IT consumers, encompassing all three of the popular hypervisors: VMware, Hyper-V and Nutanix's own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a concept unique across converged systems and HCI alike. This Solution Profile focuses on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. In the most recent release, we found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.

Publish date: 11/30/15
news

Hybrid cloud implementation preparation checklist

Get a complete listing of the resources, use cases and requirements that need to be taken into account before starting a hybrid cloud deployment project.

  • Premiered: 02/29/16
  • Author: Jeff Byrne
  • Published: TechTarget: Search Cloud Storage
Topic(s): Hybrid Cloud, Cloud Storage, Hybrid Cloud Storage, Storage, Disaster Recovery, Disaster Recovery as a Service, DRaaS, SLA, QoS, Storage Performance, Availability, Security, VPN, IaaS, CAPEX, SDS, software-defined storage, Storage Management