
Unitrends Enterprise Backup 9.0: Simple and Powerful Data Protection for the Whole Data Center

Backup and recovery, replication, recovery assurance: all are more crucial than ever in light of massive data growth. But complexity has grown right alongside the data. Data centers and their managers strain under the burdens of legacy physical data protection, fast-growing virtual data requirements, backup decisions across local, remote and cloud sites, and the need for specialist IT staff to administer complex data protection processes.

In response, Unitrends has launched a compelling new version of Unitrends Enterprise Backup (UEB): Release 9.0. Its completely revamped user interface and experience significantly reduce management overhead and let even new users easily perform sophisticated functions from the redesigned dashboard. And its key capabilities are second to none for modern data protection in physical and virtual environments.

One of UEB 9.0’s differentiating strengths (shared by the entire Unitrends product line) is that in today’s increasingly virtualized world, it still offers deep support for physical as well as virtual environments. This is more important than it might at first appear. There is a huge installed base of legacy equipment, much of which has still not been moved into a virtual environment; yet it all needs to be protected. Within this legacy base, many mission-critical applications still run on physical servers and remain high-priority protection targets. In these environments, many admins are forced to purchase specialized tools for protecting virtual environments separately from physical ones, or to use point backup products for specific applications. Both options carry extra costs: buying multiple applications that do essentially the same thing, and hiring multiple people trained to use them.

This is why, no matter how virtualized an environment is, if even one critical application is still physical, admins should strongly consider a solution that protects both. This gives the data center maximum protection at lower operating cost, since it no longer needs multiple data protection packages and trained staff to run them.

This is where Unitrends steps in. With its rich capabilities and intuitive interface, UEB 9.0 protects data throughout the data center without requiring IT specialists. This Product in Depth assesses Unitrends Enterprise Backup 9.0, the latest version of Unitrends’ flagship data protection platform. We put the new user interface through its paces to see just how intuitive it is, what information it provides and how many clicks it takes to perform some basic operations. We also did a deep dive into the functionality of the backup engine itself, some of which carries over from earlier versions and some of which is new for 9.0.

Publish date: 09/17/15

Making Your Virtual Infrastructure Non-Stop with Veritas Products

All businesses have a core set of applications and services that are critical to their ongoing operation and growth; they are the lifeblood of the business. Many of these applications and services run in virtual machines (VMs), as over the last decade virtualization has become the de facto standard in the datacenter for deploying applications and services. Those classified as business critical require a higher level of resilience and protection to minimize the impact on the business’s operation if they become inoperable.

The ability to quickly recover from an application outage has become imperative in today’s datacenter. Various methods offer different levels of protection to maintain application uptime, ranging from minimizing downtime at the application level to VM recovery to physical system recovery. Prior to virtualization, mechanisms to protect physical systems were based on secondary hardware and redundant storage systems. However, as noted above, today most systems have been virtualized. The market leader in virtualization, VMware, recognized the importance of availability early on and built business continuity features into vSphere such as vMotion, Storage vMotion, vSphere Replication, vCenter Site Recovery Manager (SRM), vSphere High Availability (HA) and vSphere Fault Tolerance (FT). These features have indeed increased the uptime of applications in the enterprise, yet they are oriented toward protecting the VM. The challenge, as many enterprises have discovered, is that protecting the VM alone does not guarantee uptime for applications and services. Detecting and remediating VM failure falls short of what is truly vital: detecting and remediating application and service failures.
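To make that gap concrete, the minimal Python sketch below contrasts a VM-level liveness check with an application-level health probe. This is not Veritas or VMware code; the host name and ports are hypothetical, and the SSH port merely stands in for a hypervisor-level heartbeat. The point is that a VM can pass the first check while failing the second.

```python
import socket

def vm_is_running(host: str, timeout: float = 2.0) -> bool:
    """VM-level check (stand-in for a hypervisor heartbeat): the guest
    responds on the network, so VM-oriented HA considers it healthy."""
    try:
        socket.create_connection((host, 22), timeout=timeout).close()
        return True
    except OSError:
        return False

def app_is_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Application-level check: the service itself must accept a
    connection on its own port, not just the VM being powered on."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "app-vm.example.com"  # hypothetical VM running a database
    if vm_is_running(host) and not app_is_healthy(host, 5432):
        # This is the gap described above: the VM looks fine,
        # but the application inside it is down and needs remediation.
        print("VM up, application down: restart the service, not the VM")
```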

With application and service availability in mind, companies such as Veritas have stepped in to provide availability and resiliency at the application layer. Focusing on improving how VMware delivers application availability, Veritas Technologies LLC has developed a set of solutions to meet the high availability and disaster recovery requirements of business critical applications. These solutions include Veritas ApplicationHA (developed in partnership with VMware) and Veritas InfoScale Availability (formerly Veritas Cluster Server). Both products have been enhanced to work in a VMware-based virtual infrastructure.

Publish date: 09/04/15

Data Protection Designed for Flash - Better Together: HP 3PAR StoreServ Storage and StoreOnce System

Flash technology has burst onto the IT scene within the past few years with a vengeance. Initially seen simply as a replacement for HDDs, flash is now prompting IT and business to rethink practices that have been well established for decades. One of those is data protection. Do you protect data the same way when it sits on flash as you did when HDDs ruled the day? How do you account for the fact that, at raw cost per capacity, flash is still more expensive than HDDs? Do data deduplication and compression technologies change how you work with flash? Does the fact that flash is most often injected to alleviate severe application performance issues require you to rethink how you protect, manage, and move this data?

These questions apply across the board when flash is injected into storage arrays, but even more so with all-flash arrays (AFAs), which are often associated with the most mission-critical applications an enterprise possesses. The expectations for application service levels and data protection recovery time objectives (RTOs) and recovery point objectives (RPOs) are vastly different in these environments. Given this, are existing data protection tools adequate, or is there a better way to utilize these expensive assets and achieve far superior results? The short answer: there is a better way, and it does deliver superior results.
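To make the RPO arithmetic concrete, here is a minimal sketch using hypothetical schedules (not figures from HP or this article) of how the worst-case data-loss window follows from protection frequency. The tighter RPOs expected around all-flash arrays demand correspondingly frequent protection points.

```python
def worst_case_rpo_minutes(interval_minutes: float, transfer_minutes: float = 0.0) -> float:
    """Worst-case data loss window: a failure just before the next
    protection point loses one full interval, plus any time the
    copy takes to become recoverable."""
    return interval_minutes + transfer_minutes

# Hypothetical cadences: a traditional nightly backup vs. frequent
# array-based snapshots of the kind flash-integrated tooling makes practical.
print(worst_case_rpo_minutes(24 * 60, 60))  # nightly backup: up to 25 hours of loss
print(worst_case_rpo_minutes(15))           # 15-minute snapshots: up to 15 minutes
```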

In this Opinion piece we will focus on answering these questions broadly through the data protection lens. We will then look at a specific case of how data protection can be designed with flash in mind by considering the combination of flash-optimized HP 3PAR StoreServ Storage, HP StoreOnce System backup appliances, and HP StoreOnce Recovery Management Central (RMC) software. These elements combine to produce an exceptional solution that meets the stringent application service requirements and data protection RTOs and RPOs that one finds in flash storage environments while keeping costs in check.

Publish date: 08/17/15

Redefining the Economics of Enterprise Storage (2015 Update)

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptional price, putting enterprise storage out of reach for many use cases and customers. Most recently Dell introduced a new, small-footprint storage array, the Dell Storage SC Series powered by Compellent technology, that brings proven Dell Compellent technology, built on Intel hardware, to an all-new form factor. The SC4020 is also the densest Compellent product ever: an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units. While the Intel-powered SC4020 has more modest scalability than current Compellent products, this array marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open up Dell Compellent storage technology to an entire market of smaller customers as well as large-customer use cases where enterprise storage was previously too expensive.

Publish date: 06/30/15

The Promise of VM-Centric Storage and VVols: Tintri VMstore Delivers the Future Promise Now

The din surrounding VMware vSphere Virtual Volumes (VVols) is deafening. It started in 2011, when VMware announced the VVols concept and the storage industry reacted with enthusiasm, and culminated with their introduction as part of the vSphere 6 release in April 2015. Viewed simply, VVols is an API that lets supporting storage arrays provision and manage storage at the granularity of a VM, rather than at the LUN, volume or mount-point level as they do today. Without question, VVols is an incredibly powerful concept and will fundamentally change the interaction between storage and VMs in a way not seen since server virtualization first came to market. No surprise, then, that every storage vendor in the market is feverishly building in VVols support and competing on the superiority of its implementation.
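To see what per-VM granularity means in practice, consider the conceptual Python sketch below. It models only the policy mapping and is not the actual VVols/VASA API; the VM names and policy fields are hypothetical. Before VVols, an array applied one policy to a LUN shared by many VMs; with VVols, each VM (indeed, each virtual disk) can carry its own policy that the array acts on directly.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    snapshots: bool
    replication: bool

# Pre-VVols model: the array sees one LUN; every VM on the datastore
# inherits the same policy, whether it needs it or not.
lun_policy = {"LUN-42": Policy(snapshots=True, replication=True)}
vms_on_lun42 = ["web-01", "web-02", "db-01"]  # hypothetical VMs

# VVols model, conceptually: storage objects exist per VM / virtual disk,
# so the array snapshots or replicates exactly the VMs that require it.
vvol_policies = {
    "db-01":  Policy(snapshots=True,  replication=True),   # business critical
    "web-01": Policy(snapshots=True,  replication=False),
    "web-02": Policy(snapshots=False, replication=False),  # stateless
}

for vm, policy in vvol_policies.items():
    if policy.replication:
        print(f"replicate {vm} individually, not the whole LUN")
```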

Yet one storage player, Tintri, has been delivering products with VM-centric features for four years without the benefit of VVols. How can this be so? And what does it mean for Tintri now that VVols are here? To do justice to these questions, we will briefly look at what VVols are and how they work, then dive into how Tintri has delivered the benefits of VVols for several years. We will also look at what a Tintri buyer gets today and how Tintri plans to integrate VVols. Read on…

Publish date: 06/26/15

Journey Towards Software Defined Data Center (SDDC)

While IT has always had to respond to increasing business demands, competitive requirements are now forcing IT to do so with less: less investment in new infrastructure and fewer staff to manage the increasing complexity of many enterprise solutions. And as the pace of business accelerates, those demands include the ability to change services quickly. Unfortunately, older technologies can require months, not minutes, to implement non-trivial changes. Given these polarizing forces, the motivation for the Software Defined Data Center (SDDC), where services can be instantiated as needed, changed as workloads require, and retired when the need is gone, is easy to understand.

The vision of the SDDC promises the benefits needed to succeed: flexibility, efficiency, responsiveness, reliability and simplicity of operation, and does so, seemingly paradoxically, with substantial cost savings. The initial steps toward the SDDC clearly come from server virtualization, which provides many of the desired benefits. The fact that it is already broadly deployed and hosts between half and two-thirds of all server instances means existing data centers have a strong base to build on. Of the three major pillars within the data center, the compute pillar is commonly understood to be furthest along, thanks to the benefits of server virtualization.

The key to gaining the lion’s share of the remaining benefits lies in addressing the storage pillar. This is required not only to reap the same advantages through storage virtualization that have become expected in the server world, but also to allow for greater adoption of server virtualization itself. The applications that so far have resisted migration to the hypervisor world have mostly done so because of storage issues. The next major step on the journey to the SDDC has to be to virtualize the entire storage tier and to move the data from isolated, hardware-bound silos where it currently resides into a flexible, modern, software-defined environment.

While the destination is relatively clear, how to get there is critical, because a business cannot exist without its data. There can be no downtime or data loss. Furthermore, just as one doesn’t virtualize every server at once (unless one has the luxury of a green-field deployment with no existing infrastructure and workloads to worry about), one must plan a prioritized migration from the old to the new. And finally, the cost of moving into the virtualized storage world is a major, if not the primary, consideration. Despite the business benefits to be derived, if one cannot leverage existing infrastructure investments, it would be hard to justify a move to virtualized storage. To be clear, we believe virtualized storage is a prerequisite for Software Defined Storage (SDS).

In this Technology Brief we first look at the promise of the SDDC, then focus on SDS and the path to get there. We then look at IBM SAN Volume Controller (SVC), the granddaddy of storage virtualization. SVC initially came to market as a heterogeneous virtualization solution and was later extended to homogeneous storage virtualization, as in the IBM Storwize family. It is now destined to play a much more holistic role for IBM as an important piece of the overall Spectrum Storage program.
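As a conceptual illustration of what heterogeneous storage virtualization means (a hedged sketch of the general idea, not SVC’s actual implementation or API), the Python fragment below models a virtualization layer that presents one uniform virtual volume to hosts while carving its extents from arrays of different vendors. The array names and sizes are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class BackendArray:
    name: str      # hypothetical physical arrays from different vendors
    free_gb: int

@dataclass
class VirtualVolume:
    name: str
    size_gb: int
    extents: list  # (array_name, gb) pairs the volume is carved from

def provision(volume_name: str, size_gb: int, pool: list) -> VirtualVolume:
    """Carve a virtual volume out of whichever backend arrays have space.
    The host sees one uniform volume; the physical layout is hidden,
    which is the essence of storage virtualization."""
    extents, remaining = [], size_gb
    for array in pool:
        take = min(array.free_gb, remaining)
        if take:
            array.free_gb -= take
            extents.append((array.name, take))
            remaining -= take
        if remaining == 0:
            return VirtualVolume(volume_name, size_gb, extents)
    raise RuntimeError("pool exhausted")

# Hypothetical heterogeneous pool: capacity from two different vendors' arrays.
pool = [BackendArray("vendorA-array", 500), BackendArray("vendorB-array", 800)]
vol = provision("app-data", 600, pool)
print(vol.extents)  # [('vendorA-array', 500), ('vendorB-array', 100)]
```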

Publish date: 06/17/15