Includes Backup/Recovery, Archiving, DPM, VTL, CDP, Data De-duplication, DRM.
Data is the lifeblood of an enterprise. Yet for the past two decades data has been protected in essentially the same fashion: backed up to tape and shipped offsite. This method alone is no longer adequate, and a spate of new technologies has become available in the last five years. These technologies are already transforming how data is protected, how long it is kept online, and how it is archived. Recovery Management has emerged as a new discipline focused on recovering data rather than merely copying it. New compliance requirements are effectively forcing companies of all sizes to upgrade their data protection infrastructures or face huge fines. The pace of innovation in this space is torrid. Taneja Group covers this space from end to end and has defined many of the new categories that are now considered the norm. The analysts who cover this space have deep industry backgrounds in developing and marketing these technologies.
In this Technology Validation, we set out to examine StoreVirtual VSA, and through comparison to another leading virtual storage appliance (VMware’s vSphere Storage Appliance – VMware VSA) evaluate the effectiveness of StoreVirtual VSA’s architecture in enabling superior, primary-workload-ready storage in the virtual infrastructure. With an eye on ease of use, efficiency, and flexibility, we put StoreVirtual VSA and VMware vSphere Storage Appliance through a detailed examination that included both a review of functionality and a hands-on lab examination of performance, scalability, resiliency, and ease of use.
What did we find? Clearly, not every virtual storage appliance is created equal. While VMware’s vSphere Storage Appliance was a step forward in easing storage complexity for SMB customers, VMware aimed its VSA at replacing storage outright, essentially turning its hypervisors into storage. As we’ll observe in this report, the consequence is that the VMware VSA consumes all local storage and makes the hypervisor-storage pairing relatively inflexible – a compromise that is highlighted in a comparison with StoreVirtual VSA.
Collaboration is a huge concept; even narrowed down to enterprise file collaboration (EFC), it is still a big undertaking. Many vendors use “collaboration” in their marketing materials, yet they mean many different things by it, ranging from simple business interaction to sophisticated groupware to wide-scale data sharing and syncing. The result is a good deal of market confusion.
Frankly, vendors selling file collaboration into the enterprise cannot afford massive customer confusion, because selling file collaboration into the enterprise is already an uphill battle. First, customers – business end-users – are resistant to giving up their Dropbox and Dropbox-like file sharing applications. As far as the users are concerned, their sharing works just fine between their own devices and small teams.
IT is very concerned about this level of consumer-level file sharing and if they are not, they should be. But IT faces a battle when it attempts to wean thousands of end-users off of Dropbox on the users’ personal devices. There must be a business advantage and clear usability for users who are required to adopt a corporate file sharing application on their own device.
IT must also have good reasons to deploy corporate file sharing using the cloud. From their perspective, the Dropboxes of the world are fueling the BYOD (Bring Your Own Device) phenomenon. They need to replace consumer-level file collaboration applications with an enterprise-scale application and its robust management console. However, while IT may be anxious about BYOD and insecure file sharing, it is not usually the most pressing item on their full agenda. They need to understand how an EFC solution can solve a very large problem, and why they need to take advantage of the solution now.
What is the solution? Enterprise file collaboration (EFC) with: 1) high scalability, 2) security, 3) control, 4) usability, and 5) compliance. In this landscape report we will discuss these five factors and the main customer drivers for this level of enterprise file collaboration.
Finally, we will discuss the leading vendors that offer enterprise file collaboration products and see how they stack up against our definition.
Object storage has long been pigeon-holed as a necessary overhead expense for long-term archive storage, a data purgatory one step before tape or deletion. In our experience, we have seen many IT shops view object storage more as something exotic they have to implement to meet government regulations rather than as a competitive strategic asset that can help their businesses make money.
Normally when companies invest in high-end IT assets like enterprise-class storage, they hope to recoup those investments in big ways, like accelerating the performance of market-competitive applications or efficiently consolidating data centers. Maybe they are even starting to analyze big data to find better ways to run the business. There are far more opportunities to be sure, but these kinds of “money-making” initiatives have been mainly associated with “file” and “block” types of storage – the primary storage commonly used to power databases, host office productivity applications, and build pools of shared resources for virtualization projects. But that’s about to change. If you’ve intentionally dismissed or just overlooked object storage, it is time to take a deeper look. Today’s object storage provides brilliant capabilities for enhancing productivity, creating global platforms and developing new revenue streams.
Object storage has been evolving from its historical second tier data dumping ground into a value-building primary storage platform for content and collaboration. And the latest high performance cloud storage solutions could transform the whole nature of enterprise data storage. To really exploit this new generation of object storage, it is important to understand not only what it is and how it has evolved, but to start thinking about how to harness its emerging capabilities in building net new business.
Reliable storage is critical to the lifeblood of every data-driven business, and operational storage capabilities like non-disruptive scalability, continuous data protection, capacity optimization, and disaster recovery are not just desired, but required. But enterprise-class storage features have long been out of reach of organizations that don't have enterprise-sized budgets, storage experts and large data centers. Instead, they make do with low-end disk arrays or even just a box of disks patched together with a minimal amount of data protection in the form of manual backups. The problem is that disks fail, organizations change, and data continues to grow – organizations that pile up disks under the desktop are courting significant business failure, while those that pay up for traditional arrays or even cloud storage incur significant cost and management overhead.
Having to step up to deliver these advanced storage requirements presents growing organizations with big adoption hurdles, not the least of which is cost, both OPEX and CAPEX. Far too many organizations struggle along with high-risk storage, or feel forced to pour significant energy, cost, and staff time into acquiring, deploying, and operating high-touch storage arrays with layers of complex add-on software. Even larger enterprises with expert storage gurus and big data centers can feel the weight of managing complex SANs for departmental, ROBO, and other practical rubber-meets-road storage scenarios. What’s really needed is a new approach to storage – an affordable, expandable array solution with advanced storage capabilities baked in. Ideally it should be simpler to operate than even setting up a file system on raw drives, and it should be available at a justifiable cost for even small data-driven businesses.
In this solution brief we are going to look at what SMB and departmental storage buyers should both require and expect from storage solutions to meet their business goals, and how traditional mid-market storage based on old technologies can fall short. We will then introduce Exablox’s new OneBlox storage array to highlight how purposefully designing storage from the ground up can lead to a simple but powerful hardware design and software architecture that features built-in high availability, easy scalability, and great data protection. Along the way we’ll see how two real-world users of OneBlox experience its actual benefits, cost effectiveness and true ease of management in their live customer deployments.
In many ways, Information Technology (IT) has become the centerpiece of business operations across the globe. This dynamic is both an opportunity and a threat to IT organizations. On one hand, IT has a very important seat at the table as businesses decide where to invest or deploy new offerings and services. On the other hand, IT organizations now become responsible for ensuring that these business services, and the data that drives them, are always available.
To ensure availability, IT must have a comprehensive business continuity plan in place, especially for critical operations that the business requires. However, business critical services are no longer just a matter of managing a single application or workload running on a solitary server. Instead, business critical services are often sets of interwoven components made up of multiple physical and virtual servers that depend upon one another. Seldom does a business critical application stand alone, or act with complete independence from other systems in the data center.
This complexity introduces challenges and compromises that the business is ill-prepared to understand or recognize. Often, when it comes to business continuity, issues are not recognized until it is too late. Many systems had a more manageable approach to continuity in the physical world. Now, with the agility that virtualization introduces, viewing, controlling and protecting the complete business service – especially when that service is made up of multiple physical and virtual components – becomes a larger challenge. Given that business critical applications run across both physical and virtual infrastructure, IT needs a better capability for viewing and protecting the entire service being delivered to the business.
In this solution brief, we’ll look at what a Business Service is comprised of, and the challenges and options for business continuity across disparate physical and virtual infrastructure.
Deduplication took the market by storm several years ago, and backup hasn’t been the same since. With the ability to eradicate duplicate data in duplication-prone backups, deduplication made it practical to store large amounts of backup data on disk instead of tape. In short order, a number of vendors marched into the market spotlight offering products with tremendous efficiency claims, great throughput rates, and greater tolerance for the too-often erratic throughput of backup jobs that was a thorn in the side for traditional tape. Today, deduplicating backup storage appliances are a common sight in data centers of all types and sizes.
But deduplicating data is a tricky science. It is often not as simple as just finding matching runs of similar data. Backup applications and modifications to data can sprinkle data streams with mismatched bits and pieces, making deduplication much more challenging. The problem is worst for Virtual Tape Libraries (VTLs) that emulate traditional tape. Since they emulate tape, backup applications use all of their traditional tape formatting. Such formatting is designed to compensate for tape shortcomings and allow faster and better application access to data on tape, but it creates noise for deduplication.
The best products on the market recognize this challenge and have built “parsers” for every backup application – technology that recognizes the metadata within the backup stream and enables the backup storage appliance to read around it.
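The effect such a parser has can be sketched in a few lines of Python. This is a deliberately simplified illustration, not ProtecTIER's actual parser or any real tape format: it uses fixed-size chunk hashing, an invented per-session header, and a toy payload to show how metadata that varies between backup runs can shift chunk boundaries and defeat naive deduplication, and how stripping (reading around) the metadata restores the match.

```python
import hashlib

CHUNK = 4096  # fixed-size chunking keeps the sketch simple; real appliances use smarter schemes

def fingerprints(stream: bytes) -> set:
    """The dedup index for a stream: the set of unique chunk hashes."""
    return {hashlib.sha256(stream[i:i + CHUNK]).digest()
            for i in range(0, len(stream), CHUNK)}

def tape_format(data: bytes, session: int, block: int = 65536) -> bytes:
    """Toy 'tape format': a per-session, variable-length header before each block.
    Header content (session id, timestamp) changes between backup runs."""
    hdr = f"HDR session={session} ts={17 * 10 ** session}|".encode()
    return b"".join(hdr + data[i:i + block] for i in range(0, len(data), block))

def strip_headers(stream: bytes) -> bytes:
    """Toy 'parser': recognizes the format and reads around the metadata."""
    return b"".join(blk.split(b"|", 1)[1] for blk in stream.split(b"HDR")[1:])

# 1 MiB of payload in which every 4 KiB chunk is distinct
data = b"".join(i.to_bytes(4, "big") for i in range(262144))

# Two backups of identical data, formatted by two backup sessions
b1, b2 = tape_format(data, 1), tape_format(data, 2)

naive = len(fingerprints(b1) & fingerprints(b2))   # headers shift every boundary: almost no chunks match
parsed = len(fingerprints(strip_headers(b1)) &
             fingerprints(strip_headers(b2)))      # parser restores a full match: 256 of 256 chunks
```

Even though the two backup streams carry byte-for-byte identical data, the naive index finds almost nothing in common; once the metadata is parsed out, every chunk deduplicates.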
In 2012, IBM introduced a parser for IBM’s leading backup application Tivoli Storage Manager (TSM) in their ProtecTIER line of backup storage solutions. TSM has long had a reputation for a noisy tape format. That format enables richer data interaction than many competitors, but it creates enormous challenges for deduplication.
At IBM’s invitation, in November of 2012, Taneja Group Labs put ProtecTIER through its paces to evaluate whether this parser makes a difference for the ProtecTIER family. Our findings: Clearly it does; in our highly structured lab exercise, ProtecTIER looked fully poised to deliver advertised deduplication for TSM environments. In our case, we observed a reasonable 10X to 20X deduplication range for real world Microsoft Exchange data.
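A deduplication ratio like 10X to 20X is simply logical (pre-dedup) bytes over physical (stored) bytes. The helper below and its sample figures are illustrative assumptions, not measurements from the test.

```python
def dedup_ratio(logical_bytes: float, physical_bytes: float) -> float:
    """Deduplication ratio: how many logical bytes each stored byte represents."""
    return logical_bytes / physical_bytes

# Illustrative only: 20 TB of cumulative backup data kept in 1-2 TB on disk
low = dedup_ratio(20e12, 2e12)    # 10.0 -> "10X"
high = dedup_ratio(20e12, 1e12)   # 20.0 -> "20X"
```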