This paper examines CTERA’s storage and data protection solution for large-scale remote and branch offices (ROBOs), and demonstrates its fundamental advantages over alternative approaches, supported by a real-world customer example and a comparative cost assessment.
Shifting Into High Gear with Astute Networks ViSX G4 Flash VM Storage Appliances
Virtualization is a great boon to IT shops for its flexible and cost-efficient infrastructure, which consolidates previously sprawling physical applications and servers. But current hypervisor technologies are still evolving support for high-end performance requirements, especially for shared, external-to-the-server resources like storage. Many IT shops trying to virtualize deeper into their application portfolios after initial successes have run up against frustrating limits on which of their more demanding applications can be effectively virtualized while still delivering good service. Virtualizing I/O-intensive applications remains a significant challenge, especially in IT organizations that build their virtualization environments on traditional storage arrays in order to enable virtual machine migration and provide sufficient data protection.
Most mission-critical applications are I/O intensive, relying on databases or providing end-user communication channels (e.g. email). VDI efforts in particular can become I/O performance constrained even at small scales. If virtualized I/O constraints with traditional storage are limiting the number of VMs per server, or preventing virtual hosting of I/O-intensive apps like databases, email, and VDI, then flash technologies seem to promise a tailor-made high-performance solution. But flash can be an expensive investment to acquire, and hard to deploy and manage in a way that assures a solid ROI. With flash, performance can be bought, but cost-conscious IT organizations must choose wisely or risk implementing expensive and ultimately unsatisfactory or limiting solutions.
An ideal performance solution would drop into an existing virtual environment, currently constrained or otherwise, and accelerate performance without re-architecting the storage layer, without re-implementing data protection, and without creating new burdens for the virtual server administrator. It should supercharge the environment to enable greater VM density, deliver better-than-physical performance for designated apps, and support VDI at scales that make sense for wholesale adoption. In this product profile we will examine in detail the new Astute Networks ViSX G4 VM Storage Appliance to see how it fits this ideal profile: deploying without disruption to truly accelerate virtualization performance, increase VM density, and further the adoption of virtualization across I/O-intensive and mission-critical applications.
Given the number of products for enterprise data protection, one would think there was no better-managed data anywhere. But that would be wrong. Traditional enterprise storage is a crazy quilt of dozens to hundreds of disparate backup and archiving applications, software versions, storage systems, appliances, deduplication, replication, and snapshots, each attempting to solve a piece of the data protection puzzle. Meanwhile, growing data and constant uptime pressure have produced more aggressive backup SLAs. These in turn crash against the twin obstacles of poor integration and sheer storage complexity. Yet meeting those SLAs is critical, as the backup infrastructure protects the crown jewels of the business.
Moreover, backup, considered as a whole, is one of the most expensive infrastructure systems. It faces real fiscal constraints and high expectations of delivering ongoing improvements in Capex and Opex as technology improves. Worse yet, given a complex backup infrastructure made up of many software layers, networks, servers, and storage systems, any improvements must be gradually dropped into place without disruption or incompatibility with existing systems. In the midst of this complexity, many IT organizations are faced with simultaneous demands to do more, faster, and better, and to do it less expensively. These pressures make data protection success hard to come by.
A large part of the recipe for success revolves around the storage foundation behind the backup infrastructure. This is the final resting place of data, and the focal point of nearly all resource contention. It is this contention that can ultimately constrain backup performance and force the organization into expensive steps in pursuit of a resolution. Real enterprise-class backup systems are built to tackle these fundamental challenges both at the time of deployment and into the indefinite future. Such systems are built with enterprise compatibility, superior density, scalability, and management efficiency that translate into both data protection capability and Opex/Capex advantage.
In this Solution Profile, we'll examine the data protection hurdles facing the enterprise given fast data growth and inefficient storage structures, and their impacts on expenses and the viability of the data center. Then we'll turn to how one vendor, Sepaton, has long been advancing its lead in scalability and efficiency to tackle just these challenges.
Future-proofed archives on scale-out architectures
The IT organization has long been challenged to come up with effective strategies for storing important data and content. This challenge is fast becoming much more critical as data takes on more value for ongoing analysis, future reference, and monetizable reuse, while simultaneously becoming surrounded by ever more compliance and regulatory requirements. Yet each aspect of data value comes with an even bigger challenge for IT: delivering effective strategies for sufficient and lasting preservation and access, over a longer period of time than ever before.
Unfortunately, it’s more common to see a revolving door of temporary solutions that must be replaced every few years. These “solutions” stand in direct opposition to crafting long-term strategies, and are also responsible for a web of complexity that creates one of the greatest management and operational costs in today’s enterprise data center. Worse yet, the data management practices that result from this revolving-door storage approach also limit the accessibility and use of growing and aging data. In an age of increasingly critical and value-laden data sets, broken storage strategies stand to break the business.
In this technology profile, we’ll look at the consequences of this broken approach in the face of a monumental shift in the value of data and content. We will consider the challenges facing an organization trying to position itself for better long-term storage and meaningful reuse of large amounts of data. We’ll examine one solution that clearly stands out in the face of these challenges: the HP IBRIX X9730 Storage, which has big data storage and high-value archiving directly in its crosshairs.
Quest NetVault FastRecover continuously captures data and synthetically reconstructs full data sets for every point in time, allowing access to individual files as well as full data sets. NetVault FastRecover leverages the ability of host agents to see into host system interactions. A centralized management interface configures host agents and lets administrators control which server volumes are protected, and which NetVault FastRecover servers (either on the local network or remotely over a WAN) are used.
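To make the synthetic reconstruction idea concrete, below is a minimal Python sketch of the general continuous-data-protection technique: a host agent journals every captured write, and a full image for any point in time is synthesized by replaying the journal over a base copy. All names here are hypothetical illustrations of the concept, not NetVault FastRecover's actual interfaces or implementation.

    # Hypothetical sketch of point-in-time synthesis from a base copy plus a
    # timestamped change journal; illustrates the general technique only.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Change:
        timestamp: float   # when the host agent captured the write
        path: str          # file the write applied to
        data: bytes        # new contents (whole-file writes, for simplicity)

    @dataclass
    class ProtectedVolume:
        base: Dict[str, bytes]                               # initial full copy
        journal: List[Change] = field(default_factory=list)  # captured writes, in time order

        def capture(self, change: Change) -> None:
            # The host agent appends every write to the journal as it happens.
            self.journal.append(change)

        def synthesize(self, point_in_time: float) -> Dict[str, bytes]:
            # Reconstruct a full data set for any point in time by replaying
            # journaled writes, in timestamp order, on top of the base copy.
            image = dict(self.base)
            for change in self.journal:
                if change.timestamp > point_in_time:
                    break
                image[change.path] = change.data
            return image

Restoring an individual file from a given moment is then just a lookup in the synthesized image, which is why the same mechanism can serve both file-level and full-data-set recovery.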
In this technology profile, we’ll take a fresh look at where InfiniBand stands today and its growing role in the next-generation data center. We’ll examine important technology and solution trends where higher-speed switched fabrics are required for optimal performance in several dimensions: solutions like Big Data, web-scale applications, scale-out storage, and virtual I/O for mission-critical applications. Along the way we’ll address some high-level cost and adoption-risk concerns.
For more information on this report, check out:
IBTA blog - http://bit.ly/OGyNJm
IBTA Press Release - http://bit.ly/NZLRNG