Items Tagged: CommVault
Choosing the right technology depends on the problem you are trying to solve.
After more than two years in development, CommVault recently unveiled Simpana v9 with a multitude of new features aimed at taking mindshare and market share away from Symantec’s NetBackup and IBM’s Tivoli Storage Manager in the enterprise data protection space.
Is Cloud-Enabled DR Ready for Prime Time?
Industry article by Jeff Boles and Jeff Byrne published in InfoStor that explores whether cloud-based DR technologies are suitable for adoption by small and medium-sized enterprises (SMEs). The article concludes that these technologies are “ready for prime time,” provided they meet four specific criteria for cloud-enabled DR solutions.
The data protection marketplace is undergoing a sea change right before our eyes. All the legacy players, including Symantec, IBM Tivoli, EMC NetWorker, and CommVault, are feverishly trying to bring their decades-old architectures into the 21st century. For at least a decade, I have been saying that the old method of doing full and incremental backups has to go. It is archaic and smacks of the 1970s. Why a full backup must be done every week, moving all data across the application server, into the network, then into the backup server before placing it on tape or disk, when more than 90% of that data is the same as last week’s, has never made much sense to me. Ditto with incrementals: why move an entire 2MB file across the network when only four bytes have changed? The inefficiencies have been mind-boggling. Regardless, that is how it has been for three decades.

Granted, there have been improvements over the past several years. Data deduplication has been a godsend. Now we dedupe the original full at a sub-file level and keep only one copy of each “chunk” in the backup. But we still lug all that data across the network and only dedupe it at the target, so network efficiency remains sub-optimal. With the onset of source-level deduplication, we have been able to reduce network traffic too, but most of the world is still doing target-based deduplication. Regardless, all of these have been positive steps toward the goal of simplifying data protection and ultimately making it disappear entirely as a task to be performed by IT administrators. But wait: just as we were making these positive strides, something else changed the nature of the problem.
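The sub-file “chunk” deduplication described above can be illustrated with a minimal sketch. This is not any vendor’s actual implementation; it assumes simple fixed-size chunks (real products typically use variable-size chunking), and all names here are invented for illustration. The point is that two weekly “fulls” that are mostly identical share almost all of their chunks, so only new chunks consume capacity:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed-size chunks; real systems use variable-size chunking


def dedupe_backup(data: bytes, store: dict) -> list:
    """Split data into chunks, keep one copy of each unique chunk in `store`,
    and return the list of chunk hashes (the backup's 'recipe')."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # only previously unseen chunks consume capacity
            store[digest] = chunk
        recipe.append(digest)
    return recipe


def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original data from its chunk recipe."""
    return b"".join(store[d] for d in recipe)


# Two weekly "full" backups that are ~90% identical share almost all chunks:
store = {}
week1 = b"A" * 40_000 + b"B" * 4_000
week2 = b"A" * 40_000 + b"C" * 4_000   # only the tail changed since last week
r1 = dedupe_backup(week1, store)
r2 = dedupe_backup(week2, store)
```

Target-based dedup runs this logic on the backup server after the data has crossed the network; source-based dedup checks the chunk hashes before sending, which is why it also reduces network traffic.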
Vendors add backup, replication, data migration and high availability products and services for disaster recovery during Amazon's AWS re:Invent conference.
Hyper-convergence has impacted primary storage, but Arun Taneja says hyper-converged vendors are bringing the concept to data protection.
- Premiered: 09/03/15
- Author: Arun Taneja
- Published: TechTarget: Search Data Backup
Full Database Protection Without the Full Backup Plan: Oracle's Cloud-Scaled Zero Data Loss Recovery
Today’s tidal wave of big data isn’t just made up of loose unstructured documents – huge data growth is happening everywhere, including in high-value structured datasets kept in databases like Oracle Database 12c. This is any company’s most valuable core data, powering most key business applications – and it’s growing fast! According to Oracle, most enterprises expect 50x data growth within five years (by 2020). As their scope and coverage grow, these key databases inherently become even more critical to our businesses. At the same time, the sheer number of database-driven applications and users is also multiplying – and they increasingly need to be online, globally, 24 x 7. This all leads to the big burning question: how can we possibly protect all this critical data – data we depend on more and more even as it grows – all the time?
We just can’t keep taking more time out of the 24-hour day for longer and larger database backups. The traditional batch window backup approach is already often beyond practical limits and its problems are only getting worse with data growth – missed backup windows, increased performance degradation, unavailability, fragility, risk and cost. It’s now time for a new data protection approach that can do away with the idea of batch window backups, yet still provide immediate backup copies to recover from failures, corruption, and other disasters.
Oracle has stepped up in a big way and, marshaling expertise and technologies from across its engineered systems portfolio, has developed a new Zero Data Loss Recovery Appliance. Note the very intentional name, focused on total recoverability – the Recovery Appliance is definitely not just another backup target. This new appliance completely eliminates the pains and risks of the full database backup window approach through a highly engineered continuous data protection solution for Oracle databases. It is now possible to immediately recover any database to any point in time desired, as the Recovery Appliance provides “virtual” full backups on demand and can scale to protect thousands of databases and petabytes of capacity. In fact, it offloads backup processing from production database servers, which can increase performance in Oracle environments, typically by 25%. Adopting this new backup and recovery solution will actually give CPU cycles back to the business.
In this report, we’ll briefly review why conventional data protection approaches based on the backup window are fast becoming obsolete. Then we’ll look into how Oracle has designed the new Recovery Appliance to provide a unique approach to ensuring data protection in real-time, at scale, for thousands of databases and PBs of data. We’ll see how zero data loss, incremental forever backups, continuous validation, and other innovations have completely changed the game of database data protection. For the first time there is now a real and practical way to fully protect a global corporation’s databases—on-premise and in the cloud—even in the face of today’s tremendous big data growth.
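The “incremental forever with virtual fulls” idea discussed above can be sketched conceptually: after one initial full backup, only changed blocks are ever sent, and a full restore image for any retained point in time is synthesized on demand by overlaying incrementals on the base. This is a toy model of the general technique, not Oracle’s implementation; the block-map representation and all names are illustrative:

```python
def virtual_full(base: dict, incrementals: list) -> dict:
    """Synthesize an on-demand 'virtual full' backup image by overlaying
    incremental change sets (block_id -> block data) on the base full,
    oldest first. No new full backup is ever taken or transferred."""
    image = dict(base)                 # start from the one-time initial full
    for inc in incrementals:
        image.update(inc)              # apply each incremental's changed blocks
    return image


# One initial full, then only small daily change sets cross the network:
base = {0: b"alpha", 1: b"beta", 2: b"gamma"}
inc_mon = {1: b"beta-v2"}                      # Monday: one block changed
inc_tue = {2: b"gamma-v2", 3: b"delta"}        # Tuesday: one changed, one new

# Recover to Tuesday by overlaying both incrementals...
tuesday = virtual_full(base, [inc_mon, inc_tue])
# ...or to Monday's point in time by stopping earlier in the chain.
monday = virtual_full(base, [inc_mon])
```

Point-in-time recovery falls out naturally: applying only the incrementals up to the desired moment yields the corresponding full image, without any batch-window full backup ever having been taken.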
Nutanix XCP For Demanding Enterprise Workloads: Making Infrastructure Invisible for Tier-1 Ent. Apps
Virtualization has matured and become widely adopted in the enterprise market. Approximately three in every five physical servers are deployed in a virtualized environment. After two waves of virtualization, it is safe to assume that a high percentage of business applications are running in virtualized environments. The last applications to be deployed into the virtualized environment were the tier-1 apps. Examples include CRM and ERP environments running SAP NetWeaver, Oracle database and applications, and Microsoft SQL Server. In many 24x7 service industries, Microsoft Exchange and SharePoint are also considered tier-1 applications.
The initial approach to building virtualized environments that could handle these tier-1 applications was to build highly tuned infrastructure using best-of-breed three-tier architectures, in which compute, storage, and networking were selected and customized for each type of workload. Newer shared storage systems have increasingly adopted virtualized all-flash and hybrid architectures, which has allowed organizations to mix a few tier-1 workloads within the same traditional infrastructure and still meet stringent SLA requirements.
Enter now the new product category of enterprise-capable HyperConverged Infrastructure (HCI). With HCI, the traditional three-tier architecture has been collapsed into a single software-based system that is purpose-built for virtualization. In these solutions, the hypervisor, compute, storage, and advanced data services are integrated into an x86 industry-standard building block. These modern scale-out hyperconverged systems combine a flash-first software-defined storage architecture with VM-centric ease-of-use that far exceeds any three-tier approach on the market today. These attributes have made HCI very popular and one of the fastest growing product segments in the market today.
HCI products have been very popular with medium-sized companies and for specific workloads such as VDI or test and development. After a few years of hardening and maturity, are these products ready to tackle enterprise tier-1 applications? In this paper we will take a closer look at the Nutanix Xtreme Computing Platform (XCP) and explore how its capabilities stack up against tier-1 application workload requirements.
Nutanix was a pioneer in HCI and is widely considered the market and visionary leader of this rapidly growing segment. Nutanix has recently announced the next step: a vision of the product beyond HCI. With this concept, it plans to make the entire virtualized infrastructure invisible to IT consumers. This will encompass all three of the popular hypervisors: VMware, Hyper-V, and Nutanix's own Acropolis Hypervisor. Nutanix has enabled app mobility between different hypervisors, a capability unique across converged systems and HCI alike. This Solution Profile will focus on the Nutanix XCP platform and the key capabilities that make it suitable for tier-1 enterprise applications. With the most recent release, we have found compelling features appropriate for most tier-1 application workloads. Combined with the value proposition of a web-scale modular architecture, this provides an easy pathway to data-center transformation that businesses of all sizes should take advantage of.