Items Tagged: Oracle
Riverbed Introduces Enterprise Disaster Recovery Support
Riverbed has expanded its wide-area data services (WDS) offering to include enterprise disaster recovery support, enabling the efficient and secure transport of large datasets over the WAN from one data center to another. In this profile we spotlight Riverbed as the first vendor with a product offering in this expanded WDS space. We believe this capability is nothing less than a breakthrough in WDS.
IBM ProtecTIER: Peak Protection For Oracle Database Backups
Solutions such as IBM's ProtecTIER make database backups far more efficient by providing high-speed backup and recovery performance while reducing the amount of data landing on tape. This Solution Profile describes the benefits of using ProtecTIER deduplication solutions in an Oracle data protection environment: high deduplication and compression ratios for Oracle data, integration with RMAN, and some of the fastest recovery times in the deduplication industry.
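The high deduplication ratios cited here come from the fact that successive database backups repeat most of their content. ProtecTIER's HyperFactor technology uses similarity detection rather than simple block hashing, but a minimal hash-based sketch (toy data, arbitrary block size, not ProtecTIER's actual algorithm) shows why a second full backup costs almost nothing on disk:

```python
# Toy illustration of the data reduction idea behind deduplicating backup
# targets. Real products such as ProtecTIER use far more sophisticated,
# similarity-based detection (HyperFactor); this sketch just hashes
# fixed-size blocks to show how repeated full backups shrink on disk.
import hashlib
import os

BLOCK_SIZE = 4096  # bytes; block size chosen arbitrarily for the demo

def dedup_ratio(backup_streams):
    """Return logical bytes / unique bytes stored across a series of backups."""
    seen = set()
    logical = unique = 0
    for stream in backup_streams:
        for i in range(0, len(stream), BLOCK_SIZE):
            block = stream[i:i + BLOCK_SIZE]
            logical += len(block)
            digest = hashlib.sha256(block).digest()
            if digest not in seen:        # only new blocks consume capacity
                seen.add(digest)
                unique += len(block)
    return logical / unique

# Two nightly "full backups" of a database that changed very little:
night1 = os.urandom(1024 * 1024)          # 1 MiB of initial data
night2 = bytearray(night1)
night2[0:8] = b"modified"                 # a single changed block overnight
print(f"dedup ratio: {dedup_ratio([night1, bytes(night2)]):.1f}:1")
```

The second full backup adds only one unique block, so the stored footprint barely grows even though the logical backup volume doubled.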
I recently revisited Avamar 6.0/Data Domain integration. The two deduplication platforms are better suited to different workload types, but it seemed odd that the two were not integrated before the 6.0 announcement.
IBM ProtecTIER and SAP: Critical Data Protection Without Data Disruption
SAP is a business-critical application for enterprise and mid-range businesses. SAP products generate huge volumes of critical data that must be fully protected, the majority of it stored in databases such as DB2, Oracle and SQL Server. This Solution Profile describes IBM ProtecTIER's fast performance, scalable capacity and replication for SAP backup environments.
Sepaton S2100 and the Database Backup Challenge
SEPATON purpose-built the S2100 platform as the only backup appliance dedicated to managing and optimizing large database backup and recovery in enterprise environments. The SEPATON S2100 appliance serves large enterprise storage environments with high backup and recovery performance, replication, deduplication and scalable capacity. This enables enterprise IT to cost-effectively meet critical database SLAs with a single unified storage system.
Data growth rates build geometrically, resulting in nearly inconceivable amounts of data. Data collections should be calling Houston right about now.
Tape Libraries: Why, When and Where?
A technology publication that shall go unnamed posted the news last year that Amazon Glacier was a “tape-killing cloud” and that it would “devastate the [tape] industry.”
Not exactly. We do believe that smaller tape implementations are going the way of the dodo bird. Cloud backup is quickly replacing small standalone tape drives and autoloaders for daily backup, and as low-end tape equipment ages, IT is replacing it with cloud services for long-term backup retention.
However, tape housed in mid-sized and enterprise-scale libraries is growing strongly in several high-value computing and industry segments. Thus the question for IT becomes not "Should I use tape?" but "When should I invest in tape libraries?"
Oracle has come out today with the ZS3 series of its ZFS Storage Appliances. Sometimes a later act is just incremental; sometimes you can feel the ground move. Here we think the other major storage vendors are really going to feel it shake. At first blush, the deliberately "application engineered" ZS3 is best-in-class Oracle Database storage. But don't put the ZS3 down as simply database storage: it rocks as a unified platform for other high-performance, high-write, business-critical apps too.
Top Performance on Mixed Workloads, Unbeatable for Oracle Databases
There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressure to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan that many see primarily as an application and database vendor: Oracle.
When Oracle bought Sun in 2009, one of the areas of expertise it acquired was ZFS, a "next generation" file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought the acquisition would essentially fold any key IP into narrow solutions supporting only Oracle enterprise workloads. In fact, Oracle ZFS Storage Appliances have been quietly and successfully moving into more and more data centers as the DBA-selected best option for "database" and "database backup" specific storage.
But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmark results that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest Oracle ZFS Storage Appliances, the new ZS3 models, become a major contender as a unified, enterprise-featured, and affordable storage platform for today's data center, positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.
In this report we take a look at the new ZS3 Series and examine how it delivers both on its "application engineered" premise and on its broader capabilities for unified storage use cases and workloads of all types. We briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We then explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities, such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we review some of the key benchmark results and the TCO factors driving its market-leading price/performance.
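For readers unfamiliar with the ADO/HCC combination: ADO is driven by Heat Map access statistics plus declarative ILM policies attached to tables, and HCC is available only on Oracle storage such as the ZFS Storage Appliance or Exadata. A minimal sketch of the generic database-side mechanics, using the python-oracledb driver (connection details and the "sales" table are hypothetical, not from the report):

```python
# Minimal sketch: enable Heat Map tracking, then attach an ADO policy
# that recompresses cold segments with Hybrid Columnar Compression.
# Connection details and the "sales" table are hypothetical examples;
# HCC requires Oracle storage (ZFS Storage Appliance, Exadata, etc.).
import oracledb

conn = oracledb.connect(user="admin", password="secret",
                        dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Heat Map must be on for ADO to collect segment access statistics
# (requires ALTER SYSTEM privilege).
cur.execute("ALTER SYSTEM SET HEAT_MAP = ON SCOPE=BOTH")

# ADO policy: after 90 days with no modifications, recompress the
# segment with HCC "archive high" to reclaim capacity on cold data.
cur.execute("""
    ALTER TABLE sales ILM ADD POLICY
    COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 90 DAYS OF NO MODIFICATION
""")
conn.commit()
```

Once the policy is in place, the database itself decides when a segment has gone cold and triggers the recompression, which is what makes the tiering "heat-map driven" rather than administrator driven.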
Hadoop is coming to enterprise IT in a big way. The competitive advantage that can be gained from analyzing big data is just too 'big' to ignore. And the amount of data available to crunch is only growing bigger, whether from new sensors, capture of people, systems and process 'data exhaust', or just longer retention of available raw or low-level details.
- Premiered: 10/16/13
- Author: Taneja Group
- Published: Storage Newsletter
In the future, data storage will shed its role as a passive technology player as it integrates more closely with applications and workloads.
- Premiered: 10/31/13
- Author: Mike Matchett
- Published: Tech Target: Search Storage
Whether clay pots, wooden barrels or storage arrays, vendors have always touted how much their wares can reliably store. And invariably, the bigger the vessel, the more impressive and costly it is, both to acquire and manage. The preoccupation with size as a measure of success implies that we should judge and compare offerings on sheer volume. But today, the relationship between physical storage media capacity and the effective value of the data "services" it delivers has become much more virtual and cloudy.
In-memory processing can improve data mining, analysis, and other dynamic data processing uses. When considering in-memory technology, however, watch out for data protection gaps, cost, and new bottlenecks.
Is there a benefit to understanding how your users, suppliers or employees relate to and influence one another? It's hard to imagine that there is a business that couldn't benefit from more detailed insight and analysis, let alone prediction, of its significant relationships.
- Premiered: 06/17/14
- Author: Mike Matchett
- Published: Tech Target: Search Data Center
When Sun was an independent company, I always thought its storage product line was inferior to offerings from EMC, HDS, NetApp, HP and the other major storage purveyors of the time. But I also saw that ZFS was one of the highest-performing, most complete and most highly available file systems on the market, even if Sun wasn't fully capitalizing on that. All that has changed in the hands of Oracle.
Oracle announced Oracle FS1, its first all-flash storage offering, on September 29th at Oracle OpenWorld. While there is much to study under the covers (which we will do in the next few weeks), it is clear that Oracle has thrown down the gauntlet against all-flash-array vendors, especially EMC and its XtremIO offering. There are essentially five things of importance with FS1:

1. While it can be deployed as a true all-flash array, it is really designed as a hybrid, albeit with a flash-first engineering philosophy.
2. It is designed with four tiers: Performance SSD, Capacity SSD, Performance HDD and Capacity HDD. These four tiers are mapped to five QoS layers associated with application priority, in a management framework called QoS Plus.
3. The granularity of data movement between tiers is 640KB, compared to 1GB for EMC VNX1 and 256MB for EMC VNX2 and HP 3PAR. Oracle claims that for database workloads granularity matters, and that 640KB is much closer to ideal than larger chunks (see the sketch below).
4. Provisioning storage for Oracle and non-Oracle applications can be done with one click using FS1 Application Profiles, which provide pre-defined, pre-tuned, best-practices storage profiles.
5. Other Oracle differentiators, such as Hybrid Columnar Compression (HCC) and the other data services available on existing Oracle systems, are all available on FS1, and almost all of them are free.

According to Oracle, a single rack of FS1 compared to a 2-node EMC XtremIO configuration yields advantages in favor of FS1 of between 1.2X and 9.7X along the dimensions of maximum capacity, read IOPS, write IOPS, 50/50 R/W IOPS, read GB/s and write GB/s, with the differences being huge in sequential throughput. Let's look at these five elements in a little more detail below.
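The granularity claim is easy to see with back-of-the-envelope arithmetic. A rough sketch, using the chunk sizes quoted above and assuming (our assumption, not Oracle's) an 8KB Oracle database block as the unit of genuinely hot data:

```python
# Back-of-the-envelope look at why tiering granularity matters for
# database workloads: promoting one hot 8KB database block drags the
# whole surrounding chunk up to flash. Chunk sizes are those cited above.
KB, MB, GB = 1024, 1024**2, 1024**3

HOT_IO = 8 * KB  # a typical Oracle database block (assumed hot unit)
chunks = {"Oracle FS1": 640 * KB,
          "EMC VNX2 / HP 3PAR": 256 * MB,
          "EMC VNX1": 1 * GB}

for system, chunk in chunks.items():
    amplification = chunk / HOT_IO  # data moved vs. data actually needed
    print(f"{system:>20}: {chunk // KB:>10} KB moved per hot block "
          f"({amplification:,.0f}x the data actually needed)")
```

On these assumptions, a 640KB chunk moves 80x the hot data, versus roughly 33,000x for a 256MB chunk and 131,000x for a 1GB chunk, which is the essence of Oracle's "granularity matters" argument: finer chunks waste far less flash and tier-migration bandwidth on cold neighbors.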
The Oracle FS1 Series Flash Storage System could threaten major SAN vendors and solid-state startups that target Oracle database and application environments.
Just because you can add a cache doesn't mean you should. It is possible to have the wrong kind, so weigh your options before implementing memory-based cache for a storage boost.
- Premiered: 10/16/14
- Author: Mike Matchett
- Published: Tech Target: Search Data Center