Taneja Group | Database

Items Tagged: Database

Profiles/Reports

Maximizing Database Performance with Dell EqualLogic Hybrid Arrays

Today’s combination of rapidly accelerating demand for data and rapidly consolidating datacenter infrastructure makes choosing the right storage for each of your business applications more important—and more difficult—than ever. In our view, it’s time more of this burden is taken on by the SAN itself. In other words, it’s time for more SAN intelligence. The intelligent SAN should optimize all available storage resources—automatically. In this profile we explore how dynamic, multi-tiered OLTP workloads test the limits of traditional manual storage tiering strategies and further strengthen the case for automated tiering on the SAN itself. Then we review Dell’s internal benchmark test results and speak to Carnival Cruise Lines, an EqualLogic customer, to evaluate how Dell’s hybrid SSD/SAS arrays are delivering higher performance and lower overhead both in the lab and in the field.
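As a rough illustration of the automated tiering idea described above (frequently accessed blocks promoted to SSD, cooler blocks left on SAS), here is a minimal Python sketch; the tier names, threshold, and access counters are hypothetical and are not EqualLogic's actual placement algorithm.

```python
from collections import defaultdict

# Hypothetical tiers, hottest first. Real arrays weigh many more signals
# (I/O size, recency, RAID overhead); this sketch only counts accesses.
HOT_THRESHOLD = 100  # accesses per evaluation window (illustrative)

access_counts = defaultdict(int)   # block id -> accesses this window
placement = {}                     # block id -> current tier

def record_access(block_id):
    access_counts[block_id] += 1

def rebalance():
    """Promote frequently accessed blocks to SSD, demote the rest to SAS."""
    for block_id, count in access_counts.items():
        placement[block_id] = "SSD" if count >= HOT_THRESHOLD else "SAS"
    access_counts.clear()  # start a fresh evaluation window

# Example: block 7 is hit often, block 8 rarely.
for _ in range(150):
    record_access(7)
record_access(8)
rebalance()
print(placement)  # {7: 'SSD', 8: 'SAS'}
```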

Publish date: 05/23/11
Profiles/Reports

IBM ProtecTIER and SAP: Critical Data Protection Without Data Disruption

SAP is a business-critical application for enterprise and mid-range business. SAP products generate huge volumes of critical data that must be fully protected, the majority of it stored in databases like DB2, Oracle and SQL. This Solution Profile describes IBM ProtecTIER’s fast performance, scalable capacity and replication for SAP backup environments.

Publish date: 10/31/11
Profiles/Reports

Sepaton S2100 and the Database Backup Challenge

SEPATON purpose-built the S2100 platform as the only backup appliance dedicated to managing and optimizing large database backup and recovery in enterprise environments. The SEPATON S2100 appliance serves large enterprise storage environments with high backup and recovery performance, replication, deduplication and scalable capacity. This enables enterprise IT to cost-effectively meet critical database SLAs with a single unified storage system.

Publish date: 11/22/11
news / Blog

Hitachi Data Systems Unifies Block, File and Object

Hitachi has brought together block, file and object storage in a unified array that should appeal both to mid-range and cloud storage users.

  • Premiered: 04/25/12
  • Author: Jeff Byrne
Topic(s): HDS, Hitachi, Hitachi Data Systems, Storage, Database, Virtualization, unified
news / Blog

A SQL Sequel Called TED - the Distributed TransLattice Elastic Database

While the focus in databases recently has been on NoSQL, most enterprise applications still have mandatory requirements for relational databases. However, businesses are feeling pressured to evolve their transactional databases to enable highly available, "local" application access to globally consistent data. The TransLattice Elastic Database (TED) is a fully ACID/SQL relational implementation designed to be logically and geographically distributed...as appliances, VMs, and in the cloud...

  • Premiered: 08/07/12
  • Author: Mike Matchett
Topic(s): Database, TransLattice, SQL
news

Continuity Software Expands Service Availability Risk Management Portfolio with AvailabilityGuard/SAN

Continuity Software™ today announced it has expanded its award-winning family of service availability risk management solutions with the addition of AvailabilityGuard/SAN™.

  • Premiered: 08/26/13
  • Author: Taneja Group
  • Published: Yahoo Finance
Topic(s): Continuity Software, SAN, Data protection, server, Automation, Database, DR, 3PAR Zero Detection, Cloud DR
Profiles/Reports

Top Performance on Mixed Workloads, Unbeatable for Oracle Databases

There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressures to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan seen by many as primarily an application/database vendor: Oracle.

When Oracle bought Sun in 2009, one of the areas of expertise brought over was in ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought that the acquisition would essentially fold any key IP into narrow solutions that would only effectively support Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for “database” and “database backup” specific storage.

But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmark results that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest version of Oracle ZFS Storage Appliances, the new ZS3 models, becomes a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, and is positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.

In this report we will take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and on its broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll also explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and provide an indication of the TCO factors driving its market-leading price/performance.
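For readers unfamiliar with ADO, the sketch below shows the general shape of enabling heat-map tracking and attaching a heat-map-driven compression policy from Python via the cx_Oracle driver. The connection details, table name, and the 90-day threshold are purely illustrative assumptions, and HCC itself requires Oracle storage such as the ZFS Storage Appliance or Exadata; consult Oracle's documentation for the authoritative syntax.

```python
import cx_Oracle  # Oracle's Python database driver

# Hypothetical connection details for an Oracle 12c (or later) database.
conn = cx_Oracle.connect("app_user", "app_password", "dbhost/orclpdb")
cur = conn.cursor()

# Heat-map tracking must be on before ADO can act on access statistics.
cur.execute("ALTER SYSTEM SET heat_map = ON")

# Illustrative ADO policy: once a segment has seen no access for 90 days,
# rewrite it with Hybrid Columnar Compression at the archive-high level.
cur.execute("""
    ALTER TABLE sales ILM ADD POLICY
    COLUMN STORE COMPRESS FOR ARCHIVE HIGH SEGMENT
    AFTER 90 DAYS OF NO ACCESS
""")

cur.close()
conn.close()
```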

Publish date: 09/10/13
news

Tokutek Eliminates Performance Issues of MongoDB Sharding With TokuMX v1.4

Tokutek, delivering database performance at scale, today announced TokuMX v1.4 with improved sharding that resolves the bottlenecks experienced by MongoDB users.

  • Premiered: 02/18/14
  • Author: Taneja Group
  • Published: Yahoo! Finance
Topic(s): Tokutek, MongoDB, Sharding, Database, performance database
news

Database performance tuning: Five ways for IT to save the day

When database performance takes a turn for the worse, IT can play the hero. There are some new ways for IT pros to tackle slowdown problems. However, one question must be addressed first: Why is it up to IT?

  • Premiered: 04/17/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Database, Database Performance, IT, Optimization, SQL, NoSQL, Infrastructure, scale-up, scale-out, Active Archive, Archiving, SSD, Flash, Acceleration, server, Tokutek
news

Turn to in-memory processing when performance matters

In-memory processing can improve data mining, analysis, and other dynamic data processing uses. When considering in-memory, however, look out for data protection, cost, and bottlenecks.
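As a toy illustration of the trade-off described above, the sketch below contrasts answering repeated queries by re-reading a file from disk with loading the data into RAM once and scanning it in memory; the file name and column are made-up examples, not from the article.

```python
import csv
import time

def total_from_disk(path, column):
    # Re-parse the file on every query: low memory use, slow when repeated.
    with open(path, newline="") as f:
        return sum(float(row[column]) for row in csv.DictReader(f))

def load_into_memory(path):
    # Pay the memory cost once, then answer many queries from RAM.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def total_in_memory(rows, column):
    return sum(float(row[column]) for row in rows)

if __name__ == "__main__":
    path = "sales.csv"  # hypothetical data set
    rows = load_into_memory(path)
    start = time.perf_counter()
    for _ in range(100):  # repeated analysis is where the in-memory copy wins
        total_in_memory(rows, "amount")
    print("100 in-memory scans took", time.perf_counter() - start, "seconds")
```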

  • Premiered: 05/21/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Memory, Storage, Performance, Database, RAM, CPU, Amazon Web Services, AWS, HP, Dell, Oracle, Hadoop, Microsoft, SQL Server, DRAM, MongoDB, Tokutek
news

Why Facebook and the NSA love graph databases

Is there a benefit to understanding how your users, suppliers or employees relate to and influence one another? It's hard to imagine that there is a business that couldn't benefit from more detailed insight and analysis, let alone prediction, of its significant relationships.
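To make the relationship-analysis idea concrete, here is a minimal sketch of the structure a graph database is organized around: entities as nodes, relationships as edges, and traversals such as "who is within two hops of this person". The sample data and function are invented for illustration and are not tied to any product named in the article.

```python
from collections import deque

# A tiny property-graph stand-in: node -> set of related nodes.
edges = {
    "alice": {"bob", "supplier_x"},
    "bob": {"alice", "carol"},
    "carol": {"bob"},
    "supplier_x": {"alice"},
}

def within_hops(start, max_hops):
    """Breadth-first traversal: everyone reachable within max_hops relationships."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbour in edges.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen - {start}

print(within_hops("alice", 2))  # {'bob', 'carol', 'supplier_x'}
```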

  • Premiered: 06/17/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Mike Matchett, analytics, Big Data, graph database, graph theory, Database, SQL, Security, Infrastructure, Data protection, Data Management, Hadoop, Oracle, AllegroGraph, XML, RDF, Titan, Giraph, Sparsity Technologies, Neo4J, Objectivity, InfiniteGraph, scalability
Profiles/Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to InfiniBand, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.
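A quick back-of-the-envelope calculation shows why round-trip latency, not raw line rate, bounds transactional storage performance; the latency figures below are purely illustrative and are not measurements of any particular protocol.

```python
# For a single outstanding I/O, achievable IOPS is bounded by round-trip
# latency, regardless of how fast the link's raw line rate is.
def iops_per_stream(round_trip_latency_us):
    return 1_000_000 / round_trip_latency_us

# Illustrative (not measured) round-trip latencies for an 8 KB read.
for name, latency_us in [("lower-latency fabric", 200), ("higher-latency fabric", 500)]:
    iops = iops_per_stream(latency_us)
    mb_per_s = iops * 8 / 1024  # 8 KB per I/O
    print(f"{name}: {iops:,.0f} IOPS per stream, ~{mb_per_s:.1f} MB/s")
```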

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
news

New Oracle ZFS Storage Appliance adds speed, encryption

Oracle's new high-end ZFS Storage Appliance features improved performance and power, 3 TB DRAM, pluggable data analytics and data-at-rest encryption.

  • Premiered: 12/02/14
  • Author: Taneja Group
  • Published: Tech Target: Search Storage
Topic(s): Oracle, Oracle ZFS, ZFS, Encryption, Storage, analytics, NAS, Performance, Database, Optimization, Infrastructure, Flash, SSD, EMC, NetApp, Exadata, hybrid storage, LUN
news

Amazon job listings offer many AWS roadmap clues

Amazon is hiring thousands of people, and where it’s hiring offers clues about where the AWS roadmap is headed.

  • Premiered: 03/27/15
  • Author: Taneja Group
  • Published: TechTarget: Search AWS
Topic(s): Amazon, Amazon AWS, AWS, Storage, Database, Amazon S3, S3, scalable, scalability
Profiles/Reports

HP ConvergedSystem: Solution Positioning for HP ConvergedSystem Hyper-Converged Products

Converged infrastructure systems – the integration of compute, networking, and storage – have rapidly become the preferred foundational building block adopted by businesses of all shapes and sizes. The success of these systems has been driven by an insatiable desire to make IT simpler, faster, and more efficient. IT can no longer afford the effort and time to custom-build infrastructure from best-of-breed DIY components. Purpose-built converged infrastructure systems have been optimized for the most common IT workloads, like Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI).

Traditionally, these converged infrastructure systems have been built using a three-tier architecture, in which compute, networking, and storage are integrated at the rack level, giving businesses the flexibility to cover the widest range of solution workload requirements while still using well-known infrastructure components. More recently, a more modular approach to convergence has emerged, which we term Hyper-Convergence. With hyper-convergence, the three-tier architecture is collapsed into a single system appliance that is purpose-built for virtualization, with hypervisor, compute, and storage with advanced data services all integrated into an x86 industry-standard building block.

In this paper we will examine the ideal solution environments where Hyper-Converged products have flourished. We will then give practical guidance on solution positioning for HP’s latest ConvergedSystem Hyper-Converged product offerings.

Publish date: 05/07/15
news

Kaminario K2 array uses 3D TLC NAND flash

Kaminario unveils 5.5 version of its K2 all-flash array with 3D TLC NAND flash, claims of sub-$1 per GB pricing and support for asynchronous replication.

  • Premiered: 08/20/15
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Kaminario, K2, NAND, Flash, SSD, replication, all-flash, AFA, All Flash, all flash array, Compression, Deduplication, Inline, inline deduplication, Data reduction, Samsung, Database, Data Center, DRAM, SVC, Dell, Pure Storage, controller, SATA, DR, Disaster Recovery, Availability, Jeff Kato
Profiles/Reports

Multiplying the Value of All Existing IT Solutions

Decades of constantly advancing computing solutions have changed the world in tremendous ways, but interestingly, the IT folks running the show have long been stuck with only piecemeal solutions for managing and optimizing all that blazing computing power. Sometimes it seems like IT is a pit crew servicing a modern racing car with nothing but axes and hammers – highly skilled but hampered by their legacy tools.

While that may be a slight exaggeration, there is a serious lack of interoperability, or of opportunity to create joint insight, between the highly varied perspectives that individual IT tools produce (even if each is useful for its own purpose). There simply has never been a widely adopted standard for creating, storing or sharing system management data, much less a cross-vendor way to holistically merge heterogeneously collected or produced management data together – even for the beneficial use of harried and often frustrated IT owners who might own dozens or more differently sourced system management solutions. That is, until now.

OpsDataStore has brought the IT management game to a new level with an easy-to-deploy, centralized, intelligent – and big data enabled – management data “service”. It readily sucks in all the lowest-level, fastest-streaming management data from a plethora of tools (several ready to go at GA, but easily extended to any data source), automatically and intelligently relates data from disparate sources into a single unified “agile” model, directly provides fundamental visualization and analysis, and then can serve that unified and related data back out to enlightened and newly comprehensive downstream management workflows. OpsDataStore drops in and serves as the new systems management “nexus” between formerly disparate vendor and domain management solutions.

If you have ever been in IT, you’ve no doubt written scripts, fiddled with logfiles, created massive spreadsheets, or otherwise attempted to stitch together some larger coherent picture by marrying and merging data from two (or 18) different management data sources. The more sources you might have, the more the problem (or opportunity) grows non-linearly. OpsDataStore promises to completely fill in this gap, enabling IT to automatically multiply the value of their existing management solutions.
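The unification OpsDataStore performs is conceptually like the ad-hoc stitching described above, done continuously and centrally. The sketch below shows only the core idea of relating records from two differently shaped sources onto a shared entity key; the source formats and field names are invented for illustration.

```python
# Two hypothetical management tools reporting on the same VM with different schemas.
hypervisor_metrics = [{"vm": "web01", "cpu_pct": 72}, {"vm": "db01", "cpu_pct": 35}]
storage_metrics = [{"volume_owner": "web01", "latency_ms": 4.2},
                   {"volume_owner": "db01", "latency_ms": 1.1}]

def unify(*sources_with_keys):
    """Merge records from disparate sources into one record per shared entity key."""
    unified = {}
    for records, key_field in sources_with_keys:
        for record in records:
            entity = record[key_field]
            merged = unified.setdefault(entity, {"entity": entity})
            merged.update({k: v for k, v in record.items() if k != key_field})
    return unified

model = unify((hypervisor_metrics, "vm"), (storage_metrics, "volume_owner"))
print(model["web01"])  # {'entity': 'web01', 'cpu_pct': 72, 'latency_ms': 4.2}
```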

Publish date: 12/03/15
Profiles/Reports

Virtual Instruments WorkloadCentral: Free Cloud-Based Resource for Understanding Workload Behavior

Virtual Instruments, the company created by the combination of the original Virtual Instruments and Load DynamiX, recently made available a free cloud-based service and community called WorkloadCentral. The service is designed to help storage professionals understand workload behavior and improve their knowledge of storage performance. Most will find valuable insights into storage performance with the simple use of this free service. Those who want to gain a deeper understanding of workload behavior over time, evaluate different storage products to determine which one is right for their specific application environment, or optimize their storage configurations for maximum efficiency can buy additional Load DynamiX Enterprise products from the company.
The intent with WorkloadCentral is to create a web-based community that can share information about a variety of application workloads, perform workload analysis and create workload simulations. In an industry where workload sharing has been almost absent, this service will be well received by storage developers and IT users alike.
Read on to understand where WorkloadCentral fits into the overall application and storage performance spectrum...
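To give a sense of what a workload profile captures, the sketch below generates a synthetic I/O mix of the sort such analysis describes; the 70/30 read/write split and block sizes are arbitrary example parameters, not WorkloadCentral output.

```python
import random

def synthetic_io_workload(n_ops, read_pct=70, block_sizes=(4096, 8192, 65536)):
    """Generate a toy I/O trace: each operation has a direction and a block size."""
    ops = []
    for _ in range(n_ops):
        ops.append({
            "op": "read" if random.uniform(0, 100) < read_pct else "write",
            "size": random.choice(block_sizes),
        })
    return ops

trace = synthetic_io_workload(10_000)
reads = sum(1 for op in trace if op["op"] == "read")
avg_block = sum(op["size"] for op in trace) / len(trace)
print(f"{reads / len(trace):.0%} reads, average block size {avg_block:.0f} bytes")
```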

Publish date: 05/26/16
Profiles/Reports

HPE and Micro Focus Data Protection for SAP HANA

These days the world operates in real time, all the time. Whether you are buying tickets or hunting for the best deal from an online retailer, data is expected to be up to date, with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24x7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single, super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish.

SAP HANA is a growing and very popular example of an application environment that uses in-memory database technology to process massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM rather than reading it from disk. At the heart of SAP HANA is a database that operates on both OLAP and OLTP workloads simultaneously. To overcome the volatility of server DRAM, the HANA architecture requires persistent shared storage to enable greater scalability, disaster tolerance, and data protection.
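The durability point above (volatile DRAM backed by persistent shared storage) follows the same general pattern any in-memory store uses: serve reads and writes from RAM, but persist every change so state can be rebuilt after a failure. Below is a minimal generic sketch of that pattern; it is not SAP's implementation, and the log format is invented.

```python
import json
import os

class InMemoryStore:
    """Toy key-value store: all reads from RAM, every write appended to a log on disk."""

    def __init__(self, log_path):
        self.log_path = log_path
        self.data = {}
        if os.path.exists(log_path):           # recover state after a restart
            with open(log_path) as log:
                for line in log:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as log:  # persist before acknowledging
            log.write(json.dumps([key, value]) + "\n")
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)              # served entirely from memory

store = InMemoryStore("store.log")
store.put("order:1001", {"amount": 42.50})
print(store.get("order:1001"))
```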

SAP HANA is available on-premises as a dedicated appliance and/or via best-in-class components through the SAP Tailored Datacenter Integration (TDI) program. The TDI environment has become the more popular HANA option as it provides the flexibility to leverage legacy resources such as data protection infrastructure and enables a greater level of scalability that was lacking in the appliance approach. Hewlett Packard Enterprise (HPE) has long been the leader in providing mission-critical infrastructure for SAP environments. SUSE has long been the leader in providing the mission-critical Linux operating system for SAP environments. Micro Focus has long been a strategic partner of HPE, and together they have leveraged unique hardware and software integrations that enable a complete end-to-end, robust data protection environment for SAP. One of the key value propositions of SAP HANA is its ability to integrate with legacy databases. Therefore, it makes the most sense to leverage a flexible data protection solution from Micro Focus and HPE to cover both legacy database environments and modern in-memory HANA environments.

In this solution brief, Taneja Group will explore the data protection requirements for HANA TDI environments. Then we’ll briefly examine what makes Micro Focus Data Protector combined with HPE storage an ideal solution for protecting mission-critical SAP environments.

Publish date: 08/31/18