Taneja Group | Storage+Accelerators
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Storage+Accelerators

Profiles/Reports

10X the VM Density with Marvell DragonFly: Turning up IO density, no storage change required

There’s good news this year: the longest-standing and most tenacious vexation of the storage engineer is finally receiving redress. That vexation is none other than performance. This year the market is seemingly awash in innovators tackling the challenges of performance, and the solutions are very real.

Yet as has always been the case, most of these solutions are very hard to engineer into production data centers. It is rarely easy to shoehorn a new solution into the place of an existing storage system or product that simply isn’t measuring up. Beyond the basic mechanics of floor space, SAN connections, and physical cabling, storage performance solutions have often looked entirely different logically (a different storage pool, with potentially foreign provisioning and configuration), and standard storage features such as snapshots may be missing entirely. And that is just the typical performance solution; with the truly exotic, such as Oracle’s Exadata, the re-engineering can become significant indeed.

But against this backdrop, some vendors are crafting solutions that are much different. Such products are the beneficiaries of significant leaps in technology over the past year. It is now possible to harness extremely high-powered special-purpose processors of all sorts and to store data on high-performance solid state media of many types and form factors (NAND, RAM, disk, expansion cards, appliances, and more), and the storage industry’s collective knowledge of how to place software innovation in the path of latency-sensitive IO has advanced by leaps and bounds. Emerging solutions are solving extreme IO challenges with a nearly transparent software, adapter, or appliance model, and whole new generations of performance-geared storage are entering the market.

With this in mind, Taneja Group has eagerly awaited hands-on time with one of the first products poised to be the easiest-to-use and most cost-effective platform available for addressing pressing performance issues: the Marvell DragonFly storage accelerator for virtual and bare-metal applications. This product typifies a category of solution that we recently labeled “Server-based Storage Accelerators” (our recent article on this topic is available here: http://bit.ly/GU6UjS). Server-based Storage Accelerators are server-integrated devices coupled with a host-based software layer that intercepts, caches, and optimizes IO transactions: data remains stored on consolidated, feature-rich backend storage, while the transactional IO is offloaded to an in-the-server acceleration card with massive horsepower. Market entrants have proliferated through startups, acquisitions, and major vendor announcements. Marvell was certainly among the first to announce product development in this area, and with a long pedigree in storage intellectual property (all the way down to NAND interfaces and HDD read channels) as well as a product that embodied the perception of simplicity, we looked forward to a closer examination.

In early February of 2012, we began spending that hands-on time with the Marvell DragonFly accelerator, and as a testament to our findings, we continue to run the Marvell DragonFly accelerator in our lab facility in Phoenix, Arizona to this day.

We’ve had significant opportunity to put Marvell’s DragonFly to work in several different ways during that time. Early on, we validated how DragonFly performed under synthetic read and write benchmarks on Linux workstations (using 4K FIO benchmarks on Red Hat Enterprise Linux 5 and 6), and we have periodically used the Marvell DragonFly with a variety of small-block read/write workloads from virtual and physical systems alike, across both block and file storage. But as we began evaluating the DragonFly in depth, we most wanted to examine it behind a meaningful, real-world workload that both storage and server managers would recognize as representative of their own current challenges. With this in mind, we set out to evaluate virtual machine density, otherwise known as VM density.
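For readers unfamiliar with the kind of synthetic test described above, a 4K random-read fio job file might look like the sketch below. The device path, queue depth, and runtime here are illustrative assumptions, not the actual parameters used in this evaluation.

```ini
; Hypothetical 4K random-read fio job, similar in spirit to the
; benchmarks described above. The filename and runtime are
; placeholders, not Taneja Group's actual test parameters.
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=32
time_based=1
runtime=60

[rand-read-4k]
rw=randread
filename=/dev/sdb
```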

We define VM density as the maximum number of virtual machines that can be run with acceptable performance on a given set of infrastructure. Over the past few years, the age of virtualization has rapidly elevated the importance of storage performance. Poor storage performance can drastically reduce VM density by choking off the IO that the hypervisor and virtual machines need, thereby drastically increasing apparent CPU load through IO wait cycles and latency.
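When IO is the bottleneck, the relationship between deliverable IOPS and VM density reduces to simple arithmetic, as the sketch below shows. The IOPS figures are illustrative placeholders, not measured values from this evaluation.

```python
def vm_density(deliverable_iops: int, iops_per_vm: int) -> int:
    """Maximum number of VMs a storage system can support with
    acceptable performance, assuming IO is the limiting resource."""
    return deliverable_iops // iops_per_vm

# Illustrative numbers only: a backend that sustains 20,000 IOPS
# and desktop VMs that each demand 50 IOPS under load.
baseline = vm_density(20_000, 50)           # 400 IO-limited VMs
accelerated = vm_density(10 * 20_000, 50)   # 10X the deliverable IOPS
print(baseline, accelerated)                # 400 4000
```

The same logic is why a 10X improvement in deliverable IO translates directly into a 10X increase in IO-limited VM density, all else being equal.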

The bottom line: the Marvell DragonFly accelerator lets the customer put commodity, off-the-shelf SSDs to work behind a specialized, high-performance PCIe controller, turning solid state to the task of pure IO acceleration in a way that few others can. Because the pure performance of SSD is unleashed without changing where data is centrally stored, and the effect is completely transparent to the infrastructure, the DragonFly’s combination of SSD and PCIe delivers massive performance without compromises. For those who have not been watching, this is a first for a storage performance product. A few highlights from our findings:

  • A cost per IO that looks to be many times more cost-effective than other approaches built on enterprise-class SSDs or PCIe-attached solid state storage.
  • Storage acceleration that can be implemented with no reconfiguration or alteration of current enterprise storage.
  • Acceleration that can improve the sustainable IO from SSD media by as much as 19X (Marvell DragonFly + SSD, versus SSD alone). Used this way, DragonFly acts as a cache for much larger backend storage, delivering total IO well beyond the limits of today’s dedicated server storage (solid state or otherwise) and comparing favorably to the horsepower behind entire enterprise-class arrays.
  • All told, this translates into a real-world 10X improvement in a tested virtual environment: specifically, a 10X increase in IO-limited virtual machine density, allowing IO-constrained organizations to run 10X as many desktops per physical server.

In sum, the Marvell DragonFly looks like a mature product that delivers tremendous performance acceleration for enterprise and cloud data centers. For a product that may cost substantially less than any single server, this is a notable accomplishment. Moreover, since the technology can be deployed in front of a shared backend storage system (block or file), the acceleration can be applied across terabytes of capacity. A business can then, in essence, scale out this performance by purchasing more accelerators for more servers, provided the working set of data is small enough to fit within the SSD devices attached to the DragonFly. The dollars per IO may not get much better than this.

Publish date: 08/20/12
News

Accelerating the SAN, with the Server

Just a few short months ago, a number of vendors started introducing flash-based, in-the-server storage acceleration technologies that we label Server-based Storage Accelerators (also referred to as “accelerators” in this article). Across the board, these devices were primarily PCIe form-factor cards with on-board flash memory that plugged into a server motherboard slot. Through a driver or other software inserted into the server OS’s software stack, the solution would intercept I/Os, cache data from SAN-attached disk onto a NAND-flash cache, and then redirect future I/O requests to that cache.
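The intercept, cache, and redirect flow described above can be sketched roughly as follows. The class and method names here are invented for illustration and do not correspond to any vendor’s actual driver or API.

```python
class FlashCache:
    """Toy model of a server-based storage accelerator: reads are
    served from an on-server flash cache when possible; misses fall
    through to the SAN and populate the cache for future IO."""

    def __init__(self, backend: dict):
        self.backend = backend   # dict standing in for SAN LUN blocks
        self.cache = {}          # stands in for on-board NAND flash
        self.hits = 0
        self.misses = 0

    def read(self, block: int) -> bytes:
        if block in self.cache:      # intercepted: served from flash
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]   # slow path: fetch from the SAN
        self.cache[block] = data     # populate cache for future reads
        return data

# Usage: two reads of block 0 and one of block 1.
san = {0: b"boot", 1: b"data"}
acc = FlashCache(san)
acc.read(0); acc.read(0); acc.read(1)
print(acc.hits, acc.misses)  # 1 2
```

A production accelerator must also handle writes, cache eviction, and coherency with the backend array, which this sketch deliberately omits.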

  • Premiered: 09/13/12
  • Author: Taneja Group
  • Published: InfoStor.com
Topic(s): SAN, Storage Acceleration, accelerators, Storage Accelerators, IO, QLogic, Jeff Boles
Profiles/Reports

Bringing Server-based Storage to the SAN - Making Storage Acceleration Mainstream

Just a few short months ago, a number of vendors started introducing flash-based, in-the-server storage acceleration technologies that we label Server-based Storage Accelerators (also referred to as “accelerators” in this article). Across the board, these devices were primarily PCIe form-factor cards with on-board flash memory that plugged into a server motherboard slot. Through a driver or other software inserted into the server OS’s software stack, the solution would intercept I/Os, cache data from SAN-attached disk onto a NAND-flash cache, and then redirect future I/O requests to that cache. QLogic takes an approach that may make these accelerators the most disruptive performance technology yet. Why? Because this introduction is likely to make server-based storage accelerators more widely deployable than ever before, bringing the benefits of flash-optimized I/O into the SAN. First, let’s take a look at why these accelerators are a key performance technology to begin with.

Publish date: 09/18/12