
Technology Validation

Page 4 of 6 pages

StorTrends 3400i - Reinventing the value of storage

Selecting a primary storage solution is undoubtedly one of the most critical decisions an IT department can make. As the foundation of the modern datacenter, storage is perhaps the single most important piece of IT infrastructure for businesses large, medium, or small. Business-critical applications live and die by the performance of the selected storage system, and business data is inevitably constrained by its capacity.

In the mid-market, making a storage investment can be particularly daunting: the stakes are higher, and the selection is harder. Compared to larger enterprises, mid-market storage dollars are fewer and harder to come by. Precious and often limited IT staff time is spread across more systems and technologies, core skills are often not rooted in storage, and technically vetting a storage system can be all but impossible. This makes storage a risky proposition for the small and medium enterprise (SME) and SMB customer. We frequently hear tales of storage purchases where I/O is insufficient, features are missing (or require additional licenses and cost to acquire), or where architectural compromises create availability issues that regularly impact the entire business.

For several years, the developers of the StorTrends line of NAS/SAN solutions have been working hard to architect a storage system for the mid-market that puts an end to these risks and compromises. By harnessing engineering expertise from their parent, American Megatrends, Inc. (AMI) – an innovator in storage and BIOS technologies – StorTrends has been tackling the challenge of delivering abundant performance, robust reliability, and feature-rich storage with the SMB and SME customer in mind. Their claim is that the StorTrends 3400i is both one of the most cost-effective choices in the market and one of the most well-rounded.

In mid-2012, StorTrends caught our attention with these claims and a series of notable customer wins in a highly competitive market. To learn more, we approached StorTrends with the idea of a hands-on lab exercise, what we call a Technology Validation, to examine in more depth how StorTrends was delivering comprehensive value for customers in the mid-market space. Utilizing our proven validation methodology, which included time spent at AMI headquarters in Norcross, GA, we put a set of StorTrends 3400i storage systems through their paces, with an eye toward examining several capabilities that StorTrends claims make the 3400i one of the best-value storage options in the mid-market.

 

Publish date: 12/10/12

HP 3PAR StoreServ Double Density: Delivering twice the VM density versus traditional arrays

Why does storage performance matter? There are few storage administrators who have not witnessed an unhappy moment when a lack of storage performance brought an application to its knees. But quantifying how storage performance matters, aside from serving as insurance against an application-crippling moment, is a tricky proposition.

One way is in terms of how storage performance determines the efficiency of the infrastructure. When all else is equal and well tuned – hypervisors, servers, networks – then storage becomes a precious commodity that can determine just how many workloads or applications a given infrastructure can support, and therefore how efficiently it can operate at scale.

When it comes to virtualization, this workload density has serious impacts, and fortunately it is even easier to assess. A number of years ago, Taneja Group began testing workload density among competing hypervisors, and we labeled it VM density – a measure of how many VMs similar systems can run without compromising performance or usability. Our findings clearly indicated how big an impact a small difference in VM density can have. When poor VM density makes it necessary to add more servers and hypervisors, the hardware is costly enough on its own, but licensing tools like the vSphere suites can add many thousands of dollars and dwarf the hardware costs. Meanwhile, more servers and hypervisors bring more complexity and management overhead, along with operational data center costs (power, cooling, floor space). The consequences can easily run to tens of thousands of dollars per server. At large scale, superior VM density can easily be worth hundreds of thousands or millions of dollars to a business.
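To make that arithmetic concrete, here is a back-of-envelope sketch of the cost effect. Every figure below – the VM counts, the densities, and the bundled per-host cost – is a hypothetical assumption for illustration, not a number from our testing:

```python
import math

# Back-of-envelope cost impact of VM density; every figure here is a
# hypothetical assumption for illustration, not a number from testing.
def extra_infrastructure_cost(total_vms, vms_per_host_a, vms_per_host_b,
                              cost_per_host=25_000):
    """Extra cost of running total_vms at the lower density (b) versus (a).

    cost_per_host bundles server hardware, hypervisor licensing, and
    operational overhead into one illustrative per-host figure.
    """
    hosts_a = math.ceil(total_vms / vms_per_host_a)
    hosts_b = math.ceil(total_vms / vms_per_host_b)
    return (hosts_b - hosts_a) * cost_per_host

# 1,000 VMs at 40 VMs/host versus 20 VMs/host:
# 25 hosts versus 50 hosts -> 25 extra hosts -> $625,000.
print(extra_infrastructure_cost(1000, 40, 20))
```

Even modest density differences compound quickly at scale, which is the heart of the argument above.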

We’ve long suggested that storage performance is one of the biggest contributors to VM density, but few vendors have been brave enough to step forward and declare that their own storage system can take VM density farther than anyone else’s. Well, times have changed.

Approximately six months ago, HP made a bold promise with their 3PAR storage systems – guaranteeing that customers moving to HP 3PAR storage from traditional legacy storage will double the VM density of their existing server infrastructure. Better yet, HP’s promise wasn’t about more spindles, but rather the efficiency and power of their 3PAR StoreServ storage controller architecture – they made this promise even when the customer’s old storage system contained the same disk spindles. Our minds were immediately beset with questions – could a storage system replacement really double the workload density of most virtual infrastructures? We knew from experience that most customers are indeed IO-constrained; we’ve had at least several hundred conversations with customers fighting virtual infrastructure performance problems where, at the end of the day, the issue was storage performance. But a promise to double the density of these infrastructures is aggressive indeed.

It goes without saying that at Taneja Group, we were more than eager to put this claim to the test. We approached HP in mid-2012 with a proposal to do just that by standing up a 3PAR StoreServ storage array against a traditional-architecture storage array of a similar class. The test proved more interesting when HP provided us with only the smallest of their 3PAR StoreServ product line at that time (an F200), pitted against a competitive system made up of the biggest controllers available in another popular but traditional-architecture mid-range array. Moreover, that traditional system, which first entered the market in 2007/2008, could still be purchased at the time of testing, and is only on the edge of representing the more dated systems envisioned by HP’s Double Density promise.

Our Findings: After having this equipment made available for our exclusive use during several months of 2012, we reached a clear and undeniable conclusion: HP 3PAR StoreServ storage is definitively capable of doubling VM density versus typical storage systems.

Publish date: 12/03/12

Integrated Disaster Recovery: Technologies for Comprehensive Storage Array Protection

DR has long been particularly challenging for the midmarket customer. It usually requires multiple layers and components, host-based software, replication gateways or appliances, and often array-based functionality that is licensed and managed separately. Add to this complexity the need for robust bandwidth or an expensive WAN optimization approach and it’s no surprise that DR can have a significant impact on both OPEX and CAPEX budgets.

The cost to manage all of these different elements can dwarf the cost of the primary storage system itself. Enterprises face many of the same challenges, but with bigger budgets and more specialists to manage the complexity; midmarket businesses and organizations face the same challenges without the same level of budget or staffing.

Recently, Taneja Group Labs put the StorTrends 3400i array through a Technology Validation exercise to evaluate how it measured up as an SMB/SME storage solution in terms of ease of use, performance, availability, adaptability, and innovative features. Over the course of the exercise, one StorTrends capability clearly rose above all others: its built-in, multi-site, WAN-optimized data replication. Specifically, StorTrends’ suite of replication functionality looks poised to equip SMB and SME customers with tools that, for the first time, make robust DR genuinely achievable. In this report, we’ll highlight what we found and why it stood out.

 

Publish date: 09/06/12

Scale Computing HC3:  Ending complexity with a hyper-converged, virtual infrastructure

Consolidation and the enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware, and enabling oversubscription of physical systems by many virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in what seems like decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before – increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance, and bring about more capability. All too often, an increase in capability has come at the cost of introducing considerable complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. Complex it can be.

For certain, much of this complexity exists between the individual physical infrastructures that IT must touch, and the simultaneous duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

With the challenges of increasing virtual complexity driving their vision of a better way to do IT, Scale Computing, long a provider of scale-out storage for the SMB, recently introduced a new line of technology – a product labeled HC3, or Hyper Convergence 3. HC3 integrates scale-out storage and scale-out virtualized compute within a single building-block architecture that couples all of the elements of a virtual data center together inside one system. The promised result is a system that is simple to use and does away with the management overhead and complexity associated with virtualization in the data center. By virtualizing and intermingling all compute and storage inside a system already designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex clusters, or provision and manage storage, and eliminates a bevy of day-to-day administrative tasks. Provisioning additional resources – any resource – becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple two-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service – our hands-on lab service – to the task of evaluating whether Scale Computing’s HC3 could deliver on these promises in the real world. For this task, we put several HC3 clusters through their paces to see how they deployed, how they held up under use, and what specialized features they delivered beyond those found in traditional integrations of separate compute and storage systems.

 

Publish date: 08/30/12

10X the VM Density With Marvell DragonFly: Turning up IO density, no storage change required

There’s good news this year – the oldest and most tenacious vexation of the storage engineer is finally receiving redress. That vexation is none other than performance. This year, the market is seemingly awash in innovators paying attention to the challenges of performance, and the solutions are very, very real.

Yet, as has always been the case, most are very hard solutions to engineer into production data centers. It isn’t easy to shoehorn a new solution into the place of an existing storage system or product that simply isn’t measuring up. Aside from the basic mechanics of floor space, SAN connections, and physical cabling, storage performance solutions have often looked entirely different logically (a different storage pool, with potentially foreign provisioning and configuration), and standard storage features such as snapshots may be completely missing. And that’s just the case with the most typical of performance solutions – when dabbling in the extremely exotic, like Oracle’s Exadata, the re-engineering can become truly significant.

But against this backdrop, some vendors are crafting solutions that are much different. Such products are the beneficiaries of significant leaps ahead in technology over the past year. It is now possible to harness extremely high-powered special-purpose processors of all sorts and to store data on high-performance solid state media of many types and form factors (NAND, RAM, disk, expansion cards, appliances, and more), and the storage industry’s collective knowledge of how to architect software innovation into the path of latency-sensitive IO has advanced by leaps and bounds. Emerging solutions are solving extreme IO challenges with a nearly transparent software, adapter, or appliance model, and whole new generations of performance-geared storage are entering the market.

With this in mind, Taneja Group has eagerly awaited hands-on time with one of the first products that looks poised to be the easiest-to-use and most cost-effective platform available for addressing pressing performance issues – the Marvell DragonFly storage accelerator for virtual and bare-metal applications. This product typifies a category of solution that we recently labeled “Server-based Storage Accelerators” (our recent article on this topic is available here: http://bit.ly/GU6UjS). Server-based Storage Accelerators are server-integrated devices coupled with a host-based software layer that intercepts, caches, and optimizes IO transactions so that data remains stored on backend, consolidated, feature-rich storage, while the transactional IO is offloaded to an in-the-server acceleration card with massive horsepower. Market entrants have proliferated through startups, acquisitions, and major vendor announcements. Marvell was certainly one of the first to announce product development in this area, and with a long pedigree in storage intellectual property (all the way down to NAND interfaces and HDD read channels) as well as a product that embodied the perception of simplicity, we looked forward to a closer examination.
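As a rough illustration of the caching model this category implies – reads served from a fast local tier when possible, writes passed through so the shared backend remains the authoritative copy – consider the following minimal sketch. The class, its FIFO eviction policy, and all names are our own simplification for explanation, not Marvell’s actual implementation or API:

```python
# Minimal sketch of the server-based accelerator model described above:
# reads are served from a fast local tier when possible, while writes
# pass through so the shared backend remains the authoritative copy.
# Class and method names are our own illustration, not Marvell's API.
class ServerSideCache:
    def __init__(self, backend, capacity=1024):
        self.backend = backend      # dict standing in for shared backend storage
        self.capacity = capacity    # max cached blocks (stand-in for SSD size)
        self.cache = {}             # block_id -> data: the local "fast tier"

    def read(self, block_id):
        if block_id in self.cache:          # hit: served from the local tier
            return self.cache[block_id]
        data = self.backend[block_id]       # miss: fetch from backend, then cache
        self._admit(block_id, data)
        return data

    def write(self, block_id, data):
        self.backend[block_id] = data       # write-through: backend stays current
        self._admit(block_id, data)

    def _admit(self, block_id, data):
        if len(self.cache) >= self.capacity:
            # evict the oldest-inserted block to make room (simple FIFO policy)
            self.cache.pop(next(iter(self.cache)))
        self.cache[block_id] = data
```

A real accelerator intercepts block IO below the filesystem and manages the SSD media directly; the FIFO eviction here merely stands in for whatever policy the device actually uses.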

In early February of 2012, we began spending that hands-on time with the Marvell DragonFly accelerator, and as testament to our findings, we continue to run the Marvell DragonFly accelerator in our lab facility in Phoenix, Arizona to this day. 

We’ve had significant opportunity to put Marvell’s DragonFly to work in several different ways during that time. Early on, we validated how DragonFly performed under synthetic read and write benchmarks on Linux workstations (using 4K FIO benchmarks on Red Hat Enterprise Linux 5 and 6), and we’ve periodically used the Marvell DragonFly with various small-block read/write workloads from various virtual and physical systems, across both block and file storage. But as we began evaluating the DragonFly in depth, we most wanted to examine it behind a meaningful, real-world workload that both storage and server managers would regard as representative of their own current challenges. With this in mind, we set out to evaluate virtual machine density, otherwise known as VM density.
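For readers unfamiliar with fio, a 4K random-read job of the kind described above is typically expressed as a small job file. The sketch below is illustrative only; the job name, device path, and parameter values are assumptions, not our exact test configuration:

```ini
; Illustrative 4K random-read fio job (parameter values are examples,
; not the exact configuration used in our testing)
[global]
ioengine=libaio
direct=1
bs=4k
runtime=60
time_based

[dragonfly-randread]
rw=randread
; device or file under test (hypothetical path)
filename=/dev/sdb
iodepth=32
numjobs=4
```

Varying `rw` between random reads and writes, and sweeping `iodepth`, is a common way to characterize the small-block behavior of a device like this.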

We define VM density as the maximum number of virtual machines that can be run with acceptable performance by a given set of infrastructure. Over the past few years, the age of virtualization has rapidly raised the importance of storage performance, which can have a drastic impact on VM density by choking off the precious IO that the hypervisor and virtual machines need, drastically increasing apparent CPU load through IO wait cycles and latency.
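When storage is the bottleneck, this definition reduces to a simple IO budget. The sketch below shows the idea; the IOPS figures and headroom factor are hypothetical illustrations, not measurements from our lab work:

```python
# Rough IO-bound VM density estimate; all inputs are hypothetical
# illustrations, not measurements from our lab work.
def io_bound_vm_density(array_iops, iops_per_vm, headroom=0.8):
    """Maximum VMs an array can serve before IO wait dominates.

    headroom reserves a fraction of array IOPS so latency stays
    acceptable under bursts; 0.8 means planning to 80% utilization.
    """
    return int(array_iops * headroom // iops_per_vm)

# A 10,000-IOPS array with VMs averaging 50 IOPS each supports
# roughly 160 VMs before IO wait starts to inflate CPU load.
print(io_bound_vm_density(10_000, 50))
```

In practice, CPU, memory, and network can each impose their own ceiling; VM density is set by whichever resource runs out first, and in our experience that resource is very often storage IO.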

The bottom line is that the Marvell DragonFly accelerator lets the customer put a unique combination of commodity, off-the-shelf SSDs to work behind a specialized, high-performance PCIe controller, turning solid state to the task of pure IO acceleration in a way that few others can. Because the pure performance of SSD is unleashed without changing the centralized storage of data, and the effect is completely transparent to the infrastructure, the DragonFly accelerator’s combination of SSD with PCIe means massive performance without compromises. For those who have not been watching, this is a first for a storage performance product. A few highlights from our findings include:

  • A cost per IO that looks to be many times more cost-effective than other approaches using enterprise-class SSDs or PCIe-attached solid state storage.
  • Storage acceleration that can be implemented with no reconfiguration or alteration of current enterprise storage.
  • Acceleration that can boost the sustainable IO from SSD media by as much as 19X (Marvell DragonFly + SSD, versus SSD alone). Used this way, DragonFly acts as a cache for much larger backend storage and can deliver total IO well beyond the limits of dedicated server storage today (solid state or otherwise), comparing favorably to the horsepower behind entire enterprise-class arrays.
  • All told, this translates into a real-world 10X improvement in a tested virtual environment – specifically, a 10X increase in IO-limited virtual machine density, allowing IO-constrained organizations to run 10X as many desktops per physical server.

All told, the Marvell DragonFly looks like a mature product that delivers tremendous performance acceleration for enterprise and cloud data centers. For a product that may cost substantially less than any single server, this is a notable accomplishment. Moreover, since this technology can be deployed in front of a shared backend storage system (block or file), the performance acceleration can be applied against terabytes of capacity. A business can then, in essence, scale out this performance by purchasing more accelerators for more servers, assuming the working set of data is small enough to fit inside the SSD devices attached to the DragonFly. The dollars per IO may not get much better than this.

Publish date: 08/20/12

Dell EqualLogic FS7500 - Unifying the storage infrastructure with integrated file storage

The mantra for many storage vendors over the past few years has consisted of “doing more with less” and optimizing the total cost of storage inside the data center walls. While the market at large has shifted its focus to this topic more recently, this has been a key Dell EqualLogic differentiator since the first day they released their scale-out iSCSI SANs. EqualLogic has long come packaged with the ease of use, sophisticated features, and comprehensive management tools that other vendors have been much slower to roll out. Moreover, EqualLogic has often led the field in fluid adaptability – what Dell now calls their Fluid Data Architecture – which allows customers to easily and non-disruptively add more storage performance and capacity at any time, then automatically load-balances storage demands across all resources.

More recently, Dell has strategically broadened the EqualLogic iSCSI SAN portfolio by integrating an enterprise-class Network-Attached Storage (NAS) appliance into the EqualLogic architecture, enabling a scale-out unified SAN solution. Dell has labeled the first generation of this NAS appliance the EqualLogic FS7500.

Download this technology validation report free from Dell: http://dell.to/NoFizY


 

Publish date: 06/29/12