Technology Validation

Scale Computing HC3: A Second Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware and allowing virtual workloads to oversubscribe physical systems, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in what seems like decades, IT has also taken a serious leap forward in management, as this same virtual infrastructure wraps the virtualized workload with better capabilities than ever before - increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds.

To be sure, much of this complexity lies between the individual physical infrastructures that IT must touch and the duplication that virtualization often layers on top of them. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

With these challenges around managing a virtualized data center in mind, Scale Computing, long a provider of scale-out storage, introduced a new line of hyperconverged appliances - HC3 - in April 2012, and updated the appliances with new HyperCore software in May 2014. HC3 integrates storage and virtualized compute within a scale-out building-block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system designed for scale-out, HC3 removes the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple two-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that go beyond those found in traditional integrations of discrete compute and storage systems.

Publish date: 09/25/14
Technology Validation

Accelerating the VM with FlashSoft: Software-Driven Flash-Caching for the Virtual Infrastructure

Storage performance has long been the bane of the enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to craft clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. In our opinion, the most innovative of these solutions are the ones that require no real alteration to the storage infrastructure, nor any change in data management and protection practices.

This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND Flash or SAS/SATA SSDs installed in the server alongside a hardware or software IO handler component that mirrors commonly utilized data blocks onto the local high speed solid-state storage. Then the IO handler redirects server requests for data blocks to those local copies that are served up with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is simply cached, instead of moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, and can allow a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure and address storage performance problems.
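
To make the mechanism concrete, the sketch below shows the general read-through, write-through pattern that server-side caches of this kind follow. It is a minimal illustration only; the class and method names are hypothetical and do not represent FlashSoft's actual implementation.

```python
# Illustrative sketch: a simplified read-through, write-through server-side block cache.
# Hypothetical names; not FlashSoft's actual implementation.

class ServerSideBlockCache:
    def __init__(self, backend_read, backend_write, capacity_blocks):
        self.backend_read = backend_read      # function: block_id -> bytes (e.g., SAN read)
        self.backend_write = backend_write    # function: (block_id, bytes) -> None
        self.capacity = capacity_blocks       # how many blocks fit on the local flash device
        self.ssd = {}                         # stand-in for the local SSD / PCIe flash

    def read(self, block_id):
        # Serve from local flash when possible (microsecond-class latency).
        if block_id in self.ssd:
            return self.ssd[block_id]
        # Otherwise fetch from backend storage and keep a local copy for next time.
        data = self.backend_read(block_id)
        self._admit(block_id, data)
        return data

    def write(self, block_id, data):
        # Write-through: the backend remains the system of record, so existing
        # snapshot and backup practices continue to work unchanged.
        self.backend_write(block_id, data)
        if block_id in self.ssd:
            self.ssd[block_id] = data         # keep the cached copy coherent

    def _admit(self, block_id, data):
        if len(self.ssd) >= self.capacity:
            self.ssd.pop(next(iter(self.ssd)))  # naive eviction; real caches use LRU/ARC-style policies
        self.ssd[block_id] = data
```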

Publish date: 08/25/14
Technology Validation

Convergence for the Branch Office: Transforming Resiliency and TCO with Riverbed SteelFusion (TVS)

The branch office has long been a critical dilemma for the IT organization. Branch offices for many organizations are a critical point of productivity and revenue generation, yet the branch has always come with a tremendous amount of operational overhead and risk. Worse yet, challenges are often exacerbated because the branch office too often looks like a carryover of outdated IT practices.

More often than not, the branch office is still a highly manual, human-effort-driven administration exercise. Physical equipment too often sits at a remote physical office and requires significant human management and intervention for activities like data protection and recovery, or replacement of failed hardware. Given the remote nature of the branch office, such intervention often comes with significant overhead in the form of telephone support, inefficient over-the-wire system configuration, equipment build-and-ship processes, or even significant travel to remote locations. Moreover, in an attempt to avoid issues, the branch office is often over-provisioned with equipment to reduce the impact of outages, or is designed to depend on services delivered across the Wide Area Network (WAN) in a way that impairs user productivity and simply exchanges the risk of equipment failure for the risk of a WAN outage. While such practices come with significant operational cost, there is a subtler cost lurking below the surface: any branch office outage carries data consequences. Data protection tends to be a slower process for the branch office, exposing the branch to greater risk from equipment failure or disaster, and restoring branch office data and productivity after a disaster can be a long, slow process compared to the capabilities of the modern datacenter.

When branch offices are a key part of a business, these practices that are routinely accepted as the standard can make the branch office one of the costliest and riskiest areas of the IT infrastructure. Worse yet, for many enterprises, the branch office has only increased its importance over time, and may generate more revenue and require more responsive and available IT systems than ever before. The branch office clearly requires better agility and efficiency than it receives today.

Riverbed Technology has long proven its mettle in helping enterprises optimize and better enable connectivity and data sharing for distributed work teams. Over the past decade, Riverbed has come to dominate the market for WAN optimization technologies that compress data and optimize the connection between branch or remote offices and the datacenter. But Riverbed rose to this position because its SteelHead appliances do far more than just optimize a connection - Riverbed's dominance of this market sprang from deep optimization of CIFS/SMB and other collaboration protocols by way of intelligent interception and caching of the right data to make the remote experience feel like a local one. Moreover, Riverbed SteelHead could do this while making the remote connection effectively stateless, eliminating the need to protect or manage data in the branch office.

Almost two years ago, Riverbed announced a continuing evolution of its “location independent computing” focus with the introduction of the SteelFusion family of solutions. The vision behind SteelFusion is to deliver far more performance and capability in branch offices while doing away with the complexity of multiple component parts and scattered data. SteelFusion does this by transforming the branch office into a stateless “projection” of data, applications, and VMs stored in the datacenter. Moreover, SteelFusion does this with a converged solution that combines storage, networking, and compute in one device – the first comprehensive converged infrastructure solution purpose-built for the branch. This converged offering, though, is built on branch office “statelessness” that, as we’ll review, transparently stores data in the datacenter and allows the business to configure, change, protect, and manage the branch office with enterprise tools, while eradicating the risk associated with traditional branch office infrastructure.

SteelFusion does this today by running VMware ESXi VMs on a stateless appliance that in essence “projects” data from the datacenter to the remote location, while maintaining local speed of access and resilient availability that can tolerate even severe network outages. Three innovative technology components make up Riverbed’s SteelFusion and allow it to host virtual machines whose primary data lives in the datacenter; that data is cached on the SteelFusion appliance, which maintains a highly efficient, near-synchronous connection back to datacenter storage. In turn, SteelFusion makes it possible to run many local applications in a rich, complex branch office while requiring no other servers or devices. Riverbed promises that SteelFusion’s architecture can tolerate outages, yet synchronizes data so effectively that it operates as a stateless appliance, enabling branch data to be completely protected by datacenter synchronization and backup, with more up-to-date protection and faster recovery whether a single file or an entire system is lost. In short, this is a promise to comprehensively revolutionize the practice of branch office IT.
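
The toy sketch below illustrates the general pattern at work here: a branch-side cache that serves the working set locally while queuing writes for near-synchronous delivery back to authoritative datacenter storage, so the branch itself holds no irreplaceable state. It is purely illustrative; the names are hypothetical and this is not Riverbed's implementation.

```python
# Illustrative sketch: "stateless projection" as a toy write-back cache with an outbound
# sync queue. Hypothetical names; not Riverbed's implementation.

from collections import deque

class BranchProjection:
    def __init__(self, datacenter_read, datacenter_write):
        self.dc_read = datacenter_read        # the authoritative copy lives in the datacenter
        self.dc_write = datacenter_write
        self.local_cache = {}                 # working set held at the branch
        self.pending = deque()                # writes awaiting synchronization

    def read(self, block_id):
        # The branch serves its working set locally; misses are fetched over the WAN.
        if block_id not in self.local_cache:
            self.local_cache[block_id] = self.dc_read(block_id)
        return self.local_cache[block_id]

    def write(self, block_id, data):
        # Writes land locally first, then drain to the datacenter near-synchronously.
        self.local_cache[block_id] = data
        self.pending.append((block_id, data))

    def drain(self, wan_up=True):
        # Called continuously in practice; if the WAN is down the queue simply waits,
        # so the branch keeps running and catches up when connectivity returns.
        while wan_up and self.pending:
            block_id, data = self.pending.popleft()
            self.dc_write(block_id, data)
```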

In January 2014, Taneja Group took a deeper look at what Riverbed is doing with SteelFusion. While we’ve provided other written assessments of the use case and value of Riverbed SteelFusion, we also wanted to take a hands-on look at how the technology works, and whether in real-world use it really delivers reductions in management effort, improvements in availability, and increased IT capabilities, along with a consequent reduction in the risks around branch office IT. To do this, we turned to a hands-on lab exercise – what we call a Technology Validation.

What did we find? We found that Riverbed SteelFusion does indeed transform branch office management and capabilities by fundamentally reducing complexity, injecting a number of powerful capabilities (such as enterprise snapshots and access to all data, copies, and tools in the enterprise), and making the branch office resilient, constantly protected, and instantly recoverable. This change in capabilities also translates into a measurable impact on time and effort, and we captured a number of metrics throughout our hands-on look at SteelFusion. For the details, we turn to the full report.

Publish date: 04/14/14
Technology Validation

Tintri VMstore: Zero Management Storage (TVS)

Storage challenges in the virtual infrastructure are tremendous. Virtualization consolidates more IO than ever before, and then obscures the sources of that IO so that end-to-end visibility and understanding become next to impossible. As the storage practitioner labors on with business as usual – deploying yet more storage and fighting fires while trying to keep up with demand – the business is losing the battle to do more with less.

The problem is that inserting the virtual infrastructure into the middle of the application-to-storage connection, and then massively expanding that virtual infrastructure, introduces a tremendous amount of complexity. A seemingly endless stream of storage vendors is circling this problem today with an apparent answer – storage systems that deliver more performance. But more “bang for the buck” is too often just an attempt to cover up the lack of an answer for complexity-induced management inefficiency – across activities like provisioning, peering into utilization, troubleshooting performance problems, and planning for the future.

One vendor has an answer to this problem and has been sailing to widespread adoption, leaving a number of fundamentally changed enterprises in its wake. That vendor is Tintri, and it has focused on changing the way storage is integrated and used, instead of just tweaking storage performance. Tintri integrates more deeply with the virtual infrastructure than any other product we’ve seen, and creates distinct advantages in both storage capabilities and ongoing management.

Taneja Group recently had the opportunity to put Tintri’s VMstore array through a hands-on exercise, to see for ourselves whether there’s mileage to be had from a virtualization-specific storage solution. Without doubt, there is clear merit to Tintri’s approach. A virtualization specific storage system can reinvent a broad range of storage management interactions – by being VM-aware – and fundamentally alter the complexity of the virtual infrastructure for the better. In our view, these changes stand to have massive impact on the TCO of virtualization initiatives (some of which are identified in the table of highlights below) but the story doesn’t end there. At the same time they’ve fundamentally changed management, Tintri has also innovated around storage technology that enables Tintri VMstore to serve up storage beneath even the most extreme virtual infrastructures.

Publish date: 12/20/13
Technology Validation

AMI StorTrends 3500i: Solid State Performance for Everyday Business (TVS)

Solid-state storage technology – typically storage devices based on NAND flash – has opened up new horizons for storage systems over the past couple of years. The storage market has seemingly been flooded by new products that incorporate solid-state storage somewhere within their product line while promising breakthrough levels of storage performance. Yet vendors have found that putting solid-state technology into a storage system comes with challenges, and many of the products entering the market have a few wrinkles beneath the surface.

Just recently, AMI introduced another SAN storage appliance: the StorTrends 3500i, which integrates a comprehensive solid-state storage layer into the StorTrends iTX architecture. The 3500i can use Solid State Drives (SSDs) in multiple roles – as an all-flash array or as a hybrid storage array. As a hybrid storage array, the SSDs can be used as a cache, a tier, or a combination of the two. The StorTrends 3500i couples these SSD caching and tiering features with a field-proven storage architecture validated by more than 1,100 installs worldwide. The net result is a high-performance and cost-effective storage array. In fact, the 3500i looks poised to be one of the most comprehensively equipped storage system options for the mid-range enterprise storage customer looking for solid-state acceleration for their workloads.
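
For readers unfamiliar with the distinction, the sketch below contrasts the two hybrid roles mentioned above: caching copies hot blocks onto SSD while the disk copy remains authoritative, whereas tiering moves hot blocks so the SSD becomes their home. This is a generic illustration with hypothetical names, not the StorTrends iTX implementation.

```python
# Illustrative sketch: SSD used as a cache versus as a tier.
# Generic, hypothetical code; not the StorTrends iTX implementation.

def promote_as_cache(block_id, hdd, ssd_cache):
    """Caching: duplicate the hot block onto SSD; the HDD still holds the data,
    so losing the cache loses nothing but performance."""
    ssd_cache[block_id] = hdd[block_id]

def promote_as_tier(block_id, hdd, ssd_tier):
    """Tiering: relocate the hot block; the SSD is now its only location,
    so capacity is gained but the SSD becomes part of the persistence path."""
    ssd_tier[block_id] = hdd.pop(block_id)
```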

With this most recent storage system launch, StorTrends once again caught our attention, and we approached AMI with the idea of a hands-on lab exercise that we call a Taneja Group Technology Validation. Our goal with this testing? To see whether StorTrends truly preserved all of its storage functionality after integrating SSDs into the 3500i storage system, and whether the 3500i was up to the task of harnessing the blazing-fast storage performance of SSD.

Publish date: 11/30/13
Technology Validation

EMC Avamar 7 - Protecting Data at Scale in the Virtual Data Center (TVS)

Storing digital data has long been a perilous task. Not only are stored digital bits subject to catastrophic failure of the devices they rest upon, but the shared nature of digital data subjects them to error and even intentional destruction. In the virtual infrastructure, the dangers and challenges subtly shift. Data is more highly consolidated and more systems depend wholly on shared data repositories, which increases data risk. With many virtual machines connecting to a single shared storage pool, IO and storage performance become incredibly precious resources; this complicates backup and means that backup IO can cripple a busy infrastructure. Backup is a more important operation than ever before, but it is also fundamentally more challenging than ever before.

Fortunately, the industry learned this lesson quickly in the early days of virtualization, and has aggressively innovated to bring tools and technologies to bear on the challenge of backup and recovery for virtualized environments. APIs have unlocked more direct access to data, and products have finally come to market that make protection easier to use and more compatible with the dynamic, mobile workloads of the virtual data center. Nonetheless, differences abound among product offerings, often rooted in the subtleties of architecture – architectures that ultimately determine whether a backup product is best suited to SMB-sized needs or can scale to support the large enterprise.

Moreover, within the virtual data center, TCO centers on resource efficiency, and a backup strategy can be one of the most significant determinants of that efficiency. On one hand, traditional backup simply does not work and can cripple efficiency: there is too much IO contention and application complexity in trying to carry a legacy physical-infrastructure backup approach over to the virtual infrastructure. On the other hand, a number of specialized point solutions are designed to tackle some of the challenges of virtual infrastructure backup, but too often these products do not scale sufficiently, lack consolidated management, and impose tremendous operational overhead as the customer’s environment and data grow. Taking a strategic look at the options, backup approaches often seem to fly directly in the face of resource efficiency.

Publish date: 10/31/13