Taneja Group | Virtual Infrastructure
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Virtual Infrastructure

news / Blog

Virtualization Energizes the Data Center Automation Segment

When it comes time to automate IT processes, we recommend taking stock of your existing trusted vendor relationships as well as your most glaring pain points: do you struggle to keep operating systems and servers patched to the levels required for compliance?

  • Premiered: 08/26/09
  • Author: Taneja Group
Topic(s): Automation, Bartoletti, Datacenter Management, Server Virtualization, Virtual Infrastructure
news

Virtual Infrastructure Performance Management: The Storage View

Last time, I discussed the growing need for end-to-end Virtual Infrastructure Performance Management solutions to help datacenter managers successfully virtualize more performance-hungry and business-critical applications with confidence. While no vendor (in my view) has yet solved the VIPM challenge…

  • Premiered: 05/24/11
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): VIPM, Virtual Infrastructure
Profiles/Reports

Storage for the Integrated Virtual Infrastructure: HP P4000 SAN Solutions

This paper examines the features and capabilities of the HP P4000 product family, and shows how these SAN solutions help businesses to overcome the storage-related growing pains they typically encounter as they deploy and scale out a virtual infrastructure.

Publish date: 06/14/11
Profiles/Reports

Doubling VM Density with HP 3PAR Storage

This paper examines HP 3PAR Utility Storage and describes how the solution overcomes typical virtual infrastructure storage issues, enabling customers to increase VM density by at least two-fold as a result.

Publish date: 03/06/12
Profiles/Reports

Scale Computing HC3: Ending complexity with a hyper-converged, virtual infrastructure

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware systems, and enabling oversubscription of physical systems by many virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before – tools like increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of considerable complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. It can be complex indeed.

For certain, much of this complexity exists between the individual physical infrastructures that IT must touch, and the simultaneous duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

With challenges surrounding increasing virtual complexity driving their vision of a better way to do IT, Scale Computing, long a provider of scale-out storage for the SMB, recently introduced a new line of technology – a product labeled HC3, or Hyper Convergence 3. HC3 is an integration of scale-out storage and scale-out virtualized compute within a single building block architecture that couples all of the elements of a virtual data center together inside a single system. The promised result is a system that is simple to use, and does away with the management and complexity overhead associated with virtualization in the data center. By virtualizing and intermingling all compute and storage inside a system that is already designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex clusters, provision and manage storage, and perform a bevy of day-to-day administrative tasks. Provisioning additional resources – any resource – becomes one-click-easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service – our hands-on lab service – to the task of evaluating whether Scale Computing’s HC3 could deliver on these promises in the real world. For this task, we put several HC3 clusters through the paces to see how they deployed, how they held up under use, and what specialized features they delivered that might go beyond the features found in traditional integrations of separate compute and storage systems.

Publish date: 08/30/12
Profiles/Reports

Protocol Choices for Storage in the Virtual Infrastructure

Over the past few years, server virtualization has rapidly emerged as the de facto standard for today’s data center. But the path has not been an easy one, as server virtualization has brought with it a near upheaval in traditional infrastructure integrations.

From network utilization to data backup, almost no domain of the infrastructure has been left untouched, but by far the deepest challenges have revolved around storage. It may well be the case that no single infrastructure layer has ever posed as great a challenge to a single IT initiative as storage has posed to virtualization.

After experiencing wide-reaching initial rewards, IT managers have aggressively expanded their virtualization initiatives, and in turn the virtual infrastructure has grown faster than any other infrastructure technology ever deployed. But with rapid growth, demands on storage have quickly exceeded levels any business could have anticipated, requiring performance, capacity, control, adaptability, and resiliency like never before. In an effort to address these new demands, it quickly becomes obvious that storage cannot be delivered in the same old way. For organizations facing scale-driven virtualization storage challenges, it quickly becomes clear that storage must be delivered in a more utility-like fashion than ever before.

What do we mean by utility-like? Storage must be highly efficient; more easily presented, scaled, and managed; and more consistently delivered with acceptable performance and reliability than ever before.

In the face of these challenges, storage has advanced by leaps and bounds, but differences still remain between products and vendors. This is not a matter of performance or even purely interoperability, but rather one of suitability over time in the face of growing and constantly changing virtual infrastructures – changes that don’t solely revolve around the number and types of workloads, but also include a constantly evolving virtualization layer. A choice today is still routinely made – typically at the time of storage system acquisition – between iSCSI, Fibre Channel (FC), and NFS. While often a choice between block and file for the customer, there are substantial differences between these block and file architectures, and even between iSCSI and FC, that will define the process of presenting and using storage, and determine the customer’s efficiency and scale as they move forward with virtualization. Even minor differences will have long-ranging effects and ultimately determine whether an infrastructure can ever be operated with utility-like efficiency.

Recently, in this Technology in Depth report, Taneja Group set out to evaluate these protocol choices and determine what fits the requirements of the virtual infrastructure. We built our criteria with the expectation that storage was about much more than just performance or interoperability, or up-front ease of use – factors that are too often bandied about by vendors who conduct their own assessments while using their own alternative offerings as proxies for the competition. Instead, we defined a set of criteria that we believe are determinative in how customer infrastructure can deliver, adapt, and last over the long term. 

We summarize these characteristics as five key criteria. They are:

• Efficiency – in capacity and performance
• Presentation and Consumption
• Storage Control and Visibility
• Scalable and autonomic adaptation
• Resiliency

These are not inconsequential criteria, as a key challenge before the business is effectively realizing the intended virtualization gains as the infrastructure scales. Moreover, our evaluation is not a matter of performance or interoperability – the protocols themselves earn comparable marks there. Rather, our assessment is a broader consideration of storage architecture suitability over time in the face of a growing and constantly changing virtual infrastructure. As we’ll discuss, mismatched storage can create a number of inefficiencies that defeat virtualization gains and create significant problems for the virtual infrastructure at scale, and these criteria highlight the alignment of storage protocol choices with the intended goals of virtualization.
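
To make this concrete, here is a toy weighted-scoring sketch in Python of how such criteria might be rolled up per protocol. The weights and the 1-5 scores below are invented placeholders for illustration only, not Taneja Group’s actual ratings.

```python
# Toy weighted scoring of the five criteria above. All weights and scores
# are invented placeholders, not Taneja Group's actual assessment.
CRITERIA_WEIGHTS = {
    "efficiency": 0.25,
    "presentation_and_consumption": 0.20,
    "control_and_visibility": 0.20,
    "scalable_adaptation": 0.20,
    "resiliency": 0.15,
}

# Hypothetical 1-5 scores per protocol, for illustration only.
protocol_scores = {
    "FC":    {"efficiency": 4, "presentation_and_consumption": 3,
              "control_and_visibility": 4, "scalable_adaptation": 4,
              "resiliency": 5},
    "iSCSI": {"efficiency": 4, "presentation_and_consumption": 4,
              "control_and_visibility": 4, "scalable_adaptation": 4,
              "resiliency": 4},
    "NFS":   {"efficiency": 3, "presentation_and_consumption": 5,
              "control_and_visibility": 3, "scalable_adaptation": 3,
              "resiliency": 3},
}

for proto, scores in protocol_scores.items():
    total = sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())
    print(f"{proto}: weighted score {total:.2f} out of 5")
```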

What did we find? Block storage solutions carry significant advantages today. Key capabilities such as VMware API integrations, and approaches to scaling, performance, and resiliency, make a difference. While advantages may be had in initial deployment with NAS/NFS, architectural and scalability characteristics suggest this is a near-term advantage that does not hold up in the long run. Meanwhile, between block-based solutions, we see the difference today surfacing mostly at scale. At mid-sized scale, iSCSI may have a serious cost advantage, while “converged” form factors may let the mid-sized business/enterprise scale with ease into the far future. But for businesses facing serious IO pressure, or looking to build an infrastructure for long-term use that can serve an unexpected multitude of needs, FC storage systems deliver utility-like storage with a level of resiliency that likely won’t be matched without the FC SAN.

Publish date: 10/15/12
Profiles/Reports

Dell AppAssure: Data Protection made simple

Over the past few years, backup has become a busy market. For the first time in many years, a new wave of energy hit this market as small innovators sprang forth to try to tackle pressing challenges around virtual server backup. The market has taken off because of a unique set of challenges and simultaneous opportunities within the virtual infrastructure – with large amounts of highly similar data, interesting APIs for automation, and a uniquely limited set of IO and processing resources, the data behind the virtual server can be captured and protected in unique new ways. As innovators in turn attacked these opportunities, backup has been fundamentally changed. In many cases, backup has been put in the hands of the virtual infrastructure administrator, made lighter weight and vastly more accessible, and has become a powerful tool for data protection and data management.

In reality, the innovations in virtual backup have leveraged the unifying layer of virtualization to tackle several key backup challenges. These challenges have been long-standing in the practice of data protection, and include ever-tightening backup windows, ever more demanding recovery point objectives (RPO, the amount of tolerable data loss when recovering), short recovery time objectives (RTO, how long it takes to complete a recovery), recovery reliability, and complexity. Specialized data protection for the virtual infrastructure has made enormous progress in tackling these challenges, and in simplifying the practice of data protection to boot.
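
A quick, hypothetical sketch of the RPO/RTO arithmetic just defined may help; the backup interval, dataset size, and restore throughput below are invented numbers, not measurements of any product discussed here.

```python
# Hypothetical illustration of the RPO/RTO definitions above.
# All numbers are invented for the example.

def worst_case_rpo_minutes(backup_interval_minutes: float) -> float:
    """Worst-case data loss: a failure just before the next backup loses
    everything written since the last one."""
    return backup_interval_minutes

def estimated_rto_minutes(dataset_gb: float, restore_gb_per_min: float) -> float:
    """Rough recovery time: data to restore divided by restore throughput
    (ignores failure detection, boot, and validation time)."""
    return dataset_gb / restore_gb_per_min

# Example: nightly backups of a 500 GB server, restored at 2 GB/min.
print(worst_case_rpo_minutes(24 * 60))   # 1440.0 -> up to a day of data loss
print(estimated_rto_minutes(500, 2.0))   # 250.0  -> roughly a four-hour outage
```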

But we’ve often wondered what it would take to bring the innovation from virtual infrastructure protection to a full-fledged backup product that could tackle both physical and virtual systems. At the recent request of Dell, Taneja Group Labs had the opportunity to look at just such a product. That product is AppAssure – a technology that seems destined to be the future architectural anchor for the many data protection technologies in Dell’s rapidly growing product portfolio. We jumped at the chance to run AppAssure through its paces in a hands-on exercise, as we wanted to see whether AppAssure had an architecture poised to change how datacenter-wide protection is typically done, perhaps by making it more agile and accessible.

Publish date: 03/29/13
Profiles/Reports

Are Your Critical Business Services Protected?

In many ways, Information Technology (IT) has become the centerpiece of business operations across the globe. This dynamic is both an opportunity and a threat to IT organizations. On one hand, IT has a very important seat at the table as businesses decide where to invest or deploy new offerings and services. On the other hand, IT organizations are now responsible for ensuring that these business services, and the data that drives them, are always available.

To ensure availability, IT must have a comprehensive business continuity plan in place, especially for the critical operations the business requires. However, business-critical services are no longer just a matter of managing a single application or workload running on a solitary server. Instead, they are often sets of interwoven components made up of multiple physical and virtual servers that depend upon one another. Seldom does a business-critical application stand alone or act with complete independence from other systems in the data center.

This complexity introduces challenges and compromises that the business is ill prepared to understand or recognize. Often, when it comes to business continuity, issues are not recognized until it is too late. Many systems may have had a more manageable approach to continuity in the physical world. Now, with the agility that virtualization introduces, viewing, controlling and protecting the complete business service, especially when that service is made up of multiple physical and virtual components, becomes a larger challenge. Considering that business-critical applications run on both physical and virtual infrastructure, IT needs a better capability for viewing and protecting the entire service being delivered to the business.

In this solution brief, we’ll look at what a business service is comprised of, and the challenges and options for business continuity across disparate physical and virtual infrastructure.

Publish date: 04/12/13
news

Software-defined storage might not be so radical after all

While software-defined storage is receiving lots of buzz, it isn't as new an idea as it may seem; storage virtualization vendors have been working toward it for years.

  • Premiered: 09/12/13
  • Author: Taneja Group
  • Published: Tech Target: Search Virtual Storage
Topic(s): SDDC, Software Defined Data Center, Storage, SDS, Virtualization, Virtual Infrastructure, Storage Virtualization, software-centric architecture, VSA
news

Storage infrastructure management is still elusive

We've been on a multi-decade crusade to address performance and basic storage management tasks – protecting data in place, and scaling and expanding our data storage systems to meet new requirements. But once performance, scaling, and expansion issues are addressed, it becomes clear that the last major challenge in the data center is storage management.

  • Premiered: 12/17/13
  • Author: Taneja Group
  • Published: Tech Target: Search Storage
Topic(s): Jeff Boles, scalability, Storage, storage infrastructure, VM, Virtualization, Virtual Infrastructure, Virtual Infrastructure Management, converged storage, Converged Infrastructure, hyper-converged, HP, Hitachi, IBM, Nutanix, SimpliVity, VSA, FalconStor, Nexenta, StorMagic, VMware, VSAN, Gridstore, Tintri, Virtual Machine, SDS, software defined
Profiles/Reports

Tintri VMstore: Zero Management Storage (TVS)

Storage challenges in the virtual infrastructure are tremendous. Virtualization consolidates more IO than ever before, and then obscures the sources of that IO so that end-to-end visibility and understanding become next to impossible. As the storage practitioner labors on with business as usual, deploying yet more storage and fighting fires to keep up with demand, the business is losing the battle to do more with less.

The problem is that inserting the virtual infrastructure in the middle of the application-to-storage connection, and then massively expanding that virtual infrastructure, introduces a tremendous amount of complexity. A seemingly endless stream of storage vendors is circling this problem today with an apparent answer – storage systems that deliver more performance. But more “bang for the buck” is too often just an attempt to cover up the lack of an answer for complexity-induced management inefficiency – ranging across activities like provisioning, peering into utilization, troubleshooting performance problems, and planning for the future.

With an answer to this problem, one vendor has been sailing to widespread adoption, and leaving a number of fundamentally changed enterprises in their wake. That vendor is Tintri, and they’ve focused on changing the way storage is integrated and used, instead of just tweaking storage performance. Tintri integrates more deeply with the virtual infrastructure than any other product we’ve seen, and creates distinct advantages in both storage capabilities and ongoing management.

Taneja Group recently had the opportunity to put Tintri’s VMstore array through a hands-on exercise, to see for ourselves whether there’s mileage to be had from a virtualization-specific storage solution. Without doubt, there is clear merit to Tintri’s approach. A virtualization-specific storage system can reinvent a broad range of storage management interactions – by being VM-aware – and fundamentally alter the complexity of the virtual infrastructure for the better. In our view, these changes stand to have a massive impact on the TCO of virtualization initiatives (some of which are identified in the table of highlights below), but the story doesn’t end there. At the same time as fundamentally changing management, Tintri has also innovated around storage technology that enables Tintri VMstore to serve up storage beneath even the most extreme virtual infrastructures.

Publish date: 12/20/13
Profiles/Reports

Software Storage Solutions for Virtualization

Storage has long been a major source of operational and architectural challenges for IT practitioners, but today these challenges are most felt in the virtual infrastructure. They spring from the physicality of storage – while the virtual infrastructure has made IT entirely more agile and adaptable than ever before, storage still depends upon digital bits that are permanently stored on a physical device somewhere within the data center.

For practitioners who have experienced the pain caused by this – configuration hurdles, painful data migrations, and even disasters – the idea of software-defined storage likely sounds somewhat ludicrous. But the term also holds tremendous potential to change the way IT is done by tackling this one last vestige of the traditional, inflexible IT infrastructure.

The reality is that software-defined storage isn’t that far away. In the virtual infrastructure, a number of vendors have long offered Virtual Storage Appliances (VSAs) that can make storage remarkably close to software-defined. These solutions allow administrators to easily and rapidly deploy storage controllers within the virtual infrastructure, and equip either networked storage pools or the direct-attached storage within a server with enterprise-class storage features that are consistent and easily managed by the virtual administrator, irrespective of where the virtual infrastructure is (in the cloud, or on premises). Such solutions can make comprehensive storage functionality available in places where it could never be had before, allow for higher utilization of stranded pools of storage (such as local disk in the server), and enable a homogeneous management approach even across many distributed locations.
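
To illustrate the pooling idea, here is a minimal, hypothetical Python sketch (all class and host names are ours, not any vendor’s API) of a VSA-style pool that aggregates stranded local disk from several hosts into one mirrored, shared pool.

```python
# Minimal sketch of the VSA idea described above: per-host controllers pool
# local direct-attached storage into one shared, mirrored volume.
# Hypothetical names throughout; not any vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class HostDAS:
    host: str
    free_gb: int          # stranded local disk contributed to the pool

@dataclass
class VSAPool:
    members: list = field(default_factory=list)

    def add_host(self, das: HostDAS) -> None:
        self.members.append(das)

    def usable_gb(self, mirror_copies: int = 2) -> int:
        # Synchronous mirroring across hosts divides raw capacity by the
        # number of copies kept for resiliency.
        return sum(m.free_gb for m in self.members) // mirror_copies

pool = VSAPool()
for host, gb in [("esx-01", 2000), ("esx-02", 2000), ("esx-03", 2000)]:
    pool.add_host(HostDAS(host, gb))
print(pool.usable_gb())   # 3000 GB usable from 6000 GB raw, 2-way mirrored
```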

The 2012-2013 calendar years brought increasing attention and energy to the VSA marketplace. While the longest-established major-vendor VSA solution in the marketplace has been HP’s StoreVirtual VSA, in 2013 an equally major vendor – VMware – introduced a similar, software-based, scale-out storage solution for the virtual infrastructure – VSAN. While VMware’s VSAN does not directly carry a VSA moniker, and in fact stands separate from VMware’s own vSphere Storage Appliance, VSAN has an architecture very similar to the latest generation of HP’s own StoreVirtual VSA. Both of these products are scale-out storage software solutions that are deployed in the virtual infrastructure and contain solid-state caching/tiering capabilities that enhance performance and make them enterprise-ready for production workloads. VMware’s 2013 announcement meant HP was finally no longer the sole major (Fortune 500) vendor with a primary storage VSA approach. This only adds validation for other vendors who have long offered VSA-based solutions – vendors like FalconStor, Nexenta, and StorMagic.

We’ve turned to a high-level assessment of five market leaders offering VSA or software storage in the virtual infrastructure. We’ve assessed these solutions with an eye toward how they fit as primary storage for the virtual infrastructure. In this landscape, we’ve profiled the key characteristics and capabilities critical to storage systems fulfilling this role. At the end of our assessment, each solution clearly has a place in the market, but not all VSA solutions are ready for primary storage. Those that are may stand to reinvent the practice of storage in customer data centers.

Publish date: 01/03/14
Profiles/Reports

Accelerating the VM with FlashSoft: Software-Driven Flash-Caching for the Virtual Infrastructure

Storage performance has long been the bane of the enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to shape clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. In our opinion, the most innovative of these solutions are the ones that require no real alteration in the storage infrastructure, nor a change in data management and protection practices.

This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND flash or SAS/SATA SSDs installed in the server, alongside a hardware or software IO handler component that mirrors commonly utilized data blocks onto the local high-speed solid-state storage. The IO handler then redirects server requests for those data blocks to the local copies, which are served up with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is simply cached, instead of moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, allowing a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure and address storage performance problems.
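
The flow is easy to picture in code. The sketch below is our own simplified model of a generic read cache with write-through semantics, assuming an LRU eviction policy; it is not FlashSoft’s actual implementation, and all names are hypothetical.

```python
# Simplified model of server-side flash caching as described above:
# hot blocks are served from local flash; writes pass through to the
# backend array so snapshots and backups there keep working unchanged.
from collections import OrderedDict

class ServerSideCache:
    def __init__(self, backend: dict, capacity_blocks: int):
        self.backend = backend          # stands in for the SAN/NAS array
        self.capacity = capacity_blocks
        self.cache = OrderedDict()      # LRU map of block number -> data

    def read(self, block: int) -> bytes:
        if block in self.cache:         # hit: served in microseconds
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backend[block]      # miss: fetched in milliseconds
        self._insert(block, data)
        return data

    def write(self, block: int, data: bytes) -> None:
        self.backend[block] = data      # write-through: the array remains
        self._insert(block, data)       # the authoritative copy

    def _insert(self, block: int, data: bytes) -> None:
        self.cache[block] = data
        self.cache.move_to_end(block)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least-recently-used
```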

Publish date: 08/25/14
news / Blog

Pain Points within the Virtual Infrastructure: How to Not Lose Your Cost Savings

If you're reading this blog, chances are you're a storage practitioner, a virtual administrator, or the boss of some of those folks - and you likely know that storage is expensive. But do you know just how expensive?

  • Premiered: 03/28/14
  • Author: Taneja Group
Topic(s): Storage, Virtualization, Virtual Infrastructure, Scale, Riverbed SteelFusion, VMware, VSAN, Tintri
Profiles/Reports

Scale Computing HC3 And VMware Virtual SAN Hyperconverged Solutions - Head to Head

Scale Computing was an early proponent of hyperconverged appliances and is one of the innovators in this marketplace. Since the release of Scale Computing’s first hyperconverged appliance, many others have come to embrace the elegance of having storage and compute functionality combined on a single server. Even the virtualization juggernaut VMware has seen the benefits of abstracting, pooling, and running storage and compute on shared commodity hardware. VMware’s current hyperconverged storage initiative, VMware Virtual SAN, seems to be gaining traction in the marketplace. We thought it would be an interesting exercise to compare and contrast Scale Computing’s hyperconverged appliance to a hyperconverged solution built around VMware Virtual SAN. Before we delve into this exercise, however, let’s go over a little background history on the topic.

Taneja Group defines hyperconvergence as the integration of multiple previously separate IT domains into one system in order to serve up an entire IT infrastructure from a single device or system. This means that hyperconverged systems contain all IT infrastructure—networking, compute and storage—while promising to preserve the adaptability of the best traditional IT approaches. Such capability implies an architecture built for seamless and easy scaling over time, in a “grow as needed” fashion.

Scale Computing got its start with scale-out storage appliances and has since morphed these into a hyperconverged appliance—HC3. HC3 is the natural evolution of its well-regarded line of scale-out storage appliances, adding both a hypervisor and a virtual infrastructure manager. HC3’s strong suit is its ease of use and affordability. The product has seen tremendous growth and now has over 900 deployments.

VMware got its start with compute virtualization software and is by far the largest virtualization company in the world. VMware has always been a software company, and takes pride in its hardware agnosticism. VMware’s first attempt to combine shared direct-attached storage (DAS) and compute on the same server resulted in a product called “VMware vSphere Storage Appliance” (VSA), which was released in June of 2011. VSA had many limitations and didn’t gain traction in the marketplace; it reached its end of availability (EOA) in June of 2014. VMware’s second attempt, VMware Virtual SAN (VSAN), which was announced at VMworld in 2013, shows a lot of promise and seems to be gaining acceptance, with over 300 paying customers using the product. We will be comparing VMware Virtual SAN to Scale Computing’s hyperconverged appliance, HC3, in this paper.

Here we have two companies: Scale Computing, which has transformed from an early innovator in scale-out storage into a company that provides a hyperconverged appliance; and VMware, which was an early innovator in compute virtualization and has since transformed into a company that provides the software needed to create build-your-own hyperconverged systems. We looked deeply into both systems (HC3 and VSAN) and walked both through a series of exercises to see how they compare. We aimed this review at what we consider a sweet spot for these products: small to medium-sized enterprises with limited dedicated IT staff and a limited budget. After spending time with these two solutions, and probing various facets of them, we came up with some strong conclusions about their ability to provide an affordable, easy-to-use, scalable solution for this market.

The observations we have made about both products are based on hands-on testing, both in our lab and on-site at Scale Computing’s facility in Indianapolis, Indiana. Although we talk about performance in general terms, we did not conduct, and you should not construe this as, a benchmarking test. We have, in good faith, verified all conclusions made around any timing issues. Moreover, the numbers that we are using are generalities that we believe are widely known and accepted in the virtualization community.

Publish date: 10/01/14
news

Can your cluster management tools pass muster?

The right designs and cluster management tools ensure your clusters don't become a cluster, er, failure.

  • Premiered: 11/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): cluster, Cluster Management, Cluster Server, Storage, Cloud, Public Cloud, Private Cloud, Virtual Infrastructure, Virtualization, hyperconvergence, hyper-convergence, software-defined, software-defined storage, SDS, Big Data, scale-up, CAPEX, IT infrastructure, OPEX, Hypervisor, Migration, QoS, Virtual Machine, VM, VMware, VMware VVOLs, VVOLs, Virtual Volumes, cloud infrastructure, OpenStack
news

Building Docker infrastructure still tough, but maybe not for long

Docker could become the next generation compute platform, and even replace server virtualization, but the container technology has some growing up to do first.

  • Premiered: 12/18/15
  • Author: Taneja Group
  • Published: TechTarget: Search Server Virtualization
Topic(s): Docker, Application, Server Virtualization, Virtualization, computer, containers, container, Infrastructure, Performance, Mike Matchett, functionality, application performance, Citrix, NetScaler, CPX, VM, Virtual Machine, Virtual Infrastructure, Storage, ClusterHQ, Weaveworks, API, Cloud
Profiles/Reports

Scale Computing HC3: A Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware systems, and enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before - tools like increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. 

For certain, much of this complexity exists between the individual physical infrastructures that IT must touch, and the simultaneous duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

Scale Computing, an early pioneer in hyperconverged solutions, has released multiple versions of HC3 appliances, now running the 6th generation of Scale’s HyperCore Operating System. Scale Computing continues to push the boundary of the simplicity, value and availability that many SMB IT departments have come to rely on. HC3 is an integration of storage and virtualized compute within a scale-out building block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system that is designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click-easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond the features found in traditional integrations of discrete compute and storage systems.

While we did touch upon whether Scale's architecture could scale performance as well as capacity, we focused our testing upon how the seamless integration of storage and compute within HC3 tackles key complexity challenges in the traditional virtual infrastructure.

As it turns out, HC3 is a far different system than the traditional compute and storage systems that we've looked at before. HC3's combination of compute and storage takes place within a scale-out paradigm, where adding more resources is simply a matter of adding additional nodes to a cluster. This immediately brings on more storage and compute resources, and makes adapting and growing the IT infrastructure a no-brainer exercise. On top of this adaptability, virtual machines (VMs) can run on any of the nodes, without any complex external networking. This delivers seamless utilization of all datacenter resources, in a dense and power efficient footprint, while significantly enhancing storage performance.
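
The scale-out behavior the paragraph describes can be modeled in a few lines. The sketch below is our own illustration with invented node sizes, not Scale Computing’s software: adding a node grows every resource pool in one step.

```python
# Toy model of scale-out growth as described above: each added node
# contributes compute and storage to one shared pool. Invented numbers.
from dataclasses import dataclass

@dataclass
class Node:
    cores: int
    ram_gb: int
    disk_tb: float

class Cluster:
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node) -> None:
        # One operation grows compute and storage at the same time.
        self.nodes.append(node)

    def totals(self):
        return (sum(n.cores for n in self.nodes),
                sum(n.ram_gb for n in self.nodes),
                sum(n.disk_tb for n in self.nodes))

cluster = Cluster()
for _ in range(3):
    cluster.add_node(Node(cores=16, ram_gb=128, disk_tb=8.0))
print(cluster.totals())   # (48, 384, 24.0) after three identical nodes
```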

Meanwhile, within an HC3 cluster, these capabilities are all delivered on top of a uniquely robust system architecture that can tolerate any failure - from a disk to an entire cluster node - and guarantee a level of availability seldom seen by mid-sized customers. Moreover, that uniquely robust, clustered, scale-out architecture can also intermix different generations of nodes in a way that will put an end to painful upgrades, reducing them to simply decommissioning old nodes as new ones are introduced.

HC3’s flexibility, ease of deployment, and robustness come with the simplest and easiest-to-use management interface we have seen. This makes HC3 a disruptive game changer for SMB and SME businesses. HC3 stands to banish complex IT infrastructure deployment, permanently alter ongoing operational costs, and take application availability to a new level. With those capabilities in focus, single bottom-line observations don’t do HC3 justice. In our assessment, HC3 may take as little as 1/10th the effort to set up and install as traditional infrastructure, 1/4th the effort to configure and deploy a virtual machine (VM) versus doing so on traditional infrastructure, and can banish the planning, performance troubleshooting, and reconfiguration exercises that can consume as much as 25-50% of an IT administrator’s time. HC3 is about delivering on all of these promises simultaneously, and, with the additional features we'll discuss, transforming the way SMB/SME IT is done.

Publish date: 09/30/15
news

The copy data management market is starting to go mainstream

Products from copy data management vendors protect and manage production data to lower storage costs, speed data access and streamline self-service access to data copies.

  • Premiered: 09/06/17
  • Author: Steve Ricketts
  • Published: TechTarget: Search Storage
Topic(s): CDM, copy data management, Steve Ricketts, data backup, Data Management, Data protection, secondary data, compliance, visibility, Actifio, Catalogic, Dell EMC, IBM, Hybrid Cloud, multi cloud, data lifecycle management, DLM, DevOps, Data reduction, Deduplication, Compression, Cohesity, Veritas, Data Domain, NetBackup, hyperconverged, hyperconvergence, Storage, Hitachi, CommVault