Taneja Group | Technology+Validation
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Technology+Validation

news

Taneja Group Validation: Making Endpoint Protection Practical

Desktop, or endpoint, protection has long been a thorn in the side of IT. We’ve struggled to figure out how to protect desktops effectively, and nearly everything proposed has suffered in one dimension or another. Some solutions have been too cumbersome and complex for administrators or users (or both), some overwhelm the existing infrastructure, and some are simply overbuilt and too intrusive. Moreover, the vast majority have carried cost models that are too expensive for what is, at the end of the day, one of the most important but also most price-sensitive IT assets – the desktop.

  • Premiered: 07/08/11
  • Author: Taneja Group
  • Published: InfoStor
Topic(s): TVS, Technology Validation, i365, endpoint protection
Profiles/Reports

Scale Computing HC3: Ending complexity with a hyper-converged, virtual infrastructure

Consolidation and the enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware, and by enabling oversubscription of physical systems by many virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before – increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, that increase in capability has come at the cost of considerable complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. The result can be complex indeed.

For certain, much of this complexity lies between the individual physical infrastructures that IT must touch and the duplication that virtualization often layers on top of them. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on the physical storage systems or in the virtual infrastructure.

With the challenges of mounting virtual complexity driving their vision of a better way to do IT, Scale Computing, long a provider of scale-out storage for the SMB, recently introduced a new line of technology – a product labeled HC3, or Hyper Convergence 3. HC3 integrates scale-out storage and scale-out virtualized compute within a single building-block architecture that couples all of the elements of a virtual data center together inside a single system. The promised result is a system that is simple to use and does away with the management and complexity overhead associated with virtualization in the data center. By virtualizing and intermingling all compute and storage inside a system already designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources – any resource – becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service – our hands-on lab service – to the task of evaluating whether Scale Computing’s HC3 could deliver on these promises in the real world. For this task, we put several HC3 clusters through their paces to see how they deployed, how they held up under use, and what specialized features they delivered that might go beyond those found in traditional integrations of separate compute and storage systems.


Publish date: 08/30/12
Profiles/Reports

Integrated Disaster Recovery: Technologies for Comprehensive Storage Array Protection

Disaster recovery (DR) has long been particularly challenging for the midmarket customer. It usually requires multiple layers and components – host-based software, replication gateways or appliances, and often array-based functionality that is licensed and managed separately. Add to this complexity the need for robust bandwidth or an expensive WAN optimization approach, and it’s no surprise that DR can have a significant impact on both OPEX and CAPEX budgets.
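To put rough numbers on the bandwidth problem, here is a minimal back-of-the-envelope sketch in Python. Every input – the daily change rate, the replication window, the WAN-optimization ratio – is an assumption chosen for illustration, not a measured figure:

    # Rough arithmetic behind the DR bandwidth challenge described above.
    # All inputs are illustrative assumptions, not measured values.
    BITS_PER_GB = 8 * 1000 ** 3   # decimal gigabytes to bits

    def required_mbps(change_gb: float, window_hours: float, reduction: float = 1.0) -> float:
        """Average link speed (Mbps) needed to replicate change_gb within the window."""
        return change_gb * BITS_PER_GB / reduction / (window_hours * 3600) / 1e6

    daily_change_gb = 200   # data changed per day at the primary site (assumed)
    window_hours = 8        # overnight replication window (assumed)
    wan_reduction = 5.0     # assumed dedupe/compression ratio from WAN optimization

    print(f"Raw link needed:       {required_mbps(daily_change_gb, window_hours):.0f} Mbps")                 # ~56
    print(f"With WAN optimization: {required_mbps(daily_change_gb, window_hours, wan_reduction):.0f} Mbps")  # ~11

Even at this modest scale, the raw requirement outstrips many midmarket WAN links, which is exactly why built-in WAN optimization changes the DR economics.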

The cost to manage all of these different elements can dwarf the cost of the primary storage system itself. The enterprise faces many of the same challenges, but with bigger budgets and more specialists to manage the complexity. Midmarket businesses and organizations face the same challenges without the same level of budget or staffing.

Recently, Taneja Group Labs put the StorTrends 3400i array through a Technology Validation exercise to evaluate how StorTrends measured up as an SMB/SME storage solution in terms of ease of use, performance, availability, adaptability, and innovative features. Over the course of our Technology Validation exercise, it was clear that one particular StorTrends capability rose above all others: StorTrends’ built-in, multi-site, WAN-optimized data replication. Specifically, the StorTrends suite of replication functionality looks poised to equip SMB and SME customers with tools that, for the first time, make robust DR truly achievable. In this report, we’ll highlight what we found, and why it stood out.


Publish date: 09/06/12
Profiles/Reports

StorTrends 3400i - Reinventing the value of storage

Selecting a primary storage solution is undoubtedly one of the most critical decisions an IT department can make. As the foundational piece of the modern datacenter, it represents perhaps the single most important piece of IT infrastructure for businesses large, medium, or small. Business-critical applications will live and breathe on the performance of the selected storage system, and business data will inevitably be constrained by its capacity.

In the mid-market, making a storage investment can be particularly daunting, as the stakes are higher and the selection is harder. Compared to larger enterprises, mid-market storage dollars are fewer and harder to come by. Precious and often limited IT staff time is spread across more systems and technologies, core skills are often not rooted in storage, and technically vetting a storage system can be all but impossible. This makes storage a risky proposition for the small and medium enterprise (SME) and SMB customer. We frequently hear tales of storage system purchases where I/O is not sufficient, features are missing (or require additional licenses and cost to acquire), or where architectural compromises create availability issues that regularly impact the entire business.

For several years, the developers of the StorTrends line of NAS/SAN solutions have been working hard to architect a storage system for the mid-market that puts an end to these risks and compromises. By harnessing the engineering expertise of their parent company, American Megatrends, Inc. (AMI) – an innovator in storage and BIOS technologies – StorTrends has been tackling the challenge of delivering abundant performance, robust reliability, and feature-rich storage with the SMB and SME customer in mind. Their claim is that the StorTrends 3400i is both one of the most cost-effective choices in the market and one of the most well-rounded.

In mid-2012, StorTrends caught our attention with these claims and a series of notable customer wins in a highly competitive market. To learn more, we approached StorTrends with the idea of a hands-on lab exercise – what we call a Technology Validation – to examine in more depth how StorTrends was delivering comprehensive value for customers in the mid-market space. Utilizing our proven validation methodology, which included time spent at AMI headquarters in Norcross, GA, we put a set of StorTrends 3400i storage systems through their paces, with an eye toward examining several capabilities that StorTrends claims make the 3400i one of the best-value storage options in the mid-market.


Publish date: 12/10/12
Profiles/Reports

HP 3PAR StoreServ Double Density: Delivering twice the VM density versus traditional arrays

Why does storage performance matter? Few storage administrators have not witnessed an unhappy moment where a lack of storage performance brought an application to its knees. But quantifying how storage performance matters, aside from serving as insurance against such an application-crippling moment, is a tricky proposition.

One way is in terms of how storage performance determines the efficiency of the infrastructure. When all else is equal and well tuned – hypervisors, servers, networks – then storage becomes a precious commodity that can determine just how many workloads or applications a given infrastructure can support, and therefore how efficiently it can operate at scale.

When it comes to virtualization, this workload density has serious impacts, but fortunately it is even easier to assess. A number of years ago, Taneja Group started out testing workload density among competitive hypervisors, and we labeled it VM density – a measure of how many VMs similar systems can run without compromises in performance or usability. Our net findings clearly indicated how big an impact a small difference in VM density can have. When poor VM density makes it necessary to add more servers and hypervisors, the servers themselves are costly enough, but licensing tools like vSphere suites can add many thousands of dollars and dwarf the hardware costs. Meanwhile, with more servers/hypervisors comes more complexity and management overhead, along with operational data center costs (power, cooling, floor space). The consequences can easily run to tens of thousands of dollars per server. At large scale, superior VM density can add up to hundreds of thousands or millions of dollars for a business.
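To make that arithmetic concrete, here is a minimal Python sketch of the server-count math. Every number – the workload count, the per-server cost, the two density figures – is an assumption chosen for illustration, not a result from our testing:

    # Back-of-the-envelope VM density cost model.
    # All inputs are illustrative assumptions, not test results.
    def servers_needed(total_vms: int, vms_per_server: int) -> int:
        """Servers required to host total_vms at a given VM density."""
        return -(-total_vms // vms_per_server)  # ceiling division

    TOTAL_VMS = 400            # workloads to host (assumed)
    COST_PER_SERVER = 30_000   # hardware + licensing + operations per server (assumed)

    low  = servers_needed(TOTAL_VMS, vms_per_server=20)   # storage-constrained density
    high = servers_needed(TOTAL_VMS, vms_per_server=40)   # double the VM density

    print(f"Servers at 20 VMs each: {low}")     # 20
    print(f"Servers at 40 VMs each: {high}")    # 10
    print(f"Cost avoided: ${(low - high) * COST_PER_SERVER:,}")  # $300,000

Even with these modest assumed numbers, doubling VM density halves the server count, which is how a small density edge compounds into six- and seven-figure savings at scale.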

We’ve long suggested that storage performance is one of the biggest contributors to VM density, but few vendors have been brave enough to step forward and declare that their own storage system can take VM density farther than anyone else’s. Well, times have changed.

Approximately six months ago, HP made a bold promise with their 3PAR storage systems – guaranteeing that customers moving from traditional legacy storage to HP 3PAR storage will double the VM density of their existing server infrastructure. Better yet, HP’s promise wasn’t about more spindles, but rather the efficiency and power of their 3PAR StoreServ storage controller architecture – the promise holds even when the customer’s old storage system contains the same disk spindles. Our minds were immediately beset with questions: could a storage system replacement really double the workload density of most virtual infrastructures? We knew from experience that most customers are indeed IO constrained; we’ve had at least several hundred conversations with customers fighting virtual infrastructure performance problems where, at the end of the day, the issue was storage performance. But a promise to double the density of these infrastructures is aggressive indeed.

It goes without saying that at Taneja Group we were more than eager to put this claim to the test. We approached HP in mid-2012 with a proposal to do just that by standing up a 3PAR StoreServ storage array against a similar-class, traditional-architecture storage array. The test proved more interesting when HP provided us with only the smallest of their 3PAR StoreServ product line at that time (an F200), while the competitive system was made up of the biggest controllers available in another popular but traditional-architecture mid-range array. Moreover, that traditional system, while first entering the market in 2007/2008, could still be purchased at the time of testing, and is only on the edge of representing the more dated systems envisioned by HP’s Double Density promise.

Our Findings: After having this equipment made available for our exclusive use during several months of 2012, we reached a clear and undeniable conclusion: HP 3PAR StoreServ storage is quite definitely capable of doubling VM density versus typical storage systems.

Publish date: 12/03/12
Profiles/Reports

Making The Virtual Infrastructure Non-stop: And Making Availability Efficient with Symantec’s VCS

The past few years have seen virtualization rapidly move into the mainstream of the data center. Today, virtualization is often the de facto standard in the data center for deployment of any application or service, including the important operational and business systems that are the lifeblood of the business.

For mission-critical systems, customers necessarily demand a broader level of services than is common in the test and development environments where virtualization often gains its foothold in the data center. It goes almost without saying that topmost in customers’ minds are issues of availability.

Availability is a spectrum of technology that offers businesses many different levels of protection – from general recoverability to uninterruptible applications. At the most fundamental level are mechanisms that protect the data and the server beneath applications. While in the past these mechanisms were often hardware and secondary storage systems, VMware has steadily advanced the capabilities of its vSphere virtualization offering, which now includes a long list of features – vMotion, Storage vMotion, vSphere Replication, VMware vCenter Site Recovery Manager, vSphere High Availability, and vSphere Fault Tolerance. While VMware is clearly serious about the mission-critical enterprise, each of these offerings has retained a VMware-specific orientation toward protecting the “compute instance”.

The challenge is that protecting a compute instance does not go far enough. It is the application that matters, and detecting VM failures may fall short of detecting and mitigating application failures.
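The gap is easy to demonstrate in miniature. Below is a minimal, generic Python sketch – not Symantec’s or VMware’s actual mechanism – showing how a guest can look perfectly healthy at the VM level while the application it exists to run is down; the address and ports are hypothetical placeholders:

    # Generic illustration: VM-level liveness vs. application-level health.
    # Host and ports are hypothetical placeholders.
    import socket

    def tcp_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if something accepts a TCP connection on host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    vm_ok  = tcp_alive("10.0.0.42", 22)    # guest OS reachable: VM-level HA sees no failure
    app_ok = tcp_alive("10.0.0.42", 5432)  # database port dead: the application is down

    if vm_ok and not app_ok:
        print("VM looks healthy but the application has failed; app-level monitoring is needed")

This is the distinction the application-aware solutions discussed next are built around: they monitor the service itself, not just the container it runs in.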

With this in mind, Symantec has steadily advanced a range of solutions for enhancing availability protection in the virtual infrastructure. Today this includes ApplicationHA – developed in partnership with VMware – and their gold-standard offering of Veritas Cluster Server (VCS) enhanced for the virtual infrastructure. We recently turned an eye toward how these solutions enhance virtual availability in a hands-on lab exercise, conducted remotely from Taneja Group Labs in Phoenix, AZ. Our conclusion: VCS is the only HA/DR solution that can monitor and recover applications on VMware while remaining fully compatible with typical vSphere management practices such as vMotion, Distributed Resource Scheduler, and Site Recovery Manager, and it can make a serious difference in the availability of important applications.

Publish date: 01/31/13
news / Blog

What are they hiding?

When vendors aren't willing to have their solution tested, or act genuinely funny about it, what do you think?

  • Premiered: 08/29/13
  • Author: Taneja Group
Topic(s): Technology Validation, Testing, Lab, Performance, Usability, Storage, Virtualization, Compute, Networking, SSD
news

Storage for virtual servers getting smarter

New products designed from the ground up specifically to serve storage for virtual servers can offer dramatic savings, both in dollars and in the time spent managing storage.

  • Premiered: 03/03/14
  • Author: Taneja Group
  • Published: Tech Target: Search Virtual Storage
Topic(s): VSAN, Taneja Group Labs, Technology Validation, TVS, VM, Tintri, VMstore, VMware, HP, StoreVirtual VSA, SPBM, Virtualization, Storage, Automation
Profiles/Reports

Scale Computing HC3: A Second Look at a Hyperconverged Appliance

Consolidation and the enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware, and by enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in seemingly decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before - increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds.

For certain, much of this complexity lies between the individual physical infrastructures that IT must touch and the duplication that virtualization often layers on top of them. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on the physical storage systems or in the virtual infrastructure.

With the challenges of managing a virtualized datacenter in mind, Scale Computing, long a provider of scale-out storage, introduced a new line of hyperconverged appliances - HC3 - in April 2012, and updated the appliances with new HyperCore software in May 2014. HC3 is an integration of storage and virtualized compute within a scale-out building-block architecture that couples all of the elements of a virtual data center together inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system designed for scale-out, HC3 does away with the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond those found in traditional integrations of discrete compute and storage systems.

Publish date: 09/30/14
Profiles/Reports

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of all-flash arrays and software-defined scale-out storage. Can there be a better way to redesign the mainstream dual-controller array that has been a popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE’s StoreVirtual Software-Defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach allows HPE to create an entry array that is more cost-effective than running the same software on a set of commodity x86 servers. Optimizing the cost/performance ratio with ARM technology, rather than the power-hungry processing and memory of x86 computers, yields an attractive SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, and it has the additional flexibility of being compatible with a full complement of hyper-converged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote office deployments. In the future, it will also enable low-cost capacity expansion for HPE’s Hyper Converged and Composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services, and supportability were just some of the key attributes we validated with hands-on testing. What we found was that the SV3200 is an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for those HPE customers already accustomed to managing multiple HPE products under the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already in the market. The SV3200 supports advanced features such as linear scale-out and multi-site stretch-cluster capability, enabling business continuity techniques rarely found in storage products of this class. In short, HPE has raised the bar for entry arrays, and we recommend that businesses looking at either SDS technology or entry storage strongly consider HPE’s SV3200 as a product with the flexibility to provide the best of both. A starting price under $10,000 makes it very affordable to start using this easy, powerful, and flexible array. Give it a closer look.

Publish date: 04/11/17