Taneja Group | VI

Items Tagged: VI

news / Blog

Brocade/Virtual Instruments Brawl: Is it Really Necessary?

I recently came across an email note written by John Thompson, CEO of Virtual Instruments, a virtual infrastructure optimization vendor, to Mike Klayko, CEO of Brocade, clearly the major supplier of FC switches, HBAs, CNAs (and Ethernet equipment) to the market. The note was disturbing, to say the least, as it discusses how the relationship between the companies, which was excellent in the past, has deteriorated to the point where Brocade is essentially willing to tell customers and prospects not to use Virtual Instruments' product. I believe this is not in the interest of the customer and, frankly, not in the interest of Brocade either.

  • Premiered: 10/26/11
  • Author: Arun Taneja
  • Published: Taneja Group Blog
Topic(s): Virtual Instruments, VI, Brocade, VirtualWisdom
news / Blog

Virtual Instruments and Data Centers, and Why it Matters. A Lot.

In March I published my article on meaningful visibility into the data center. I mentioned several vendors who are doing good work in this challenging field, including Virtual Instruments (VI). There are lots of monitoring, capacity planning and general trending products out there. These vendors market their products to the virtualization and cloud markets hoping to catch the big wave of investment in those fields. There is nothing wrong with these tools, and IT needs them in discrete settings, but they don’t go nearly far enough in managing new levels of data center complexity. They offer some visibility, but dynamic data centers require not only information but also the ability to correlate information across multiple systems and to act on it automatically. We call this crucial piece of the puzzle “instrumentation.”
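To make the distinction between plain visibility and instrumentation more concrete, here is a minimal sketch of cross-layer correlation that triggers an automated response. It is purely illustrative; the metric names, thresholds and remediate() action are hypothetical and not drawn from any vendor’s product.

```python
# Hypothetical sketch: correlate metric breaches across infrastructure layers
# and act on them automatically. Names, thresholds and actions are assumptions.
from collections import defaultdict

THRESHOLDS_MS = {"vm_disk_latency": 25, "host_hba_latency": 15, "array_port_latency": 10}

def correlate(samples):
    """Group threshold breaches by (timestamp, application) and flag windows
    where more than one layer misbehaves at the same time."""
    by_window = defaultdict(list)
    for s in samples:  # each sample: {"ts", "app", "metric", "value_ms"}
        if s["value_ms"] > THRESHOLDS_MS[s["metric"]]:
            by_window[(s["ts"], s["app"])].append(s["metric"])
    return {k: v for k, v in by_window.items() if len(v) > 1}

def remediate(app, layers):
    # Placeholder for an automated action, e.g. rebalancing paths or opening a ticket.
    print(f"ALERT {app}: correlated breach across {layers} -> trigger runbook")

samples = [
    {"ts": "12:00", "app": "erp", "metric": "vm_disk_latency", "value_ms": 40},
    {"ts": "12:00", "app": "erp", "metric": "array_port_latency", "value_ms": 18},
    {"ts": "12:00", "app": "web", "metric": "host_hba_latency", "value_ms": 5},
]
for (ts, app), layers in correlate(samples).items():
    remediate(app, layers)
```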

  • Premiered: 04/26/12
  • Author: Taneja Group
Topic(s): instrumentation, Data Center, Virtual Instruments, VI, correlation
Profiles/Reports

VI - Top Six Physical Layer Best Practices: Maintaining Fiber Optics for the High Speed Data Center

Whether it’s handling more data, accelerating mission-critical applications, or ultimately delivering superior customer satisfaction, businesses are requiring IT to go faster, farther, and at ever-larger scales. In response, vendors keep evolving newer generations of higher-performance technology. It’s an IT arms race full of uncertainty, but one thing is inevitable – the interconnections that tie it all together, the core data center networks, will be driven faster and faster.

Unfortunately, many data center owners are under the impression that their current “certified” fiber cabling plant is inherently future-proofed and will readily handle tomorrow’s networking speeds. This is especially true for the high-speed, critical SANs at the heart of the data center. For example, most of today’s fiber plants supporting protocols like 2Gb or 4Gb Fibre Channel (FC) simply do not meet the required physical layer specifications to support upgrades to 8Gb or 16Gb FC. And even faster speeds are on the horizon.
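To see why a plant that passes at one speed can fail at the next, consider a simple link-budget check: total insertion loss (fiber attenuation plus mated connectors) must stay inside the loss budget allowed for each protocol speed. The sketch below shows the arithmetic; the budget and loss figures are illustrative placeholders, not values quoted from the FC or cabling standards, so a real assessment must use the applicable physical-layer specifications.

```python
# Illustrative fiber link-budget check. All numbers are placeholder assumptions
# for the sketch, NOT figures taken from FC or structured-cabling standards.

LOSS_BUDGET_DB = {"4G_FC": 2.1, "8G_FC": 2.0, "16G_FC": 1.9}   # hypothetical budgets
FIBER_LOSS_DB_PER_M = 0.0035                                   # assumed multimode attenuation
CONNECTOR_LOSS_DB = 0.5                                        # assumed loss per mated pair

def plant_loss(length_m, connector_pairs):
    return length_m * FIBER_LOSS_DB_PER_M + connector_pairs * CONNECTOR_LOSS_DB

def supports(speed, length_m, connector_pairs):
    return plant_loss(length_m, connector_pairs) <= LOSS_BUDGET_DB[speed]

# A 120 m run with three mated connector pairs (patch panels plus end connections):
loss = plant_loss(120, 3)
for speed in LOSS_BUDGET_DB:
    status = "OK" if supports(speed, 120, 3) else "over budget"
    print(f"{speed}: plant loss {loss:.2f} dB -> {status}")
```

With these assumed numbers the same 1.92 dB plant clears the 4Gb and 8Gb budgets but misses the tighter 16Gb budget, which is exactly the upgrade trap described above.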

It is not just the plant design that’s a looming problem. Fiber cabling has always deserved special handling, but it is often robust enough to withstand a certain amount of dirt and mistreatment at today’s speeds. While a lack of good cable hygiene and maintenance can and does cause significant problems today, at higher networking speeds the tolerance for dust, bends, and other optical distractions is much smaller. Careless practices need to evolve to a whole new level of best practice now, or future network upgrades are doomed.

In this paper we’ll consider the tighter requirements of higher-speed protocols and examine the critical reasons why standard fiber cabling designs may not be “up to speed”. We’ll introduce some redesign considerations and also look at how an improperly maintained plant can easily degrade or defeat higher-speed network protocols, drawing on real-world experiences from Virtual Instruments’ field experts in SAN troubleshooting. Along the way we will recommend the top six physical layer best practices we see as necessary for designing and maintaining fiber to handle whatever comes roaring down the technology highway.

Publish date: 07/31/12
Profiles/Reports

Virtual Instruments Field Study Report

Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.

Running large virtualized data centers, with hundreds or even thousands of servers, petabytes of data and a large distributed storage network, requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI an invaluable partner in helping to manage the performance of their IT infrastructure supporting mission-critical applications.

VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance—and correlates data from multiple layers—for proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their usefulness, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.
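To illustrate the kind of proactive, real-time analysis described here (a conceptual sketch, not a description of VirtualWisdom’s actual implementation), the snippet below keeps a rolling baseline of I/O latency for an application and flags deviations before a hard threshold is ever crossed. The window size and deviation factor are arbitrary assumptions.

```python
# Conceptual sketch of baseline-deviation alerting on I/O latency.
# Not VirtualWisdom's implementation; window and factor are assumptions.
from collections import deque
from statistics import mean, stdev

class LatencyBaseline:
    def __init__(self, window=60, factor=3.0):
        self.samples = deque(maxlen=window)   # rolling window of recent readings
        self.factor = factor                  # readings beyond this many sigmas are flagged

    def observe(self, latency_ms):
        """Return True if the new reading deviates from the rolling baseline."""
        abnormal = False
        if len(self.samples) >= 10:           # wait for a minimal baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            abnormal = latency_ms > mu + self.factor * max(sigma, 0.1)
        self.samples.append(latency_ms)
        return abnormal

baseline = LatencyBaseline()
readings = [2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.1, 1.8, 2.2, 2.0, 2.1, 9.5]
for r in readings:
    if baseline.observe(r):
        print(f"latency {r} ms deviates from baseline -> investigate before service levels slip")
```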

Most of the representative companies were using storage array-specific or fabric device monitoring tools but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert and remediate across the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:

1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.

The consensus among the customers we interviewed is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that the return on investment is unparalleled.

For more information, check out the press release issued by Virtual Instruments.

You can also download this report directly from Virtual Instruments.

Publish date: 03/06/13
news

Customers Recognize VI as an Invaluable Partner in Managing Infrastructure Performance

SAN JOSE, CA--(Marketwire - Mar 6, 2013) - Virtual Instruments, the leader in Infrastructure Performance Management (IPM) for physical, virtual and cloud computing environments, today announced the results of a field study report from the Taneja Group. According to the study, leading companies, including T-Mobile and Wm Morrison Supermarkets PLC, recognize Virtual Instruments as an invaluable partner in managing the performance of IT infrastructures running mission-critical applications.

  • Premiered: 03/07/13
  • Author: Taneja Group
  • Published: MarketWire.com
Topic(s): Virtual Instruments, VI, Field Study, VirtualWisdom, Validation
Profiles/Reports

Scale Computing HC3: A Look at a Hyperconverged Appliance

Consolidation and enhanced management enabled by virtualization have revolutionized the practice of IT around the world over the past few years. By abstracting compute from the underlying hardware and enabling oversubscription of physical systems by virtual workloads, IT has been able to pack more systems into the data center than ever before. Moreover, for the first time in what seems like decades, IT has also taken a serious leap ahead in management, as this same virtual infrastructure has wrapped the virtualized workload with better capabilities than ever before - tools like increased visibility, fast provisioning, enhanced cloning, and better data protection. The net result has been a serious increase in overall IT efficiency.

But not all is love and roses with the virtual infrastructure. In the face of serious benefits and consequent rampant adoption, virtualization continues to advance and bring about more capability. All too often, an increase in capability has come at the cost of complexity. Virtualization now promises to do everything from serving up compute instances, to providing network infrastructure and network security, to enabling private clouds. 

For certain, much of this complexity arises between the individual physical infrastructures that IT must still touch and the duplication that virtualization often brings into the picture. Virtual and physical networks must now be integrated, the relationship between virtual and physical servers must be tracked, and the administrator can barely answer with certainty whether key storage functions, like snapshots, should be managed on physical storage systems or in the virtual infrastructure.

Scale Computing, an early pioneer in hyperconverged solutions, has released multiple versions of its HC3 appliances, now running the 6th generation of Scale’s HyperCore Operating System. Scale Computing continues to push the boundaries of the simplicity, value and availability that SMB IT departments everywhere have come to rely on. HC3 integrates storage and virtualized compute within a scale-out building-block architecture that couples all of the elements of a virtual data center inside a hyperconverged appliance. The result is a system that is simple to use and does away with much of the complexity associated with virtualization in the data center. By virtualizing and intermingling compute and storage inside a system designed for scale-out, HC3 eliminates the need to manage virtual networks, assemble complex compute clusters, provision and manage storage, and perform a bevy of other day-to-day administrative tasks. Provisioning additional resources - any resource - becomes one-click easy, and adding more physical resources as the business grows is reduced to a simple 2-minute exercise.

While this sounds compelling on the surface, Taneja Group recently turned our Technology Validation service - our hands-on lab service - to the task of evaluating whether Scale Computing's HC3 could deliver on these promises in the real world. For this task, we put an HC3 cluster through its paces to see how well it deployed, how it held up under use, and what special features it delivered that might go beyond those found in traditional integrations of discrete compute and storage systems.

While we did touch upon whether Scale's architecture could scale performance as well as capacity, we focused our testing on how the seamless integration of storage and compute within HC3 tackles key complexity challenges in the traditional virtual infrastructure.

As it turns out, HC3 is a far different system than the traditional compute and storage systems we've looked at before. HC3's combination of compute and storage takes place within a scale-out paradigm, where adding more resources is simply a matter of adding additional nodes to a cluster. This immediately brings on more storage and compute resources, and makes adapting and growing the IT infrastructure a no-brainer exercise. On top of this adaptability, virtual machines (VMs) can run on any of the nodes without any complex external networking. This delivers seamless utilization of all data center resources, in a dense and power-efficient footprint, while significantly enhancing storage performance.
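To picture the scale-out model being described (a hypothetical abstraction for illustration, not Scale Computing’s HC3 software or any actual API), the sketch below treats a cluster as a pool of nodes that each contribute compute and storage, so adding a node grows both at once and VMs can land on whichever node has headroom.

```python
# Hypothetical model of a scale-out hyperconverged cluster. Purely illustrative;
# not Scale Computing's HC3 software or API.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cores: int
    tb_storage: float
    vms: list = field(default_factory=list)

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)

    def add_node(self, node):
        # Adding a node grows compute and storage capacity in a single step.
        self.nodes.append(node)

    def capacity(self):
        return (sum(n.cores for n in self.nodes),
                sum(n.tb_storage for n in self.nodes))

    def place_vm(self, vm_name):
        # Naive placement: put the VM on the least-loaded node. Storage is
        # treated as a cluster-wide pool in this model, so only VM count matters.
        node = min(self.nodes, key=lambda n: len(n.vms))
        node.vms.append(vm_name)
        return node.name

cluster = Cluster()
cluster.add_node(Node("node1", cores=16, tb_storage=8))
cluster.add_node(Node("node2", cores=16, tb_storage=8))
print(cluster.capacity())                                  # (32, 16) cores / TB
print(cluster.place_vm("app-vm"))                          # lands on a node with headroom
cluster.add_node(Node("node3", cores=24, tb_storage=12))   # growth = just add a node
print(cluster.capacity())                                  # (56, 28) cores / TB
```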

Meanwhile, within an HC3 cluster, these capabilities are all delivered on top of a uniquely robust system architecture that can tolerate any failure - from a disk to an entire cluster node - and guarantee a level of availability seldom seen by mid-sized customers. Moreover, that uniquely robust, clustered, scale-out architecture can also intermix different generations of nodes in a way that puts an end to painful upgrades, reducing them to simply decommissioning old nodes as new ones are introduced.

HC3’s flexibility, ease of deployment and robustness stand out, and its management interface is the simplest and easiest to use that we have seen. This makes HC3 a disruptive game changer for SMB and SME businesses. HC3 stands to banish complex IT infrastructure deployment, permanently alter ongoing operational costs, and take application availability to a new level. With those capabilities in focus, single bottom-line observations don’t do HC3 justice. In our assessment, HC3 may take as little as 1/10th the effort to set up and install as traditional infrastructure, 1/4th the effort to configure and deploy a virtual machine (VM) versus traditional infrastructure, and can banish the planning, performance troubleshooting, and reconfiguration exercises that can consume as much as 25-50% of an IT administrator’s time. HC3 is about delivering on all of these promises simultaneously, and with the additional features we'll discuss, transforming the way SMB/SME IT is done.

Publish date: 09/30/15