Taneja Group | Virtual Instruments

Items Tagged: Virtual Instruments

Profiles/Reports

Making the invisible visible - the next generation data center (Virtual Instruments)

It has never been more important to be able to peer into the infrastructure and see what is going on. In the age of virtualization, consolidation, cloud services such as automated provisioning and deployment, user self-service, and more, there are more complex interactions in the infrastructure than ever before. Yet visibility comes down to approaching the physical infrastructure the right way to begin with, and we've seldom learned our lessons as well as we should. In this solution profile, we'll take a look at the importance of designing for access and visibility.

Publish date: 04/19/10
news / Blog

Akorri Acquisition Proves that Visibility is Valuable

The NetApp acquisition of Akorri should make the other large storage players take notice. Virtual infrastructure optimization tools are increasingly valuable, and independent vendors of them are in short supply.

Resources

The New Best Practice of SAN TAPs

TECHNICAL WEBINAR SERIES

The New Best Practice of SAN TAPs
"Reduce Troubleshooting From Days To Minutes"

Is your Fibre Channel SAN infrastructure TAP'd? With your most mission-critical applications running on your FC SAN and the growing use of virtualization, there has never been a greater need for comprehensive, real-time insight into your SAN infrastructure. Some vendors attempt to fill this gap with limited features such as port mirroring, but these practices are inadequate: they fail to capture lower-level interactions and lose visibility into sessions once traffic exceeds the mirror port's bandwidth.

With all of the recent advancements in the datacenter, designing a cabling infrastructure that provides visibility into the SAN has remained fairly stagnant...until now.

Fiber-optic network TAPs (Traffic Access Points) are a breakthrough now being broadly adopted by enterprise datacenters across the globe, as end users come to depend on this inexpensive, non-intrusive and effective way of gaining full performance and utilization visibility into their SAN infrastructures.

Join David Bartoletti, Senior Analyst & Consultant, Taneja Group, Andrew Varley, Business Director of Splice, and Alex D'Anna, Director of Solutions Consulting, Virtual Instruments, as they explain how such simple devices have enabled complex IT environments to reduce their SAN troubleshooting times from days to minutes.

When: Thursday, February 17, 2011
Time: 9:00 am PST; 12:00 pm EST; 5:00 pm GMT


In this webinar, we'll examine the keys to SAN visibility and how to:

*Implement the right physical access to fibre-based networks
*Reduce troubleshooting times from days to minutes
*Obtain key performance metrics to measure SLAs

  • Premiered: 02/17/11 at 9:00 am PST; 12:00 pm EST; 5:00 pm GMT
  • Location: Online
  • Speaker(s): Dave Bartoletti
  • Sponsor(s): Virtual Instruments
Topic(s): Virtual Instruments, SAN, TAP
news / Blog

Brocade/Virtual Instruments Brawl: Is it Really Necessary?

I recently came across an email note written by John Thompson, CEO of Virtual Instruments, a virtual infrastructure optimization vendor, to Mike Klayko, CEO of Brocade, clearly the major supplier of FC switches, HBAs, CNAs (and Ethernet equipment) to the market. The note was disturbing, to say the least, as it discusses how the relationship between the companies, which was excellent in the past, has deteriorated to the point where Brocade is essentially willing to tell customers and prospects not to use the Virtual Instruments product. I believe this is not in the interest of the customer and, frankly, not in the interest of Brocade either.

  • Premiered: 10/26/11
  • Author: Arun Taneja
  • Published: Taneja Group Blog
Topic(s): Virtual Instruments, VI, Brocade, VirtualWisdom
news / Blog

Virtual Instruments and Data Centers, and Why it Matters. A Lot.

In March I published my article on meaningful visibility into the data center. I mentioned several vendors who are doing good work in this challenging field, including Virtual Instruments (VI). There are lots of monitoring, capacity planning and general trending products out there. These vendors market their products to the virtualization and cloud markets, hoping to catch the big wave of investment in those fields. There is nothing wrong with these tools, and IT needs them in discrete settings, but they don’t go nearly far enough in managing new levels of data center complexity. They offer some visibility, but dynamic data centers require not only information but also the ability to correlate information across multiple systems and to automatically act on it. We call this crucial piece of the puzzle “instrumentation.”

  • Premiered: 04/26/12
  • Author: Taneja Group
Topic(s): instrumentation, Data Center, Virtual Instruments, VI, correlation
Profiles/Reports

VI - Top Six Physical Layer Best Practices: Maintaining Fiber Optics for the High Speed Data Center

Whether it’s handling more data, accelerating mission-critical applications, or ultimately delivering superior customer satisfaction, businesses are requiring IT to go faster, farther, and at ever-larger scales. In response, vendors keep evolving newer generations of higher-performance technology. It’s an IT arms race full of uncertainty, but one thing is inevitable – the interconnections that tie it all together, the core data center networks, will be driven faster and faster.

Unfortunately, many data center owners are under the impression that their current “certified” fiber cabling plant is inherently future-proofed and will readily handle tomorrow’s networking speeds. This is especially true for the high-speed, critical SANs at the heart of the data center. For example, most of today’s fiber plants supporting protocols like 2Gb or 4Gb Fibre Channel (FC) simply do not meet the required physical layer specifications to support upgrades to 8Gb or 16Gb FC. And even faster speeds are on the horizon.

It is not just the plant design that’s a looming problem. Fiber cabling has always deserved special handling, but it is often robust enough to withstand a certain amount of dirt and mistreatment at today’s speeds. While a lack of good cable hygiene and maintenance can and does cause significant problems today, at higher networking speeds the tolerance for dust, bends, and other optical distractions is much smaller. Careless practices need to evolve to a whole new level of best practice now, or future network upgrades are doomed.

In this paper we’ll consider the tighter requirements of higher-speed protocols and examine the critical reasons why standard fiber cabling designs may not be “up to speed”. We’ll introduce some redesign considerations and also look at how an improperly maintained plant can easily degrade or defeat higher-speed network protocols, including some real-world experiences drawn from experienced field experts in SAN troubleshooting at Virtual Instruments. Along the way we will recommend the top six physical layer best practices we see as necessary for designing and maintaining fiber to handle whatever comes roaring down the technology highway.
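To make the loss-budget argument concrete, here is a minimal sketch of the arithmetic involved. All of the attenuation, connector-loss and per-protocol budget figures below are illustrative assumptions, not values from the FC or cabling standards; real planning should use the actual physical layer specifications and measured plant data.

```python
# Illustrative sketch: checking a fiber link's insertion loss against a
# per-protocol loss budget. All numbers are assumptions chosen for this
# example only -- consult the FC physical layer specs and your cable plant
# documentation for real values.

# Assumed loss budgets in dB for a multimode link (illustrative, not spec values)
LOSS_BUDGET_DB = {"4GFC": 2.9, "8GFC": 2.0, "16GFC": 1.9}

FIBER_ATTENUATION_DB_PER_KM = 3.5   # rough multimode attenuation at 850 nm (assumed)
CONNECTOR_LOSS_DB = 0.5             # assumed loss per mated connector pair
SPLICE_LOSS_DB = 0.3                # assumed loss per splice

def link_loss_db(length_m, connectors, splices):
    """Total insertion loss: fiber attenuation plus connector and splice losses."""
    return (length_m / 1000.0) * FIBER_ATTENUATION_DB_PER_KM \
        + connectors * CONNECTOR_LOSS_DB \
        + splices * SPLICE_LOSS_DB

def supported_protocols(length_m, connectors, splices):
    """Return the protocols whose (assumed) budget still covers this link's loss."""
    loss = link_loss_db(length_m, connectors, splices)
    return [p for p, budget in LOSS_BUDGET_DB.items() if loss <= budget]

if __name__ == "__main__":
    # A 150 m run with four mated connector pairs (patch panels) and one splice.
    loss = link_loss_db(150, connectors=4, splices=1)
    print(f"Total link loss: {loss:.2f} dB")
    print("Passes budget for:", supported_protocols(150, 4, 1))
```

With these assumed figures, the 150 m run clears the 4Gb budget but misses 8Gb and 16Gb, which is exactly the kind of silent shortfall in an existing "certified" plant that the paper describes.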

Publish date: 07/31/12
Profiles/Reports

Cloud Building – Requirements For Private Clouds

Instrumentation and the Private Cloud

Large numbers of organizations are undertaking private cloud initiatives, but this shouldn’t be surprising – the cloud is a small step in a continued pursuit of better data center IT. Typified by Amazon, the company that orchestrated basic compute services into easily deployable, fly-by-wire, pay-per-moment IT, the idea of cloud is, we think, well on its way to reaching a definitional stage where it will be considered a “type of technology”. But even the lack of clarity about what cloud is hasn’t stopped companies from broadly moving toward what they see as cloud infrastructures – typically in the form of private clouds. Why is this?

In reality every incarnation of cloud – public, private, object, hosted, multi-tenant, etc. – is less a new invention than an advancement of IT management, born on the back of maturing capabilities that have given the practitioner a bevy of new tools with which to do IT better. The private cloud represents efforts to build highly automated and orchestrated infrastructures that will yield higher levels of utilization, data center efficiency, service level reliability, and responsiveness. Today’s energy around “cloud” is a realization of the very things IT has pursued for years, simplified and finally made possible without a Federal-government-scale organization.

In this series of overview pieces on Private Cloud Requirements, Taneja Group is providing a technology-by-technology, vendor-neutral assessment of the mandatory building blocks for private cloud infrastructures, and an identification of key capabilities within these technology domains that we see as critical to enabling private cloud compute. First up is instrumentation – often referred to as monitoring or visibility. But in the private cloud, instrumentation must reach far further than what has traditionally been expected of a conglomeration of single-dimension visibility and monitoring tool sets.

Publish date: 02/20/12
Profiles/Reports

Closing the Virtual IO Management Gap

Assuring Service Throughout the Data Center with Infrastructure Performance Management

There is a significant and potentially costly management gap in virtualized server environments that rely solely on hypervisor-centric solutions. As organizations virtualize more of their mission-critical applications, they are discovering that the virtual versions of these apps continue to depend on the rock-solid storage availability and top-notch IO performance they had when physically hosted. Assuring great service to virtualized clients still requires deep performance management capabilities along the whole IO infrastructure path down to and including shared storage resources.

Cohesive hypervisor management solutions like VMware’s vCenter Operations Management Suite provide a significant advantage to virtual administration by centralizing and simplifying many traditionally disparate management tasks. However, there is a significant management blind spot in the view of end-to-end IO infrastructure when looking at it from the native virtual server perspective. Enterprises relying more and more on virtualized IT delivery need to address this natural management gap with Infrastructure Performance Management (IPM). A lack of robust IPM will degrade or even prevent the deployment of critical applications into a virtual environment – at best losing out on the benefits of virtualization and the opportunities for cloud, at worst causing severe degradation and service outages for all applications sharing the same virtual infrastructure pools.

In this paper we review the virtual performance management landscape and the management strengths of the most well-known hypervisor management solution – VMware’s vCenter Operations Suite – to understand why both the market perception of it and the resulting admin reliance on it are so high. We look at how that reliance overlooks a critical gap for IO and storage, and what the implications of that blind spot are for ensuring total performance. Finally, we examine how the unique IO-centric capabilities of Virtual Instruments’ VirtualWisdom close that gap by correlating complete IO path monitoring with both the physical and virtual infrastructure, and how, by using VirtualWisdom with vCenter Ops, one can achieve a complete end-to-end picture that enables mission-critical applications to be successfully virtualized.
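As a rough illustration of the cross-layer correlation described above, the sketch below joins hypervisor-side VM latency samples with SAN-port exchange completion times over shared time buckets. The field names, sample shapes and thresholds are hypothetical stand-ins for whatever the monitoring layers actually export; they are not VirtualWisdom's or vCenter's data model.

```python
# Minimal sketch: correlating VM-level latency samples with SAN-port metrics
# over shared time buckets. Field names and thresholds are hypothetical.
from collections import defaultdict

def bucket(ts, width_s=60):
    """Map a timestamp (seconds) to the start of its time bucket."""
    return ts - (ts % width_s)

def correlate(vm_samples, san_samples, vm_threshold_ms=25, san_threshold_ms=20):
    """Pair up time buckets where a VM and a SAN port are both degraded.

    vm_samples:  iterable of (ts, vm_name, datastore, latency_ms)
    san_samples: iterable of (ts, switch_port, exchange_completion_ms)
    """
    slow_vms = defaultdict(list)
    for ts, vm, datastore, latency in vm_samples:
        if latency > vm_threshold_ms:
            slow_vms[bucket(ts)].append((vm, datastore, latency))

    findings = []
    for ts, port, ect in san_samples:
        if ect > san_threshold_ms and bucket(ts) in slow_vms:
            for vm, datastore, latency in slow_vms[bucket(ts)]:
                findings.append({
                    "bucket": bucket(ts), "vm": vm, "datastore": datastore,
                    "vm_latency_ms": latency, "port": port, "port_ect_ms": ect,
                })
    return findings

if __name__ == "__main__":
    vms = [(120, "vm-sql01", "ds-gold01", 42.0)]   # hypothetical slow VM sample
    san = [(130, "switch3/port17", 31.5)]          # hypothetical slow SAN port sample
    for finding in correlate(vms, san):
        print(finding)
```

Even this toy join shows why a hypervisor-only view falls short: without the SAN-side samples, the right-hand half of the correlation simply does not exist.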

Publish date: 08/30/12
Profiles/Reports

Virtual Instruments Field Study Report

Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.

Running large virtualized data centers with hundreds or even thousands of servers, petabytes of data and a large distributed storage network requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI an invaluable partner in helping to manage the performance of the IT infrastructure supporting their mission-critical applications.

VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance—and correlates data from multiple layers—for proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their usefulness, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.

Most of the representative companies were using storage array-specific or fabric device monitoring tools, but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert on, and remediate the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:

1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.

The consensus among these customers is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that the return on investment is unparalleled.

For more information, check out the press release issued by Virtual Instruments.

You can also download this report directly from Virtual Instruments.

Publish date: 03/06/13
news

Customers Recognize VI as an Invaluable Partner in Managing Infrastructure Performance

SAN JOSE, CA--(Marketwire - Mar 6, 2013) - Virtual Instruments, the leader in Infrastructure Performance Management (IPM) for physical, virtual and cloud computing environments, today announced the results of a field study report from the Taneja Group. According to the study, leading companies, including T-Mobile and Wm Morrison Supermarkets PLC, recognize Virtual Instruments as an invaluable partner in managing the performance of IT infrastructures running mission-critical applications.

  • Premiered: 03/07/13
  • Author: Taneja Group
  • Published: MarketWire.com
Topic(s): Virtual Instruments, VI, Field Study, VirtualWisdom, Validation
news / Blog

Deep and Broad - Following IO from VM through SAN with Virtual Instruments

Recently we had the pleasure to conduct an in-depth field study for Virtual Instruments in which we got to interview, without interference, six of their large enterprise customers spanning verticals including technology, manufacturing, retail, telecom, financial services and government. The results were both expected and surprising....

Resources

Achieving ROI w/Infrastructure Performance Management

Taneja Group Field Study Report - Achieving ROI with Infrastructure Performance Management—How VI Customers Drive IT Innovation, Agility and Business Alignment

Today, mission-critical IT infrastructure teams are tasked to do more than ever before to maximize utilization, minimize cost and mitigate risk—while assuring availability and performance throughout perpetual migration, consolidation and expansion cycles. In this reality, a purpose-built, vendor independent platform that continuously monitors system-wide performance throughout heterogeneous environments without bias and in real-time can be a genuine competitive advantage.

In this webcast, we’ll talk about how customers from the Telco, Financial Services, Retail, Technology, Manufacturing and Government verticals achieve significant IT and business value by implementing VirtualWisdom® Infrastructure Performance Management (IPM) to transform IT into a true business partner.

Join host Mike Matchett, Sr. Analyst and Consultant from the Taneja Group, and VI’s President of Customer Operations Sean Maxwell as they share customer insights on achieving ROI by:

*Demonstrably decreasing system-wide CAPEX and OPEX while getting more out of their existing assets
*Eliminating risk and troubleshooting performance slowdowns and application failures across the stack
*Aligning expenditures on server, switch and storage infrastructure with business and application requirements
*Proactively improving data center performance including mixed workloads and I/O
*Managing and monitoring multiple data centers and complex computing environments
*Creating customized dashboards and comprehensive reports on the end-to-end environment

Around the world, industry leaders in every major vertical rely on VirtualWisdom to guarantee performance and availability, quickly identify threats to service levels, and resolve their most complex performance challenges—all at a substantially lower total cost.


  • Premiered: 09/12/13 at 10am Pacific Time (1:00pm ET)
  • Location: OnDemand
  • Speaker(s): Mike Matchett
  • Sponsor(s): Virtual Instruments
Topic(s): Virtual Instruments, Field Study, Field Report, IT infrastructure, ROI, VirtualWisdom, Infrastructure Performance Management, IPM, Mike Matchett, Sean Maxwell
news / Blog

Virtual Instruments Goes Deeper Into Storage Performance

Virtual Instruments, well known for being able to address storage performance across the infrastructure from the high end to the low end...has just released a denser SAN performance probe that crams 48 Fibre Channel ports into a 2U appliance...In addition to saving datacenter rack space and more, at this density it really signals to enterprises that they should implement SAN performance management on every important port....

  • Premiered: 10/08/13
  • Author: Mike Matchett
Topic(s): Virtual Instruments, SAN, Infrastructure Performance Management, IPM
news

Virtual Instruments Merges with Load DynamiX; Secures $20 Million Investment

Virtual Instruments, the market leader in real-time infrastructure performance management, and Load DynamiX, the leader in storage performance analytics, today announced that they have entered into a definitive agreement under which the companies will merge to create the industry’s first end-to-end infrastructure DevOps platform.

  • Premiered: 03/29/16
  • Author: Taneja Group
  • Published: Business Wire
Topic(s): Virtual Instruments, application performance, analytics, Load DynamiX, application performance management, Infrastructure, Arun Taneja
news

Virtual Instruments taps into Load DynamiX, storage analytics

Storage performance monitoring vendor Virtual Instruments combines forces with load simulation vendor Load DynamiX in a merger that broadens their storage analytics platforms.

  • Premiered: 03/29/16
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Virtual Instruments, Load DynamiX, Storage Performance, storage performance validation, Performance, Storage, application performance, application performance management, Arun Taneja, FC, Fibre Channel
news

Virtual Instruments Debuts WorkloadCentral, Free Cloud-based Workload Analysis

Virtual Instruments, the leader in infrastructure performance analytics, today launched the beta version of WorkloadCentral, a free storage workload analysis service and community designed to help IT teams better understand how their application workloads interact with the underlying storage infrastructure.

  • Premiered: 05/02/16
  • Author: Taneja Group
  • Published: Business Wire
Topic(s): Virtual Instruments, analytics, Storage, storage infrastructure, application performance, Cloud, Load DynamiX, Metadata, Arun Taneja
Profiles/Reports

Virtual Instruments WorkloadCentral: Free Cloud-Based Resource for Understanding Workload Behavior

Virtual Instruments, the company created by the combination of the original Virtual Instruments and Load DynamiX, recently made available a free cloud-based service and community called WorkloadCentral. The service is designed to help storage professionals understand workload behavior and improve their knowledge of storage performance. Most will find valuable insights into storage performance simply by using this free service. Those who want a deeper understanding of workload behavior over time, who want to evaluate different storage products to determine which one is right for their specific application environment, or who want to optimize their storage configurations for maximum efficiency can buy additional Load DynamiX Enterprise products from the company.

The intent with WorkloadCentral is to create a web-based community that can share information about a variety of application workloads, perform workload analysis and create workload simulations. In an industry where workload sharing has been almost absent, this service will be well received by storage developers and IT users alike.

Read on to understand where WorkloadCentral fits into the overall application and storage performance spectrum...
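As a rough sketch of what workload analysis involves, the snippet below reduces a trace of IO records to a simple profile (read/write mix, average IOPS, block-size mix) of the kind a load generator could replay. The trace format and field names are assumptions made for illustration; they do not reflect WorkloadCentral's or Load DynamiX's actual file formats or APIs.

```python
# Illustrative sketch: summarizing an IO trace into a workload profile.
# The (ts_seconds, op, block_size_bytes) record format is an assumption
# for this example, not an actual WorkloadCentral/Load DynamiX input format.
from collections import Counter

def profile_workload(trace):
    """Reduce IO records (ts_seconds, 'read'|'write', block_size_bytes) to a profile."""
    if not trace:
        return {}
    ops = Counter(op for _, op, _ in trace)
    sizes = Counter(size for _, _, size in trace)
    duration = (max(ts for ts, _, _ in trace) - min(ts for ts, _, _ in trace)) or 1
    total = sum(ops.values())
    return {
        "read_pct": 100.0 * ops["read"] / total,
        "write_pct": 100.0 * ops["write"] / total,
        "avg_iops": total / duration,
        # Share of the three most common block sizes in the trace.
        "block_size_mix": {size: count / total for size, count in sizes.most_common(3)},
    }

if __name__ == "__main__":
    # A tiny, made-up trace: three reads and one write over two seconds.
    trace = [(0, "read", 4096), (1, "write", 8192), (1, "read", 4096), (2, "read", 65536)]
    print(profile_workload(trace))
```

A profile like this is the bridge between analysis and simulation: once the mix is captured, a load generator can replay a statistically similar stream against a candidate array instead of the production workload itself.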

Publish date: 05/26/16
news / Blog

Virtual Instruments Finally Gets NAS-ty

When Virtual Instruments merged with/acquired Load DynamiX recently, we thought good things were going to happen. VI could now offer its users a full performance management "loop" of monitoring and testing in a common suite. Apparently VI's clientele agreed, because they've just finished a stellar first half of the year financially. Now, to sweeten the offer even more, VI is broadening its traditionally Fibre Channel/block-focused monitoring (historically rooted in its original FC SAN probes) to fully encompass NAS monitoring too.

  • Premiered: 09/20/16
  • Author: Mike Matchett
Topic(s): Virtual Instruments, NAS, Monitoring, Performance
news

New Virtual Instruments software improves NAS, flash models

Version 5.3 of Virtual Instruments' Load DynamiX 5 software supports time-based NFSv3 workloads, eases preconditioning of flash arrays and improves the user interface.

  • Premiered: 04/19/17
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): NAS, Virtual Instruments, Flash, SSD, Load DynamiX, NFS, Block Storage, flash storage, iSCSI, Fibre Channel, FC, VirtualWisdom, analytics, Storage Performance, NetApp, OnCommand, Oracle, Oracle ZFS, Data Deduplication, Compression, Data reduction, Backup, Arun Taneja, all flash array, all-flash, AFA, Optimization