
Taneja Blog


Load DynamiX and Virtual Instruments Merge: The Customer Wins

Load DynamiX announced today its merger with Virtual Instruments; the combined entity will be called Virtual Instruments. We think this is a solid win for the customer, for a variety of reasons. We will cover the details in a more elaborate report in a week or two, but here’s the gist of what we see.

Virtual Instruments has been around since 2008, focused mainly on real-time and historical monitoring of the infrastructure from VM to storage in FC-based environments. It has the industry’s most sophisticated hardware and software probes for gathering fine-grained data flowing across the wire to determine how the environment is changing and what impact those changes have on the performance of the applications it serves. The focus has been squarely on IT operations, helping operators respond quickly to changing infrastructure characteristics. It spots everything from anomalies in response-time trends and seasonal I/O pattern changes to misbehaving HBA ports, improperly cabled infrastructure, and underutilized SAN switch ports, and immediately reports their impact on application performance.

Load DynamiX, on the other hand, has focused since 2008 on storage (block, file, and object) workload analytics, workload generation, and workload modeling. It aims to analyze storage workloads on a given infrastructure to determine whether the storage and the infrastructure are adequate for the required application SLAs. In addition to analyzing existing workloads, it provides workload modeling to determine how changes in workloads (or entirely new workloads) will affect existing application SLAs on any storage platform. Its workload generator is the most sophisticated in the industry. The target customer for Load DynamiX has always been the storage architect and engineer, not the storage or infrastructure operator. Its product is designed to determine the scalability of storage switches and arrays and, more importantly, to plan for the future by aligning purchasing and deployment decisions with workload performance requirements.

In reality, any large IT shop today needs both visibility into the infrastructure and the ability to plan for future workloads and changes to existing ones. IT operators need to manage what they have; architects need to plan for what is coming. These are two sides of the same coin, and if one vendor can provide solutions for both, those solutions are likely to be more synergistic. This is why the merger makes so much sense.

The fact is, infrastructure exists mainly to serve one purpose: allowing applications to run within required SLAs. In the simple environments of yesterday, where each application was siloed and had its own infrastructure, one could get by with simple tools to diagnose issues and fine-tune the environment. But today a thousand or more VMs may share one infrastructure. Without microscopic visibility and the automated ability to analyze changes in the environment, it is practically impossible to deliver application services with any degree of certainty.

As if this were not enough, consider that we are increasingly using scale-out, clustered infrastructures that complicate the issue by orders of magnitude. Add the advent of hyperconvergence, where the storage and compute layers (and, over time, even the networking layers) are tightly integrated, and you know trouble is around the corner. Add the fact that portions of the workload may be running in the public cloud. The only way to maintain sanity in this increasingly complex environment is to seek help from automation and specially designed systems that can monitor, diagnose, and analyze the environment 24x7x365.

We think the timing of the merger couldn’t be better. Although new infrastructures are being designed to be easier to manage and scale, the reality is that multiple workloads with different (and often unpredictable) characteristics are impinging on them, and the only way to maintain sanity in your upcoming transformation is to have tools that are constantly watching, analyzing, and predicting in order to avert impending issues. We believe the new Virtual Instruments, with its broader vision, has a solid shot at being the leader in this space.

