Taneja Group | Consolidation

Items Tagged: Consolidation


The Greening of the Data Center (Technology in Depth)

As industry analysts and consultants, the Taneja Group has a single driving mission: to deliver hype-free, accurate, and responsible information about important issues impacting the server and storage industry. This is why we have turned our attention to the green data center, which we believe badly needs insight and clarity in the midst of marketing smokescreens and competing claims. This Technology in Depth represents Taneja Group's take on this important issue. We present the background of the energy crisis, explore important data center trends, and tell you how we believe the industry should respond to a real problem and a real opportunity.

Publish date: 08/01/07
news / Blog

9 Top eDiscovery Trends for 2012

So many people publish end-of-year “trends” articles that I sometimes skip writing one. However, the eDiscovery industry is morphing so rapidly that I’d be a slacker if I didn’t share my 2012 trends. We can’t have that. The following nine points are some of the trends you need to know about in the complex field of eDiscovery heading into 2012.

news / Blog

Top Trends in eDiscovery #8: Continuing Consolidation

Some analysts are “predicting” widespread eDiscovery consolidation, to which I answer: no one needs a crystal ball to know that. It’s happening already. Over the next couple of years, most of this consolidation will come from large storage and server companies eating up... I mean, thoughtfully acquiring... smaller eDiscovery outfits.

  • Premiered: 01/09/12
  • Author: Taneja Group
  • Published: Taneja Blog
Topic(s): eDiscovery, Consolidation

Expert Podcast: How to evaluate a cloud gateway appliance

Can cloud gateways compete on technical merits? How much consolidation will occur in this space? Arun Taneja, founder and consulting analyst at the Taneja Group, answers key questions about cloud gateway appliances and how they factor into disaster recovery and primary storage use cases.

Click the link to listen to this podcast.


  • Premiered: 12/19/12
  • Location: OnDemand
  • Speaker(s): Arun Taneja
  • Sponsor(s): TechTarget: SearchCloudStorage.com
Topic(s): Cloud, Arun Taneja, Podcast, Consolidation, gateway, appliances, Storage, Primary, DR, Disaster Recovery

Now Big Data Works for Every Enterprise: Pepperdata Adds Missing Performance QoS to Hadoop

While a few well-publicized web 2.0 companies are taking great advantage of the foundational big data solutions they themselves created (e.g. Hadoop), most traditional enterprise IT shops are still working out how to practically deploy their first business-impacting big data applications – or have dived in and are now struggling mightily to effectively manage a large Hadoop cluster in the middle of their production data center. This has led to the common perception that realistic big data business value may remain just out of reach for most organizations – especially those that need to run lean and mean on both staffing and resources.

This new big data ecosystem consists of scale-out platforms, cutting-edge open source solutions, and massive storage that is inherently difficult for traditional IT shops to manage optimally in production – especially with still-evolving ecosystem management capabilities. In addition, most organizations need to run large clusters supporting multiple users and applications to control both capital and operational costs. Yet there are no native ways to guarantee, control, or even gain visibility into workload-level performance within Hadoop. Even if most organizations didn’t face a real gap in high-end skills and deep expertise, there would still be no practical way for additional experts to tweak and tune mixed Hadoop workload environments to meet production performance SLAs.

At the same time, the competitive game of mining value from big data has moved from day-long batch ELT/ETL jobs feeding downstream BI systems to more interactive user queries and “real time” business process applications. Live performance now matters as much in big data as it does in any other data center solution. Ensuring multi-tenant workload performance within Hadoop is why Pepperdata, a cluster performance optimization solution, is critical to the success of enterprise big data initiatives.

In this report we’ll look deeper into today’s Hadoop deployment challenges and learn how performance optimization capabilities are not only necessary for big data success in enterprise production environments, but can open up new opportunities to mine additional business value. We’ll look at Pepperdata’s unique performance solution that enables successful Hadoop adoption for the common enterprise. We’ll also examine how it inherently provides deep visibility and reporting into who is doing what/when for troubleshooting, chargeback and other management needs. Because Pepperdata’s function is essential and unique, not to mention its compelling net value, it should be a checklist item in any data center Hadoop implementation.

To read the full report, please click here.

Publish date: 12/17/15