Taneja Group | Infrastructure

Items Tagged: Infrastructure

Profiles/Reports

Building the Virtual Infrastructure with DataCore SANsymphony-V

There's a ball and chain hanging on your virtualization projects - and it's that pesky storage stuff. Storage that isn't flexible, can't adapt to changes, is wrapped in poor provisioning practices, and whose sheer physicality and lack of abstraction seem to get in the way of every virtual server task. It's no secret that we think storage virtualization can set the virtual infrastructure free, but let's take a look at one vendor's product and how it tackles some of those issues in the Hyper-V infrastructure.

Publish date: 02/08/11
news

Three key VDI storage challenges

Virtual desktops offer some attractive benefits, but storage systems that aren’t up to the task can make it hard to realize those benefits.

  • Premiered: 02/15/12
  • Author: Jeff Byrne
  • Published: TechTarget: SearchStorage.com
Topic(s): VDI, Storage, Performance, Virtualization, desktop, Infrastructure
news

Storage networking infrastructure trends must figure in upgrade plans

Enterprise IT shops looking to consolidate data centers and virtualize their servers will need to weigh the latest storage networking technology developments as they plot any infrastructure upgrade.

  • Premiered: 03/29/12
  • Author: Taneja Group
  • Published: TechTarget: SearchStorage.com
Topic(s): Data Center, Virtualization, Infrastructure, TechTarget
Profiles/Reports

Making The Virtual Infrastructure Non-stop: And Making Availability Efficient with Symantec’s VCS

The past few years have seen virtualization rapidly move into the mainstream of the data center. Today, virtualization is often the defacto standard in the data center for deployment of any application or service. This includes important operational and business systems that are the lifeblood of the business.

For mission critical systems, customers necessarily demand a broader level of services than is common among the test and development environments where virtualization often gains its foothold in the data center. It goes almost without saying that topmost in customers' minds are issues of availability.

Availability is a spectrum of technologies that offers businesses many different levels of protection – from general recoverability to uninterruptable applications. At the most fundamental level are mechanisms that protect the data and the server beneath applications. While in the past these mechanisms have often been hardware and secondary storage systems, VMware has steadily advanced the capabilities of its vSphere virtualization offering, and it includes a long list of features – vMotion, Storage vMotion, vSphere Replication, VMware vCenter Site Recovery Manager, vSphere High Availability, and vSphere Fault Tolerance. While clearly VMware is serious about the mission critical enterprise, each of these offerings has retained a VMware-specific orientation toward protecting the “compute instance”.

The challenge is that protecting a compute instance does not go far enough. It is the application that matters, and detecting VM failures may fall short of detecting and mitigating application failures.

With this in mind, Symantec has steadily advanced a range of solutions for enhancing availability protection in the virtual infrastructure. Today this includes ApplicationHA – developed in partnership with VMware – and their gold standard offering of Veritas Cluster Server (VCS) enhanced for the virtual infrastructure. We recently turned an eye toward how these solutions enhance virtual availability in a hands-on lab exercise, conducted remotely from Taneja Group Labs in Phoenix, AZ. Our conclusion: VCS is the only HA/DR solution that can monitor and recover applications on VMware while remaining fully compatible with typical vSphere management practices such as vMotion, Distributed Resource Scheduler and Site Recovery Manager, and it can make a serious difference in the availability of important applications.

Publish date: 01/31/13
Profiles/Reports

Optimizing Performance Across Systems and Storage: Best Practices with TeamQuest

In this paper, we’ll briefly review the challenges to assuring good performance in today’s competitive IT environment, and discuss what it takes to overcome these challenges to deploy appropriate end-to-end infrastructure and operationally deliver high-performance service levels. We’ll then introduce TeamQuest, a long-time leading vendor in IT Service Optimization that has recently expanded its world-class performance and capacity management capabilities with deep storage domain coverage. This new solution is unique in both its non-linear predictive modeling leveraged to produce application-specific performance KPIs and its comprehensive span of visibility and management that extends from applications all the way down into SAN storage systems. Ultimately, we’ll see how TeamQuest empowers IT to take full advantage of agility and efficiency solutions like infrastructure virtualization, even for the most performance-sensitive and storage-intensive applications.

For additional information, check out this Virtualization Review published article by Mike Matchett: Is Virtualization Stalled On Performance?

Publish date: 02/20/13
Profiles/Reports

Virtual Instruments Field Study Report

Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.

Running large virtualized data centers with hundreds or even thousands of servers, petabytes of data and a large distributed storage network requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI to be an invaluable partner in helping to manage the performance of their IT infrastructure supporting mission critical applications.

VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance—and correlates data from multiple layers—for proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their usefulness, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.

Most of the representative companies were using storage array-specific or fabric device monitoring tools but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert and remediate the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:

1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.

The consensus is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that the return on investment is unparalleled.

For more information, check out the press release issued by Virtual Instruments.

You can also download this report directly from Virtual Instruments.

Publish date: 03/06/13
news

The downsides of a software-defined infrastructure

The data center of the future could lead to a software-defined infrastructure, but current technology is still relatively immature, with more work needed in the area of automation.

  • Premiered: 02/26/13
  • Author: Taneja Group
  • Published: TechTarget: SearchDataCenter.com
Topic(s): Infrastructure, software-defined, SDDC, Software-Defined Data Center, Virtualization, Performance
news

Atlantis Computing handles virtual desktops' persistent data in RAM

Atlantis Computing this week launched a version of its virtual desktop infrastructure (VDI) software that speeds performance and reduces storage requirements for persistent virtual desktops.

  • Premiered: 03/01/13
  • Author: Taneja Group
  • Published: TechTarget: SearchVirtualStorage.com
Topic(s): Atlantis, RAM, VDI, virtual desktops, Virtualization, Infrastructure, desktops, Performance, Mike Matchett
news

Atlantis ILIO moves into hyper-converged storage

Atlantis Computing Inc. aims to move beyond optimizing virtual desktop infrastructure storage with its latest software, which uses inline memory to deliver the startup's version of software-defined storage.

  • Premiered: 02/28/14
  • Author: Taneja Group
  • Published: Tech Target: Search Virtual Storage
Topic(s): Atlantis Computing, Virtualization, Storage, Infrastructure, SDS, software-defined storage, ILIO, NAS, SAN, DAS, RAM, Flash, SSD, VDI, Low latency, VSAN, VMWare, CloudStack, OpenStack, IBM, SmartCloud, BMC, Cloud Lifecycle Manager
news

Database performance tuning: Five ways for IT to save the day

When database performance takes a turn for the worse, IT can play the hero. There are some new ways for IT pros to tackle slowdown problems. However, one question must be addressed first: Why is it up to IT?

  • Premiered: 04/17/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Database, Database Performance, IT, Optimization, SQL, NoSQL, Infrastructure, scale-up, scale-out, Active Archive, Archiving, SSD, Flash, Acceleration, server, Tokutek
Resources

Proven ROI for Internet of Things (IoT) Data Center Infrastructure Market

Internet of Things (IoT) is a hot trend in today’s economy of connected devices. Every high-tech device in the data center industry – storage, servers, switches, application software, or any appliance – generates copious amounts of machine data that can be analyzed to help the manufacturer gain operational and strategic insights. These insights assist in reducing costs to support customers by lowering mean time to resolution (MTTR) per case. They also enable manufacturers to build and sell specific value-add services to their customers with the objective of being proactive, predictive and prescriptive in front of their top enterprise accounts. Finally, deep insights can be gleaned from this kind of machine data analytics, if done the right way, to funnel strategic information onto future product roadmaps. All of these benefits are very quantifiable if there is a solution like Glassbeam in place in such markets.

This webinar will discuss the right elements of such a solution, lay out the foundation of ROI with specific benefits, and discuss a case study with a Fortune 100 account of Glassbeam.

  • Premiered: 06/09/14 at 11 am PT/2 pm ET
  • Location: OnDemand
  • Speaker(s): Mike Matchett, Senior Analyst at Taneja Group; Puneet Pandit, Co-founder & CEO, Glassbeam
  • Sponsor(s): Glassbeam, BrightTALK
Topic(s): Glassbeam, Internet of Things, IoT, Data Center, Infrastructure, analytics
news

Why Facebook and the NSA love graph databases

Is there a benefit to understanding how your users, suppliers or employees relate to and influence one another? It's hard to imagine that there is a business that couldn't benefit from more detailed insight and analysis, let alone prediction, of its significant relationships.

  • Premiered: 06/17/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Mike Matchett, analytics, Big Data, graph database, graph theory, Database, SQL, Security, Infrastructure, Data protection, Data Management, Hadoop, Oracle, AllegroGraph, XML, RDF, Titan, Giraph, Sparsity Technologies, Neo4J, Objectivity, InfiniteGraph, scalability
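
The appeal the article alludes to is that a graph store answers "how is A connected to B?" as a native traversal rather than a chain of relational joins. As a rough illustration only (the graph, names and edges below are invented, not drawn from the article), a shortest-relationship-path query amounts to a breadth-first search over an adjacency list:

```python
from collections import deque

# Toy "influence" graph as an adjacency list. In a graph database
# (Neo4J, Titan, Giraph, etc.) this traversal is a single query;
# in SQL it would require one self-join per hop.
graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["erin"],
    "erin": [],
}

def shortest_path(start, goal):
    """Breadth-first search returning the shortest relationship path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in graph.get(path[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no connection found

print(shortest_path("alice", "erin"))  # ['alice', 'carol', 'erin']
```

The point of the sketch: path length is found in one traversal whose cost grows with the edges visited, not with the number of join operations a relational planner must stitch together per hop.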
news

Hyperconvergence Melds Compute and Storage in a Single Box

The city of Brighton, Colo., aims to take advantage of as many new technologies as possible to better serve its citizens and improve the city’s operations. Doing so comes with a price.

  • Premiered: 07/09/14
  • Author: Taneja Group
  • Published: State Tech Magazine
Topic(s): hyperconvergence, convergence, Storage, SAN, SimpliVity, Nimble Storage, Nutanix, iSCSI, Data Center, Virtualization, SQL, Infrastructure
news

'Software-defined' to define data center of the future

Is there a real answer for how "software" can define "data center" underneath the software-defined hype?

  • Premiered: 07/16/14
  • Author: Mike Matchett
  • Published: Modern Infrastructure Magazine
Topic(s): software defined, Software Defined Data Center, Storage, Cloud, SDN, Software-Defined Networking, VMWare, VSAN, HP, StoreVirtual, StoreVirtual VSA, EMC, ScaleIO, Virtual SAN Appliance, SDDC, Infrastructure
news

Five VM-Level Infrastructure Adaptations

It used to be that IT struggled to intimately understand every app in order to provide the right supporting infrastructure. Today, server virtualization makes the job much easier, because IT can now just cater to VMs. By working and communicating at the VM level, both app owners and infrastructure admins stay focused, using a common API to help ensure apps are hosted effectively and IT runs efficiently.

  • Premiered: 07/21/14
  • Author: Mike Matchett
  • Published: Virtualization Review
Topic(s): VM, Virtual Machine, Virtualization, Infrastructure, Server Virtualization, API, VMWare, Virtual SAN Appliance, VSAN, Virtual Volumes, VVOL, LUN, Backup, Hybrid Cloud, convergence, hyperconvergence, Converged Infrastructure, HP, IBM, VCE, SimpliVity, Scale Computing, Nutanix, server, Storage, Networking, Cloud, Tintri, Mike Matchett
Profiles/Reports

Software-defined Storage and VMware's Virtual SAN Redefining Storage Operations

The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.

Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions are even enabling storage provisioning and management on an object, database or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with an automatic policy-based approach to management, enables virtual admins to deal with storage in the same mindset and in the same flow as other virtual admin tasks.
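
To make the policy-based idea concrete, here is a minimal sketch of per-VM placement. Everything in it is invented for illustration – the policy fields, capability names and datastore names are not VMware's actual SPBM schema – but it captures the mechanic: each VM carries a policy, and placement means matching that policy against datastore capabilities rather than hand-picking LUNs.

```python
# Hypothetical per-VM, policy-based storage placement sketch.
# Field names ("failures_to_tolerate", "flash_read_cache_pct") and
# datastore names are illustrative assumptions, not a real API.

datastores = {
    "vsanDatastore": {"failures_to_tolerate": 1, "flash_read_cache_pct": 10},
    "legacyLUN01":   {"failures_to_tolerate": 0, "flash_read_cache_pct": 0},
}

vm_policies = {
    "sql-prod-01": {"failures_to_tolerate": 1, "flash_read_cache_pct": 10},
    "test-web-03": {"failures_to_tolerate": 0, "flash_read_cache_pct": 0},
}

def compliant_datastores(policy):
    """Return datastores whose capabilities meet or exceed every policy rule."""
    return [name for name, caps in datastores.items()
            if all(caps.get(rule, 0) >= want for rule, want in policy.items())]

for vm, policy in vm_policies.items():
    print(vm, "->", compliant_datastores(policy))
```

The operational win the paper describes follows from this shape: the admin edits one policy object per VM (or per class of VMs), and the system re-evaluates compliance, instead of the admin re-plumbing LUNs and volumes by hand.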

In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings both virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters and fabrics, etc.), here at Taneja Group what we find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the on-going operational (or OPEX) costs of storage.

In this report, we will look at how Software-Defined Storage stands to transform the long term OPEX for storage by examining VMware’s Virtual SAN product. We’ll do this by examining a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab. We’ll examine the key data points recorded from our comparative hands-on examination, estimating the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.

Publish date: 08/08/14
news

5 Tips for Working with the Scale Computing HC3 Hyperconvergence Appliance

Is this all-in-one virtualization box a good choice for an SMB?

  • Premiered: 10/28/14
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Scale Computing, HC3, hyperconverged, hyperconvergence, KVM, Hypervisor, VM, Virtual Machine, Virtualization, Storage, Performance, converged, convergence, Datacenter, Infrastructure, Infrastructure Performance
Profiles/Reports

What Admins Choose For Performance Management: Galileo's Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively and many just gather dust for a number of reasons. But, if only because of the ongoing costs of keeping management tools current, it’s only the solutions that get used that are worth having.

When it comes to picking which tool to use day-to-day, it’s not the theory of what it could do, it’s the actual value of what it does for the busy admin trying to focus on the tasks at-hand. And among the myriad of things an admin is responsible for, assuring performance requires the most management solution support. Performance related tasks include checking on the health of resources that the admin is responsible for, improving utilization, finding lurking or trending issues to attend to in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.

Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failure. Both daily operational status and difficult infrastructure challenges involve looking across so-called IT domains (i.e. servers and storage) for thorny performance impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management that all detract from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer with its cloud-hosted collection and analysis helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, easily implements and scales, fosters communication, and focuses on and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.

Publish date: 10/29/14
news

Maxta Storage Platform gains validation from Cisco, HP

Cisco tests MaxDeploy reference architecture for Unified Computing System, while HP approves it for the ProLiant 2500 series.

  • Premiered: 11/19/14
  • Author: Taneja Group
  • Published: Tech Target: Search Virtual Storage
Topic(s): Cisco, Arun Taneja, Unified Computing System, MaxDeploy, HP, HP ProLiant, ProLiant, Maxta, hyperconverged, hyperconvergence, UCS, server, VM, Virtual Machine, ConvergedSystem, EVO:RAIL, VMWare, Compute, Storage, SimpliVity, Nimboxx, Nutanix, Hypervisor, Infrastructure, reference architectures
news

New choices bring enterprise big data home

Enterprises recognize the tantalizing value of big data analytics, but traditional concerns about data management and security have held back deployments -- until now.

  • Premiered: 12/02/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): Mike Matchett, TechTarget, Big Data, Storage, analytics, Data Management, Security, Infrastructure, HDFS, Hadoop, Hadoop Distributed File System, scale-out, Commodity Storage, DAS, Direct attached storage, replication, Optimization, EMC, Isilon, NFS, Network File System, NetApp, VMWare, Red Hat, OpenStack, Virtualization, BlueData, MapR, GPFS, API