Taneja Group | LAN

Items Tagged: LAN

news

What you should know about local area network disaster recovery

Local area network disaster recovery planning is frequently overlooked, although many IT functions depend on the LAN. Jeff Boles, senior analyst with Taneja Group, recently spoke with SearchDisasterRecovery.com Editor Andrew Burton about how to prepare that unsung hero of networks – your LAN – for disaster recovery, as well as whether virtualization can help streamline the process.

  • Premiered: 10/31/11
  • Author: Taneja Group
  • Published: TechTarget: SearchDisasterRecovery.com
Topic(s): LAN, Disaster Recovery, Jeff Boles
news / Blog

AMI StorTrends Optimizes WANs for Primary Data

AMI StorTrends already optimizes iSCSI primary storage for high performance over the LAN. Now its iTX Data Storage Software optimizes data for fast transport over the WAN as well.

  • Premiered: 07/06/12
  • Author: Taneja Group
Topic(s): AMI StorTrends, LAN, WAN accelerator, Optimization, Storage, Primary, Replication
news

Finally time to declare full backups dead

Continuous data protection (CDP) was bleeding-edge a few years ago, but it’s re-emerging as the best technology for protecting organizations’ virtual environments.

  • Premiered: 07/16/12
  • Author: Arun Taneja
  • Published: TechTarget: SearchStorage.com
Topic(s): CDP, Arun Taneja, Backup, Continuous Data Protection, Data protection, LAN
Profiles/Reports

EMC Avamar 7 - Protecting Data at Scale in the Virtual Data Center (TVS)

Storing digital data has long been a perilous task. Not only are stored digital bits subject to catastrophic failure of the devices they rest upon, but the shared nature of digital data subjects it to error and even intentional destruction. In the virtual infrastructure, the dangers and challenges subtly shift. Data is more highly consolidated, and more systems depend wholly on shared data repositories; this increases data risk. With many virtual machines connecting to a single shared storage pool, IO and storage performance become incredibly precious resources; this complicates backup and means that backup IO can cripple a busy infrastructure. Backup is more important than ever before, but it is also fundamentally more challenging than ever before.

Fortunately, the industry learned this lesson rapidly in the early days of virtualization, and it has aggressively innovated to bring tools and technologies to bear on the challenge of backup and recovery for virtualized environments. APIs have unlocked more direct access to data, and products have finally come to market that make protection easier to use and more compatible with the dynamic, mobile workloads of the virtual data center. Nonetheless, differences abound between product offerings, often rooted in the subtleties of architecture – architectures that ultimately determine whether a backup product is best suited for SMB-sized needs, or whether a solution can scale to support the large enterprise.

Moreover, within the virtual data center, TCO centers on resource efficiency, and a backup strategy can be one of the most significant determinants of that efficiency. On one hand, traditional backup simply does not work and can cripple efficiency: there is too much IO contention and application complexity in trying to carry a legacy physical-infrastructure backup approach over to the virtual infrastructure. On the other hand, there are a number of specialized point solutions designed to tackle some of the challenges of virtual infrastructure backup. But too often, these products do not scale sufficiently, lack consolidated management, and stand to impose tremendous operational overhead as the customer’s environment and data grow. When taking a strategic look at the options, it often appears that backup approaches fly directly in the face of resource efficiency.

Publish date: 10/31/13
Profiles/Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to InfiniBand, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
Profiles/Reports

Transforming the Data Center: SimpliVity Delivers Hyperconverged Platform with Native DP

Hyperconvergence has come a long way in the past five years. Growth rates are astronomical, and customers are replacing traditional three-layer configurations with hyperconverged solutions in record numbers. But not all hyperconverged solutions on the market are alike. As the market matures, this fact is coming to light. Of course, all hyperconverged solutions tightly integrate compute and storage (that is par for the course), but beyond that the similarities end quickly.

One of the striking differences between SimpliVity’s hyperconverged infrastructure architecture and others is its tight integration of data protection functionality. The DNA for that is built in from the very start: SimpliVity hyperconverged infrastructure systems perform inline deduplication and compression of data at the time of data creation. Thereafter, data is kept in that reduced state throughout its lifecycle. This has serious positive implications for latency, performance, and bandwidth, but equally importantly, it transforms data protection and other secondary uses of data.
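The general mechanism behind inline deduplication can be sketched in a few lines. The toy store below is purely illustrative (the class, chunk size, and hashing scheme are our own assumptions, not SimpliVity’s implementation): each chunk is fingerprinted at write time, and a chunk whose fingerprint has been seen before is never stored a second time, so data lands on disk already reduced.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store illustrating inline deduplication:
    each unique chunk is hashed and stored exactly once at write time."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}   # digest -> chunk bytes (stored once)
        self.files = {}    # filename -> ordered list of chunk digests

    def write(self, name, data):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            # Duplicates are detected inline, before the chunk hits disk.
            self.chunks.setdefault(digest, chunk)
            refs.append(digest)
        self.files[name] = refs

    def read(self, name):
        # Reassemble the logical file from its chunk references.
        return b"".join(self.chunks[d] for d in self.files[name])

    def physical_bytes(self):
        # Capacity actually consumed by unique chunks.
        return sum(len(c) for c in self.chunks.values())
```

Because only unique chunks are stored, every downstream copy of the data (backups, replicas, WAN transfers) inherits the reduction for free, which is the property the report highlights.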

At Taneja Group, we have been very aware of this differentiating feature of SimpliVity’s solution. So when we were asked to interview five SimpliVity customers to determine if they were getting tangible benefits (or not), we jumped at the opportunity.

This Field Report is about their experiences. We must state at the beginning that we focused primarily on their data protection experiences in this report. Hyperconvergence is all about simplicity and cost reduction. But SimpliVity’s hyperconverged infrastructure also eliminated another big headache: data protection. These customers may not have bought SimpliVity for data protection purposes, but the fact that they were essentially able to get rid of all their other data protection products was a very pleasant surprise for them. That was a big plus for these customers. To be sure, data protection is not simply backup and restore but also includes a number of other functions such as replication, DR, WAN optimization, and more. 

For a broader understanding of SimpliVity’s product capabilities, other Taneja Group write-ups are available. This one focuses on data protection. Read on for these five customers’ experiences.

Publish date: 02/01/16
Profiles/Reports

Qumulo Tackles the Machine Data Challenge: Six Customers Explain How

We are moving into a new era of data storage. The traditional storage infrastructure that we know (and do not necessarily love) was designed to process and store input from human beings. People input emails, word processing documents and spreadsheets. They created databases and recorded business transactions. Data was stored on tape, workstation hard drives, and over the LAN.

In the second stage of data storage development, humans still produced most content, but there was more and more of it, and file sizes grew larger and larger. Video and audio, digital imaging, and websites streaming entertainment content to millions of users meant there was no end to data growth. Storage capacity grew to encompass large data volumes, and flash became more common in hybrid and all-flash storage systems.

Today, the storage environment has undergone another major change. The major content producers are no longer people, but machines. Storing and processing machine data offers tremendous opportunities: Seismic and weather sensors that may lead to meaningful disaster warnings. Social network diagnostics that display hard evidence of terrorist activity. Connected cars that could slash automotive fatalities. Research breakthroughs around the human brain thanks to advances in microscopy.

However, building storage systems that can store raw machine data and process it is not for the faint of heart. The best solution today is massively scale-out, general purpose NAS. This type of storage system has a single namespace capable of storing billions of differently sized files, linearly scales performance and capacity, and offers data-awareness and real-time analytics using extended metadata.
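One common way a scale-out system presents a single namespace while scaling linearly is to place files on nodes with a consistent-hash ring, so adding a node grows capacity without remapping most existing data. The sketch below is a generic illustration of that technique under our own assumptions (class names, virtual-node count, MD5 placement hashing); it is not a description of Qumulo’s internals.

```python
import bisect
import hashlib

class ScaleOutNamespace:
    """Toy consistent-hash ring: one global namespace whose files are
    spread across nodes, so capacity scales by simply adding nodes."""

    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []  # sorted list of (hash, node) virtual points
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Each node owns many virtual points for even distribution.
        for i in range(self.vnodes):
            self.ring.append((self._hash(f"{node}#{i}"), node))
        self.ring.sort()

    def locate(self, path):
        # Any client can resolve any path: a single, global namespace.
        h = self._hash(path)
        idx = bisect.bisect(self.ring, (h, chr(0x10FFFF)))
        return self.ring[idx % len(self.ring)][1]
```

The key property is that when a node joins the ring, only the files hashing into its new segments move; the rest stay put, which is what makes near-linear capacity and performance scaling practical.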

Very few vendors in the world today offer this kind of solution. One of them is Qumulo, whose mission is to provide high-volume storage to business and scientific environments that produce massive volumes of machine data.

To gauge how well Qumulo works in the real world of big data, we spoke with six customers from life sciences, media and entertainment, telco/cable/satellite, higher education and the automotive industries. Each customer deals with massive machine-generated data and uses Qumulo to store, manage, and curate mission-critical data volumes 24x7. Customers cited five major benefits to Qumulo: massive scalability, high performance, data-awareness and analytics, extreme reliability, and top-flight customer support.

Read on to see how Qumulo supports large-scale data storage and processing in these mission-critical, intensive machine data environments.

Publish date: 10/26/16
news

Layer 3 network integration from Datera uses nodes as endpoints

An updated version presents Datera Elastic Data Fabric storage nodes as a cluster of L3 network endpoints in anticipation of increased container adoption.

  • Premiered: 04/27/17
  • Author: Taneja Group
  • Published: TechTarget: Search Storage
Topic(s): Datera, cluster, Jeff Kato, container, Storage, Cloud, scale-out, Docker, LAN, Mobility, Amazon AWS, AWS, Block Storage, Elastic Block Storage, scalable, scalability, Dell EMC, ScaleIO, Hedvig, object storage, Flash, SSD, hybrid flash, HPE, Supermicro, hyperscale, Intel, VMware, VMware VSAN, VSAN
news

VMware: Kubernetes, vRealize Automation add value to VIO 4.0 – so pay up

VMware Integrated OpenStack 4.0 touts Kubernetes support and vRealize Automation integration, but is it enough to justify the new price tag?

  • Premiered: 09/05/17
  • Author: Taneja Group
  • Published: TechTarget: Search Server Virtualization
Topic(s): Mike Matchett, VMware, OpenStack, Kubernetes, vRealize, Automation, scalability, Networking, VIO, VMworld, LAN, API, vSphere, NSX, VSAN