Taneja Group | scalable
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: scalable

Profiles/Reports

The Dell FS7600 and FS7610 - Advancing unified, scalable storage

Unified storage – combined block and file storage from one system – has made serious inroads into customer datacenters over the past couple of years. It is little wonder, as it offers tremendous value and flexibility. Unified storage can serve up multiple types of data – both file and block – and help businesses support a wider range of storage demands from fewer, better-consolidated storage systems. The business in turn can increase storage utilization, simplify storage management, and deliver storage services that are both more cost-efficient and agile.

But despite these benefits, unified storage customers have historically faced a compromise. Unified storage systems were often highly capable, but lagged behind the most recent storage innovations in at least two key dimensions – adaptability and easy-to-use unified management. In terms of adaptability, the underlying architecture of many of these systems made next-generation capabilities like simultaneous performance and capacity scaling much harder to implement. In terms of management, these systems often fell short of allowing typical administrators to easily manage the increased functionality delivered by a unified system.

In 2011, Dell announced the pairing of FS7500 NAS controllers with its family of EqualLogic iSCSI storage arrays – a solution set designed to unleash a new level of adaptability in unified storage. The FS7500 was no paltry piece of add-on equipment – it was built with enough performance to make full use of big eight-array EqualLogic storage pools, which could contain up to 384 of the fastest disks on the market. Moreover, the FS7500 came with another powerful ingredient: when paired with EqualLogic storage, the combined system retained all of the classic EqualLogic scale-out capability (within the underlying iSCSI storage), while the FS7500 was itself also scalable, easily growing from two to four controllers.

This meant that, for the first time, the small and medium enterprise (SME) customer could purchase a truly scalable unified storage system from a major vendor – a system that could start small and grow as business needs changed over time. Just as importantly, these systems were nicely integrated. The FS7500 continued to leverage all of the SME-empowering management functionality within Dell’s class-leading (and free) Group Manager and SANHQ storage management tools – the same tools used to manage the iSCSI storage. We previously reviewed the FS7500 storage system in a hands-on Technology Validation exercise, available here.

Recently, Dell announced an update to the EqualLogic-paired FS family – the EqualLogic FS7600. To be clear, other Dell products exist based on the same underlying FS technology – Fluid FS – including the Dell MD storage-integrated NX3600 and the Dell Compellent storage-integrated FS8600. But with an eye toward our findings in our original FS7500 Technology Validation exercise, we were keenly interested in how the FS7600 may have advanced in a relatively short period of time since the FS7500 hit the market. This was all the more intriguing because Dell EqualLogic has long excelled in rapid storage capability innovation. A closer look revealed an all-new hardware architecture, the addition of several key storage capabilities, and some claims about performance improvements. In this Product Brief, we will examine the FS7600, and evaluate how well Dell has advanced capabilities and tackled some of the challenges in its first generation FS7500 NAS.
 

Publish date: 11/30/12
Resources

Tricks of the Trade: Building a Big Data Storage Strategy [Live Roundtable]

It’s the year 2013 and the problems facing storage professionals are not getting smaller. The data and its complexity continue to increase, and the business wants to use it all. How can you make a big data storage system scalable, available, and reliable? How can you predict your storage needs down the road? 

Join this live panel of experts to learn: 

• How to assess storage needs and create a storage strategy for big data growth
• What technologies and tools to use to make storage more efficient
• Secrets and tips for creating powerful storage systems

Speakers include: Mike Matchett - Taneja Group (moderator), Douglas Brockett - Exablox, Brian Mitchell - Avnet, & Patrick Osborne - HP

REGISTER HERE.

  • Premiered: 09/12/13 at Live on 9/12 @ 10am ET (7am PT)
  • Location: OnDemand
  • Speaker(s): Mike Matchett (moderator)
  • Sponsor(s): BrightTalk, Taneja Group
Topic(s): HP, Exablox, Avnet, Mike Matchett, Big Data, scalable, Storage, storage systems
Profiles/Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to Infiniband, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
Profiles/Reports

Enterprise Flash - Scalable, Smart, and Economical

There is a serious re-hosting effort going on in data center storage as flash-filled systems replace large arrays of older spinning disks for tier 1 apps. Naturally, as costs drop and the performance advantages of flash-accelerated IO services become irresistible, these systems begin pulling in a widening circle of applications with varying QoS needs. Yet this extension leads to a wasteful tug-of-war between high-end flash-only systems that can’t effectively serve a wide variety of application workloads and so-called hybrid solutions, originally architected for HDDs, that are often challenged to provide the highest performance required by those tier 1 applications.

Someday, all-flash storage could theoretically drop in price enough to outright replace all other storage tiers, even at the largest capacities, but that is certainly not true today. Here at Taneja Group we think storage tiering will always offer a better way to deliver varying levels of QoS, balancing the latest performance advances against the most efficient capacities. In any case, the best enterprise storage solutions today need to offer a range of storage tiers, often even when catering to a single application’s varying storage needs.

There are many entrants in the flash storage market, with the big vendors now rolling out enterprise solutions upgraded for flash. Unfortunately many of these systems are shallow retreads of older architectures, perhaps souped-up a bit to better handle some hybrid flash acceleration but not able to take full advantage of it. Or they are new dedicated flash-only point products with big price tags, immature or minimal data services, and limited ability to scale out or serve a wider set of data center QoS needs.

Oracle saw an opportunity for a new type of cost-effective flash-speed storage system that could meet the varied QoS needs of multiple enterprise data center applications – in other words, to take flash storage into the mainstream of the data center. Oracle decided they had enough storage chops (from Exadata, ZFS, Pillar, Sun, etc.) to design and build a “flash-first” enterprise system intended to take full advantage of flash as a performance tier, but also incorporate other storage tiers naturally including slower “capacity” flash, performance HDD, and capacity HDD. Tiering by itself isn’t a new thing – all the hybrid solutions do it and there are other vendor solutions that were designed for tiering – but Oracle built the FS1 Flash Storage System from the fast “flash” tier down, not by adding flash to a slower or existing HDD-based architecture working “upwards.” This required designing intelligent automated management to take advantage of flash for performance while leveraging HDD to balance out cost. This new architecture has internal communication links dedicated to flash media with separate IO paths for HDDs, unlike traditional hybrids that might rely solely on their older, standard HDD-era architectures that can internally constrain high-performance flash access.
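To make the idea of automated, heat-based tiering concrete, here is a minimal illustrative sketch in Python. It is a generic example of the technique, not Oracle’s FS1 implementation; the tier names, thresholds, and scoring window are assumptions made purely for illustration.

```python
# Illustrative heat-based auto-tiering sketch -- NOT Oracle FS1's actual algorithm.
# Assumption: blocks are periodically re-scored by recent access counts and then
# promoted to faster tiers (performance flash) or demoted to cheaper ones (capacity HDD).
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    tier: str = "capacity_hdd"
    recent_accesses: int = 0   # accesses observed during the last scoring window

def retier(blocks, hot=1000, warm=100, cool=10):
    """Assign each block to a tier based on how hot it was in the last window."""
    for b in blocks:
        if b.recent_accesses >= hot:
            b.tier = "performance_flash"
        elif b.recent_accesses >= warm:
            b.tier = "capacity_flash"
        elif b.recent_accesses >= cool:
            b.tier = "performance_hdd"
        else:
            b.tier = "capacity_hdd"
        b.recent_accesses = 0   # reset the counter for the next window

blocks = [Block(1, recent_accesses=5000), Block(2, recent_accesses=250), Block(3, recent_accesses=2)]
retier(blocks)
for b in blocks:
    print(b.block_id, "->", b.tier)
```

The point of the sketch is simply that placement decisions are driven by observed access heat rather than by which tier the data happened to land on first.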

Oracle FS1 is a highly engineered SAN storage system with key capabilities that set it apart from other all-flash storage systems: built-in QoS management that incorporates business priorities, best-practices provisioning, and application-aware storage alignment – for Oracle Database naturally, but also for a growing body of other key enterprise applications (such as Oracle JD Edwards, PeopleSoft, Siebel, MS Exchange/SQL Server, and SAP). It also offers a “service provider” capability to carve out multi-tenant virtual storage “domains” while online, enforced at the hardware partitioning level for top data security isolation.

In this report, we’ll dive in and examine some of the great new capabilities of the Oracle FS1. We’ll look at what really sets it apart from the competition in terms of its QoS, auto-tiering, co-engineering with Oracle Database and applications, delivered performance, capacity scaling and optimization, enterprise availability, and OPEX-reducing features, all at a competitive price point that will challenge the rest of the increasingly flash-centric market.

Publish date: 02/02/15
Profiles/Reports

Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don’t come around very often. Over the last two decades we have seen very few show up: ZFS was introduced in 2004, Isilon’s OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason for this: creating a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years, we have seen seismic changes in the data center and storage industry. Today’s data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today’s data center runs virtual machines, and a one-to-one correlation between a server and its storage is now the exception. Databases have outgrown the largest single disk drives. Huge amounts of data are ingested by big data and social media applications, and data must be retained to meet government and corporate policy requirements. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASIC chips in power and performance, and software development and delivery methodologies such as “agile” have become mainstream. In the past, we were concerned with how to manage the underlying storage; now we are concerned with how to manage the huge amount of data we have stored.

What could be accomplished if a new file system was created from the ground up to take advantage of the latest advances in technology and, more importantly, had an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with the Qumulo Core data-aware scale-out NAS software, powered by its new file system, QSFS (Qumulo Scalable File System). Qumulo’s three co-founders were the primary inventors of OneFS – Peter Godman, Neal Fachan, Aaron Passey – and they assembled some of the brightest minds in the storage industry to create a new modern file system designed to support the requirements of today’s datacenter, not the datacenter of decades ago.

Qumulo embraced from day one an agile software development and release model to create their product. This allows them to push out fully tested and certified releases every two weeks. Doing this allows for new feature releases and bug fixes that can be seamlessly introduced to the system as soon as they are ready – not based on an arbitrary 6, 12 or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed to work with HDDs that could produce roughly 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Now flash is prevalent in the industry, and commodity flash devices can produce up to 250,000 IOPS. Traditional file systems were optimized for slower HDDs – not to take advantage of the lower latency and higher performance of today’s solid state drives. Many traditional file systems and storage arrays have devised ways to “bolt on” SSDs to boost performance, but their underlying architecture remains based on the capabilities of yesterday’s HDDs rather than today’s flash technology.

The explosion in scale-out, large-capacity file systems has empowered enterprises to do very interesting things, but it has also created some very interesting problems. Even one of the most trivial questions—how much space the files on a file system are consuming—is very complicated to answer on first-generation file systems. Other questions that are difficult to answer without awareness of the data itself include who is consuming the most space, and which clients, files, or applications are consuming the most bandwidth. Second-generation file systems need to be designed to be data-aware, not just storage-aware.
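To show why data-awareness matters here, the following is a minimal, generic sketch of one way a file system can keep per-directory space aggregates current as data is written, so capacity questions become simple lookups rather than full-tree crawls. It is not a description of QSFS internals; the structure and names are illustrative assumptions.

```python
# Generic sketch of keeping per-directory size aggregates up to date on every write,
# so "how much space does this tree use?" is an O(1) lookup instead of a full crawl.
# This illustrates the data-aware idea in general, not QSFS's actual design.

class DirNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.total_bytes = 0          # aggregate size of everything beneath this directory

    def add_file(self, size: int):
        """Record a new file of `size` bytes; update aggregates up to the root."""
        node = self
        while node is not None:
            node.total_bytes += size
            node = node.parent

root = DirNode("/")
projects = DirNode("projects", parent=root)
media = DirNode("media", parent=root)

projects.add_file(10_000)
media.add_file(250_000)
media.add_file(40_000)

print(root.total_bytes)    # 300000 -- answered without walking the whole tree
print(media.total_bytes)   # 290000
```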

In order to reach performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. ASICs can speed up some storage-related operations; however, this benefit comes at a heavy price – both in dollars and in flexibility. It can take years and millions of dollars to embed new features in an ASIC. By using very powerful and relatively inexpensive x86 processors, new features can be introduced very quickly via software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel E5-2600 v3 has up to 18 cores) and advanced features.

When Qumulo approached us to take a look at the world’s first data-aware, scale-out enterprise-grade storage system we welcomed the opportunity. Qumulo’s new storage system is not based on an academic project or designed around an existing storage system, but has been designed and built on entirely new code that the principals at Qumulo developed based on what they learned in interviews with more than 600 storage professionals. What they came up with after these conversations was a new data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would work in today’s data center.

Publish date: 03/17/15
Profiles/Reports

Acaveo Smart Information Server: Bringing Dark Data into Light

In 2009, storage accounted for about 20% of a fully burdened computing infrastructure. By 2015, storage has surged to 40% of the infrastructure (and counting) as companies pour in more and more data. Most of this data is hard-to-manage unstructured data, which typically represents 75%-80% of corporate data. This burdened IT infrastructure has two broad and serious consequences: it increases capital and operating expenses, and it cripples unstructured data management. Capital and operating expenses scale up sharply with the swelling storage tide. Today’s storage costs alone include buying and deploying storage for file shares, email, and ECMs like SharePoint. Additional services such as third-party file sharing and cloud-based storage add further cost and complexity.

Growing storage and complexity make managing unstructured data extraordinarily difficult. A digital world is delivering more data to more applications than ever before. IT’s inability to visualize and act upon widely distributed data impacts retention, compliance, value, and security. In fact, this visibility (or invisibility) problem is so prevalent that it has earned its own name: dark data. Dark data plagues IT with hard-to-answer questions: What data is in those repositories? How old is it? Which application does it belong to? Which users can access it?

IT may be able to answer those questions on a single storage system with file management tools. But across a massive storage infrastructure that includes the cloud? No. Instead, IT must do what it can to tier aging data, safely delete what it can, and try to keep up with application storage demands across the map. The status quo is not going to get any better in the face of data growth. Data is growing at 55% or more per year in the enterprise, and the energy ramifications alone of storing that much data are sobering. Data growth is reaching the point where it overruns the storage budget’s capacity to pay for it, and managing that data for cost control and business processes is harder still.

Conventional wisdom would have IT simply move data to the cloud. But conventional wisdom is mistaken. The problem is not how to store all of that data – IT can solve that problem with a cloud subscription. The problem is that once stored, IT lacks the tools to intelligently manage that data where it resides.

This is where highly scalable, unstructured file management comes into the picture: the ability to find, classify, and act upon files spread throughout the storage universe. In this Product Profile we’ll present Acaveo, a file management platform that discovers and acts on data-in-place, and federates classification and search activities across the enterprise storage infrastructure. The result is highly intelligent and highly scalable file management that cuts cost and adds value to business processes across the enterprise. 

Publish date: 02/27/15
Profiles/Reports

TVS: Qumulo Core: Data-Aware Scale-Out NAS Software

New enterprise-grade file systems don’t come around very often. Over the last two decades we have seen very few show up: ZFS was introduced in 2004, Isilon’s OneFS in 2003, Lustre in 2001, and WAFL in 1992. There is a good reason for this: creating a unique enterprise-grade file system is not a trivial undertaking and takes considerable resources and vision. During the last ten years, we have seen seismic changes in the data center and storage industry. Today’s data center runs a far different workload than what was prevalent when these first-generation file systems were developed. For example, today’s data center runs virtual machines, and a one-to-one correlation between a server and its storage is now the exception. Databases have outgrown the largest single disk drives. Huge amounts of data are ingested by big data and social media applications, and data must be retained to meet government and corporate policy requirements. Technology has also changed dramatically over the last decade: flash memory has become prevalent, commodity x86 processors now rival ASIC chips in power and performance, and software development and delivery methodologies such as “agile” have become mainstream. In the past, we were concerned with how to manage the underlying storage; now we are concerned with how to manage the huge amount of data we have stored.

What could be accomplished if a new file system was created from the ground up to take advantage of the latest advances in technology and, more importantly, had an experienced engineering team that had done this once before? That is, in fact, what Qumulo has done with the Qumulo Core data-aware scale-out NAS software, powered by its new file system, QSFS (Qumulo Scalable File System). Qumulo’s three co-founders were the primary inventors of OneFS – Peter Godman, Neal Fachan, Aaron Passey – and they assembled some of the brightest minds in the storage industry to create a new modern file system designed to support the requirements of today’s datacenter, not the datacenter of decades ago.

Qumulo embraced from day one an agile software development and release model to create their product. This allows them to push out fully tested and certified releases every two weeks. Doing this allows for new feature releases and bug fixes that can be seamlessly introduced to the system as soon as they are ready – not based on an arbitrary 6, 12 or even 18-month release schedule.

Flash storage has radically changed the face of the storage industry. All of the major file systems in use today were designed to work with HDDs that could produce roughly 150 IOPS; if you were willing to sacrifice capacity and short-stroke them, you might get twice that. Now flash is prevalent in the industry, and commodity flash devices can produce up to 250,000 IOPS. Traditional file systems were optimized for slower HDDs – not to take advantage of the lower latency and higher performance of today’s solid state drives. Many traditional file systems and storage arrays have devised ways to “bolt on” SSDs to boost performance, but their underlying architecture remains based on the capabilities of yesterday’s HDDs rather than today’s flash technology.

The explosion in scale-out, large-capacity file systems has empowered enterprises to do very interesting things, but it has also created some very interesting problems. Even one of the most trivial questions—how much space the files on a file system are consuming—is very complicated to answer on first-generation file systems. Other questions that are difficult to answer without awareness of the data itself include who is consuming the most space, and which clients, files, or applications are consuming the most bandwidth. Second-generation file systems need to be designed to be data-aware, not just storage-aware.

In order to reach performance targets, traditional high-performance storage arrays were designed around ASIC-optimized architectures. ASICs can speed up some storage-related operations; however, this benefit comes at a heavy price – both in dollars and in flexibility. It can take years and millions of dollars to embed new features in an ASIC. By using very powerful and relatively inexpensive x86 processors, new features can be introduced very quickly via software. The slight performance advantage of ASIC-based storage is disappearing fast as x86 processors gain more cores (the Intel E5-2600 v3 has up to 18 cores) and advanced features.

When Qumulo approached us to take a look at the world’s first data-aware, scale-out enterprise-grade storage system we welcomed the opportunity. Qumulo’s new storage system is not based on an academic project or designed around an existing storage system, but has been designed and built on entirely new code that the principals at Qumulo developed based on what they learned in interviews with more than 600 storage professionals. What they came up with after these conversations was a new data-aware, scale-out NAS file system designed to take advantage of the latest advances in technology. We were interested in finding out how this file system would work in today’s data center.

Publish date: 03/17/15
news

New approaches to scalable storage

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.

  • Premiered: 03/16/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Mike Matchett, TechTarget, Storage, scalable, scalability, analytics, Data Storage, Big Data, Block Storage, File Storage, object storage, scale-out, scale-up, Performance, Capacity, HA, high availability, latency, IOPS, Flash, SSD, File System, Security, NetApp, Data ONTAP, ONTAP, EMC, Isilon, OneFS, Cloud
news

Amazon job listings offer many AWS roadmap clues

Amazon is hiring thousands of people, and where it’s hiring offers clues about where the AWS roadmap is headed.

  • Premiered: 03/27/15
  • Author: Taneja Group
  • Published: TechTarget: Search AWS
Topic(s): Amazon, Amazon AWS, AWS, Storage, Database, Amazon S3, S3, scalable, scalability
news

Integrate cloud tiering with on-premises storage

Cloud and on-premises storage are increasingly becoming integrated so cloud is just another tier available to storage administrators.

  • Premiered: 06/02/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Cloud Storage
Topic(s): Cloud, Storage, Mike Matchett, Public Cloud, elasticity, Hybrid Cloud, Performance, hyperconverged, hyperconvergence, Cloud Storage, EFSS, Amazon EBS, AWS, Amazon Web Services, Amazon, EFS, Elastic Block Storage, Block Storage, Elastic File Store, SoftNAS, API, SDS, software-defined storage, OpenStack, Maxta, Nexenta, Qumulo, Tarmin, WANO, WAN Optimization
Profiles/Reports

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers need today is enterprise storage that affordably delivers at least 7-9's (99.99999%) of availability, at scale. That works out to less than three seconds of anticipated unavailability per year – better than the availability of most data centers themselves.
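As a quick sanity check on that figure, the downtime implied by a given number of "nines" is simple arithmetic, sketched below.

```python
# Downtime per year implied by N nines of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 seconds

def downtime_seconds(nines: int) -> float:
    availability = 1 - 10 ** (-nines)      # e.g. 7 nines -> 0.9999999
    return SECONDS_PER_YEAR * (1 - availability)

for n in (5, 6, 7):
    print(f"{n} nines: {downtime_seconds(n):.2f} seconds of downtime per year")
# 5 nines -> ~316 s (~5.3 minutes); 7 nines -> ~3.2 s, i.e. "about three seconds per year"
```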

Data availability is the key attribute enterprises need most to maximize their enterprise storage value, especially as data volumes grow to ever-larger scales. Yet traditional enterprise storage solutions aren’t keeping pace with the growing need for more than the oft-touted 5-9’s of storage reliability, instead deferring to layered-on methods like additional replication copies, which drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to break down: longer and longer disk rebuild times create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities arise from poor (or at best, default) array designs, software issues, and well-intentioned but often fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.
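To illustrate how rebuild windows stretch as drives grow, here is a rough back-of-the-envelope estimate; the sustained rebuild rate used is an assumption for illustration only, since real rebuild times vary with RAID implementation, drive type, and system load.

```python
# Rough RAID rebuild-window estimate: capacity / sustained rebuild rate.
# The 150 MB/s rate is an illustrative assumption; real rates vary widely.
def rebuild_hours(capacity_tb: float, rate_mb_per_s: float = 150.0) -> float:
    capacity_mb = capacity_tb * 1_000_000      # decimal TB -> MB
    return capacity_mb / rate_mb_per_s / 3600

for tb in (1, 4, 8, 16):
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb):.1f} hours best-case rebuild")
# A 1 TB drive rebuilds in under 2 hours; a 16 TB drive takes more than a day --
# a far longer window during which another failure risks unrecoverable data loss.
```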

In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.

Publish date: 09/29/15
news

VMware vSphere 6 release good news for storage admins

VMware's vSphere 6 release shows that the vendor is aiming for a completely software-defined data center with a fully virtualized infrastructure.

  • Premiered: 10/05/15
  • Author: Arun Taneja
  • Published: TechTarget: Search Virtual Storage
Topic(s): VMWare, VMware vSphere, vSphere, vSphere 6, software-defined, Software-Defined Data Center, SDDC, Virtualization, virtualized infrastructure, VSAN, VVOLs, VMware VVOLs, Virtual Volumes, VMotion, high availability, Security, scalability, Data protection, replication, VMware PEX, Fault Tolerance, Virtual Machine, VM, Provisioning, Storage Management, SLA, 3D Flash, FT, vCPU, CPU
news

Hybrid cloud infrastructure provides low-cost backup

For many hybrid cloud infrastructure adopters, backup is the most obvious use case. And for good reason - the cloud is a low-cost option that can easily scale to house large backup images.

  • Premiered: 09/30/15
  • Author: Taneja Group
  • Published: TechTarget: Search Cloud Storage
Topic(s): Mike Matchett, Hybrid Cloud, Hybrid Cloud Storage, Storage, cloud infrastructure, Cloud, scalability, scalable, Public Cloud, elasticity, elastic cloud, Disaster Recovery, Disaster Recovery as a Service, DR, API, Private Cloud
news

Navigate today's hyper-converged market

The hyper-converged market is rife with products from competing vendors, making choosing a hyper-converged system a difficult proposition.

  • Premiered: 11/05/15
  • Author: Jeff Kato
  • Published: TechTarget: Search Virtual Storage
Topic(s): hyper convergence, hyper-converged, Storage, HCI, Dell, EMC, HP, VMWare, Nutanix, Scale Computing, SimpliVity, DataCore, Gridstore, Maxta, Nimboxx, Pivot3, modular, scalability, scalable, VDI, Virtual Desktop Infrastructure, Performance, Virtual Machine, VM, VM-centric, Virtualization, Microsoft, KVM, Hypervisor, Open Source
Profiles/Reports

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were already embedded in their IT relationships and purchasing. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see an opinion on the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’ advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments. 

Publish date: 01/14/16
Profiles/Reports

The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce

There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.

While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.

For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations nor manage to be very simple or cost-effective to operate. Here is where we see HPE as a key vendor today with all the right parts coming together to create a significant change in the BURA marketplace.

First, HPE is pulling together its top-notch products into a user-ready “solution” that marries both StoreOnce and Data Protector. For those who have worked with either or both separately, in conjunction with other vendors’ products, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.

But HPE hasn’t just bundled products into solutions; it is also undergoing a seismic shift in culture that revitalizes its total approach to the market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HPE business unit components are in the “box.” And significantly, this approach elevates HPE from just being a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.

In this report, we’ll examine first why HPE StoreOnce and Data Protector products are truly game changing on their own rights. Then, we will look at why they get even “better together” as a complete BURA solution that can be more flexibly deployed to meet backup challenges than any other solution in the market today.

Publish date: 01/15/16
news

Plexistor debuts with software to converge memory and storage

Plexistor claims its 'software-defined memory' platform lets in-memory databases and traditional enterprise workloads run without dedicated compute-storage clusters.

  • Premiered: 01/28/16
  • Author: Taneja Group
  • Published: TechTarget: Search Solid State Storage
Topic(s): Plexistor, software-defined, Software-Defined Memory, SDM, In-Memory, cluster, Compute, Storage, nonvolatile memory, Primary Storage, high capacity, SATA, NVMe, Flash, scalability, scalable, Low latency, Performance, MongoDB, Kafka, Cassandra, NVDIMM, Arun Taneja, Data Center, datacenter management, DCM, tiering, Virtualization
news

Amazon EFS stuck in beta, lacks native Windows support

Amazon's long-awaited Elastic File System is expected to hit the market soon, but it won't natively support Windows or high-performance workloads.

  • Premiered: 02/01/16
  • Author: Taneja Group
  • Published: TechTarget: Search AWS
Topic(s): Amazon, AWS, Amazon AWS, Elastic File System, elasticity, High Performance, Amazon EFS, EFS, Performance, Cloud, Amazon EBS, Elastic Block Storage, Block Storage, Amazon S3, Simple Storage Service, Mike Matchett, Metadata, scalable, scalability, NetApp, ONTAP, Cloud ONTAP, SoftNAS, NFS, CIFS, iSCSI
Profiles/Reports

Cohesity Data Platform: Hyperconverged Secondary Storage

Primary storage is often defined as storage hosting mission-critical applications with tight SLAs, requiring high performance.  Secondary storage is where everything else typically ends up and, unfortunately, data stored there tends to accumulate without much oversight.  Most of the improvements within the overall storage space, most recently driven by the move to hyperconverged infrastructure, have flowed into primary storage.  By shifting the focus from individual hardware components to commoditized, clustered and virtualized storage, hyperconvergence has provided a highly-available virtual platform to run applications on, which has allowed IT to shift their focus from managing individual hardware components and onto running business applications, increasing productivity and reducing costs. 

Companies adopting this new class of products certainly enjoyed the benefits, but were still nagged by a set of problems it didn’t completely address. On the secondary storage side of things, they were still left dealing with too many separate use cases, each with its own point solution. This led to too many products to manage, too much duplication, and too much waste. In truth, many hyperconvergence vendors have done a reasonable job of addressing primary storage use cases on their platforms, but there’s still more to be done there and more secondary storage use cases to address.

Now, however, a new category of storage has emerged. Hyperconverged Secondary Storage brings the same sort of distributed, scale-out file system to secondary storage that hyperconvergence brought to primary storage.  But, given the disparate use cases that are embedded in secondary storage and the massive amount of data that resides there, it’s an equally big problem to solve and it had to go further than just abstracting and scaling the underlying physical storage devices.  True Hyperconverged Secondary Storage also integrates the key secondary storage workflows - Data Protection, DR, Analytics and Test/Dev - as well as providing global deduplication for overall file storage efficiency, file indexing and searching services for more efficient storage management and hooks into the cloud for efficient archiving. 
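As a generic illustration of the global deduplication idea mentioned above (and not Cohesity’s implementation), the sketch below stores each unique, fixed-size chunk once, keyed by its content hash; the chunk size and hash choice are assumptions made for the example.

```python
# Minimal content-addressed deduplication sketch -- generic illustration only.
import hashlib

CHUNK_SIZE = 4096                    # fixed-size chunking; real systems often use variable-size chunks
chunk_store: dict[str, bytes] = {}   # hash -> unique chunk data

def dedup_write(data: bytes) -> list[str]:
    """Split data into chunks, store each unique chunk once, return the chunk references."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)    # only stored if not already present
        refs.append(digest)
    return refs

def dedup_read(refs: list[str]) -> bytes:
    return b"".join(chunk_store[r] for r in refs)

payload = b"backup image " * 10_000             # highly redundant data
refs = dedup_write(payload)
print(f"logical chunks: {len(refs)}, unique chunks stored: {len(chunk_store)}")
assert dedup_read(refs) == payload
```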

Cohesity has taken this challenge head-on.

Before delving into the Cohesity Data Platform, the subject of this profile and one of the pioneering offerings in this new category, we’ll take a quick look at the state of secondary storage today and note how current products haven’t completely addressed these existing secondary storage problems, creating an opening for new competitors to step in.

Publish date: 03/30/16
news

Three Tips for Optimizing Big Data Analytics

Converged infrastructure systems provide many of the resources required for effective big data analytics, from the ability to handle Hadoop to storage scalability.

  • Premiered: 03/30/16
  • Author: Taneja Group
  • Published: Windows IT Pro
Topic(s): Big Data, big data analytics, analytics, Hadoop, Storage, scalability, Converged Infrastructure, convergence, scalable, Mike Matchett