Taneja Group | Enterprise+Storage
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: Enterprise+Storage

Resources

Big Data Storage Options for Enterprise Hadoop

In this webcast, Sr. IT Analyst Mike Matchett from Taneja Group will briefly review the storage architecture of Hadoop and HDFS, and then examine some of the more prominent big data storage options for enterprises with data protection, integration, and governance concerns that might lead them to choose an advanced SAN/NAS solution over the default local DAS design.
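As a back-of-the-envelope illustration of why that choice matters for capacity planning, the sketch below (hypothetical figures, assuming HDFS's default replication factor of 3) compares the raw capacity needed to hold the same working set on replicated local DAS versus an externally protected SAN/NAS array.

```python
# Back-of-the-envelope capacity comparison: HDFS on local DAS (3x replication)
# versus an external SAN/NAS array that protects data with its own RAID/erasure
# coding. All figures are hypothetical and for illustration only.

usable_data_tb = 500            # working set the cluster must hold
hdfs_replication = 3            # HDFS default dfs.replication
san_protection_overhead = 1.25  # assumed ~25% overhead for array-side protection

das_raw_tb = usable_data_tb * hdfs_replication
san_raw_tb = usable_data_tb * san_protection_overhead

print(f"Raw capacity on DAS with {hdfs_replication}x replication: {das_raw_tb:.0f} TB")
print(f"Raw capacity on SAN/NAS with array-side protection:       {san_raw_tb:.0f} TB")
```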

  • Premiered: 12/10/13 at 10 am PT/ 1 pm ET
  • Location: OnDemand
  • Speaker(s): Mike Matchett, Senior Analyst, Taneja Group
Topic(s): BrightTALK, Mike Matchett, Hadoop, Storage, Enterprise Storage, SAN, NAS, DAS, HDFS, MapReduce
Profiles/Reports

Redefining the Economics of Enterprise Storage

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptional price, which has put enterprise storage out of reach for many use cases and customers.

Most recently, Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that brings proven Dell Compellent software to Intel-based hardware in an all-new form factor. The SC4020 is also the densest Compellent product yet, an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units. While the Intel-powered SC4020 offers more modest scalability than other current Compellent products, it marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open up Dell Compellent storage to an entire market of smaller customers as well as large-customer use cases where enterprise storage was previously too expensive.

Publish date: 05/05/14
news

Marvell controller can push TLC flash closer to data center

Marvell Semiconductor is manufacturing a new SATA-based solid-state drive controller designed to improve error detection in triple-level cell (TLC) NAND flash memory – a feature it said could spur SSD makers to develop TLC-based enterprise drives for deep archives.

  • Premiered: 06/23/14
  • Author: Taneja Group
  • Published: Tech Target: Search Solid State Storage
Topic(s): Marvell, SATA, NAND, SSD, Copan, FalconStor, Flash, Storage, Arun Taneja, Enterprise Storage
Profiles/Reports

Fibre Channel: The Proven and Reliable Workhorse for Enterprise Storage Networks

Mission-critical assets such as virtualized and database applications demand a proven enterprise storage protocol to meet their performance and reliability needs. Fibre Channel has long filled that need for most customers, and for good reason. Unlike competing protocols, Fibre Channel was specifically designed for storage networking, and engineered to deliver high levels of reliability and availability as well as consistent and predictable performance for enterprise applications. As a result, Fibre Channel has been the most widely used enterprise protocol for many years.

But with the widespread deployment of 10GbE technology, some customers have explored the use of other block protocols, such as iSCSI and Fibre Channel over Ethernet (FCoE), or file protocols such as NAS. Others have looked to InfiniBand, which is now being touted as a storage networking solution. In marketing the strengths of these protocols, vendors often promote feeds and speeds, such as raw line rates, as a key advantage for storage networking. However, as we’ll see, there is much more to storage networking than raw speed.

It turns out that on an enterprise buyer’s scorecard, raw speed doesn’t even make the cut as an evaluation criterion. Instead, decision makers focus on factors such as a solution’s demonstrated reliability, latency, and track record in supporting Tier 1 applications. When it comes to these requirements, no other protocol can measure up to the inherent strengths of Fibre Channel in enterprise storage environments.
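To make the scorecard idea concrete, here is a minimal sketch with entirely hypothetical criteria, weights, and ratings (illustrative placeholders, not survey or benchmark data) showing how a buyer might score protocols on reliability, latency, and Tier 1 track record while giving raw line rate no weight at all.

```python
# Hypothetical weighted scorecard: criteria, weights, and ratings are
# illustrative only. Raw line rate is deliberately given zero weight to
# reflect that buyers score on reliability, latency, and track record.

weights = {"reliability": 0.4, "latency": 0.3, "tier1_track_record": 0.3, "raw_line_rate": 0.0}

# 1-5 ratings per protocol (placeholders for discussion, not measurements)
ratings = {
    "Fibre Channel": {"reliability": 5, "latency": 5, "tier1_track_record": 5, "raw_line_rate": 4},
    "iSCSI":         {"reliability": 4, "latency": 3, "tier1_track_record": 3, "raw_line_rate": 4},
    "FCoE":          {"reliability": 4, "latency": 4, "tier1_track_record": 3, "raw_line_rate": 4},
}

for protocol, card in ratings.items():
    total = sum(weights[criterion] * card[criterion] for criterion in weights)
    print(f"{protocol:14s} weighted score: {total:.2f}")
```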

Despite its long, successful track record, Fibre Channel does not always get the attention and visibility that other protocols receive. While it may not be winning the media wars, Fibre Channel offers customers a clear and compelling value proposition as a storage networking solution. Looking ahead, Fibre Channel also presents an enticing technology roadmap, even as it continues to meet the storage needs of today’s most critical business applications.

In this paper, we’ll begin by looking at the key requirements customers should look for in a commercial storage protocol. We’ll then examine the technology capabilities and advantages of Fibre Channel relative to other protocols, and discuss how those translate to business benefits. Since not all vendor implementations are created equal, we’ll call out the solution set of one vendor – QLogic – as we discuss each of the requirements, highlighting it as an example of a Fibre Channel offering that goes well beyond the norm.

Publish date: 02/28/14
Profiles/Reports

Software-defined Storage and VMware's Virtual SAN Redefining Storage Operations

The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.

Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions even enable storage provisioning and management at an object, database, or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with an automatic, policy-based approach to management, enables virtual admins to deal with storage in the same mindset and in the same flow as other virtual admin tasks.
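As a conceptual sketch only (plain Python with hypothetical classes, not VMware's SPBM API or Virtual SAN itself), the example below shows what VM-centric, policy-based provisioning looks like: storage intent is expressed as a named policy attached to each VM rather than by pre-carving LUNs or file volumes.

```python
from dataclasses import dataclass

# Conceptual sketch of VM-centric, policy-based storage (hypothetical classes;
# this is not VMware's actual API). The point is that storage intent travels
# with the VM as a policy, rather than with a pre-carved LUN or volume.

@dataclass
class StoragePolicy:
    name: str
    failures_to_tolerate: int   # how many host/disk failures the data must survive
    stripe_width: int           # how many capacity devices each object spans

@dataclass
class VirtualMachine:
    name: str
    policy: StoragePolicy       # the policy is attached per VM

gold = StoragePolicy("gold", failures_to_tolerate=2, stripe_width=2)
silver = StoragePolicy("silver", failures_to_tolerate=1, stripe_width=1)

vms = [VirtualMachine("sql-prod-01", gold), VirtualMachine("web-test-07", silver)]

for vm in vms:
    p = vm.policy
    print(f"{vm.name}: tolerate {p.failures_to_tolerate} failure(s), stripe width {p.stripe_width}")
```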

In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings both virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage lowers the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters, and dedicated fabrics, among other things), what we at Taneja Group find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the ongoing operational (OPEX) costs of storage.

In this report, we will look at how Software-Defined Storage stands to transform long-term storage OPEX by examining VMware’s Virtual SAN product. We’ll do this by working through a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab. We’ll then review the key data points recorded during our comparative hands-on evaluation, estimating the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.

Publish date: 08/08/14
Profiles/Reports

What Admins Choose For Performance Management: Galileo's Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general-purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively, and many just gather dust for a number of reasons. But, if only because of the ongoing cost of keeping management tools current, only the solutions that actually get used are worth having.

When it comes to picking which tool to use day-to-day, what matters is not the theory of what it could do, but the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the many things an admin is responsible for, assuring performance requires the most management solution support. Performance-related tasks include checking on the health of resources the admin is responsible for, improving utilization, finding lurking or trending issues in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.

Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across IT domains (e.g. servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful, and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer, with its cloud-hosted collection and analysis, helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, implements and scales easily, fosters communication, and focuses on enabling the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.
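As a simple illustration of the kind of cross-domain analysis described above (and not Galileo's actual analytics), the sketch below correlates a synthetic server CPU utilization series with a synthetic storage latency series collected over the same intervals, flagging a shared trend that might be worth investigating.

```python
from math import sqrt

# Illustrative cross-domain check using synthetic sample data: correlate a
# server CPU utilization series with a storage latency series. A strong
# correlation is a hint to investigate further, not a diagnosis.

cpu_util_pct = [35, 42, 55, 61, 70, 78, 83, 90]      # hypothetical samples
storage_latency_ms = [4, 5, 7, 8, 11, 13, 15, 19]    # hypothetical samples

def pearson(xs, ys):
    # Pearson correlation coefficient computed directly from the samples.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(cpu_util_pct, storage_latency_ms)
print(f"CPU vs storage latency correlation: r = {r:.2f}")
```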

Publish date: 10/29/14
Profiles/Reports

Free Report: Galileo's Cross-Domain Insight Served via Cloud

Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general-purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively, and many just gather dust for a number of reasons. But, if only because of the ongoing cost of keeping management tools current, only the solutions that actually get used are worth having.

When it comes to picking which tool to use day-to-day, what matters is not the theory of what it could do, but the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the many things an admin is responsible for, assuring performance requires the most management solution support. Performance-related tasks include checking on the health of resources the admin is responsible for, improving utilization, finding lurking or trending issues in order to head off disastrous problems later, working with other IT folks to diagnose and isolate service-impacting issues, planning new activities, and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.

Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across IT domains (e.g. servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.

There is room for a new style of system management that is agile, insightful, and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer, with its cloud-hosted collection and analysis, helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, implements and scales easily, fosters communication, and focuses on enabling the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.

Publish date: 01/01/15
Resources

Vendor Panel: How to Avoid Drowning in your Big Data Lake

We know big data is growing and represents a huge opportunity to mine competitive business value. Some even propose landing all corporate data first in a big data lake to make it available for multiple downstream uses. And soon, all your data could be thought of as big data. But what happens with enterprise storage? How do we protect that big data and apply corporate storage governance? How do we support storage workflows on the back end? In this panel, we'll discuss some of the hottest trends and newest solutions for storing, protecting, and leveraging big data in the corporate IT data center.

Presenters:
*Mike Matchett, Taneja Group, Senior Analyst & Consultant (Moderator)
*Anant Chintamaneni, BlueData, Vice President of Products
*Vincent Hsu, IBM, Fellow and Storage CTO
*Richard McDougall, VMware, CTO Storage & Availability

  • Premiered: 03/11/15
  • Location: OnDemand
  • Speaker(s): Moderator: Mike Matchett, Senior Analyst, Taneja Group
Topic(s): Big Data, Enterprise Storage, Storage, Data protection, VMWare, IBM, BlueData, Mike Matchett, Vendor Panel
news

Hadoop Storage Options: Time to Ditch DAS?

Hadoop is immensely popular today because it makes big data analysis cheap and simple: you get a cluster of commodity servers and use their processors as compute nodes to do the number crunching, while their internal direct-attached storage (DAS) serves as very low-cost storage nodes.
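For readers who want to check the capacity implications on their own clusters, here is a minimal sketch (assuming a locally readable hdfs-site.xml at a hypothetical path) that reads the configured HDFS replication factor driving the DAS capacity math.

```python
import xml.etree.ElementTree as ET

# Minimal sketch: read dfs.replication from a local copy of hdfs-site.xml.
# The path below is hypothetical; adjust it to your Hadoop configuration directory.
HDFS_SITE = "/etc/hadoop/conf/hdfs-site.xml"

def get_replication_factor(path, default=3):
    # Hadoop's default replication factor is 3 when the property is not set.
    tree = ET.parse(path)
    for prop in tree.getroot().findall("property"):
        if prop.findtext("name") == "dfs.replication":
            return int(prop.findtext("value"))
    return default

print("Configured HDFS replication factor:", get_replication_factor(HDFS_SITE))
```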

  • Premiered: 02/19/15
  • Author: Taneja Group
  • Published: Infostor
Topic(s): Hadoop, Storage, DAS, Direct attached storage, Compute, SATA, HDFS, Hadoop Distributed File System, data, MapReduce, YARN, Hadoop 2, data lake, data refinery, Enterprise Storage, DR, Disaster Recovery, compliance, Security, Business Continuity, Performance, FC, Fibre Channel, SAN, NAS, Virtualization, Cloud, VM, Virtual Machine, MapR
Profiles/Reports

Acaveo Smart Information Server: Bringing Dark Data into Light

In 2009, storage accounted for about 20% of the components in a fully burdened computing infrastructure. By 2015, that share has surged to 40% (and counting) as companies pour in more and more data. And most of this data is hard-to-manage unstructured data, which typically represents 75%-80% of corporate data. This burdened IT infrastructure has two broad and serious consequences: it increases capital and operating expenses, and it cripples unstructured data management. Capital and operating expenses scale up sharply with the swelling storage tide. Today’s storage costs alone include buying and deploying storage for file shares, email, and ECMs like SharePoint. Additional services such as third-party file sharing services and cloud-based storage add to cost and complexity.

And growing storage and complexity make managing unstructured data extraordinarily difficult. A digital world is delivering more data to more applications than ever before. IT’s inability to visualize and act upon widely distributed data impacts retention, compliance, value, and security. In fact, this visibility (or invisibility) problem is so prevalent that it has gained its own stage name: dark data. Dark data plagues IT with hard-to-answer questions: What data is on those repositories? How old is it? What application does it belong to? Which users can access it?

IT may be able to answer those questions on a single storage system with file management tools. But across a massive storage infrastructure including the cloud? No. Instead, IT must do what it can to tier aging data, safely delete when possible, and try to keep up with application storage demands across the map. The status quo is not going to get any better in the face of data growth. Enterprise data is growing at 55% or more per year. The energy ramifications alone of storing that much data are sobering. Data growth is getting to the point that it is overrunning the storage budget’s capacity to pay for it. And managing that data for cost control and business processes is harder still.

Conventional wisdom would have IT simply move data to the cloud. But conventional wisdom is mistaken. The problem is not how to store all of that data – IT can solve that problem with a cloud subscription. The problem is that once stored, IT lacks the tools to intelligently manage that data where it resides.

This is where highly scalable, unstructured file management comes into the picture: the ability to find, classify, and act upon files spread throughout the storage universe. In this Product Profile we’ll present Acaveo, a file management platform that discovers and acts on data-in-place, and federates classification and search activities across the enterprise storage infrastructure. The result is highly intelligent and highly scalable file management that cuts cost and adds value to business processes across the enterprise. 
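To give a feel for what discover-and-classify means at the file level, here is a minimal sketch in plain Python (illustrative only, not Acaveo's Smart Information Server or its API) that walks a hypothetical share and buckets files by age, so that stale candidates for tiering or deletion stand out.

```python
import os
import time
from collections import Counter

# Minimal sketch of file discovery and age classification. The share path is
# hypothetical; the buckets are arbitrary examples of an aging policy.
ROOT = "/data/fileshare"
YEAR = 365 * 24 * 3600

def age_bucket(mtime, now):
    age = now - mtime
    if age > 3 * YEAR:
        return "stale (>3y)"
    if age > YEAR:
        return "aging (1-3y)"
    return "active (<1y)"

now = time.time()
buckets = Counter()
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        try:
            st = os.stat(os.path.join(dirpath, name))
        except OSError:
            continue  # skip files that disappear or deny access mid-scan
        buckets[age_bucket(st.st_mtime, now)] += 1

for bucket, count in buckets.most_common():
    print(f"{bucket:14s} {count} files")
```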

Publish date: 02/27/15
news

Storage: The Next Generation

Qumulo demonstrates its new file system, the heart of its software-defined storage product.

  • Premiered: 03/25/15
  • Author: Taneja Group
  • Published: Virtualization Review
Topic(s): Storage, Qumulo, Qumulo Core, Qumulo Scalable File System, QSFS, software-defined, data-aware, scale-out, NAS, Enterprise Storage, File System, Datacenter, hybrid storage, Hybrid Array, SDS, software-defined storage, SSD, Flash, HDD, API, VM, Virtual Machine, scalability, analytics
news

Are data reduction techniques essential in VDI environments?

Data reduction methods can help shrink your storage footprint and costs. Here are some guidelines for incorporating these approaches into your VDI environment.
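As a rough illustration of why deduplication pays off for largely identical desktop images, the sketch below hashes fixed-size blocks of synthetic image data and reports how many are unique; the data and the resulting ratio are illustrative, not VDI benchmark results.

```python
import hashlib
import os

# Illustrative block-level deduplication estimate on synthetic data: VDI
# desktop images typically share most of their blocks, so unique blocks are
# far fewer than total blocks. Figures here are synthetic, not a benchmark.
BLOCK_SIZE = 4096

# Ten nearly identical "desktop images": a shared random base plus a small
# per-user delta appended to each copy.
base_image = os.urandom(BLOCK_SIZE * 50)
images = [base_image + f"user-{i}-profile".encode() * 16 for i in range(10)]

unique_blocks = set()
total_blocks = 0
for image in images:
    for offset in range(0, len(image), BLOCK_SIZE):
        block = image[offset:offset + BLOCK_SIZE]
        total_blocks += 1
        unique_blocks.add(hashlib.sha256(block).hexdigest())

print(f"Total blocks: {total_blocks}, unique blocks: {len(unique_blocks)}")
print(f"Approximate dedupe ratio: {total_blocks / len(unique_blocks):.1f}:1")
```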

  • Premiered: 04/13/15
  • Author: Taneja Group
  • Published: TechTarget: Search Virtual Storage
Topic(s): Tom Fenton, TechTarget, Data reduction, VDI, virtual desktop, Virtual Desktop Infrastructure, Storage, Enterprise Storage, Data Storage, Deduplication, data compression, Virtualization
Profiles/Reports

Redefining the Economics of Enterprise Storage (2015 Update)

Enterprise storage has long delivered superb levels of performance, availability, scalability, and data management. But it has always come at an exceptional price, which has put enterprise storage out of reach for many use cases and customers.

Most recently, Dell introduced a new, small-footprint storage array – the Dell Storage SC Series powered by Compellent technology – that brings proven Dell Compellent software to Intel-based hardware in an all-new form factor. The SC4020 is also the densest Compellent product yet, an all-in-one storage array that packs 24 drive bays and dual controllers into only 2 rack units. While the Intel-powered SC4020 offers more modest scalability than other current Compellent products, it marks a radical shift in the pricing of Dell’s enterprise technology, aiming to open up Dell Compellent storage to an entire market of smaller customers as well as large-customer use cases where enterprise storage was previously too expensive.

Publish date: 06/30/15
news

INFINIDAT Expands InfiniBox Enterprise Storage Family, Introduces Multi-Petabyte Scale Unified

INFINIDAT, a leader in high performance, highly available enterprise storage solutions, announces the expansion of its revolutionary InfiniBox family of storage arrays with the addition of two new capabilities and a new midrange model.

  • Premiered: 09/21/15
  • Author: Taneja Group
  • Published: Business Wire
Topic(s): Infinidat, High Performance, high availability, Enterprise Storage, Storage, NAS, Enterprise, Scale, Unified Storage, Snapshots, Snapshot, Block Storage, SSD, Flash, InfiniBox, Cloud, Cloud Storage, SAN, Capacity, Arun Taneja
news / Blog

Is 7-9's the New Standard For Enterprise Storage? Infinidat Just Keeps Running

What difference does 7-9's make to storage operations?

  • Premiered: 09/29/15
  • Author: Mike Matchett
Topic(s): Infinidat, high availability, 7-9's, Enterprise Storage
Profiles/Reports

Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard

Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of reliability, at scale. That's a goal of roughly three seconds of anticipated unavailability per year – less downtime than most data centers themselves can achieve.

Data availability is the key attribute enterprises need most to maximize their enterprise storage value, especially as data volumes grow to ever-larger scales. Yet traditional enterprise storage solutions aren’t keeping pace with the growing need for greater than the oft-touted 5-9’s of storage reliability, instead deferring to layered-on methods like additional replication copies, which can drive up latency and cost, or settling for cold tiering, which saps performance and reduces accessibility.

Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to fall down due to longer and longer disk rebuild times that create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or at best, default) array designs, software issues, and well-intentioned but often fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.

In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.
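The arithmetic behind the nines is easy to check; the short sketch below works out the annual downtime implied by 5-9's, 6-9's, and 7-9's of availability.

```python
# Annual downtime implied by a given number of "nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds(nines: int) -> float:
    availability = 1 - 10 ** (-nines)      # e.g. 7 nines -> 0.9999999
    return SECONDS_PER_YEAR * (1 - availability)

for nines in (5, 6, 7):
    print(f"{nines}-9's availability: ~{downtime_seconds(nines):.1f} seconds of downtime per year")
```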

Publish date: 09/29/15
news

Get the most from cloud-based storage services

Enterprise data storage managers are in a position to help their companies get more from cloud storage services.

  • Premiered: 05/02/16
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Enterprise Storage, Storage, Cloud, Data Storage, cold storage, cold data, Archive, Hybrid Cloud, HCS, Hybrid Cloud Storage, SaaS, Big Data, tiering, CTERA, Private Cloud, ClearSky Data, hyper-converged, WAN Optimization, WANO, Riverbed, SteelFusion, IBM, EMC, Microsoft, Microsoft Azure, StorSimple, Oracle
news

Oracle cloud storage embraces ZFS Storage Appliance

New Oracle operating system update enables ZFS Storage Appliance to transfer file- and block-based data to Oracle Storage Cloud without an external cloud gateway.

  • Premiered: 03/29/17
  • Author: Taneja Group
  • Published: TechTarget: Search Cloud Storage
Topic(s): Oracle, Oracle ZFS, Cloud, Cloud Storage, cloud converged storage, converged storage, Storage, Public Cloud, Converged Infrastructure, API, OpenStack, OpenStack Swift, Amazon, Google, Amazon S3, Microsoft Azure, AWS, IBM, Hybrid Cloud, DRAM, all-flash, All Flash, SSD, Mike Matchett, Dell EMC, Enterprise Storage, scalable, scalability, Deduplication, Compression
news

Smarter storage starts with analytics

Storage smartens up to keep pace with data-intensive business applications by embedding operational analytics capabilities.

  • Premiered: 04/03/17
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Storage, storage analytics, analytics, Artificial Intelligence, AI, software-defined, Flash, SSD, In-Memory, Hybrid Cloud, hybrid cloud tiering, cloud tiering, Cloud, Performance, Security, Metadata, API, HPE, BMC, SaaS, remote storage, NetApp, Cassandra, HBase, Spark, Big Data, big data analytics, InfoSight, cluster, VM-aware
Profiles/Reports

Providing Secondary Storage at Cloud-Scale: Cohesity Performance Scales Linearly in 256 Node Test

Are we doomed to drown in our own data? Enterprise storage is growing fast enough under today’s data demands to threaten service levels, challenge IT expertise, and often eat up a majority of new IT spending. And the amount of competitively useful data could grow orders of magnitude faster with new trends in web-scale applications, IoT, and big data. On top of that, assuring full enterprise data protection requirements with traditional, fragmented secondary storage designs means that more than a dozen copies of important data are often inefficiently eating up even more capacity at an alarming rate.

Cohesity, a feature-rich secondary storage data management solution based on a core parallel file system, promises to completely break through traditional secondary storage scaling limitations with its inherently scale-out approach. This is a big claim, and so we’ve executed a validation test of Cohesity under massive scaling – pushing their storage cluster to sizes far past what they’ve previously publicly tested.

The result is striking (though perhaps not internally surprising given their engineering design goals). We documented linearly accumulating performance across several types of IO, all the way up to our cluster test target of 256 Cohesity storage nodes. Other secondary storage designs can be expected to drop off at a far earlier point, either hitting a hard constraint (e.g. a limited cluster size) or suffering severely diminishing returns in performance.

We also took the opportunity to validate some important storage requirements at scale. For example, we tested that Cohesity ensured global file consistency and full cluster resilience even at the largest scale of deployment. Given the overall test performance validated in this report, Cohesity has certainly demonstrated that it is inherently a web-scale system that can deliver advanced secondary storage functionality at any practical enterprise scale of deployment.
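To show what "linearly accumulating performance" means in practice, the sketch below fits a straight line to throughput samples taken at several cluster sizes (synthetic numbers, not Cohesity's measured results) and reports how closely they track linear scaling.

```python
# Check how closely throughput scales linearly with node count.
# The samples below are synthetic, for illustration only -- they are not
# the measured Cohesity results from the validation test.
nodes =      [32,   64,   128,  192,  256]
throughput = [10.1, 20.3, 40.2, 60.8, 80.5]   # e.g. GB/s, hypothetical

n = len(nodes)
mean_x = sum(nodes) / n
mean_y = sum(throughput) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(nodes, throughput))
den = sum((x - mean_x) ** 2 for x in nodes)
slope = num / den
intercept = mean_y - slope * mean_x

# R^2: how much of the variance a straight line explains (1.0 = perfectly linear).
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(nodes, throughput))
ss_tot = sum((y - mean_y) ** 2 for y in throughput)
r_squared = 1 - ss_res / ss_tot

print(f"Fitted throughput per node: {slope:.3f} (R^2 = {r_squared:.4f})")
```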

Publish date: 07/31/17