Taneja Group | scalability
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: scalability

Profiles/Reports

The Cost of Performance

What’s an IO worth to you? Is it worth more than a gigabyte? Less? That’s a hard question for many IT and business professionals to even begin to answer, yet we often see it bandied about. The comparison certainly has merit; it just isn’t easily understood. In this industry article, Taneja Group looks at just how large the cost of performance is and, with that understanding in mind, examines two examples of new solutions and what they suggest about a changing way to get cost-effective performance inside the data center walls.
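
One way to frame the question is simple unit-cost arithmetic: dollars per gigabyte for capacity versus dollars per IOPS for performance. The minimal sketch below makes the comparison concrete; the device prices and specs are illustrative assumptions, not figures from the article.

```python
# Cost-per-GB vs. cost-per-IOPS for two hypothetical devices.
# All prices and performance numbers are assumptions for illustration.

devices = {
    # name: (price in USD, capacity in GB, sustained IOPS)
    "15K HDD": (400.0, 600, 200),
    "SSD":     (1200.0, 400, 50_000),
}

for name, (price, capacity_gb, iops) in devices.items():
    print(f"{name:8s} ${price / capacity_gb:6.2f}/GB   ${price / iops:7.4f}/IOPS")

# Typical result: the SSD costs several times more per gigabyte but
# orders of magnitude less per IO, which is the heart of the trade-off.
```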

Publish date: 04/22/11
news

Cloud has a silver lining for ROBO storage

Providing and managing ROBO storage can be a challenge, but a hybrid approach using a combination of local and cloud-based storage may be the best solution.

  • Premiered: 02/25/13
  • Author: Mike Matchett
  • Published: TechTarget: SearchDataBackup.com
Topic(s): ROBO, Storage, WAN, Cloud, scalability
Profiles/Reports

High-End Storage for Everyone: Exablox OneBlox

Reliable storage is the lifeblood of every data-driven business, and operational storage capabilities like non-disruptive scalability, continuous data protection, capacity optimization, and disaster recovery are not just desired, but required. Yet enterprise-class storage features have long been out of reach of organizations that don't have enterprise-sized budgets, storage experts and large data centers. Instead, they make do with low-end disk arrays or even just a box of disks patched together with a minimal amount of data protection in the form of manual backups. The problem is that disks fail, organizations change, and data continues to grow. Organizations that pile up disks under the desktop are risking significant business failure, while those that pay up for traditional arrays or cloud storage incur significant cost and management overhead.


Stepping up to meet these advanced storage requirements confronts growing organizations with big adoption hurdles, not the least of which is cost, both OPEX and CAPEX. Far too many organizations struggle along with high-risk storage or feel forced to pour significant energy, cost, and staff time into acquiring, deploying, and operating high-touch storage arrays with layers of complex add-on software. Even larger enterprises with expert storage gurus and big data centers can feel the weight of managing complex SANs for departmental, ROBO, and other practical rubber-meets-the-road storage scenarios. What’s really needed is a new approach to storage: an affordable, expandable array solution with advanced storage capabilities baked in. Ideally it should be simpler to operate than setting up a file system on raw drives, and it should be available at a justifiable cost for even small data-driven businesses.


In this solution brief we look at what SMB and departmental storage buyers should both require and expect from storage solutions to meet their business goals, and how traditional mid-market storage based on old technologies can fall short. We then introduce Exablox’s new OneBlox storage array to highlight how purposefully designing storage from the ground up can lead to a simple but powerful hardware design and software architecture featuring built-in high availability, easy scalability, and strong data protection. Along the way we’ll see how two real-world OneBlox customers experience its benefits, cost effectiveness and ease of management in live deployments.

Publish date: 04/24/13
news

Taneja Group Dubs Kaminario a 'Force to Be Reckoned With' After Testing Newest Generation K2 All-Flash Array

Kaminario, the leading scale-out flash array provider, today announced the results of the Taneja Group's independent report, "Kaminario K2: The Truly Enterprise-Ready SSD Storage Array."

  • Premiered: 05/15/13
  • Author: Taneja Group
  • Published: WSJ
Topic(s): Virtualization, Storage, SSD, Flash, scalability, solid state, scale-out
news

Kaminario Dubbed ‘A Force to Be Reckoned With’ in Latest Taneja Group Report

It is no secret that scalability has long been the elephant in the room for most solid-state storage vendors.

  • Premiered: 05/15/13
  • Author: Taneja Group
  • Published: The I/O Storm
Topic(s): scalability, Virtualization, Kaminario, SSD, Flash, flash storage, scale-out
news

SimpliVity is Named to Red Herring’s Top 100 North America

SimpliVity Corporation, developer of OmniCube™, the market’s only globally federated hyperconverged infrastructure system, today announced that it has been selected from thousands of companies as a winner of Red Herring's Top 100 award.

  • Premiered: 06/05/13
  • Author: Taneja Group
  • Published: Reuters
Topic(s): SimpliVity, hyperconvergence, Deduplication, Cloud, SSD, Data protection, Data Center, scalability, Mobility, Software Defined Data Center
Profiles/Reports

Enterprise File Collaboration Market Landscape

Collaboration is a huge concept; even narrowed down to enterprise file collaboration (EFC), it is still a big undertaking. Many vendors use “collaboration” in their marketing materials, yet they mean very different things by it, ranging from simple business interaction to sophisticated groupware to wide-scale data sharing and syncing. The result is a good deal of market confusion.

Frankly, vendors cannot afford massive customer confusion, because selling file collaboration into the enterprise is already an uphill battle. First, customers, the business end-users, are resistant to changing their Dropbox and Dropbox-like file sharing applications. As far as users are concerned, sharing already works just fine between their own devices and small teams.

IT is, or should be, very concerned about this kind of consumer-level file sharing. But IT faces a battle when it attempts to wean thousands of end-users off Dropbox on their personal devices. There must be a clear business advantage and genuine usability for users who are required to adopt a corporate file sharing application on their own devices.

IT must also have good reasons to deploy corporate file sharing using the cloud. From IT’s perspective the Dropboxes of the world are fueling the BYOD (Bring Your Own Device) phenomenon, and consumer-level file collaboration applications need to be replaced with an enterprise-scale application and a robust management console. However, while IT may be anxious about BYOD and insecure file sharing, it is not usually the most pressing item on a full agenda. IT needs to understand how an EFC solution can solve a very large problem, and why to take advantage of the solution now.

What is the solution? Enterprise file collaboration (EFC) with: 1) high scalability, 2) security, 3) control, 4) usability, and 5) compliance. In this landscape report we will discuss these five factors and the main customer drivers for this level of enterprise file collaboration.
 
Finally, we will discuss the leading vendors that offer enterprise file collaboration products and see how they stack up against our definition.

Publish date: 06/06/13
news

SanDisk Enhances FlashSoft Software for Server-Side Solid State Caching

SanDisk Corporation, a global leader in flash memory storage solutions, today announced a significant upgrade to its enterprise software portfolio with a new version of its FlashSoft™ software for Windows Server® and Linux operating systems. FlashSoft 3.2 server-side solid-state caching software complements and enhances the performance of existing applications, servers and storage systems.

  • Premiered: 06/18/13
  • Author: Taneja Group
  • Published: BusinessWire
Topic(s): SanDisk, FlashSoft, Flash, flash cache, Caching, SSD, DAS, SAN, SAS, SATA, scalability, hot data
Profiles/Reports

HP StoreOnce Boldly Goes Where No Deduplication Has Gone Before

Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.

However, serious pressures are forcing changes to backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: backup silos are being replaced with expanded data protection platforms. These comprehensive systems back up data from multiple sources to distributed storage targets, with single-console management for increased control.

Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe, sometimes called Dedupe 1.0, is suited to backup silos. Moving deduped data outside the system requires rehydrating it, which costs performance and capacity between the data center, ROBO sites, DR sites and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
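
The rehydration penalty is easiest to see in a toy model of content-addressed dedupe. The sketch below is an illustration only, not HP StoreOnce’s actual algorithm: each unique chunk is stored once under its hash, and any system that does not share the chunk index must be sent the rehydrated, full-size stream.

```python
# Toy content-addressed deduplication with fixed-size chunks.
import hashlib

CHUNK = 4096  # real dedupe typically uses variable-size chunking

def dedupe_store(data: bytes, store: dict) -> list:
    """Split data into chunks, store each unique chunk once by hash,
    and return the 'recipe' (ordered list of chunk hashes)."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)  # duplicate chunks cost nothing extra
        recipe.append(h)
    return recipe

def rehydrate(recipe: list, store: dict) -> bytes:
    """Reassemble the original stream. Moving data to a system that
    does not share `store` forces this step, restoring full size."""
    return b"".join(store[h] for h in recipe)

store = {}
backup = b"A" * 8192 + b"B" * 4096 + b"A" * 8192  # repetitive data
recipe = dedupe_store(backup, store)
stored = sum(len(c) for c in store.values())
print(f"{len(backup)} bytes in, {stored} bytes stored, "
      f"{len(rehydrate(recipe, store))} bytes if rehydrated")
# => 20480 bytes in, 8192 bytes stored, 20480 bytes if rehydrated
```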

A few vendors have introduced new dedupe technologies, but most are still tied to specific physical backup storage systems and appliances. There is nothing wrong with leveraging hardware and software together to increase sales, but storage system-specific dedupe means that data must be rehydrated whenever it moves beyond that system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.

Federating dedupe across systems goes a long way toward solving that problem. HP StoreOnce extends consistent dedupe across the infrastructure. Only HP implements the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms, and between source and target machines, without the need to rehydrate.

This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting those challenges, how HP is achieving its vision of federated dedupe with StoreOnce, and what HP’s StoreOnce VSA announcement means to backup service providers, enterprise ROBO, and SMB customers.

Publish date: 06/20/13
Profiles/Reports

Dell AppAssure 5: Unified Platform for Business Resiliency

Backup applications with large user bases have been vendor cash cows because their customers are reluctant to change such deeply embedded products. As long as the backup worked, it was out of sight and out of mind.
But the field is rapidly changing.

The push to virtualize applications left traditional backup foundering. In the virtual arena, traditional backup suffered from heavy operational overhead at the server, application host, network, and storage levels. The growing number of VMs and the volume of virtualized data had a serious impact on storage resources. For example, each VMDK file represented an entire VM file system image, typically at least 2GB in size. Such file sizes caused problems for bandwidth, monitoring, and storage resources.
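
Back-of-the-envelope arithmetic shows how quickly full-image VM backups add up; the VM count and throughput below are assumptions for illustration, with the 2GB figure taken as the minimum image size cited above.

```python
# Rough backup window for full VM image backups (illustrative inputs).
vm_count      = 200   # virtual machines to protect (assumed)
vmdk_gb       = 2     # ~2GB minimum image size per VMDK
throughput_mb = 100   # effective backup throughput in MB/s (assumed)

total_gb = vm_count * vmdk_gb
hours = total_gb * 1024 / throughput_mb / 3600
print(f"{total_gb} GB of images -> {hours:.1f} hour backup window")
# => 400 GB of images -> 1.1 hour backup window, and that uses the
#    *minimum* image size; production VMDKs are often tens of GB each.
```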

In response, some vendors developed innovative virtual backup products. They made virtual backup much more resource-efficient and easily manageable. Increased performance shrank backup window requirements, provided effective RPO and RTO, simplified the backup process and improved recovery integrity.  These tools changed the virtual data protection landscape for the better.

However, many of these startups offered limited solutions that supported only a single hypervisor type and a handful of physical machines. This left virtual and physical networks essentially siloed, not to mention the problem of multiple point products creating even more silos within both environments. Managing cross-domain data protection with a variety of point products became inefficient and costly for IT.

Traditional backup makers also scrambled to add virtualization backup support and succeeded up to a point, but only to a point. Their backup code bases were written well before the mass appearance of the cloud and virtualization, and retrofitting existing applications only went so far toward providing scalability and integration. Nor could they solve a problem that has plagued IT since the early days of backup tape: restore assurance. It has always been risky to discover after the fact that the backup you depended on is unusable for recovery. With data sets doubling every 18 months, the risk of data loss has risen significantly.

More modern backup solves some of these problems but causes new ones. Modern backup offers automated scheduling, manual operations, policy setting, multiple types of backup targets, replication schemes, application optimization, and more. These features are useful, but they are also costly and resource-hungry: roughly 30% of storage costs go to IT operations alone. Another problem with these new features is their complexity. It is difficult to optimize and monitor the data protection environment, leading to a conservatively estimated failure rate of about 20% for backup and recovery jobs.

In addition, most data protection products offer average-to-poor awareness of, and integration into, their backup tape and disk targets. This makes it difficult to set and test Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs) for business applications. The last thing IT wants is to cripple application recovery, but it is challenging to set meaningful RTOs and RPOs across multiple environments and applications, and extremely difficult to test them.
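
As a small illustration of why RPO settings need testing, the sketch below checks whether a backup schedule can meet a stated RPO at all; the applications, intervals and objectives are hypothetical.

```python
# Toy RPO check: under a periodic backup schedule, the worst-case data
# loss is one full interval, which must not exceed the stated RPO.
from datetime import timedelta

apps = {
    # app: (backup interval, stated RPO) -- hypothetical values
    "ERP":       (timedelta(hours=24), timedelta(hours=4)),
    "FileShare": (timedelta(hours=4),  timedelta(hours=8)),
}

for app, (interval, rpo) in apps.items():
    status = "OK" if interval <= rpo else "RPO VIOLATED"
    print(f"{app}: backup every {interval} vs RPO {rpo} -> {status}")
# ERP's daily backup can never meet a 4-hour RPO, no matter how well
# the jobs run; exactly the kind of mismatch that testing should surface.
```
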
Even newer VM backup products are inadequate for modern enterprise data centers running critical applications across both physical and virtual layers. Combine this with complex, mixed IT environments and you have a very serious challenge for IT professionals charged with protecting data and application productivity.

What we are seeing now is next-generation data protection that protects both virtual and physical environments in one flexible platform. Dell AppAssure is a pioneer in this promising field. AppAssure is rewriting the data protection book, moving from limited point products to a highly agile data center protection platform with continual backup, instantaneous restore, backup assurance and a host of additional benefits.

Publish date: 06/27/13
news

Has Flash in Server Peaked?

The flash storage market is booming, and one approach in particular, server-based flash, has led the pack in user adoption. In a market pioneered and largely developed by a single enterprising vendor, server-side flash has achieved significant mindshare among IT practitioners in the space of a few short years. PCIe SSDs have become commonplace in servers, and are being used to accelerate a whole host of performance-sensitive applications, from databases to virtualization to cloud.

  • Premiered: 06/28/13
  • Author: Taneja Group
  • Published: StorageNewsletter.com
Topic(s): Storage, Flash, SSD, Virtualization, scalability, Hybrid Array, Storage Acceleration
news

Nutanix Inc. ships larger, smaller converged storage platforms

Nutanix Inc. today expanded its line of converged storage systems by launching an entry-level platform for small enterprises and branch offices, and a data center platform that handles more data-intensive applications than its earlier systems.

  • Premiered: 06/18/13
  • Author: Taneja Group
  • Published: Tech Target: Search Virtual Storage
Topic(s): Nutanix, Data Center, ROBO, Virtualization, Storage, hyper-converge, SSD, Flash, flash storage, scalability
news

Cloud-integrated storage appliances link on-premises storage to cloud

Cloud-integrated storage appliances allow hybrid storage configurations that seamlessly link data center storage with cost-effective, scalable cloud storage.

  • Premiered: 09/12/13
  • Author: Arun Taneja
  • Published: Tech Target: Search Cloud Storage
Topic(s): Cloud, Hybrid, Hybrid Cloud, hybrid storage, scalability, Backup, DR, Disaster Recovery, CIS, Archiving, Primary Storage
Profiles/Reports

Astute Networked Flash ViSX: Application Performance Achieved Cost Effectively

In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.

When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.

At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.
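
The network constraint is easy to quantify: the iSCSI link’s line rate caps whatever the flash behind it can deliver. Below is a minimal sketch with assumed throughput figures, not measured numbers.

```python
# Link bandwidth vs. flash throughput (illustrative numbers only).

def gbps_to_mbs(gbps: float) -> float:
    """Convert line rate in Gbit/s to MB/s, ignoring protocol overhead."""
    return gbps * 1000 / 8

links = {"1 GbE iSCSI": 1, "10 GbE iSCSI": 10}
flash_mbs = 2000  # assumed aggregate flash array throughput, MB/s

for name, gbps in links.items():
    link_mbs = gbps_to_mbs(gbps)
    verdict = "network-bound" if link_mbs < flash_mbs else "flash-bound"
    print(f"{name}: {link_mbs:.0f} MB/s link vs {flash_mbs} MB/s flash -> {verdict}")
# Even 10 GbE (~1250 MB/s) can throttle an array capable of 2 GB/s,
# which is why placement between server and array at wire speed matters.
```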

Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, but one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to the server, virtual server, storage or application layers. Customers can invest in ViSX before their performance pain points grow too big, or before they've gone down the road of breaking their infrastructure with a tactical solution.

Publish date: 08/31/13
news

Emerging Trends in Software Defined Storage

“Software defined pretty-much-anything” is getting a lot of attention in the IT trade press. It seems new and different and promises to solve everyone’s problems around prioritizing applications, sharing storage across massive distributed environments and optimizing computing resources around business data needs.

  • Premiered: 10/23/13
  • Author: Taneja Group
  • Published: InfoStor
Topic(s): software-defined, SDS, IO Optimization, Virtual storage, Virtualization, Storage, scale-out, VSA, EMC, ViPR, IBM SVC, DataCore, StoreVirtual, HP, VMWare, VSAN, scalability, Mobility
news

Storage infrastructure management is still elusive

We've been on a multi-decade crusade to address performance and basic storage management tasks, such as protecting data in place and scaling and expanding our data storage systems to meet new requirements. But once performance, scaling and expansion issues are addressed, the last major challenge in the data center stands revealed: storage management.

  • Premiered: 12/17/13
  • Author: Taneja Group
  • Published: Tech Target: Search Storage
Topic(s): Jeff Boles, scalability, Storage, storage infrastructure, VM, Virtualization, Virtual Infrastructure, Virtual Infrastructure Management, converged storage, Converged Infrastructure, hyper-converged, HP, Hitachi, IBM, Nutanix, SimpliVity, VSA, FalconStor, Nexenta, StorMagic, VMWare, VSAN, Gridstore, Tintri, Virtual Machine, SDS, software defined
Profiles/Reports

Storage That Turns Big Data Into a Bigger Asset: Data-Defined Storage With Tarmin GridBank

UPDATED FOR 2014: Today’s storage industry is as stubbornly media-centric as it has always been: SAN, NAS, DAS; disk, cloud, tape. This centricity forces IT to deal with storage infrastructure on media-centric terms. But the storage infrastructure should really serve data to customers, not media; it’s the data that yields business value, while the media should be an internal IT architectural choice.

Media-focused storage solutions support the business only indirectly, by providing optimized storage infrastructure for data. Intelligent data services, on the other hand, provide direct business value by optimizing data utility, availability, and management. The shift from traditional thinking here is about providing logically ideal data storage for the people who own and use the data first, while freeing up the underlying storage infrastructure design to be optimized for efficiency as desired. Ideal data storage would be global in access and scalability, secure and resilient, and would inherently support data-driven management and applications.

Done well, this data-centric approach would yield significant competitive advantage by leveraging an enterprise’s valuable intellectual property: its vast and growing amounts of unstructured data. If this can be done by building on the company’s existing data storage and best practices, the business can quickly increase profitability, achieve faster time-to-market, and gain tremendous agility for innovation and competitiveness.

Tarmin, with its GridBank Data Management Platform, is a leading proponent of the data-centric approach. It is firmly focused on managing data for global accessibility, protection and strategic value. In this product profile, we’ll explore how a data-centric approach drives business value. We’ll then examine how GridBank was architected expressly around the concept that data storage should be a means of extracting business value from data, not a dead-end data dump.

Publish date: 02/17/14
news

External storage might make sense for Hadoop

Using Hadoop to drive big data analytics doesn't necessarily mean building clusters of distributed storage -- good old external storage might be a better choice.

  • Premiered: 02/28/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Storage
Topic(s): Hadoop, Big Data, analytics, SAN, NAS, scale-out, HDFS, MapReduce, DAS, RAID, replication, Sentry, Accumulo, scalability
news

What does the next big thing in technology mean for the data center?

There are plenty of technologies touted as the next big thing. Big data, flash, high-performance computing, in-memory processing, NoSQL, virtualization, convergence, and software-defined whatever all represent wild new forces that could bring real disruption, but also big opportunities, to your local data center.

  • Premiered: 03/19/14
  • Author: Mike Matchett
  • Published: Tech Target: Search Data Center
Topic(s): data, Data Center, Big Data, Storage, Flash, SSD, HPC, High Performance Computing, NoSQL, Virtualization, convergence, software-defined, Hadoop, scale-out, Apache, analytics, scalability, Converged Infrastructure, hyper convergence, Platform as a Service, PaaS, Hypervisor, Hybrid
news

FC technology use still leads despite Ethernet nipping at its heels

At least two technologies have tried to overtake Fibre Channel (FC) in the past decade: Ethernet and InfiniBand. Both have failed and FC use continues unabated. Why is that happening, and what's the future of FC?

  • Premiered: 04/08/14
  • Author: Arun Taneja
  • Published: Tech Target: Search Storage
Topic(s): Arun Taneja, FC, Fibre Channel, SAN, Storage, SCSI, iSCSI, InfiniBand, scalability, QoS, Ethernet, NFS, CIFS, lossless, FCoE