
Items Tagged: complexity

Profiles/Reports

Flash Virtualization System: Powerful but Cost-Effective Acceleration for VMware Workloads

Server virtualization can bring your business significant benefits, especially in the initial stages of deployment. Companies we speak with in the early stages of adoption often cite more flexible and automated management of both infrastructure and apps, along with CAPEX and OPEX savings from workload consolidation. However, as more apps are virtualized, many of these organizations encounter significant storage performance challenges. As more virtualized workloads are consolidated on a given host, their aggregate IO demands put tremendous pressure on shared storage, server and networking resources, a strain further exacerbated by the IO blender effect, in which the IO streams processed by the hypervisor become random and unpredictable. Together, these conditions reduce host productivity (e.g., by lowering data and transactional throughput and increasing application response time) and may prevent you from meeting performance requirements for your business-critical applications.
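
To make the IO blender effect concrete, here is a minimal Python sketch (our own illustration, not tied to any vendor's tooling) of how several VMs' perfectly sequential block streams, once funneled through a hypervisor that services whichever request arrives next, reach shared storage as an effectively random pattern:

```python
import random

def vm_stream(start_lba, length):
    """One VM's sequential IO: consecutive logical block addresses."""
    return [start_lba + i for i in range(length)]

# Three VMs, each issuing a perfectly sequential stream in its own region.
streams = [vm_stream(base, 8) for base in (0, 1000, 2000)]

# The hypervisor services whichever VM's request arrives next, so the
# per-VM streams interleave unpredictably at the shared-storage layer.
blended = []
pending = [list(s) for s in streams]
while any(pending):
    s = random.choice([p for p in pending if p])
    blended.append(s.pop(0))

print(blended)  # the sequential runs are gone; the array sees a near-random pattern
```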

How can you best address these storage performance challenges in your virtual infrastructure? Adding solid-state or flash storage will provide a significant performance boost, but where should it be deployed to give your critical applications the biggest improvement per dollar spent? How can you ensure that the additional storage fits effortlessly into your existing environment, without requiring disruptive and costly changes to your infrastructure, applications, or management capabilities?

We believe that server-side acceleration provides the best answer to all of these questions. In particular, we like server-side solutions that combine intelligent caching with high-performance PCIe flash, integrate tightly with the virtualization platform, and enable the cache to be shared across multiple hosts or an entire cluster. The Flash Virtualization System from SanDisk is an outstanding example of such a solution. As we’ll see, Flash Virtualization provides a shared cache resource across a cluster of hosts in a VMware environment, improving application performance and response time without disrupting primary storage or host servers. This solution will allow you to satisfy SLAs and keep your users happy, without breaking the bank.
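
The mechanics of host-side caching are easy to sketch. Below is a minimal LRU read cache in Python, purely our illustration of the general technique and not SanDisk's Flash Virtualization implementation; `backend_read` is a hypothetical stand-in for a fetch from shared primary storage:

```python
from collections import OrderedDict

class HostFlashCache:
    """Minimal LRU read cache: hot blocks are served from local flash,
    misses fall through to shared primary storage."""

    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read   # callable: lba -> data (hypothetical)
        self.blocks = OrderedDict()        # lba -> cached block data

    def read(self, lba):
        if lba in self.blocks:                 # hit: skip the trip to primary storage
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        data = self.backend_read(lba)          # miss: fetch from primary storage
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict the least recently used block
        return data
```

A cluster-wide cache of the kind described above goes further than this per-host sketch: the cached working set stays warm and reachable when a VM moves between hosts, which is what keeps performance steady across a VMware cluster.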

Publish date: 06/14/16
News

Software Defined Storage: Changing Data from ‘State-ful’ to Stateless

Hedvig, the Santa Clara-based software-defined storage (SDS) start-up, now in its third year, has announced an infusion of a cool $21.5 million in Series C venture funding, as attention increasingly turns to the fragmented SDS market, which is predicted to surpass $7 billion by 2020.

  • Premiered: 03/04/17
  • Author: Taneja Group
  • Published: Enterprise Tech
Topic(s): Jeff Kato, Hedvig, software-defined storage (SDS), distributed storage, cloud, cloud adoption, cloud storage, storage, complexity, hyperscale, hybrid cloud, public cloud, API, scalability, hypervisor, containers, virtual machines (VMs), virtualization, Docker, OpenStack, Hyper-V, VMware, primary storage
Profiles/Reports

Companies Improve Data Protection and More with Cohesity

We talked to six companies that have implemented Cohesity DataProtect and/or the Cohesity DataPlatform. When these companies evaluated Cohesity, their highest priorities were reducing storage costs and improving data protection. To truly modernize their secondary storage infrastructure, they also recognized the importance of a scalable, all-in-one solution that could both consolidate and better manage their entire secondary data environment.

Prior to implementing Cohesity, many of the companies we interviewed had significant challenges with the high cost of their secondary storage. Several factors contributed to these high costs: the need to license multiple products, inadequate storage reduction, reliance on professional services and extensive training, difficulty scaling and maintaining systems, and the use of expensive primary storage capacity for lower-performance services such as group file shares.

In addition to lower storage costs, all the companies we talked to wanted a better data protection solution. Many were struggling with slow backup speeds, insufficient recovery times and cumbersome data archival methods; solution complexity and high operational overhead were also major issues. To address these problems, companies wanted a unified data protection solution that offered better backup performance, instant data recovery, simplified management, and seamless cloud integration for long-term data retention.

Companies also wanted to improve overall secondary storage management, and they shared a common goal of bringing secondary storage workloads under one roof. Depending on their environment and operational needs, their objectives beyond data protection included providing self-service access to copies of production data for on-demand environments (such as test/dev), using secondary storage for file services, and leveraging indexing and advanced search and analytics to find out-of-place confidential data and ensure data compliance.

Cohesity customers found that the key to addressing these challenges and needs is Cohesity’s Hyperconverged Secondary Storage. Cohesity is a pioneer of Hyperconverged Secondary Storage, a new category of secondary storage built on a web-scale, distributed file system that scales linearly and provides global data deduplication, automatic indexing, advanced search and analytics, and policy-based management of all secondary storage workloads. These capabilities combine into a single system that efficiently stores, manages, and understands all data copies and workflows in a secondary storage environment, whether the data is on-premises or in the cloud. With no point products to stitch together, there is less complexity and lower licensing cost.
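
Global deduplication of this kind is typically built on content-addressed chunking. The following short Python sketch shows the general technique under simplifying assumptions (fixed-size chunks, SHA-256 fingerprints, an in-memory store); it is our illustration, not Cohesity's implementation:

```python
import hashlib

class DedupStore:
    """Content-addressed chunk store: an identical chunk, wherever it
    appears, is stored exactly once and shared by reference."""

    CHUNK = 4096  # fixed-size chunking; real systems often use variable-size chunks

    def __init__(self):
        self.chunks = {}  # SHA-256 digest -> chunk bytes

    def write(self, data):
        """Store data; return the list of chunk fingerprints (the 'recipe')."""
        recipe = []
        for off in range(0, len(data), self.CHUNK):
            chunk = data[off:off + self.CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store new chunks only
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
backup1 = store.write(b"A" * 8192 + b"B" * 4096)
backup2 = store.write(b"A" * 8192 + b"C" * 4096)  # shares the two "A" chunks
print(len(store.chunks))  # 3 unique chunks stored instead of 6
```

The fingerprint map is also what makes copies cheap: a second backup of mostly unchanged data adds only the chunks that actually changed.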

It’s a compelling value proposition, and importantly, every company we talked to stated that Cohesity has met and exceeded their expectations and has helped them rapidly evolve their data protection and overall secondary data management. To learn about each customer’s journey, we examined their business needs, their data center environment, their key challenges, the reasons they chose Cohesity, and the value they have derived. Read on to learn more about their experience.

Publish date: 04/28/17
Profiles/Reports

Providing Secondary Storage at Cloud-Scale: Cohesity Performance Scales Linearly in 256 Node Test

Are we doomed to drown in our own data? Enterprise storage is growing fast enough under today’s data demands to threaten service levels, challenge IT expertise, and often eat up a majority of new IT spending. And the amount of competitively useful data could grow orders of magnitude faster with new trends in web-scale applications, IoT and big data. On top of that, assuring full enterprise data protection with traditional, fragmented secondary storage designs means that more than a dozen copies of important data are often inefficiently eating up even more capacity at an alarming rate.

Cohesity, a feature-rich secondary storage data management solution based on a core parallel file system, promises to break through traditional secondary storage scaling limitations with its inherently scale-out approach. This is a big claim, so we executed a validation test of Cohesity under massive scaling, pushing its storage cluster far past the sizes the company had previously tested publicly.

The result is striking (though perhaps not surprising internally, given the engineering design goals). We documented linearly scaling performance across several types of IO, all the way up to our test target of 256 Cohesity storage nodes. Other secondary storage designs can be expected to fall off far earlier, either hitting a hard constraint (e.g., a limited cluster size) or suffering severely diminishing performance returns.
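
Linear scaling is easy to check with a simple efficiency ratio: aggregate throughput at N nodes divided by N times the per-node throughput of the smallest cluster measured. The Python sketch below shows only the arithmetic; the node counts and throughput values are invented placeholders, not Cohesity's measured results (those are in the report's test data):

```python
def scaling_efficiency(throughputs):
    """throughputs: {node_count: aggregate throughput}.
    An efficiency of 1.0 at every size means perfectly linear scale-out."""
    sizes = sorted(throughputs)
    base_n = sizes[0]
    per_node_baseline = throughputs[base_n] / base_n
    return {n: throughputs[n] / (n * per_node_baseline) for n in sizes}

# Placeholder numbers purely for illustration -- NOT measured results.
measured = {4: 4.0, 32: 31.5, 128: 125.0, 256: 248.0}
for n, eff in scaling_efficiency(measured).items():
    print(f"{n:>3} nodes: efficiency {eff:.2f}")
```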

We also took the opportunity to validate some important storage requirements at scale. For example, we verified that Cohesity maintained global file consistency and full cluster resilience even at the largest deployment scale. Given the overall test performance validated in this report, Cohesity has demonstrated that it is an inherently web-scale system that can deliver advanced secondary storage functionality at any practical enterprise scale.

Publish date: 07/31/17