Trusted Business Advisors, Expert Technology Analysts

Profiles/Reports

Report

Hedvig Takes Your Storage to Hybrid and Multi-Cloud

With data growth exploding and on-premises IT costs creeping ever higher, an increasing number of organizations are taking a serious look at adopting cloud infrastructure for their business data and applications. Among other things, they are attracted to benefits like near-infinite scalability, greater agility and a pay-as-you-go model for consuming IT resources. These advantages are already driving new infrastructure spending on public and private clouds, which is growing at double-digit rates even as spending on traditional, non-cloud IT infrastructure continues to decline.


While most companies we speak with are already developing cloud-native apps in Amazon Web Services (AWS) or Microsoft Azure, a much smaller number have actually deployed their typical back-end business apps in the public cloud. What’s preventing them from taking this next step? As it turns out, one of the biggest hurdles is productively deploying existing data storage in the cloud. Public clouds don’t natively support the full range of storage protocols, data services and use cases that companies’ key business apps rely on, making it difficult and less rewarding to move these workloads to the cloud. Some organizations consider reengineering their applications for cloud-native storage, but this is both costly and time consuming, and may not deliver the results they are looking for. Based on recent Taneja Group research, IT buyers want a simple path for lifting and transferring their app data to the cloud, where it can be supported for both primary and secondary use cases. They are also looking to run many workloads flexibly in a hybrid cloud deployment while maintaining the level of data security and governance they enjoy on premises.


In addition to these technical requirements, companies must also weigh potential business costs, such as the risk of getting locked into a single provider. Our research reveals that customers are increasingly concerned about this risk, which is exacerbated by a lack of data mobility among various on-premises and public cloud infrastructures.


Fortunately, the founding team at Hedvig understands these customer needs and set out more than five years ago to address them. The result of their initiative is the Hedvig Distributed Storage Platform (DSP), a unified programmable data fabric that allows customers to simply and securely deploy any type of workload and application data in a hybrid or multi-cloud environment. Based on software-defined technology, Hedvig DSP enables your existing workloads, whether based on block, file or object storage, to take advantage of cloud scalability and agility today, without the expense and delays of a major reengineering effort. With Hedvig, IT teams can automatically and dynamically provision storage assets using just software on standard x86 servers, whether in your own private cloud or a public cloud IaaS environment. Hedvig enables your workloads to move freely between different public and private cloud environments, avoiding lock-in and allowing you to choose the cloud best suited to each application and use case. Hedvig can support your primary storage needs, but it also supports tier-2 storage so that you can back up your data on the same platform.
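To make the idea of programmatic, software-defined provisioning more concrete, here is a minimal sketch of what requesting a virtual disk from such a data fabric might look like. It is an illustration only: the endpoint path, field names and credentials are assumptions made for the example, not Hedvig’s documented API.

```python
# Hypothetical sketch: endpoint, field names and credentials are illustrative
# assumptions, not Hedvig's documented API.
import requests

MGMT_API = "https://storage-fabric.example.com/api/v1"   # assumed management endpoint
AUTH = ("admin", "password")                             # assumed credentials

def provision_virtual_disk(name, size_gb, protocol="block", replication_factor=3):
    """Ask the data fabric to carve out a virtual disk backed by commodity x86 nodes."""
    payload = {
        "name": name,
        "sizeGb": size_gb,
        "protocol": protocol,               # block, file or object presentation
        "replicationFactor": replication_factor,
        "cloudTargets": ["on-prem", "aws"]  # span private and public cloud nodes
    }
    resp = requests.post(f"{MGMT_API}/virtualdisks", json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    disk = provision_virtual_disk("erp-db-vol01", size_gb=500)
    print("Provisioned:", disk)
```

The point of the sketch is that provisioning becomes an API call rather than a hardware task, which is what lets the same workflow run against private or public cloud capacity.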


In this piece, we’ll learn more about what IT professionals are looking for in cloud storage solutions, based on our research findings. We’ll then focus specifically on Hedvig storage for hybrid and multi-cloud environments to help you decide whether and how their solutions can meet your primary and secondary storage needs.
 

Publish date: 03/26/18
Report

The Easy Cloud For Complete File Data Protection: Igneous Systems Backs Up AND Archives All Your NAS

When we look at all the dollars being spent on endless NAS capacity growth, and at the increasingly complex (and mostly unrealistic) file protection schemes that go along with it, we find most enterprises aren’t happy with their status quo. And while cloud storage seems attractive in theory, making big changes to a stable NAS storage architecture can be costly and risky enough to keep many enterprises stuck endlessly growing their primary filers. Cloud gateways can help offload capacity, yet they add operational complexity and are ultimately only half-way solutions.

What file-dependent enterprises really need is a true hybrid solution that backs on-premises primary NAS with hybrid cloud secondary storage to provide a secure, reliable, elastic, scalable and, above all, seamless solution. Ideally we would want a drop-in solution that is remotely managed, paid for by subscription and highly performant, all while automatically backing up (and/or archiving) all existing primary NAS storage. In other words, enterprises don’t want to rip and replace working primary NAS solutions, but they do want to easily offload and extend them with superior cloud-based secondary storage.

When it comes to an enterprise’s cloud adoption journey, we recommend that “hybrid cloud” storage services be adopted first to address longstanding challenges with NAS file data protection. While many enterprises have reasonable backup solutions for block storage (and sometimes VM images/disks), reliable data protection for fast-growing file data is much harder due to trends toward bigger data repositories, faster streaming data, global sharing requirements and increasingly tight SLAs (not to mention shrinking backup windows). Igneous Systems promises to help filer-dependent enterprises keep up with all their file protection challenges with its Igneous Hybrid Storage Cloud, which features integrated enterprise file backup and archive.

Igneous Systems’ storage layer integrates several key technologies – highly efficient, scalable, remotely managed object storage; built-in tiering and file movement not just on the back-end to public clouds, but also on the front-end from existing primary NAS arrays; remote management as-a-service to offload IT staff; and all necessary file archive and backup automation.
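To illustrate what automated file archive and backup of existing NAS shares can look like in practice, here is a minimal sketch of a declarative NAS protection policy. The class, fields and values are assumptions made for illustration, not the Igneous interface.

```python
# Hypothetical sketch: policy fields and values are illustrative assumptions, not the
# Igneous API. It shows the shape of an automated NAS-to-cloud protection policy.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NasProtectionPolicy:
    source_share: str                    # existing primary NAS export to protect
    backup_schedule: str = "0 1 * * *"   # cron-style: nightly backup at 01:00
    archive_after_days: int = 90         # cold files tier to cloud after 90 days
    cloud_target: str = "s3://archive-bucket"
    retention_days: int = 2555           # roughly 7 years of retained versions
    exclude_patterns: List[str] = field(default_factory=lambda: ["*.tmp", "~$*"])

policies = [
    NasProtectionPolicy(source_share="nas01:/projects"),
    NasProtectionPolicy(source_share="nas02:/research", archive_after_days=30),
]

for p in policies:
    print(f"{p.source_share}: backup '{p.backup_schedule}', "
          f"archive to {p.cloud_target} after {p.archive_after_days} days")
```

The key idea is that the primary filers stay in place; protection and tiering are expressed as policies applied from the outside rather than as changes to the NAS architecture itself.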

Igneous Hybrid Storage Cloud is also a first-class object store with valuable features such as built-in global metadata search. Here, however, we’ll focus on how the Igneous Backup and Igneous Archive services are used to close the gaping holes in traditional approaches to NAS backup and archive.

Download the Solution Profile today!

Publish date: 06/20/17
Report

Companies Improve Data Protection and More with Cohesity

We talked to six companies that have implemented Cohesity DataProtect and/or the Cohesity DataPlatform. When these companies evaluated Cohesity, their highest priority was reducing storage costs and improving data protection. To truly modernize their secondary storage infrastructure, they also recognized the importance of having a scalable, all-in-one solution that could both consolidate and better manage their entire secondary data environment.

Prior to implementing Cohesity, many of the companies we interviewed struggled with the high cost of their secondary storage. Several factors contributed to these costs: the need to license multiple products, inadequate storage reduction, reliance on professional services and extensive training, difficulty scaling and maintaining systems, and the use of expensive primary storage capacity for lower-performance services such as group file shares.

In addition to lower storage costs, all the companies we talked to also wanted a better data protection solution. Many were struggling with slow backup speeds, insufficient recovery times and cumbersome data archival methods. Solution complexity and high operational overhead were also major issues. To address these issues, companies wanted a unified data protection solution that offered better backup performance, instant data recovery, simplified management, and seamless cloud integration for long-term data retention.

Companies also wanted to improve overall secondary storage management and they shared a common goal of combining secondary storage workloads under one roof. Depending on their environment and their operational needs, their objectives outside of data protection included providing self-service access to copies of production data for on-demand environments (such as test/dev), using secondary storage for file services and leveraging indexing and advanced search and analytics to find out-of-place confidential data and ensure data compliance.

Cohesity customers found that the key to addressing these challenges and needs is Cohesity’s Hyperconverged Secondary Storage. Cohesity is a pioneer of Hyperconverged Secondary Storage, a new category of secondary storage built on a web-scale, distributed file system that scales linearly and provides global data deduplication, automatic indexing, advanced search and analytics, and policy-based management of all secondary storage workloads. These capabilities combine to provide a single system that efficiently stores, manages, and understands all data copies and workflows residing in a secondary storage environment, whether the data is on-premises or in the cloud. With no point products to stitch together, there is less complexity and lower licensing cost.
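As a rough illustration of why global deduplication cuts secondary storage capacity, the sketch below stores each unique content chunk only once, keyed by its hash, so backups that share data consume space only for what is new. This is a teaching example of the general technique, not a description of Cohesity’s implementation.

```python
# Minimal illustration of global deduplication: identical chunks are stored once,
# keyed by their content hash. A teaching sketch of the general idea only.
import hashlib

CHUNK_SIZE = 4096          # fixed-size chunks for simplicity
chunk_store = {}           # global store: sha256 digest -> chunk bytes

def ingest(data: bytes):
    """Split data into chunks, store only unseen chunks, return the recipe of hashes."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunk_store:      # only new content consumes capacity
            chunk_store[digest] = chunk
        recipe.append(digest)
    return recipe

def restore(recipe):
    """Rebuild the original data from the stored chunks."""
    return b"".join(chunk_store[d] for d in recipe)

backup1 = ingest(b"A" * 8192 + b"B" * 4096)
backup2 = ingest(b"A" * 8192 + b"C" * 4096)   # shares two chunks with backup1
assert restore(backup1) == b"A" * 8192 + b"B" * 4096
print(f"Logical chunks: {len(backup1) + len(backup2)}, unique stored: {len(chunk_store)}")
```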

It’s a compelling value proposition, and importantly, every company we talked to stated that Cohesity has met and exceeded their expectations and has helped them rapidly evolve their data protection and overall secondary data management. To learn about each customer’s journey, we examined their business needs, their data center environment, their key challenges, the reasons they chose Cohesity, and the value they have derived. Read on to learn more about their experience.

Publish date: 04/28/17
Report

Cloud Object Storage for the Healthcare Data Blues

The healthcare industry continues to face tremendous cost challenges. The U.S. government estimates that national health expenditures in the United States reached $3.2 trillion last year, nearly 18% of the country’s total GDP. Many factors drive up the cost of healthcare, such as the cost of new drug development and hospital readmissions. In addition, compelling studies show that medical organizations will need to evolve their IT environments to curb healthcare costs and improve patient care in new ways, such as cloud-based healthcare models aimed at research community collaboration, coordinated care and remote healthcare delivery.

For example, Goldman Sachs recently predicted that the digital revolution could save $300 billion in healthcare spending by powering new patient options, such as home-based patient monitoring and patient self-management. Moreover, the most significant progress may come from medical organizations transforming their healthcare data infrastructure. Here’s why:

  • Advancements in digital medical imaging have resulted in an explosion of data that sits in picture archiving and communications systems (PACS) and vendor neutral archives (VNAs).
  • Patient care initiatives such as personalized medicine and genomics require storing, sharing and analyzing massive amounts of unstructured data.
  • Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require organizations to have policies for long-term image retention and business continuity.

Unfortunately, traditional file storage approaches aren’t well-suited to manage vast amounts of unstructured data and present several barriers to modernizing healthcare infrastructure. A recent Taneja Group survey found the top three challenges to be:

  • Lack of flexibility: Traditional file storage appliances require dedicated hardware and don’t offer tight integration with collaborative cloud storage environments.
  • Poor utilization: Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale: Traditional storage solutions such as RAID-based arrays are gated by controllers and simply aren’t designed to easily expand to petabyte storage levels.

As a result, healthcare organizations are moving to object storage solutions that offer an architecture inherently designed for web-scale storage environments. Specifically, object storage offers healthcare organizations the following advantages (a brief code sketch follows the list):

  • Simplified management, hardware independence and a choice of deployment options – private, public or hybrid cloud – lowers operational and hardware storage costs
  • Web-scale storage platform provides scale as needed and enables a pay as you go model
  • Efficient fault tolerance protects against site failures, node failures and multiple disk failures
  • Built-in security protects against digital and physical breaches
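To ground the points above, here is a minimal sketch of the object-storage access pattern using boto3 against an S3-compatible endpoint. The endpoint URL, bucket name, credentials, object keys and metadata fields are placeholder assumptions, not a specific vendor’s configuration.

```python
# Minimal sketch of storing and retrieving a medical image in an S3-compatible
# object store; endpoint, bucket, keys and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.hospital.example.com",  # private, public or hybrid cloud endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store a medical image with server-side encryption and retrieval-friendly metadata.
with open("study-123.dcm", "rb") as f:
    s3.put_object(
        Bucket="pacs-archive",
        Key="2017/patient-456/study-123.dcm",
        Body=f,
        ServerSideEncryption="AES256",
        Metadata={"modality": "MR", "retention": "long-term"},
    )

# Retrieve it later for review or business-continuity testing.
obj = s3.get_object(Bucket="pacs-archive", Key="2017/patient-456/study-123.dcm")
print("Retrieved", len(obj["Body"].read()), "bytes")
```

Because the same API works against private, public or hybrid endpoints, application code stays the same regardless of where the objects physically live.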
Publish date: 03/22/17
Report

IBM Cloud Object Storage Provides the Scale and Integration Needed for Modern Genomics Infrastructure

For hospitals and medical research institutes, the ability to interpret genomics data and identify relevant therapies is key to providing better patient care through personalized medicine. Many such organizations are racing forward, using artificial intelligence (AI) to analyze patients’ genomic profiles and match them to more clinically actionable treatments.

These rapid advancements in genomic research and personalized medicine are very exciting, but they are creating enormous data challenges for healthcare and life sciences organizations. High-throughput DNA sequencing machines can now process a human genome in a matter of hours at a cost approaching one thousand dollars. This is a huge drop from a cost of ten million dollars ten years ago and means the decline in genome sequencing cost has outpaced Moore’s Law. The result is an explosion in genomic data, driving the need for solutions that can affordably and securely store, access, share, analyze and archive enormous amounts of data in a timely manner.

Challenges include moving large volumes of genomic data from cost-effective archival storage to low-latency storage for analysis, in order to reduce the time needed to analyze genetic data. Currently, a comprehensive DNA sequence analysis takes days.

Sharing and interpreting vast amounts of unstructured data to find relationships between a patient’s genetic characteristics and potential therapies adds another layer of complexity. Determining connections requires evaluating data across numerous unstructured data sources, such as genomic sequencing data, medical articles, drug information and clinical trial data from multiple sources.

Unfortunately, the traditional file storage within most medical organizations doesn’t meet the needs of modern genomics. These systems can’t accommodate massive amounts of unstructured data, they don’t support both data archival and high-performance compute, and they don’t facilitate broad collaboration. Today, organizations require a new approach to genomics storage, one that enables the following (a brief sketch follows the list):

  • Scalable and convenient cloud storage to accommodate rapid unstructured data growth
  • Seamless integration between affordable unstructured data storage, low latency storage, high performance compute, big data analytics and a cognitive healthcare platform to quickly analyze and find relationships among complex life science data types
  • A multi-tenant hybrid cloud to share and collaborate on sensitive patient data and findings
  • Privacy and protection to support regulatory compliance
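As a concrete illustration of the archival-to-analysis data movement described above, the sketch below stages a genomic data object from an archive bucket to a low-latency analysis bucket and then downloads it for processing, assuming IBM’s S3-compatible Python SDK (ibm_boto3). The endpoint, bucket names, object keys and credentials are placeholders, not a documented configuration.

```python
# Sketch of staging genomic data from an archival bucket to an analysis bucket using
# IBM's S3-compatible Python SDK; endpoint, buckets, keys and credentials are placeholders.
import ibm_boto3
from ibm_botocore.client import Config

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="API_KEY",
    ibm_service_instance_id="SERVICE_INSTANCE_CRN",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
)

sample = "cohort-42/patient-0007.bam"

# Stage the aligned reads from the archive tier to the analysis bucket...
cos.copy_object(
    Bucket="genomics-analysis",
    Key=sample,
    CopySource={"Bucket": "genomics-archive", "Key": sample},
)

# ...then pull them down next to the compute cluster for variant calling.
cos.download_file("genomics-analysis", sample, "/scratch/patient-0007.bam")
```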
Publish date: 03/22/17
Report

HPE 3PAR Enables Highly Resilient All-Flash Data Centers: Latest Release Solidifies AFA Leadership

If you are an existing customer of HPE 3PAR, this latest release of 3PAR capabilities will leave you smiling. If you are looking for an All Flash Array (AFA) to transform your data center, now might be the time to take a closer look at HPE 3PAR. Since AFAs first emerged on the scene at the turn of this decade, the products have gone through several waves of innovation to achieve the market acceptance they enjoy today. In the first wave, it was all about raw performance for niche applications. In the second wave, it was about making flash more cost effective than traditional disk-based arrays to broaden economic appeal. Now, in the final wave, it is about giving these arrays all the enterprise features and ecosystem support needed to completely replace the legacy Tier 0/1 arrays still in production today.

HPE 3PAR StoreServ is one of the leading AFAs on the market today. HPE 3PAR uses a modern architectural design that includes multi-controller scalability, a highly virtualized data layer with three levels of abstraction, system-wide striping, a highly specialized ASIC and numerous flash innovations. HPE 3PAR engineers pioneered this very efficient architecture well before flash technology became mainstream, and proved the approach timeless by demonstrating a seamless transition to all-flash technology. During this same time, other vendors ran into controller-bound architectural bottlenecks with flash, forcing them to reinvent existing products or start from scratch with new architectures.

HPE 3PAR’s timeless architecture means that features introduced years ago are still relevant today, and features introduced today are available to current 3PAR customers who purchased arrays previously. This continuous innovation, available to old and new customers alike, provides investment protection unmatched by most vendors in the industry today. In this Technology Brief, Taneja Group explores some of the latest developments from HPE that build upon the rich feature set already present in the 3PAR architecture. These new features and simplicity enhancements show that HPE continues to put customers’ investment protection first and continues to expand its capabilities around enterprise-grade business continuity and resilience. The combination of the economic value of HPE 3PAR AFAs with years of proven mission-critical features promises to accelerate the final wave of the much-anticipated All-Flash Data Center for Tier 0/1 workloads.

Publish date: 02/17/17