Trusted Business Advisors, Expert Technology Analysts

Research Areas

Infrastructure Management

Includes Security, SRM, Cloud, ICM, SaaS, Business Intelligence, Data Warehouse, Database Appliances, NFM, Storage Management.

This section covers all forms of technology that impact IT infrastructure management. Taneja Group analysts focus particularly on the interplay between server virtualization and storage, with and without storage virtualization, and study the impact on the performance, security and management of the IT infrastructure. This section also includes all aspects of storage management (SRM, SMI-S) and the role of cross-correlation engines in overall application performance. Storage virtualization technologies (in-band, out-of-band, and split-path architectures, or SPAID) are all covered in detail. Data security, both for data in flight and at rest, and enterprise-level key management issues are covered, along with the players that make up these ecosystems.

As databases grow larger and more complex, they present issues in security, performance and management. Taneja Group analysts cover the vendors and technologies that harness the power of archiving to reduce the size of active databases. We also cover the specialized database appliances that have recently come into vogue, as well as all data protection issues surrounding databases. We write extensively on this topic for the benefit of the IT user.

Report

Qumulo File Fabric extends high-performance file services to the cloud

The timing for Qumulo to extend its software-defined, scalable file services to the cloud could not be better, as public cloud utilization continues to grow at a phenomenal rate. Infrastructure spending on public and private clouds is growing at double-digit rates, while spending on traditional, non-cloud IT infrastructure continues to decline and within a few short years will represent less than 50% of the entire infrastructure market. This trend is not surprising and has been widely predicted for several years. What is surprising is how strong the momentum toward public cloud adoption has become; the open question is where the long-term equilibrium point between public clouds and on-premises infrastructure will settle.

AWS pioneered public cloud storage services when it introduced S3 (Simple Storage Service) over ten years ago. The approach of public cloud vendors has been to offer storage services at cut-rate pricing in what we call the “Hotel California” strategy: once they have your data, it can never leave. Recently, we have heard increased grumbling from customers concerned about losing the option to change infrastructure vendors and the resulting reduction in competition. In response, Taneja Group initiated multiple public and hybrid cloud research studies to gain insight into what storage services are needed across heterogeneous cloud infrastructures. What we found is that IT practitioners are not only concerned about data security in the cloud; they are concerned about the vendor lock-in created by the lack of data mobility between on-premises and public cloud infrastructures. We also found that IT practitioners predominantly want file services across clouds, and that object storage such as AWS S3 cannot meet their future cloud storage needs. This is actually not that surprising, as our research showed that many applications that businesses want to move to the cloud (to benefit from a highly dynamic compute environment) still rely on high-performance file access.
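
To illustrate the distinction, consider a minimal sketch of the two access models. The mount point, bucket and object names below are hypothetical; the point is simply that an application built around in-place file updates cannot switch unchanged to an object API like S3, which replaces whole objects rather than patching them:

```python
import boto3  # AWS SDK for Python

# File access: an application can seek into a file and modify it in place.
with open("/mnt/qf2/frames/scene01.dpx", "r+b") as f:  # hypothetical NFS mount
    f.seek(4096)
    f.write(b"updated-header")  # partial, in-place update

# Object access: S3 has no partial update, so the whole object must be
# fetched, patched in memory, and rewritten.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-bucket", Key="frames/scene01.dpx")
data = bytearray(obj["Body"].read())
data[4096:4096 + len(b"updated-header")] = b"updated-header"
s3.put_object(Bucket="example-bucket", Key="frames/scene01.dpx", Body=bytes(data))
```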

Enter Qumulo File Fabric (QF2). QF2 is a modern, highly scalable file storage system that runs in the data center and now in the public cloud. Unlike legacy scale-out NAS products, QF2 provides capacity for billions of files, closely matching a scale previously achievable only with object storage solutions, but with the benefit of supporting file access protocols. Qumulo’s modern, flash-first SDS approach lets it deliver a very high-performance file storage system that covers a wide variety of workloads. Its built-in, real-time analytics let administrators easily manage data no matter how large the footprint or where it is globally located, and continuous replication moves data where and when the business requires. Qumulo refers to this unmatched file scalability and performance as universal-scale file storage.

Qumulo, founded in 2012, is rapidly growing its market presence, and we recently validated its very high customer satisfaction and product capability through an extensive interview process with several customers. Qumulo also recently extended its go-to-market ecosystem through a partnership with Hewlett Packard Enterprise (HPE). Now, with the launch of QF2 and support for AWS, we expect Qumulo to continue its rapid rise as a leading provider of file services with universal scale, and it is well positioned to capture a significant share of the emerging multi-cloud storage market. Many companies still prefer file access, and there are plenty of reasons why scalable file storage will continue to grow and compete effectively against object-storage-centric architectures.

Publish date: 09/22/17
Technology Validation

Providing Secondary Storage at Cloud-Scale: Cohesity Performance Scales Linearly in 256 Node Test

Are we doomed to drown in our own data? Enterprise storage is already growing fast enough under today’s data demands to threaten service levels, challenge IT expertise and often eat up a majority of new IT spending. And the amount of competitively useful data could grow orders of magnitude faster with new trends in web-scale applications, IoT and big data. On top of that, meeting full enterprise data protection requirements with traditional, fragmented secondary storage designs often means more than a dozen copies of important data inefficiently consuming even more capacity at an alarming rate.

Cohesity, a feature-rich secondary storage data management solution built on a core parallel file system, promises to break completely through traditional secondary storage scaling limitations with its inherently scale-out approach. This is a big claim, so we executed a validation test of Cohesity under massive scaling, pushing its storage cluster to sizes far beyond what the company had previously tested publicly.

The result is striking (though perhaps not internally surprising, given Cohesity’s engineering design goals). We documented linearly accumulating performance across several types of IO, all the way up to our cluster test target of 256 Cohesity storage nodes. Other secondary storage designs can be expected to drop off at a far earlier point, either hitting a hard constraint (e.g. a limited cluster size) or suffering severely diminishing returns in performance.
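
As a rough illustration of what “linearly accumulating performance” means in practice, the sketch below fits aggregate throughput against node count and checks how well a straight line explains the measurements. The numbers are invented for illustration only and are not the figures from our test:

```python
import numpy as np

# Hypothetical aggregate throughput (GB/s) measured at increasing cluster sizes.
nodes = np.array([4, 16, 64, 128, 256])
throughput = np.array([3.9, 15.8, 63.1, 126.0, 251.5])  # illustrative values only

# Fit throughput = a * nodes + b. Linear scaling means the per-node slope `a`
# holds steady and the straight line explains nearly all of the variance.
a, b = np.polyfit(nodes, throughput, 1)
predicted = a * nodes + b
r2 = 1 - np.sum((throughput - predicted) ** 2) / np.sum((throughput - throughput.mean()) ** 2)
print(f"per-node throughput ~ {a:.2f} GB/s, R^2 = {r2:.4f}")
```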

We also took the opportunity to validate some important storage requirements at scale. For example, we verified that Cohesity ensured global file consistency and full cluster resilience even at the largest scale of deployment. Given the overall test performance validated in this report, Cohesity has certainly demonstrated that it is an inherently web-scale system that can deliver advanced secondary storage functionality at any practical enterprise scale of deployment.

Publish date: 07/31/17
Profile

Enterprise Cloud Platform Ideal for Database Apps: Nutanix Hosting Oracle Penetrates Tier 1

Creating an Enterprise Cloud with HyperConverged Infrastructure (HCI) is making terrific sense (and “cents”) for a wide range of corporations tired of integrating and managing complex stacks of IT infrastructure. Replacing siloed infrastructure, and going far beyond simple pre-converged racks of traditional hardware, HCI greatly simplifies IT, frees valuable staff from integrating and babysitting heterogeneous solutions so they can focus on adding value to the business, and can vastly improve “qualities of service” in all directions. Today, we find HCI solutions being deployed as an Enterprise Cloud platform in corporate data centers, even for mission-critical tier-1 database workloads.

However, like public clouds and server virtualization before it, HCI has had to grow and mature. Initially, HCI solutions had to prove themselves in small and medium-sized organizations, and on rank-and-file applications. Now, five-plus years of evolution by vendors like Nutanix have matured HCI into a full tier-1 enterprise application platform that presents the best features of public clouds, including ease of management, modular scalability and agile user provisioning. Perhaps the best example of an enterprise mission-critical workload is a business application layered on Oracle Database, and as we’ll see in this report, Nutanix now makes an ideal platform for enterprise-grade databases and database-powered applications.

In fact, we find that Nutanix’s mature platform not only can, by its naturally mixed-workload design, host a complete tier-1 application stack (including the database), but also offers significant advantages because the whole application stack is “convergently” hosted. The resulting opportunity for both IT and the business user is striking. Those feeling tied down to legacy architectures, and those previously interested in the benefits of plain Converged Infrastructure, will want to evaluate how mature HCI can now take them farther, faster.

In the full report, we explore in detail how Nutanix supports and accelerates serious Oracle database-driven applications (e.g. ERP, CRM) at the heart of most businesses and production data centers. In this summary, we review how the Nutanix Enterprise Cloud Platform is also an ideal enterprise data center platform for the whole application stack – consolidating many, if not most, workloads in the data center.

Publish date: 06/30/17
Free Reports

The Easy Cloud For Complete File Data Protection: Igneous Systems Backs Up AND Archives All Your NAS

When we look at all the dollars being spent on endless NAS capacity growth, and at the increasingly complex (and mostly unrealistic) file protection schemes that go along with it, we find most enterprises aren’t happy with their status quo. And while cloud storage seems attractive in theory, making big changes to a stable NAS storage architecture can be costly, and risky enough to keep many enterprises stuck endlessly growing their primary filers. Cloud gateways can help offload capacity, yet they add operational complexity and are ultimately only halfway solutions.

What file-dependent enterprises really need is a true hybrid solution that backs on-premises primary NAS with hybrid cloud secondary storage to provide a secure, reliable, elastic, scalable and, above all, seamless solution. Ideally, it would be a drop-in solution that is remotely managed, paid for by subscription and highly performant, all while automatically backing up (and/or archiving) all existing primary NAS storage. In other words, enterprises don’t want to rip and replace working primary NAS solutions, but they do want to easily offload and extend them with superior cloud-shaped secondary storage.

When it comes to an enterprise’s cloud adoption journey, we recommend that “hybrid cloud” storage services be adopted first to address longstanding challenges with NAS file data protection. While many enterprises have reasonable backup solutions for block storage (and sometimes VM images/disks), reliable data protection for fast-growing file data is much harder, owing to trends toward bigger data repositories, faster streaming data, global sharing requirements and increasingly tight SLAs (not to mention shrinking backup windows). Igneous Systems promises to help filer-dependent enterprises keep up with all of their file protection challenges with its Igneous Hybrid Storage Cloud, which features integrated enterprise file backup and archive.

Igneous Systems’ storage layer integrates several key technologies:

  • Highly efficient, scalable, remotely managed object storage
  • Built-in tiering and file movement, not just on the back end to public clouds but also on the front end from existing primary NAS arrays
  • Remote management as a service to offload IT staff
  • All necessary file archive and backup automation
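
As a concrete (if simplified) picture of that front-end tiering, the sketch below sweeps a filer mount and copies files untouched for a set period into an object store. The mount point, bucket name and 90-day threshold are our own assumptions for illustration; this is not Igneous’s actual mechanism, which is integrated and policy-driven rather than script-based:

```python
import os
import time

import boto3

ARCHIVE_AGE_DAYS = 90           # assumed policy threshold, not an Igneous default
NAS_MOUNT = "/mnt/primary-nas"  # hypothetical mount of an existing filer
s3 = boto3.client("s3")         # any S3-compatible object storage endpoint

cutoff = time.time() - ARCHIVE_AGE_DAYS * 86400
for root, _dirs, files in os.walk(NAS_MOUNT):
    for name in files:
        path = os.path.join(root, name)
        if os.stat(path).st_mtime < cutoff:         # untouched for 90+ days
            key = os.path.relpath(path, NAS_MOUNT)  # preserve directory layout
            s3.upload_file(path, "archive-bucket", key)
```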

The Igneous Hybrid Storage Cloud is also a first-class object store with valuable features like built-in global metadata search. Here, however, we’ll focus on how the Igneous Backup and Igneous Archive services are used to close gaping holes in traditional approaches to NAS backup and archive.

Download the Solution Profile today!

Publish date: 06/20/17
Technology Validation

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of all-flash arrays and software-defined scale-out storage. Can there be a better way to redesign the mainstream dual-controller array that has been a popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE’s StoreVirtual Software-Defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach lets HPE deliver an entry array that is more cost-effective than running the same software on a set of commodity x86 servers: optimizing the cost/performance ratio with ARM technology, rather than the power-hungry processors and excess memory of x86 machines, yields an SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, while remaining compatible with a full complement of hyperconverged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote office deployments, and in the future it will also enable low-cost capacity expansion for HPE’s hyperconverged and composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services and supportability were just some of the key attributes we validated with hands-on testing. We found the SV3200 to be an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for existing HPE customers already accustomed to managing multiple HPE products under the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already in the market, including linear scale-out and multi-site stretch-cluster capability that enables business continuity techniques rarely found in storage products of this class.

In short, HPE has raised the bar for entry arrays, and we recommend that businesses looking at either SDS technology or entry storage strongly consider HPE’s SV3200 as a product with the flexibility to provide the best of both. A starting price under $10,000 makes it very affordable to start using this easy, powerful and flexible array. Give it a closer look.

Publish date: 04/11/17
Report

Cloud Object Storage for the Healthcare Data Blues

The healthcare industry continues to face tremendous cost challenges. The U.S. government estimates that national health expenditures accounted for $3.2 trillion last year – nearly 18% of the country’s total GDP. Many factors drive up the cost of healthcare, such as new drug development and hospital readmissions. In addition, there are compelling studies showing that medical organizations will need to evolve their IT environments to curb healthcare costs and improve patient care in new ways, such as cloud-based healthcare models aimed at research community collaboration, coordinated care and remote healthcare delivery.

For example, Goldman Sachs recently predicted that the digital revolution could save $300 billion in healthcare spending by powering new patient options, such as home-based patient monitoring and patient self-management. Moreover, the most significant progress may come from medical organizations transforming their healthcare data infrastructure. Here’s why:

  • Advancements in digital medical imaging have resulted in an explosion of data that sits in picture archiving and communications systems (PACS) and vendor neutral archives (VNAs).
  • Patient care initiatives such as personalized medicine and genomics require storing, sharing and analyzing massive amounts of unstructured data.
  • Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) require organizations to have policies for long-term image retention and business continuity.

Unfortunately, traditional file storage approaches aren’t well-suited to manage vast amounts of unstructured data and present several barriers to modernizing healthcare infrastructure. A recent Taneja Group survey found the top three challenges to be:

  • Lack of flexibility: Traditional file storage appliances require dedicated hardware and don’t offer tight integration with collaborative cloud storage environments.
  • Poor utilization: Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale: Traditional storage solutions such as RAID-based arrays are gated by controllers and simply aren’t designed to easily expand to petabyte storage levels.

As a result, healthcare organizations are moving to object storage solutions that offer an architecture inherently designed for web-scale storage environments. Specifically, object storage offers healthcare organizations the following advantages (illustrated in the sketch after this list):

  • Simplified management, hardware independence and a choice of deployment options – private, public or hybrid cloud – lowers operational and hardware storage costs
  • Web-scale storage platform provides scale as needed and enables a pay as you go model
  • Efficient fault tolerance protects against site failures, node failures and multiple disk failures
  • Built-in security protects against digital and physical breaches
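
For instance, here is a minimal sketch of storing a medical image in an S3-compatible object store. The endpoint, bucket, key and metadata fields are hypothetical, chosen only to show user-defined metadata and at-rest encryption traveling alongside the data itself:

```python
import boto3

# Any S3-compatible object store; the endpoint and names are hypothetical.
s3 = boto3.client("s3", endpoint_url="https://objects.example-hospital.org")

with open("study-001.dcm", "rb") as f:  # a DICOM image exported from PACS
    s3.put_object(
        Bucket="medical-imaging",
        Key="pacs/2017/study-001.dcm",
        Body=f,
        Metadata={                      # user-defined, searchable metadata
            "modality": "MR",
            "retention-years": "7",     # e.g. driven by a HIPAA retention policy
        },
        ServerSideEncryption="AES256",  # encryption of data at rest
    )
```
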
Publish date: 03/22/17