Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV and SSD.
All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press and the industry at large to understand.
We live in a digital world where online services, applications and data must always be available. Yet the modern data center remains very susceptible to interruptions. These opposing realities are challenging traditional backup applications and disaster recovery solutions and causing companies to rethink what is needed to ensure 100% uptime of their IT environments.
The need for availability goes well beyond recovering from disasters. Companies must be able to rapidly recover from many real-world disruptions such as ransomware, device failures and power outages, as well as natural disasters. Add to this the dynamic nature of virtualization and cloud computing, and it’s not hard to see the difficulty of providing continuous availability while managing a highly variable IT environment that is susceptible to trouble.
Some companies feel their backup devices will give them adequate data protection and others believe their disaster recovery solutions will help them restore normal business operations if an incident occurs. Regrettably, far too often these solutions fall short of meeting user expectations because they don’t provide the rapid recovery and agility needed for full business continuance.
Fortunately, there is a way to ensure a consistent experience in an inconsistent world. It’s called IT resilience. IT resilience is the ability to ensure business services are always on, applications are available and data is accessible no matter what human errors, events, failures or disasters occur. And true IT resilience goes a step further to provide continuous data protection (CDP), end-to-end recovery automation irrespective of the makeup of a company’s IT environment and the flexibility to evolve IT strategies and incorporate new technology.
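Conceptually, CDP journals every write with a timestamp so that data can be rewound to any prior point in time, rather than only to the last scheduled backup. The short Python sketch below illustrates the idea; it is a generic illustration of a write journal with recovery-point replay, not Zerto’s implementation.

# Conceptual CDP write journal: every write is recorded with a timestamp so
# the dataset can be reconstructed as of any chosen recovery point.
# Generic illustration only, not Zerto's implementation.
import time

journal = []  # append-only list of (timestamp, key, value) entries

def write(key, value):
    journal.append((time.time(), key, value))

def recover_as_of(point_in_time):
    """Replay journaled writes up to the chosen recovery point."""
    state = {}
    for ts, key, value in journal:
        if ts <= point_in_time:
            state[key] = value
    return state

Because the journal preserves every intermediate write, the recovery point can approach zero data loss; the trade-off is the space and I/O needed to maintain the journal.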
Intrigued by the promise of IT resilience, companies are seeking data protection solutions that can withstand any disaster to enable a reliable online experience and excellent business performance. In a recent Taneja Group survey, nearly half the companies selected “high availability and resilient infrastructure” as one of their top two IT priorities. In the same survey, 67% of respondents also indicated that unplanned application downtime compromised their ability to satisfy customer needs, meet partner and supplier commitments and close new business.
This strong customer interest in IT resilience has many data protection vendors talking about “resilience.” Unfortunately, many backup and disaster recovery solutions don’t provide continuous data protection plus hardware independence, strong virtualization support and tight cloud integration. This is a tough combination and presents a big challenge for data protection vendors striving to provide enterprise-grade IT resilience.
There is, however, one data protection vendor with replication and disaster recovery technologies designed from the ground up for IT resilience. The Zerto Cloud Continuity Platform, built on Zerto Virtual Replication, offers CDP, failover (for higher availability), end-to-end process automation, heterogeneous hypervisor support and native cloud integration. As a result, IT resilience with continuous availability, rapid recovery and agility is a core strength of the Zerto Cloud Continuity Platform.
This paper will explore the functionality needed to tackle modern data protection requirements. We will also discuss the challenges of traditional backup and disaster recovery solutions, outline the key aspects of IT resilience and provide an overview of the Zerto Cloud Continuity Platform as well as the hypervisor-based replication that Zerto pioneered.
Every year Dell measures the availability level of its Storage Center Series of products by analyzing the actual failure data in the field. For the past few years Dell has asked Taneja Group to audit the results to ensure that these systems were indeed meeting the celebrated 5 9s availability levels. And they have. This year Dell asked us to audit the results specifically on the relatively new model, SC4020.
Even though the SC4020 is a lower cost member of the SC family, it meets 5 9s criteria just like its bigger family members. Dell did not cut costs by sacrificing availability, but through space-saving design, such as a single enclosure for media and controllers instead of two separate enclosures. Even with the smaller footprint (2U versus the SC8000’s 6U), the SC4020 still achieves 5 9s using the same strict test measurement criteria.
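To put the 5 9s claim in concrete terms, 99.999% availability allows only a handful of minutes of downtime per year. The short Python calculation below shows the arithmetic; it is a generic illustration of what the availability level implies, not Dell’s audit methodology.

# Convert an availability percentage into the maximum downtime it allows per
# year. Illustrative arithmetic only, not Dell's audit methodology.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(availability_pct):
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(allowed_downtime_minutes(99.9))    # three nines: ~525.6 minutes/year
print(allowed_downtime_minutes(99.999))  # five nines:  ~5.26 minutes/year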
Frankly, many vendors choose not to subject their lower cost models to 5 9s testing. Often the vendor has put little development investment into the lower cost product in an effort to maintain profitability on a lower-priced system.
Dell didn’t do it this way with the SC4020. Instead of watering it down by stripping features, they architected high efficiency into a smaller footprint. The resulting array is smaller and more affordable, yet retains the SC Series enterprise features: high availability and reliability, high performance, and centralized management not only across all SC models but also across the Dell EqualLogic PS and FS models. This level of availability and efficiency makes the SC4020 an economical and highly efficient system for the mid-market and the distributed enterprise.
The challenge for mid-sized businesses is that they have smaller IT staffs and smaller budgets than the enterprise, yet still need high availability, high performance, and robust capacity in their storage systems. Every storage system will deliver parts of the solution, but very few will deliver simplicity, efficiency, performance, availability, and capacity on a low-cost system.
We’re not blaming the storage system makers, since it’s hard to offer a storage system with all of these benefits and still maintain acceptable profit. It has been difficult for storage manufacturers to design enterprise features into an affordable, midrange storage system while still yielding enough profit to sustain the research and development needed to keep the product viable.
Dell is a master at this game with its Intel-based Storage Center portfolio. The SC Series ranges from an entry-level model up to enterprise datacenter class, with most of the middle of the line devoted to delivering enterprise features for the mid-market business. A few months ago Taneja Group reviewed and validated high availability features across the economical SC line. Dell is able to deliver those features because the SC operating system (SCOS) and FluidFS software stacks operate across every system in the SC family. Features are developed in such a way that a broad range of products can be deployed with enterprise data services, each with a highly tuned balance of cost and performance.
Dell’s new SC7000 series carries on with this successful game plan as the first truly unified storage platform for the popular SC line. Starting with the SC7020, this series now unifies block and file data in an extremely efficient and affordable architecture. And like all SC family members, the SC7020 comes with enterprise capabilities including high performance and availability, centralized management, storage efficiencies and more; all at mid-market pricing.
What distinguishes the SC7020, though, is a level of efficiency and affordability that is rare for enterprise-capable systems. Simple and efficient deployment, consistent management across all Storage Center platforms and investment protection through in-chassis upgrades (the SC Series can support multiple types of media within the same enclosure) make the SC7020 an ideal choice for mid-market businesses. Add auto-tiering (which effectively places the most frequently used data on the fastest media tier), built-in compression and multi-protocol support, and these customers get a storage solution that evolves with their business needs.
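To illustrate what auto-tiering does in principle, the sketch below promotes frequently accessed blocks to a fast tier and leaves the rest on a cheaper one. It is a generic Python illustration, not Dell’s Data Progression algorithm; the tier names and access threshold are assumed values.

# Generic auto-tiering illustration: keep the most frequently accessed blocks
# on the fastest media tier. Not Dell's Data Progression algorithm; the tier
# names and threshold below are hypothetical.
from collections import Counter

HOT_TIER, COLD_TIER = "ssd", "nearline_hdd"
HOT_THRESHOLD = 100  # accesses per evaluation window (assumed value)

access_counts = Counter()  # block_id -> accesses observed in the current window

def record_access(block_id):
    access_counts[block_id] += 1

def choose_tier(block_id):
    """Promote hot blocks to the fast tier; demote everything else."""
    return HOT_TIER if access_counts[block_id] >= HOT_THRESHOLD else COLD_TIER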
In this Solution Profile, Taneja Group explores how the cost-effective SC7020 delivers enterprise features to the data-intensive mid-market, and how Dell’s approach mitigates tough customer challenges.
Deduplication is a foundational technology for efficient backup and recovery. Vendors may argue over product features (where to dedupe, how much capacity is saved, how fast backups run), but everyone knows how central dedupe is to backup success.
However, serious pressures are forcing changes to the backup infrastructure and dedupe technologies. Explosive data growth is changing the whole market landscape as IT struggles with bloated backup windows, higher storage expenses, and increased management overhead. These pain points are driving real progress: replacing backup silos with expanded data protection platforms. These comprehensive systems back up data from multiple sources to distributed storage targets, with single-console management for increased control.
Dedupe is a critical factor in this scenario, but not in its conventional form as a point solution. Traditional dedupe is suited to backup silos. Moving deduped data outside the system requires rehydration, which impacts performance and capacity across the data center, ROBO and DR sites, and the cloud. Dedupe must expand its feature set in order to serve next-generation backup platforms.
A few vendors have introduced new dedupe technologies, but most of them are still tied to specific physical backup storage systems and appliances. Of course there is nothing wrong with pairing hardware and software to increase sales, but storage system-specific dedupe means that data must be rehydrated whenever it moves beyond the system. This leaves the business with all the performance and capacity disadvantages the infrastructure had before.
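To make the rehydration point concrete, the sketch below shows hash-based block deduplication in its simplest form: each unique block is stored once and referenced by fingerprint, and restoring or moving the data outside the dedupe domain means reassembling (rehydrating) the full stream from those references. This is a generic Python illustration, not HPE StoreOnce’s chunking or fingerprinting implementation.

# Minimal fixed-size block dedupe sketch. Generic illustration only, not HPE
# StoreOnce's actual chunking or fingerprinting implementation.
import hashlib

BLOCK_SIZE = 4096
store = {}  # fingerprint -> block bytes, each unique block stored once

def dedupe(data):
    """Split data into blocks, store each unique block once, return the recipe."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        recipe.append(fp)
    return recipe

def rehydrate(recipe):
    """Rebuild the original stream, as required whenever data leaves the dedupe domain."""
    return b"".join(store[fp] for fp in recipe)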
Federating dedupe across systems goes a long way toward solving that problem. HPE StoreOnce extends consistent dedupe across the infrastructure. Only HPE gives customers the deployment flexibility to implement the same deduplication technology in four places: target appliance, backup/media server, application source and virtual machine. This enables data to move freely between physical and virtual platforms and between source and target machines without the need to rehydrate.
This paper will describe the challenges of data protection in the face of huge data growth, why dedupe is critical to meeting the challenges and how HPE is achieving the vision of federated dedupe with StoreOnce.
All Flash Arrays (AFAs) have had an impressive run of growth. From less than 5% of total array revenue in 2011, they’re expected to approach 50% of total revenue by the end of 2016, roughly a 60% CAGR. This isn’t surprising, really. Even though they’ve historically cost more on a $/GB basis (the gap is rapidly narrowing), they offer large advantages over hybrid and HDD-based arrays in every other area.
The most obvious advantage that SSDs have over HDDs is in performance. With no moving parts to slow them down, they can be over a thousand times faster than HDDs by some measures. Using them to eliminate storage bottlenecks, CIOs can squeeze more utility out of their servers. The high performance of SSDs has allowed storage vendors to implement storage capacity optimization techniques such as thin deduplication within AFAs. Breathtaking performance combined with affordable capacity optimization has been the major driving force behind AFA market gains to date.
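To put rough numbers on that performance gap, the comparison below uses typical published figures rather than measurements of any particular drive or array; the values are illustrative assumptions only.

# Rough, illustrative comparison of random I/O capability. Typical published
# figures, not measurements of any specific drive or array.
hdd_latency_ms = 10.0      # roughly seek plus rotational latency for a 7,200 RPM HDD
ssd_latency_ms = 0.1       # on the order of 100 microseconds for an enterprise SSD

hdd_random_iops = 150      # typical small-block random IOPS for a single HDD
ssd_random_iops = 100_000  # typical enterprise SSD; NVMe drives can go far higher

print(f"Latency advantage:     ~{hdd_latency_ms / ssd_latency_ms:.0f}x")
print(f"Random IOPS advantage: ~{ssd_random_iops / hdd_random_iops:.0f}x per device")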
While people are generally aware that SSDs outperform HDDs by a large margin, they usually have less visibility into the other advantages that they bring to the table. SSDs are also superior to HDDs in the areas of reliability (and thus warranty), power consumption, cooling requirements and physical footprint. As we’ll see, these TCO advantages allow users to run at significantly lower OPEX levels when switching to AFAs from traditional, HDD-based arrays.
When looking at the total cost envelope, factoring in their superior performance, AFAs are already the intelligent purchase decision, particularly for Tier 1 mission-critical workloads. Now, a new generation of high capacity SSDs is coming, and it’s poised to accelerate the AFA takeover. We believe the Flash revolution in storage that started in 2011 will outpace even the most optimistic forecasts in 2016, easily eclipsing the 50% of total revenue predicted for external arrays. Let’s take a look at how and why.
After conducting a number of in-depth field interviews with real world Microsoft Azure StorSimple users, we’ve discovered that the real StorSimple story is all about helping people transition smoothly from on-premises storage to an on-premises/cloud hybrid model. From there, it helps both IT and the business accelerate broader adoption of cloud-centric hybrid IT architecture. StorSimple not only simplifies on-premises storage challenges with fully integrated automated cloud-tiering and data protection (providing elastic capacity and cloud burstability), but also optimizes distributed file sharing and application storage (with cloud-based DR, centralized management, and extensibility).
However, it’s easy to talk about features: what a product does and how it does it. These are important things to know, and we’ll highlight several key capabilities in this report. But the real proof of the pudding is this: what do actual customers say? What are their challenges, their hopes, their needs? And how did their storage decisions serve those needs?
To answer these questions, we took an in-depth look at StorSimple through a customer lens. Real-life enterprise customers told us about their original journeys to StorSimple, and how Microsoft is helping them move on more fully to the cloud. Ultimately, we noted five highly valued advantages of StorSimple: native data protection, disaster recovery, deployment and management simplicity across multiple locations, a high return on investment, and a dynamic storage environment that unifies files and applications across the enterprise.