In 2009, storage accounted for about 20% of a fully burdened computing infrastructure. By 2015, that share had surged to 40% (and counting) as companies pour in more and more data. Most of this data is hard-to-manage unstructured data, which typically represents 75%-80% of corporate data. This burdened IT infrastructure has two broad and serious consequences: it increases capital and operating expenses, and it cripples unstructured data management. Capital and operating expenses scale up sharply with the swelling storage tide. Today’s storage costs alone include buying and deploying storage for file shares, email, and ECM systems like SharePoint. Additional services such as third-party file sharing and cloud-based storage add further cost and complexity.
And growing storage and complexity make managing unstructured data extraordinarily difficult. A digital world is delivering more data to more applications than ever before. IT’s inability to visualize and act upon widely distributed data impacts retention, compliance, value, and security. In fact, this visibility (or invisibility) problem is so prevalent that it has gained its own stage name: dark data. Dark data plagues IT with hard-to-answer questions: What data is in those repositories? How old is it? What application does it belong to? Which users can access it?
IT may be able to answer those questions on a single storage system with file management tools. But across a massive storage infrastructure including the cloud? No. Instead, IT must do what it can to tier aging data, safely delete it when possible, and try to keep up with application storage demands across the map. The status quo is not going to get any better in the face of data growth. Data is growing at 55% or more per year in the enterprise. The energy ramifications alone of storing that much data are sobering. Data growth is reaching the point where it overruns the storage budget’s capacity to pay for it. And managing that data for cost control and business processes is harder still.
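The budget squeeze behind those growth figures is easy to see with simple compounding. A minimal back-of-the-envelope sketch: the 55% annual rate comes from the text above, while the 100 TB starting capacity is a hypothetical illustration, not a figure from the report.

```python
# Back-of-the-envelope sketch of compound unstructured data growth.
# The 55% annual growth rate is from the text; the 100 TB starting
# capacity is a hypothetical illustration.
def projected_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Return storage capacity after `years` of compound growth."""
    return start_tb * (1 + annual_growth) ** years

start = 100.0  # hypothetical starting footprint, in TB
for year in range(1, 6):
    tb = projected_capacity(start, 0.55, year)
    print(f"Year {year}: {tb:,.0f} TB")
```

At 55% per year, capacity roughly doubles every year and a half, which is why a budget sized for last year's footprint quickly falls behind.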
Conventional wisdom would have IT simply move data to the cloud. But conventional wisdom is mistaken. The problem is not how to store all of that data – IT can solve that problem with a cloud subscription. The problem is that once stored, IT lacks the tools to intelligently manage that data where it resides.
This is where highly scalable, unstructured file management comes into the picture: the ability to find, classify, and act upon files spread throughout the storage universe. In this Product Profile we’ll present Acaveo, a file management platform that discovers and acts on data-in-place, and federates classification and search activities across the enterprise storage infrastructure. The result is highly intelligent and highly scalable file management that cuts cost and adds value to business processes across the enterprise.
There is a serious re-hosting effort going on in data center storage as flash-filled systems replace large arrays of older spinning disks for tier 1 applications. Naturally, as costs drop and the performance advantages of flash-accelerated IO become irresistible, these systems begin pulling in a widening circle of applications with varying QoS needs. Yet this extension leads to a wasteful tug-of-war between high-end flash-only systems that can’t effectively serve a wide variety of application workloads and so-called hybrid solutions, originally architected for HDDs, that are often challenged to provide the highest performance required by those tier 1 applications.
Someday, in its purest form, all-flash storage could theoretically drop in price enough to outright replace all other storage tiers even at the largest capacities, but that is certainly not true today. Here at Taneja Group we think storage tiering will always offer a better way to deliver varying levels of QoS, balancing the latest performance advances with the most cost-efficient capacities. In any case, the best enterprise storage solutions today need to offer a range of storage tiers, often even when catering to a single application’s varying storage needs.
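The economics behind that position can be sketched with a capacity-weighted cost model. Every price and tier fraction below is an illustrative assumption, not a vendor or market figure; the point is only that blending tiers drives the effective $/GB well under an all-performance-flash configuration.

```python
# Hypothetical sketch of why tiering balances QoS against cost.
# All $/GB prices and the tier mix are illustrative assumptions.
TIERS = {
    # tier name:         ($/GB, fraction of total capacity)
    "performance flash": (8.00, 0.10),
    "capacity flash":    (4.00, 0.20),
    "performance HDD":   (1.00, 0.30),
    "capacity HDD":      (0.40, 0.40),
}

def blended_cost_per_gb(tiers: dict) -> float:
    """Capacity-weighted average $/GB across the tier mix."""
    return sum(price * share for price, share in tiers.values())

all_flash = 8.00  # hypothetical all-performance-flash $/GB
print(f"Tiered blend: ${blended_cost_per_gb(TIERS):.2f}/GB "
      f"vs all-flash: ${all_flash:.2f}/GB")
```

Under these assumed numbers the tiered mix lands at a fraction of the all-flash price, while the hot 10% of capacity still gets flash-speed service.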
There are many entrants in the flash storage market, with the big vendors now rolling out enterprise solutions upgraded for flash. Unfortunately, many of these systems are shallow retreads of older architectures, perhaps souped up a bit to better handle some hybrid flash acceleration but unable to take full advantage of it. Or they are new dedicated flash-only point products with big price tags, immature or minimal data services, and limited ability to scale out or serve a wider set of data center QoS needs.
Oracle saw an opportunity for a new type of cost-effective flash-speed storage system that could meet the varied QoS needs of multiple enterprise data center applications – in other words, to take flash storage into the mainstream of the data center. Oracle decided it had enough storage chops (from Exadata, ZFS, Pillar, Sun, etc.) to design and build a “flash-first” enterprise system intended to take full advantage of flash as a performance tier while naturally incorporating other storage tiers, including slower “capacity” flash, performance HDD, and capacity HDD. Tiering by itself isn’t new – all the hybrid solutions do it, and other vendor solutions were designed for tiering – but Oracle built the FS1 Flash Storage System from the fast flash tier down, rather than adding flash “upwards” onto a slower, existing HDD-based architecture. This required designing intelligent automated management to exploit flash for performance while leveraging HDD to balance cost. The new architecture has internal communication links dedicated to flash media, with separate IO paths for HDDs, unlike traditional hybrids that rely on older, standard HDD-era architectures that can internally constrain high-performance flash access.
Oracle FS1 is a highly engineered SAN storage system with key capabilities that set it apart from other all-flash storage systems: built-in QoS management that incorporates business priorities, best-practices provisioning, application-aware storage alignment – for Oracle Database naturally, but also for a growing body of other key enterprise applications (such as Oracle JD Edwards, PeopleSoft, Siebel, MS Exchange/SQL Server, and SAP) – and a “service provider” capability to carve out multi-tenant virtual storage “domains” while online, enforced at the hardware partitioning level for top data security isolation.
In this report, we’ll dive in and examine some of the great new capabilities of the Oracle FS1. We’ll look at what really sets it apart from the competition in terms of its QoS, auto-tiering, co-engineering with Oracle Database and applications, delivered performance, capacity scaling and optimization, enterprise availability, and OPEX-reducing features, all at a competitive price point that will challenge the rest of the increasingly flash-centric market.
Over the past few years, to reduce cost and improve time-to-value, converged infrastructure systems – the integration of compute, networking, and storage – have been readily adopted by large enterprises. The success of these systems stems from the deployment of purpose-built, integrated converged infrastructure optimized for the most common IT workloads, such as Private Cloud, Big Data, Virtualization, Database, and Desktop Virtualization (VDI). Traditionally, these converged infrastructure systems have been built on a three-tier architecture, where compute, networking, and storage, while integrated in the same rack, still consisted of best-in-breed standalone devices. These systems work well in stable, predictable environments; however, when a virtualization environment is dynamic with unpredictable growth, traditional three-tier architectures often lack the simplicity, scalability, and flexibility needed to operate in such an environment.
Enter HyperConvergence, where the three-tier architecture has been collapsed into a single system purpose-built for virtualization from the ground up. Virtualization, compute, and storage, along with advanced features such as deduplication, compression, and data protection, are all integrated into an x86 industry-standard building-block node. These devices are built on scale-out architectures with a 100% VM-centric management paradigm. The simplicity, scalability, and flexibility of this architecture make it a perfect fit for dynamic virtualized environments.
Dell XC Web-scale Converged Appliances powered by Nutanix software are delivered as a series of HyperConverged models that are extremely flexible and scalable. In this solution brief we will examine what constitutes a dynamic virtualized environment and how the Dell XC Web-scale Appliance series fits into such an environment. We can confidently state that by implementing Dell’s XC flexible range of Web-scale appliances, businesses can deploy solutions across a broad spectrum of virtualized workloads where flexibility, scalability and simplicity are critical requirements. Dell is an ideal partner to deliver Nutanix software because of its global reach, streamlined operations and enterprise systems solutions expertise. The company is well positioned to bring HyperConverged platforms to the masses and introduce the technology to a new set of customers previously unreached.
With the advent of server virtualization, many adopters erroneously think that disaster recovery (DR) is a problem of the past. They cite the ability of hypervisors to replace the two most common yet imperfect DR choices: 1) infrastructure replication to a secondary replica site, which is fast to restore from but very expensive, or 2) tape backup with off-site long-term storage, which is economical but slow to recover from.
The reality is that while server virtualization has certainly helped the industry get closer to simpler and less expensive DR products, DR still remains one of the major challenges for IT. This is especially true for applications that fall somewhere between the most mission critical, where RTOs and RPOs of a few seconds are needed (and cost is often no object), and those that find RTOs and RPOs of a day or two adequate. Today, DR products available for these “intermediate” applications are few and far between, especially when the overall cost of DR is considered.
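The spectrum described above can be made concrete with a simple bucketing rule. This is only an illustrative sketch: the hour thresholds and the sample application names are hypothetical assumptions, not definitions from the report.

```python
# Illustrative sketch of the DR spectrum described in the text.
# The thresholds and sample applications are hypothetical.
def dr_tier(rto_hours: float, rpo_hours: float) -> str:
    """Bucket an application by its recovery time/point objectives."""
    if rto_hours < 0.1 and rpo_hours < 0.1:   # seconds to minutes
        return "mission critical: replica site, cost often no object"
    if rto_hours >= 24 and rpo_hours >= 24:   # a day or two is fine
        return "low priority: tape backup is adequate"
    return "intermediate: underserved by today's DR products"

apps = {"trading": (0.01, 0.01), "payroll": (4, 1), "archive": (48, 48)}
for name, (rto, rpo) in apps.items():
    print(f"{name}: {dr_tier(rto, rpo)}")
```

It is the middle bucket – too important for tape, too numerous to justify a dedicated replica site – that the cloud-based approach discussed next targets.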
The missing piece so far has been a cost-effective DR solution with excellent RTO and RPO for the majority of business applications -- without requiring a secondary site. OneCloud steps into the gap by replacing that expensive site with the hyper-scale public cloud. This Profile will discuss how OneCloud works to extend the primary data center onto the cloud, and how this impacts the ease and speed of VM recovery.
Every large IT shop has a long shelf of performance management solutions ranging from big platform bundles bought from legacy vendors, through general purpose log aggregators and event consoles, to a host of device-specific element managers. Despite the invested costs of acquiring, installing, supporting, and learning these often complex tools, only a few of them are in active daily use. Most are used only reactively and many just gather dust for a number of reasons. But, if only because of the ongoing costs of keeping management tools current, it’s only the solutions that get used that are worth having.
When it comes to picking which tool to use day-to-day, it’s not the theory of what it could do, it’s the actual value of what it does for the busy admin trying to focus on the tasks at hand. And among the myriad things an admin is responsible for, assuring performance requires the most management solution support. Performance-related tasks include checking on the health of resources the admin is responsible for; improving utilization; finding lurking or trending issues to attend to in order to head off disastrous problems later; working with other IT folks to diagnose and isolate service-impacting issues; planning new activities; and communicating relevant insight to others – in IT, the broader business, and even to external stakeholders.
Admins responsible for infrastructure, when faced with these tasks, have huge challenges in large, heterogeneous, complex environments. While vendor-specific device and element managers drill into each piece of equipment, they help mostly with easily identifiable component failures. Both daily operational status and difficult infrastructure challenges involve looking across IT domains (e.g., servers and storage) for thorny performance-impacting trends or problems. The issue with larger platform tools is that they require a significant amount of installation, training, ongoing tool support, and data management, all of which detracts from the time an admin can actually spend on primary responsibilities.
There is room for a new style of system management that is agile, insightful and empowering, and we think Galileo presents just such a compelling new approach. In this report we’ll explore some of the IT admin’s common performance challenges and then examine how Galileo Performance Explorer with its cloud-hosted collection and analysis helps conquer them. We’ll look at how Performance Explorer crosses IT domains to increase insight, easily implements and scales, fosters communication, and focuses on and enables the infrastructure admin to achieve daily operational excellence. We’ll also present a couple of real customer interviews in which, despite sunk costs in other solutions, adding Galileo to the data center quickly improved IT utilization, capacity planning, and the service levels delivered back to the business.
Within the past few months IBM announced a new member of its FlashSystem family of all-flash storage platforms – the IBM FlashSystem V840. FlashSystem V840 adds a rich set of storage virtualization features to the baseline FlashSystem 840 model. V840 combines two venerable technology heritages: the hardware hails from the long lineage of Texas Memory Systems flash storage arrays, and the storage services feature set for FlashSystem V840 is inherited from the IBM storage virtualization software that powers the SAN Volume Controller (SVC). One was created to deliver the highest performance out of flash technology and the other was a forerunner of what is being termed software defined storage. Together, these two technology streams represent decades of successful customer deployments in a wide variety of enterprise environments.
It is easy to be impressed with the performance and the tight integration of SVC functionality built into the FlashSystem V840. It is also easy to appreciate the wide variety of storage services built on top of SVC that are now an integral part of FlashSystem V840. But we believe the real impact of FlashSystem V840 is understood when one views how this product affects the cost of flash appliances, and more generally how this new cost profile will undoubtedly affect traditional data center architecture and deployment strategies. This Solution Profile will discuss how IBM FlashSystem V840 combines software-defined storage with the extreme performance of flash, and why the cost profile of this new product – equivalent essentially to current high performance disk storage – will have a major positive impact on data center storage architecture and the businesses that these data centers support.