Items Tagged: analytics
Dell is going after the big data market in a big way with its DX Object Storage Platform and partner RainStor’s big data repository.
Symantec Corp. today announced an Apache Hadoop add-on capability for its Veritas Cluster File System to help run "big data" analytics on storage area networks instead of scale-out, commodity servers using local storage.
Nimble Storage InfoSight: Transforming the Storage Lifecycle Experience with Deep-Data Analytics
The job of a storage administrator can sometimes be a difficult and lonely one. Administrators must handle a broad set of responsibilities, encompassing all aspects of managing their arrays and keeping up with user demands. And yet, flat IT budgets mean administrators are spread thin, with limited time to manage storage through its lifecycle, let alone improve and optimize storage practices and services.
In most organizations, the storage lifecycle is managed manually, as a complex and disjointed set of activities. Maintenance and support tend to be highly reactive, forcing administrators to play “catch up” each time a storage problem occurs. Monitoring and reporting rely on complex tools and large amounts of data that are difficult to interpret and act upon. Forecasting and planning are more art than science, leading administrators to overprovision to be on the safe side. These various lifecycle activities are seldom connected and inherently inefficient, and fail to provide administrators with the insight they need to anticipate issues and develop best practices. This, in turn, can put system availability and performance at risk, while reducing IT productivity.
Fortunately, one innovative vendor—Nimble Storage—has developed a powerful, data-science-driven approach that promises to transform the storage lifecycle experience. Based on deep data collection, intelligent and predictive analytics, and automation built on storage and application expertise, Nimble InfoSight streamlines the storage lifecycle, providing administrators with the insights needed to optimize their arrays while also increasing their productivity. InfoSight collects and analyzes over 30 million data points each day from every installed Nimble Storage array worldwide, and then makes the resulting intelligence available both to Nimble engineers and customers. InfoSight's automated analysis helps to proactively anticipate and prevent technical problems, significantly reducing the support burden on administrators. InfoSight also provides administrators with an intuitive, dashboard-driven portal into the performance, capacity utilization and data protection of their arrays, enabling them to monitor array operations across multiple sites and to better plan for future needs. By streamlining and informing key activities across the storage lifecycle, InfoSight simplifies and enhances day-to-day administrative tasks such as support, monitoring and forecasting, while enabling administrators to focus on more important initiatives.
To put a human face on InfoSight intelligence, Nimble Storage has also unveiled a new user community. The community allows users to connect and share ideas and resources via discussion forums, knowledge bases, and social media channels. The Nimble community will enable the company’s large and loyal customer base to write about and share their experiences and insights with each other, as well as with prospective users. Together, InfoSight and the Nimble community will give storage administrators unprecedented access to anonymized installed-base data and peers’ expertise, enabling them to stay on top of their game and get more out of their arrays.
In this profile, we’ll examine the challenges administrators typically face on a day-to-day basis, and then take a closer look at InfoSight capabilities, and how they address these issues. We’ll then learn how two Nimble customers have benefited from InfoSight in several important ways. Finally, we’ll briefly examine the Nimble community, and discuss how these two initiatives together are empowering administrators through a combination of shared user data and insights.
Extreme Applications in the Enterprise Drive Parallel File System Adoption
With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services that include “extreme” applications like massive voice and image processing or complex financial analysis modeling that can push storage systems to their limits. Examples of some high-visibility, market-impacting solutions include applications based on image pattern recognition at large scale and financial risk management based on decision-making at high speed.
These ground-breaking solutions, made up of very different activities but with similar data storage challenges, create incredible new lines of business representing significant revenue potential. Every day here at Taneja Group we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems - of the kind that most enterprise data centers (or cloud providers) have racks and racks of - simply can’t handle the performance requirements.
There are already great enterprise storage solutions for applications that need raw throughput, high capacity, parallel access, low latency, or high availability – maybe even two or three of those at a time. But when an “extreme” application needs all of those at the same time, only supercomputing-class storage in the form of parallel file systems provides a functional solution. The problem is that most commercial enterprises simply can’t afford, or risk basing a line of business on, an expensive research project.
The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door for revolutionary services creation, enabling mainstream enterprise datacenters to support the exploitation of new extreme applications.
Tape Libraries: Why, When and Where?
A technology publication that shall go unnamed posted the news last year that Amazon Glacier was a “tape-killing cloud” and that it would “devastate the [tape] industry.”
Not exactly. We do believe that smaller tape implementations are going the way of the dodo. Cloud backup is quickly replacing small standalone tape drives and autoloaders for daily backup, and as low-end tape equipment ages, IT replaces it with cloud storage for long-term backup retention.
However, tape housed in midsize and enterprise-scale libraries is growing strongly in several high-value computing and industry segments. Thus the question for IT becomes not “Should I use tape?” but “When should I invest in tape libraries?”
Glassbeam SCALAR: Making Sense of the Internet of Things
In this new era of big data, sensors can be included in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, when deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically any other useful information, you can now potentially derive deep intelligence that can be used to improve your products and services, better satisfy your customers, improve your margins, and grow market share.
For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only do you come to know more about a customer’s implementation of your solution than the customer himself, but you can now make decisions about new features and capabilities based on hard data.
The key to gaining value from this “Internet of Things” is the ability to make sense of the kind of big data it generates. One set of current solutions addresses data about internal IT operations, including log-file analysis tools like Splunk and VMware Log Insight. These are designed for technical users focused on recent time-series and event data, with the goal of improving tactical problem “time-to-resolution”. However, the big data derived from customer implementations is generally multi-structured, arriving as streams of whole “bundles” of complexly related files that can easily grow to petabytes over time. The business users and analysts involved (e.g. marketing, support, sales) are not necessarily IT-skilled, so to be useful the analysis must at the same time be more sophisticated and be capable of handling dynamic changes to incoming data formats.
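To make the "multi-structured bundle" problem concrete, here is a minimal sketch of what normalizing such data involves. The line formats, field names, and sample values below are entirely hypothetical (not Glassbeam's or any vendor's actual format); the point is simply that one bundle can mix timestamped events with key=value status lines, and analysis first requires folding both into structured records.

```python
import re
from datetime import datetime

def parse_bundle(lines):
    """Parse a hypothetical device support bundle into structured records.

    Handles two illustrative line shapes found mixed in one bundle:
    timestamped event lines and key=value status lines.
    """
    event_re = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\w+) (.*)$")
    records = []
    for line in lines:
        m = event_re.match(line)
        if m:
            ts, level, msg = m.groups()
            records.append({
                "time": datetime.strptime(ts, "%Y-%m-%d %H:%M:%S"),
                "level": level,
                "message": msg,
            })
        elif "=" in line:
            # Status lines like "temp=41C fan=ok uptime=9123"
            records.append(dict(kv.split("=", 1) for kv in line.split()))
    return records

# Hypothetical bundle content for illustration only.
bundle = [
    "2014-06-01 10:02:11 WARN cache battery low",
    "temp=41C fan=ok uptime=9123",
    "2014-06-01 10:05:40 ERROR disk 7 offline",
]
recs = parse_bundle(bundle)
alerts = [r for r in recs if r.get("level") in ("WARN", "ERROR")]
print(len(recs), len(alerts))  # 3 2
```

A real solution must also tolerate the formats themselves changing between firmware releases, which is why the teaser stresses handling dynamic changes to incoming data.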
Whether clay pots, wooden barrels or storage arrays, vendors have always touted how much their wares can reliably store. And invariably, the bigger the vessel, the more impressive and costly it is, both to acquire and manage. The preoccupation with size as a measure of success implies that we should judge and compare offerings on sheer volume. But today, the relationship between physical storage media capacity and the effective value of the data "services" it delivers has become much more virtual and cloudy.
Using Hadoop to drive big data analytics doesn't necessarily mean building clusters of distributed storage -- good old external storage might be a better choice.
When deciding whether to implement an object storage platform into your environment, it makes sense to first outline what kind of data you're storing and how it's typically used.
- Premiered: 02/04/14
- Author: Arun Taneja
- Published: Tech Target: Search Cloud Storage
Today MapR and HP Vertica are rolling out an exciting joint integration, nicely addressing full SQL-on-Hadoop use cases. Vertica is now runnable, actually "pluggable", on and into MapR's enterprise quality Hadoop distribution. This is an interesting feat that depends highly on MapR's unique implementation of enterprise grade storage in place of the open source HDFS guts.
Data Defined Storage: Building on the Benefits of Software Defined Storage
At its core, Software Defined Storage decouples storage management from the physical storage system. In practice Software Defined Storage vendors implement the solution using a variety of technologies: orchestration layers, virtual appliances and server-side products are all in the market now. They are valuable for storage administrators who struggle to manage multiple storage systems in the data center as well as remote data repositories.
What Software Defined Storage does not do is yield more value for the data under its control, or address global information governance requirements. To that end, Data Defined Storage yields the benefits of Software Defined Storage while also reducing data risk and increasing data value throughout the distributed data infrastructure. In this report we will explore how Tarmin’s GridBank Data Management Platform provides Software Defined Storage benefits and also drives reduced risk and added business value for distributed unstructured data with Data Defined Storage.
There are plenty of technologies touted as the next big thing. Big data, flash, high-performance computing, in-memory processing, NoSQL, virtualization, convergence, and software-defined everything all represent wild new forces that could bring real disruption, but also big opportunities, to your local data center.
- Premiered: 03/19/14
- Author: Mike Matchett
- Published: Tech Target: Search Data Center
Converging Branch IT Infrastructure the Right Way: Riverbed SteelFusion
Companies with significant non-data center and often widely distributed IT infrastructure requirements are faced with many challenges. It can be difficult enough to manage tens or hundreds if not thousands of remote or branch office locations, but many of these can also be located in dirty or dangerous environments that are simply not suited for standard data center infrastructure. It is also hard if not impossible to forward deploy the necessary IT experience to manage any locally placed resources. The key challenge then, and one that can be competitively differentiating on cost alone, is to simplify branch IT as much as possible while supporting branch business.
Converged solutions have become widely popular in the data center, particularly in virtualized environments. By tightly integrating multiple functionalities into one package, there are fewer separate moving parts for IT to manage, while capabilities are optimized through intimately integrated components. IT becomes more efficient and in many ways gains more control over the whole environment. Beyond the obvious increase in IT simplicity, there are many other cascading benefits: the converged infrastructure can perform better, is more resilient and available, and offers better security than separately assembled silos of components. And a big benefit is a drastically lowered TCO.
Yet for a number of reasons, data center convergence approaches haven’t translated as usefully to beneficial convergence in the branch. No matter how tightly integrated a “branch in a box” is, if it’s just an assemblage of the usual storage, server, and networking silo components, it will still suffer from traditional branch infrastructure challenges – second-class performance, low reliability, high OPEX, and difficulty with protection and recovery. Branches have unique needs, and data center infrastructure, converged or otherwise, isn’t designed to meet those needs. This is where Riverbed has pioneered a truly innovative converged infrastructure designed explicitly for the branch, providing simplified deployment and provisioning, resiliency in the face of network issues, improved protection and recovery from the central data center, optimization and acceleration for remote performance, and a greatly lowered OPEX.
In this paper we will review Riverbed’s SteelFusion (formerly known as Granite) branch converged infrastructure solution, and see how it marries together multiple technical advances including WAN optimization, stateless compute, and “projected” datacenter storage to solve those branch challenges and bring the benefits of convergence out to branch IT. We’ll see how SteelFusion is not only fulfilling the promise of a converged “branch” infrastructure that supports distributed IT, but also accelerates the business based on it.
Content Raven, a leading digital distribution platform provider, today announced the availability of its Spring 2014 release. The new release introduces significant new features that enable organizations of all sizes to keep control of and gain analytical insight into confidential and business-critical files as they are shared with both external and internal audiences, stopping data leakage and protecting intellectual property.
The Internet of Things (IoT) is a hot trend in today’s economy of connected devices. Every high-tech device in the data center industry – storage, servers, switches, application software, or any appliance – generates copious amounts of machine data that can be analyzed to help the manufacturer gain operational and strategic insights. These insights help reduce the cost of supporting customers by lowering mean time to resolution (MTTR) per case. They also enable manufacturers to build and sell specific value-added services with the objective of being proactive, predictive and prescriptive for their top enterprise accounts. Finally, if done the right way, deep insights gleaned from this kind of machine data analytics can feed strategic information into future product roadmaps. All of these benefits become readily quantifiable when a solution like Glassbeam is in place.
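Since MTTR is the headline metric above, it helps to see how simply it is computed once case data is captured. This is a generic sketch with made-up case timestamps, not data from any vendor: MTTR is just the mean elapsed time between a support case being opened and resolved.

```python
from datetime import datetime

# Hypothetical support cases: (opened, resolved) timestamps.
cases = [
    ("2014-05-01 09:00", "2014-05-01 17:30"),  # 8.5 hours
    ("2014-05-03 11:15", "2014-05-04 10:00"),  # 22.75 hours
    ("2014-05-06 08:00", "2014-05-06 09:45"),  # 1.75 hours
]

fmt = "%Y-%m-%d %H:%M"
resolution_hours = [
    (datetime.strptime(done, fmt) - datetime.strptime(opened, fmt)).total_seconds() / 3600
    for opened, done in cases
]

# Mean time to resolution (MTTR) across all cases, in hours.
mttr = sum(resolution_hours) / len(resolution_hours)
print(round(mttr, 2))  # 11.0
```

The value of proactive, machine-data-driven support shows up as this number falling over time, since issues are flagged (and often fixed) before a customer ever opens a case.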
This webinar will discuss the right elements of such a solution, lay out the foundation of ROI with specific benefits, and discuss a case study with a Fortune 100 account of Glassbeam.
- Premiered: 06/09/14 at 11 am PT/2 pm ET
- Location: OnDemand
- Speaker(s): Mike Matchett, Senior Analyst at Taneja Group; Puneet Pandit, Co-founder & CEO, Glassbeam
- Sponsor(s): Glassbeam, BrightTALK
Nimble Storage, looking to break into the enterprise, today launched its highest capacity hybrid flash array and a solid-state drive (SSD) shelf as part of what it calls its Adaptive Flash platform.
Is there a benefit to understanding how your users, suppliers or employees relate to and influence one another? It's hard to imagine that there is a business that couldn't benefit from more detailed insight and analysis, let alone prediction, of its significant relationships.
- Premiered: 06/17/14
- Author: Mike Matchett
- Published: Tech Target: Search Data Center
The HP Solution to Backup Complexity and Scale
There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.
While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.
For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations nor manage to be very simple or cost-effective to operate. Here is where we see HP as a key vendor today with all the right parts coming together to create a significant change in the BURA marketplace.
First, HP is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce storage with Data Protector. For those who have worked with either or both of these alongside other vendors’ products in the past, it’s no surprise that each competes favorably one-on-one with alternatives in the market – but together, as an integrated joint solution, they beat the best competitor offerings.
But HP hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to market. From product to services to support, HP people have taken to heart a “customer first” message to provide a truly solution-focused HP experience: one support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HP business unit components are in the “box”. Significantly, this approach elevates HP from just being a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HP is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own internal world-class product lines.
In this report, we’ll examine first why HP StoreOnce and Data Protector products are truly game-changing on their own rights. Then, we will look at why they get even “better together” as a complete BURA solution that can be more flexibly deployed to meet backup challenges than any other solution in the market today.
It's no secret organizations today are dealing with data growth up to and beyond the petabyte level. This massive growth magnifies data management challenges, such as the overheads associated with storage acquisition and operation, as well as exacerbated data protection, governance, and security concerns due to regulatory issues and data mobility.