Includes Storage Arrays, NAS, File Systems, Clustered and Distributed File Systems, FC Switches/Directors, HBA, CNA, Routers, Components, Semiconductors, Server Blades.
Taneja Group analysts cover all manner of storage arrays: modular and monolithic, enterprise or SMB, large and small, general purpose or specialized. All components that make up the SAN, FC-based or iSCSI-based, and all forms of file servers, including NAS systems based on clustered or distributed file systems, are covered soup to nuts. Our analysts have particularly deep backgrounds in file systems. Components such as Storage Network Processors, SAS Expanders, and FC Controllers are covered here as well. Server Blades coverage straddles this section as well as the Infrastructure Management section above.
Cloud computing has several clear business models. SaaS delivers software, upgrades, and maintenance as a service, saving customers money by shifting the costs of ownership to the cloud provider. Several technology factors contribute to SaaS's increasing popularity, including protocol standardization, the ubiquity of Web browsing, access to broadband networks, and rapid application development. It is not perfect: people have legitimate concerns about data security, governance, vendor lock-in, and data portability. But judging by its success, the advantages of SaaS outweigh its challenges, and the market segment is growing fast.
Another cloud computing model is IaaS, where the customer outsources compute infrastructure to a cloud provider. This model is gaining traction, especially for application development and testing. Application developers can take the capital they would otherwise spend on computing gear and target it at specific development projects running in Internet data centers. The catch with IaaS is that cloud software development doesn't necessarily translate well into on-premises deployments, and many developers prefer to develop SaaS instead.
Storage in the cloud is yet another business model with different dynamics. While SaaS and IaaS are strongly oriented toward cloud deployments, there are strong pressures driving cloud storage toward on-premises deployments. Storing data in the cloud for SaaS and IaaS computing is certainly important, but the vast amount of data still resides on-premises, where its growth is largely unchecked. If cloud storage is going to succeed, it needs to become relevant to the people managing data in corporate on-premises data centers.
In this new era of big data, sensors can be embedded in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, when deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically anything else useful, you can now potentially derive deep intelligence that improves your products and services, better satisfies your customers, improves your margins, and grows market share.
For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only do you come to know more about a customer’s implementation of your solution than the customer does, but you can also make decisions about new features and capabilities based on hard data.
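To make the idea concrete, here is a minimal Python sketch of that fleet-wide comparison: telemetry from all deployed devices establishes a baseline, and any device that drifts well outside it becomes a candidate for proactive support. The field names, sample readings, and threshold are illustrative assumptions, not any vendor's actual analytics.

```python
# Illustrative sketch only: flag devices whose telemetry drifts well outside the
# fleet-wide baseline. Readings and the 1.5-sigma threshold are assumptions.
from statistics import mean, stdev

# One temperature reading per deployed device (device id -> degrees C)
fleet_readings = {
    "cust-a-dev1": 41.2, "cust-a-dev2": 39.8, "cust-b-dev1": 40.5,
    "cust-c-dev1": 62.3,  # running far hotter than the rest of the fleet
    "cust-d-dev1": 40.1, "cust-e-dev1": 42.0,
}

values = list(fleet_readings.values())
baseline, spread = mean(values), stdev(values)

# Devices well above the fleet norm become candidates for proactive outreach
# before the customer ever sees a failure.
for device, reading in sorted(fleet_readings.items()):
    if reading > baseline + 1.5 * spread:
        print(f"{device}: {reading:.1f}C exceeds fleet baseline "
              f"({baseline:.1f} +/- {spread:.1f}) - open proactive case")
```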
The key to gaining value from this “Internet of Things” is the ability to make sense of the kind of big data it generates. One set of current solutions addresses data about internal IT operations, including “logfile” analysis tools like Splunk and VMware Log Insight. These are designed for a technical user focused on recent time-series and event data to improve tactical problem “time-to-resolution”. However, the big data derived from customer implementations is generally multi-structured, spread across streams of whole “bundles” of complexly related files that can easily grow to PBs over time. Business users and analysts are not necessarily IT-skilled (e.g., marketing, support, sales), yet to be useful the resulting analysis must be both more sophisticated and capable of handling dynamic changes to incoming data formats.
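For contrast, here is a minimal sketch of the tactical, time-series style of analysis described above: bucket error events from a log stream by minute to spot a spike. The log format and fields are assumptions for illustration, not the actual formats consumed by Splunk or Log Insight.

```python
# Minimal sketch of tactical log analysis: count ERROR events per minute.
# The log format below is an illustrative assumption.
import re
from collections import Counter
from datetime import datetime

sample_log = """\
2014-03-01T10:00:12 INFO  volume snapshot complete
2014-03-01T10:01:03 ERROR iscsi target timeout
2014-03-01T10:01:44 ERROR iscsi target timeout
2014-03-01T10:02:10 WARN  cache utilization 92%
2014-03-01T10:02:31 ERROR replication lag exceeded threshold
"""

line_re = re.compile(r"^(\S+)\s+(\w+)\s+(.*)$")
errors_per_minute = Counter()

for line in sample_log.splitlines():
    match = line_re.match(line)
    if not match:
        continue
    timestamp, level, _message = match.groups()
    if level == "ERROR":
        minute = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S").strftime("%H:%M")
        errors_per_minute[minute] += 1

for minute, count in sorted(errors_per_minute.items()):
    print(f"{minute}  errors={count}")
```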
Click "Available Now" to read the full analyst opinion.
Hadoop is coming to enterprise IT in a big way. The competitive advantage that can be gained from analyzing big data is just too “big” to ignore. And the amount of data available to crunch is only growing, whether from new sensors, the capture of “data exhaust” from people, systems, and processes, or simply longer retention of available raw or low-level detail. It’s clear that enterprise IT practitioners everywhere will soon have to operate scale-out computing platforms in the production data center, and as the first and most mature solution on the scene, Hadoop is the likely target. The good news is that there is now a plethora of Hadoop infrastructure options to fit almost every practical big data need; the challenge for IT is to implement the best solution for each business client's needs.
Apache Hadoop as originally designed had a relatively narrow application: certain kinds of batch-mode parallel algorithms applied over unstructured (or semi-structured, depending on your definition) data. But thanks to its widely available open source nature, commodity architecture approach, and ability to extract new kinds of value from previously discarded or ignored data sets, the Hadoop ecosystem is rapidly evolving and expanding. With new capabilities like YARN, which opens the main execution platform to applications beyond batch MapReduce, the integration of structured data analysis, real-time streaming and query support, and the rollout of virtualized enterprise hosting options, Hadoop is quickly becoming a mainstream data processing platform.
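For readers new to the batch MapReduce model, a minimal Hadoop Streaming word count in Python illustrates the pattern. The invocation shown in the comments is illustrative only; jar and data paths vary by distribution.

```python
# Classic word count as a Hadoop Streaming job. Run it with something like:
#   hadoop jar hadoop-streaming.jar -file wordcount.py \
#       -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
#       -input <in> -output <out>
# (paths are illustrative; the same script also runs locally, see the last line).
import sys

def mapper():
    # Emit one tab-separated (word, 1) pair per word.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word.lower()}\t1")

def reducer():
    # Streaming delivers mapper output sorted by key, so a running total per
    # word is enough to produce the final counts.
    current_word, total = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print(f"{current_word}\t{total}")
            current_word, total = word, 0
        total += int(count)
    if current_word is not None:
        print(f"{current_word}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()

# Local test:  cat input.txt | python wordcount.py map | sort | python wordcount.py reduce
```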
There has been much talk that deriving top value from big data efforts requires rare and potentially expensive data scientists to drive them. On the other hand, an abundance of higher-level analytical tools and pre-packaged applications is emerging to support existing business analysts and users with familiar tools and interfaces. While completely new companies have been founded on the information and operational intelligence gained from exploiting big data, we expect wider adoption by existing organizations that augment traditional lines of business with new insight and revenue-enhancing opportunities. In addition, a Hadoop infrastructure serves as a great data capture and ETL base for extracting more structured data to feed downstream workflows, including traditional BI/DW solutions. No matter how you slice it, big data is becoming a common enterprise workload, and enterprise IT infrastructure folks will need to deploy, manage, and provide Hadoop services to their businesses.
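As a hedged illustration of that capture-and-ETL role, the sketch below flattens semi-structured JSON event records into a tabular CSV extract suitable for a downstream BI/DW load. The event fields and output schema are assumptions made for the example.

```python
# Illustrative ETL sketch: flatten semi-structured JSON events captured in a
# Hadoop landing zone into a flat CSV extract for a downstream warehouse load.
import csv, json, sys

raw_events = [
    '{"ts": "2014-03-01T10:01:03", "customer": "acme", "product": "array-x", "event": "fault", "code": 503}',
    '{"ts": "2014-03-01T11:15:40", "customer": "acme", "product": "array-x", "event": "heartbeat"}',
    '{"ts": "2014-03-01T12:02:09", "customer": "globex", "product": "nas-y", "event": "fault", "code": 120}',
]

writer = csv.writer(sys.stdout)
writer.writerow(["event_date", "customer", "product", "event", "code"])

for raw in raw_events:
    record = json.loads(raw)
    # Tolerate missing fields: incoming formats change over time.
    writer.writerow([
        record.get("ts", "")[:10],
        record.get("customer", "unknown"),
        record.get("product", "unknown"),
        record.get("event", "unknown"),
        record.get("code", ""),
    ])
```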
Storage has long been the tail on the proverbial dog in virtualized environments. The random I/O streams generated by multiple consolidated VMs create an “I/O blender” effect that overwhelms traditional array-based architectures and compromises application performance. As many customers have learned the hard way, doing storage right in the virtual infrastructure requires a fresh and innovative approach.
These sentiments were echoed in the findings of Taneja Group’s latest research study on storage acceleration and performance. More than half of the 280 buyers and practitioners we surveyed have an immediate need to accelerate one or more applications running in their virtual infrastructures. While three quarters of survey respondents are seriously considering deploying a storage acceleration solution, only a handful are willing to give up or compromise their existing storage capabilities in the process. Customers need better performance, but in most cases can neither afford nor stomach a wholesale upgrade or replacement of their storage infrastructure to achieve it.
Fortunately for performance-challenged mid-sized and enterprise customers, there is a better alternative. QLogic’s FabricCache QLE10000 is a server-side SAN caching solution designed to accelerate multi-server virtualized and clustered applications. Based on QLogic’s innovative Mt. Rainier technology, the QLE10000 is the industry’s first caching SAN adapter that enables the cache from individual servers to be pooled and shared across multiple physical servers. This breakthrough functionality is delivered as a combined Fibre Channel and caching host bus adapter (HBA), which plugs into existing HBA slots and is transparent to hypervisors, operating systems, and applications. QLogic’s FabricCache QLE10000 adapter cost-effectively boosts performance of critical applications while enabling customers to preserve their existing storage investments.
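The caching principle itself is easy to picture. The sketch below is a conceptual software model only; the QLE10000 does this in adapter hardware and firmware, transparently below the hypervisor. It simply shows why a read-through cache in front of a shared array absorbs reads of hot blocks and sends only misses to the SAN.

```python
# Conceptual model only, not the product's implementation: a read-through,
# LRU block cache in front of a backend array.
from collections import OrderedDict

class ReadThroughCache:
    def __init__(self, backend_read, capacity_blocks=1024):
        self.backend_read = backend_read   # function: block address -> data
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()        # LRU ordering: oldest first
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(lba)   # mark as recently used
            return self.blocks[lba]
        self.misses += 1
        data = self.backend_read(lba)      # only misses reach the array
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used block
        return data

# Toy workload: repeated reads of a small hot set mostly hit the local cache.
cache = ReadThroughCache(backend_read=lambda lba: b"x" * 4096, capacity_blocks=8)
for lba in [1, 2, 3, 1, 2, 3, 1, 2, 3, 9]:
    cache.read(lba)
print(f"hits={cache.hits} misses={cache.misses}")   # hits=6 misses=4
```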
There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressures to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may come from a current titan that many see primarily as an application/database vendor: Oracle.
When Oracle bought Sun in 2009, one of the areas of expertise brought over was in ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought that the acquisition would essentially fold any key IP into narrow solutions that would only effectively support Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for “database” and “database backup” specific storage.
But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmarks that top comparable solutions. What happens when support for mixed workloads is also highly competitive? With the latest ZS3 models, the Oracle ZFS Storage Appliance becomes a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.
In this report we take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and on broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll also explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities, such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and indicate the TCO factors driving its market-leading price/performance.
Cloud computing does some things very well. It delivers applications and upgrades. It runs analysis on cloud-based big data. It connects distributed groups sharing communications and files. It provides a great environment for developing web applications and running test/dev processes.
But public cloud storage is a different story. The cloud does deliver long-term, cost-effective storage for inactive backup and archives. Once the backup and archive data streams are scheduled and running, they can use relatively low bandwidth as long as the data is deduplicated on-site before transport. (And as long as it does not have to be rehydrated pre-upload, which is another story.) This alone helps to save on-premises storage capacity and can replace off-site tape vaulting.
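A minimal sketch shows why on-site deduplication keeps that transport bandwidth low: the backup stream is split into chunks, each chunk is fingerprinted, and only chunks the cloud target has not already stored are sent. The chunk size and the in-memory "cloud index" are illustrative assumptions.

```python
# Illustrative dedupe-before-transport sketch: only chunks the cloud target has
# not already seen cross the wire.
import hashlib

CHUNK_SIZE = 64 * 1024
already_in_cloud = set()          # fingerprints the cloud target already stores

def backup(data: bytes) -> int:
    """Return the number of bytes actually transferred for this backup pass."""
    sent = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        fingerprint = hashlib.sha256(chunk).hexdigest()
        if fingerprint not in already_in_cloud:
            already_in_cloud.add(fingerprint)
            sent += len(chunk)     # only unique chunks are uploaded
    return sent

full = b"A" * (10 * CHUNK_SIZE)               # first full backup
print("first pass sent:", backup(full))       # identical chunks dedupe to 64 KB
changed = full[:-CHUNK_SIZE] + b"B" * CHUNK_SIZE
print("next pass sent:", backup(changed))     # only the changed chunk, 64 KB
```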
But cloud storage users want more. They want the cost and agility advantages of the public cloud without incurring the huge expense of building one. They want to keep using the public cloud for cost-effective backup and archive, but they also want to use it for more active – i.e. primary – data. This is especially true for workloads with rapidly growing data sets that quickly age, such as collaboration and file shares. Some of this data needs to reside locally, but the majority can be moved, or tiered, to public cloud storage.
What does the cloud need to make this enterprise wish list work? Above all, it needs to make public cloud storage an integral part of the on-premises primary storage architecture. This requires intelligent and automated storage tiering, high performance for baseline uploads and continual snapshots, no geographical lock-in, and a central storage management console that integrates cloud and on-premises storage.
Hybrid cloud storage, or HCS, meets this challenge. HCS turns the public cloud into a true active storage tier for less active production data that is not yet ready to be put out to backup pasture. Hybrid cloud storage integrates on-premises storage with public cloud storage services: not as another backup target but as integrated storage infrastructure. The storage system uses both the on-premises array and scalable cloud storage resources for primary data, extending that data and its protection to a cost-effective cloud storage tier.
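Here is a hedged sketch of the kind of age-based tiering decision such a system automates: data untouched for a cutoff period becomes a candidate to move to the cloud tier. The 60-day cutoff, directory walk, and share path are assumptions for illustration; real products tier at block and snapshot granularity with their own heuristics.

```python
# Illustrative tiering-policy sketch: list files idle long enough to be moved
# from the on-premises tier to the cloud tier. Cutoff and path are assumptions.
import time
from pathlib import Path

TIER_AFTER_DAYS = 60  # assumption for the example

def cloud_tier_candidates(root: str):
    """Yield (path, days_idle) for files idle longer than the cutoff."""
    now = time.time()
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        days_idle = (now - path.stat().st_atime) / 86400
        if days_idle > TIER_AFTER_DAYS:
            yield path, int(days_idle)

if __name__ == "__main__":
    for path, idle in cloud_tier_candidates("/srv/fileshare"):  # example share
        print(f"tier to cloud: {path} (idle {idle} days)")
```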
Microsoft’s innovative and broad set of technology enables a true, integrated hybrid cloud storage solution for business and government organizations, not just a heterogeneous combination of private cloud and public cloud storage offerings. Composed of StorSimple cloud-integrated storage and the Windows Azure Storage service, HCS from Microsoft serves the demanding enterprise storage environment well, enabling customers to realize huge data management efficiencies in their Microsoft applications and Windows and VMware environments.
This paper discusses how the Microsoft solution for hybrid cloud storage, consisting of Windows Azure and StorSimple, differs from traditional storage, the best practices for leveraging it, and real-world results from multiple customer deployments.