Includes iSCSI, Fibre Channel (FC), InfiniBand (IB), SMI-S, RDMA over IP, FCoE, CEE, SAS, SCSI, NPIV, and SSD.
All technologies relating to storage and servers are covered in this section. Taneja Group analysts have deep technology and business backgrounds, and several have participated in the development of these technologies. We take pride in explaining complex technologies simply enough for IT, the press, and the industry at large to understand.
UPDATED FOR 2014: Today’s storage industry is as stubbornly media-centric as it has always been: SAN, NAS, DAS; disk, cloud, tape. This centricity forces IT to deal with storage infrastructure on media-centric terms. But the storage infrastructure should really serve data to customers, not media; it’s the data that yields business value, while the media should be an internal IT architectural choice.
Storage-media-focused solutions support the business only indirectly, by providing optimized storage infrastructure for data. Intelligent data services, on the other hand, provide direct business value by optimizing data utility, availability, and management. The shift from traditional thinking here is really about seeking first to provide logically ideal data storage for the people who own and use the data, while freeing up underlying storage infrastructure designs to be optimized for efficiency as desired. Ideal data storage would be global in access and scalability, secure and resilient, and inherently supportive of data-driven management and applications.
Done well, this data-centric approach would yield significant competitive advantage by leveraging an enterprise’s valuable intellectual property: its vast and growing amounts of unstructured data. If this can be done by building on the company’s existing data storage and best practices, the business can quickly increase profitability, achieve faster time-to-market, and gain tremendous agility for innovation and competitiveness.
Tarmin, with its GridBank Data Management Platform, is a leading proponent of the data-centric approach. It is firmly focused on managing data for global accessibility, protection and strategic value. In this product profile, we’ll explore how a data-centric approach drives business value. We’ll then examine how GridBank was architected expressly around the concept that data storage should be a means for extracting business value from data, not a dead-end data dump.
Storage performance has long been the bane of the enterprise infrastructure. Fortunately, in the past couple of years, solid-state technologies have allowed newcomers as well as established storage vendors to shape clever, cost-effective, and highly efficient storage solutions that unlock greater storage performance. In our opinion, the most innovative of these solutions are the ones that require no real alteration to the storage infrastructure, nor any change to data management and protection practices.
This is entirely possible with server-side caching solutions today. Server-side caching solutions typically use either PCIe solid-state NAND flash or SAS/SATA SSDs installed in the server, alongside a hardware or software IO handler component that mirrors commonly utilized data blocks onto the local high-speed solid-state storage. The IO handler then redirects server requests for those data blocks to the local copies, which are served up with lower latency (microseconds instead of milliseconds) and greater bandwidth than the original backend storage. Since data is simply cached, rather than moved, the solution is transparent to the infrastructure. Data remains consolidated on the same enterprise infrastructure, and all of the original data management practices – such as snapshots and backup – still work. Moreover, server-side caches can actually offload IO from the backend storage system, allowing a single storage system to effectively serve many more clients. Clearly there’s tremendous potential value in a solution that can be transparently inserted into the infrastructure and address storage performance problems.
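The mechanics described above – mirroring hot blocks to fast local media while the backend array remains the system of record – can be illustrated with a minimal sketch. This is not any vendor's actual implementation; the class name, block-store dictionaries, and LRU policy are all illustrative assumptions:

```python
from collections import OrderedDict

class ServerSideCache:
    """Illustrative read cache: hot blocks are copied to fast local
    storage (modeled here as a dict) while the backend array remains
    the system of record, so snapshots and backups are unaffected."""

    def __init__(self, backend, capacity_blocks=4):
        self.backend = backend            # authoritative block store (the array)
        self.capacity = capacity_blocks
        self.local = OrderedDict()        # models the PCIe flash / SSD cache (LRU)

    def read(self, lba):
        if lba in self.local:             # cache hit: redirect to the local copy
            self.local.move_to_end(lba)   # refresh LRU position
            return self.local[lba]
        data = self.backend[lba]          # cache miss: fetch from the array
        self._install(lba, data)          # mirror the block locally for next time
        return data

    def write(self, lba, data):
        self.backend[lba] = data          # write-through keeps the array current
        if lba in self.local:
            self._install(lba, data)      # keep any cached copy coherent

    def _install(self, lba, data):
        self.local[lba] = data
        self.local.move_to_end(lba)
        if len(self.local) > self.capacity:
            self.local.popitem(last=False)  # evict the least-recently-used block

# Hypothetical usage: the second read of block 3 is served locally,
# never touching the backend array.
backend = {i: f"block-{i}" for i in range(10)}
cache = ServerSideCache(backend, capacity_blocks=4)
first = cache.read(3)    # miss: fetched from backend, then cached
second = cache.read(3)   # hit: served from the local cache copy
```

Because writes go through to the backend, the array always holds current data, which is why existing snapshot and backup practices keep working unchanged.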
Solid-state storage technology – typically storage devices based on NAND flash – has opened up new horizons for storage systems over the past couple of years. The storage market has seemingly been flooded by new products incorporating solid-state storage somewhere in their product lines while promising breakthrough levels of storage performance. But vendors have found there are challenges when it comes to putting solid-state technology into a storage system, and many storage products entering the market in turn have a few wrinkles beneath the surface.
AMI has recently introduced yet another SAN storage appliance. The StorTrends 3500i integrates a comprehensive solid-state storage layer into the StorTrends iTX architecture. The 3500i brings with it the ability to use solid-state drives (SSDs) in multiple roles – as a full flash array or as a hybrid storage array. In the hybrid configuration, the SSDs can be utilized as cache, as a tier, or as a combination of the two. The StorTrends 3500i combines this SSD caching and tiering with a field-proven storage architecture validated by more than 1,100 global installs. The net result is a high-performance and cost-effective storage array. In fact, the 3500i looks poised to be one of the most comprehensively equipped storage system options for the mid-range enterprise storage customer looking for solid-state acceleration for their workloads.
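The distinction between SSD caching and SSD tiering matters here: a cache holds a copy of a hot block (the original stays on disk), while a tier moves the block itself onto flash. A hedged sketch of the tiering half, with an invented promotion threshold and class names that are purely illustrative of the general technique, not of StorTrends iTX internals:

```python
# Illustrative hot-block tiering: unlike caching (which *copies* hot
# blocks to SSD), tiering *moves* them, so the SSD capacity adds to
# the usable pool. Threshold and structures are assumptions.
HOT_THRESHOLD = 3  # promote after this many reads (illustrative value)

class HybridArray:
    def __init__(self):
        self.hdd = {}    # capacity tier (spinning disk)
        self.ssd = {}    # performance tier (flash)
        self.hits = {}   # per-block access counts

    def write(self, lba, data):
        self.hdd[lba] = data                   # new data lands on the HDD tier

    def read(self, lba):
        if lba in self.ssd:                    # already promoted: fast path
            return self.ssd[lba]
        self.hits[lba] = self.hits.get(lba, 0) + 1
        data = self.hdd[lba]
        if self.hits[lba] >= HOT_THRESHOLD:    # hot enough: move block to SSD
            self.ssd[lba] = self.hdd.pop(lba)  # move, not copy
        return data

# Hypothetical usage: after three reads the block lives on flash only.
array = HybridArray()
array.write(7, "payload")
for _ in range(3):
    data = array.read(7)
```

A hybrid array that supports both modes, as the 3500i is described as doing, can dedicate some flash to caching recently read data and the rest to permanently housing the hottest blocks.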
With this most recent storage system launch, StorTrends once again caught our attention, and we approached AMI with the idea of a hands-on lab exercise that we call a Taneja Group Technology Validation. Our goal with this testing? To see whether StorTrends truly preserved all of its storage functionality with the integration of SSDs into the 3500i storage system, and whether the 3500i was up to the task of harnessing the blazing-fast storage performance of SSD.
In this new era of big data, sensors can be included in almost everything made. This “Internet of Things” generates mountains of new data with exciting potential to be turned into invaluable information. As a vendor, if you make a product or solution that, when deployed by your customers, produces data about its ongoing status, condition, activity, usage, location, or practically any other useful information, you can now potentially derive deep intelligence that can be used to improve your products and services, better satisfy your customers, improve your margins, and grow market share.
For example, such information about a given customer’s usage of your product and its current operating condition, combined with knowledge gleaned from all of your customers’ experiences, enables you to be predictive about possible issues and proactive about addressing them. Not only do you come to know more about a customer’s implementation of your solution than the customer does, but you can now make decisions about new features and capabilities based on hard data.
The key to gaining value from this “Internet of Things” is the ability to make sense of the kind of big data it generates. One set of current solutions addresses data about internal IT operations, including “logfile” analysis tools like Splunk and VMware Log Insight. These are designed for technical users focused on recent time-series and event data to improve tactical problem time-to-resolution. However, the big data derived from customer implementations is generally multi-structured, spread across streams of whole “bundles” of complexly related files that can easily grow to petabytes over time. The business users and analysts who need it are not necessarily IT-skilled (e.g. marketing, support, sales), so to be useful the resulting analysis must at once be more sophisticated and capable of handling dynamic changes to incoming data formats.
There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressure to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan seen by many primarily as an application/database vendor: Oracle.
When Oracle bought Sun in 2009, one of the areas of expertise brought over was in ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought that the acquisition would essentially fold any key IP into narrow solutions that would only effectively support Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for “database” and “database backup” specific storage.
But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmark results that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest version of Oracle ZFS Storage Appliances, the new ZS3 models, becomes a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.
In this report we will take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and on its broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll also explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities like Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and provide an indication of the TCO factors driving its market-leading price/performance.
In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.
When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.
At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.
Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, but one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to server, virtual server, storage or application layers. Customers can invest in ViSX before their performance pain points get too big, or before they've gone down the road of breaking their infrastructure with a tactical solution.