Items Tagged: SAN
Compellent Remote Instant Replay
More than ever, businesses are facing massive challenges to both protect and move their data across multiple sites. Most critically, this challenge presents itself in the event of an infrastructure failure or an entire site outage. However, with the addition of multiple data center sites, expanding WAN investments, and increasingly heterogeneous and distributed computing infrastructures, moving production data between geographies has also become a hot-button issue.
We’ve seen a rapid evolution in SAN fabric switching technologies in the past 18 months. The entire switching category is exploding with more intelligent, flexible, and scalable fabric architectures. Enterprises are very keen to get their hands on real-time fabric management tools, virtual abstraction capabilities, and multi-protocol support, to name just a few advances. Why? Because these technologies are central to establishing significant SAN ROI.
Four Questions to Consider
Small to midsize businesses and Remote Offices/Branch Offices (ROBOs) of large enterprises (for purposes of this paper referred to collectively as SMEs) are facing a mounting storage crisis. They are beset by crushing data growth, shrinking backup windows, and the need to implement disaster recovery and regulatory compliance. They simply cannot sustain their DAS environments and meet these new requirements. The only proven solution to this storage management crisis is to consolidate on network storage.
TECHNICAL WEBINAR SERIES
The New Best Practice of SAN TAPs
"Reduce Troubleshooting From Days To Minutes"
Is your Fibre Channel SAN infrastructure TAP'd? With your most mission-critical applications running on your FC SAN and the growing use of virtualization, there has never been a greater need for comprehensive, real-time insight into your SAN infrastructure. Some vendors attempt to provide limited features such as port mirroring, but these practices are inadequate: they miss lower-level interactions and cannot provide full visibility into sessions when traffic exceeds the mirror port's bandwidth.
With all of the recent advancements in the datacenter, designing a cabling infrastructure that provides visibility into the SAN has remained fairly stagnant...until now.
Fiber-optic network TAPs (Traffic Access Points) are a revolutionary breakthrough now being broadly adopted by enterprise datacenters across the globe, as end users come to depend on this inexpensive, non-intrusive, and effective way of gaining full performance and utilization visibility into their SAN infrastructures.
Join David Bartoletti, Senior Analyst & Consultant, Taneja Group, Andrew Varley, Business Director of Splice, and Alex D'Anna, Director of Solutions Consulting, Virtual Instruments, as they explain how such simple devices have enabled complex IT environments to reduce their SAN troubleshooting times from days to minutes.
When: Thursday, February 17, 2011
Time: 9:00 am PST; 12:00 pm EST; 5:00 pm GMT
In this webinar, we'll examine the keys to SAN visibility and how to:
Implement the right physical access to fibre-based networks
Reduce troubleshooting times from days to minutes
Obtain key performance metrics to measure SLAs
NetApp SAN Efficiency
Storage efficiency is often bandied about in the realms of archive and deduplication, but storage efficiency should be front and center when it comes to the golden tier of enterprise storage – primary storage. That is, after all, the most expensive storage resource in the data center. The problem is, few vendors have been able to do very much about storage efficiency without compromising the IOPS, raw throughput, low latency, and controller processing power that are the most precious components of enterprise storage. NetApp claims to think differently here, and because of that, they claim the storage architecture they’ve built – one that delivers unified, multiprotocol, single-system storage across the entire family of NetApp FAS systems – can deliver efficiency beyond the competition. In this Technology Validation, we took a FAS3270 infrastructure through a series of hands-on tests that made it clear that efficiency runs deep in the NetApp storage portfolio.
In the trenches of storage “practitioning,” we often discuss concepts like value or efficiency, but fail to put real meat on the bones around what those terms mean. Too often those terms revolve around a loose collection of cobbled-together features or tools that might make performing a given task particularly easy, but might simultaneously mean a system is inefficient in 10 other ways. Yet these nebulous terms like efficiency and value keep gaining steam.
Hewlett-Packard (HP) Co. updated its storage portfolio today with a new EVA midrange SAN array, a set of pre-packaged storage and compute bundles tuned for virtual environments, and a new Windows-based NAS. The company also laid out plans for a building-block approach to future data storage products that would involve more bundling of its storage, server and networking products.
Storage for the Integrated Virtual Infrastructure: HP P4000 SAN Solutions
This paper examines the features and capabilities of the HP P4000 product family, and shows how these SAN solutions help businesses to overcome the storage-related growing pains they typically encounter as they deploy and scale out a virtual infrastructure.
Kaminario Announces the Industry's First Enterprise-Grade, All Solid-State SAN Storage with Media...
New K2 hybrid delivers the fastest, most reliable and cost-effective application performance
The case for Intelligent Storage (Dell)
In the past decade, a volatile business climate and a dynamic technology landscape have combined to raise the pressures on the enterprise datacenter, and especially on the storage infrastructure that underlies it. The need to adapt to such constantly-shifting demands and technology developments has sorely tested the limits of existing networked storage solutions. The virtualization mega-trend has dramatically changed the way information is sized, controlled, and protected. But traditional networked storage solutions are too often rigid, complex, and inefficient. Taneja Group has identified a way forward. We have collected the essential elements of the storage solutions needed for today’s new IT realities under the term “intelligent storage.” In this profile, we define what we mean by storage intelligence, whether that storage is file or block, and whether its architecture is SAN, NAS, or unified. We then examine how Dell is delivering this intelligence with its EqualLogic storage line. Wherever you are in your datacenter evolution, we think it’s time to examine whether your storage has the intelligence to carry you to your end goals.
Server virtualization has made some promises that traditional SAN, NAS and unified solutions haven't been able to keep. It has promised that we can have it both ways: consolidation and simplicity, flexibility plus efficiency, mobility without downtime. There's a lot of intelligence in your hypervisor to deliver these benefits. But how intelligent is your storage?
Dell Compellent: Fluid Storage for a Virtualized World
The enterprise datacenter was a very different place just a few years ago. Over the last decade, several macro trends have converged: rapid server consolidation enabled by virtualization, dramatic data proliferation and the rise of “big data,” solid-state drive technology advances, and an increasingly mobile and demanding workforce. In short, IT continues to consolidate, while business becomes more distributed. This tension drives the search for greater efficiency now at the heart of every IT decision. And nowhere is this pressure felt more acutely than in the storage layer. Virtualized and consolidated workloads create new types of storage I/O contention, which are costly to troubleshoot and repair. Storage costs continue to rise because capacity planning is harder in today’s dynamic business environment. Over time, performance limitations, wasted capacity, and complex operations eat into the bottom line and increase lifetime storage TCO. These realities drive the need for more intelligence in the storage layer. In this technology brief, we explore the ways in which Dell Compellent’s Storage Center is delivering such intelligence today.
Client Virtualization the HP Way
What’s driving customers to bigger and bigger desktop virtualization initiatives? What are the challenges they face, and what can be done to resolve hurdles, and speed customers on their way to better client virtualization infrastructures that actually deliver the benefits that attract them to desktop virtualization in the first place?
No doubt, desktop virtualization, or “client virtualization” in HP parlance, roared off to a thunderous start, and the industry has already seen many big initiatives and offerings. Moving into 2012, client virtualization increasingly looks to have the wherewithal to go entirely mainstream, and move into a much broader set of customers than the initial high-profile early adopters who had unusual business needs or were turning up unique hosted services. Yet client virtualization remains a complex undertaking. HP thinks they have a clear understanding of that complexity, and a solution approach built to eradicate that complexity from the equation. In turn, they aim to make the promises of client virtualization more compelling than ever, and allow customers to be certain of realizing those promises. In this solution profile, Taneja Group will take a look at the promises driving client virtualization, the challenges that too often pull those promises apart, and then look at how HP is driving those challenges out of the equation with a newly announced product – VirtualSystem CV2.
Dell EqualLogic FS7500: Unified Storage Simplifies File Sharing And Accelerates Virtualization
With the introduction of the FS7500 NAS appliance for the EqualLogic PS Series, Dell customers now have a unified storage option to further reduce management overhead and improve efficiency. All too often, companies have been forced to deploy different storage platforms for different needs: NAS for file-based applications and user file shares and SAN for block-based applications and high-performance virtualized workloads. The FS7500 changes the game. Your unified storage solution should let you easily scale your file shares to handle today’s tremendous growth in unstructured data. It should also accelerate and simplify your virtualization efforts by giving you the freedom to choose the best storage protocol for each virtual workload based on your unique application requirements, skill sets, and existing storage investments. In this technology brief, we explore how Dell’s customers can benefit from the addition of scale-out NAS to the leading scale-out iSCSI SAN storage family.
Doubling VM Density with HP 3PAR Storage
This paper examines HP 3PAR Utility Storage and describes how the solution overcomes typical virtual infrastructure storage issues, enabling customers to increase VM density by at least two-fold as a result.
This year marks the 10th anniversary of iSCSI storage-area networks (SANs). After early doubts that iSCSI could be a viable contender to Fibre Channel (FC), the Ethernet storage protocol has established a loyal base among small- to medium-sized businesses (SMBs).
Dell EqualLogic FS7500 - Unifying the storage infrastructure with integrated file storage
The mantra for many storage vendors over the past few years has consisted of “doing more with less” and optimizing the total cost of storage inside the data center walls. While the market at large has shifted its focus to this topic more recently, this has been a key Dell EqualLogic differentiator since the first day they released their scale-out iSCSI SAN. EqualLogic has long come packaged with the ease of use, sophisticated features, and comprehensive management tools that other vendors have been much later to roll out. Moreover, EqualLogic has often led the field in fluid adaptability – what Dell now calls their Fluid Data Architecture – that allows customers to easily and non-disruptively add more storage performance and capacity at any time, and then automatically load balance storage demands across all resources.
More recently, Dell has strategically broadened the EqualLogic iSCSI SAN portfolio by integrating an enterprise-class Network-Attached Storage (NAS) appliance into the EqualLogic architecture to enable a scale-out unified SAN solution. Dell has labeled the first generation of this NAS appliance the EqualLogic FS7500.
Download this technology validation report free from Dell: http://dell.to/NoFizY
NORCROSS, GEORGIA - American Megatrends Inc. (AMI), developer and owner of the StorTrends line of SAN & NAS data storage solutions, is joining analysts from the Taneja Group on Tuesday, July 31, 2012 to present findings on the “state of tiered storage” in the U.S. market.
VI - Top Six Physical Layer Best Practices: Maintaining Fiber Optics for the High Speed Data Center
Whether it’s handling more data, accelerating mission-critical applications, or ultimately delivering superior customer satisfaction, businesses are requiring IT to go faster, farther, and at ever-larger scales. In response, vendors keep evolving newer generations of higher-performance technology. It’s an IT arms race full of uncertainty, but one thing is inevitable – the interconnections that tie it all together, the core data center networks, will be driven faster and faster.
Unfortunately, many data center owners are under the impression that their current “certified” fiber cabling plant is inherently future-proofed and will readily handle tomorrow’s networking speeds. This is especially true for the high-speed critical SANs at the heart of the data center. For example, most of today’s fiber plants supporting protocols like 2Gb or 4Gb Fibre Channel (FC) simply do not meet the required physical layer specifications to support upgrades to 8Gb or 16Gb FC. And faster speeds like 20Gb FC are on the horizon.
It is not just the plant design that’s a looming problem. Fiber cabling has always deserved special handling, but it is often robust enough to withstand a certain amount of dirt and mistreatment at today’s speeds. While lack of good cable hygiene and maintenance can and does cause significant problems today, at higher networking speeds the tolerance for dust, bends, and other optical distractions is much smaller. Careless practices need to evolve to a whole new level of best practice now, or future network upgrades are doomed.
In this paper we’ll consider the tighter requirements of higher-speed protocols and examine the critical reasons why standard fiber cabling designs may not be “up to speed.” We’ll introduce some redesign considerations and also look at how an improperly maintained plant can easily degrade or defeat higher-speed network protocols, drawing on some real-world experiences from experienced field experts in SAN troubleshooting at Virtual Instruments. Along the way, we recommend the top six physical layer best practices we see as necessary to designing and maintaining fiber to handle whatever comes roaring down the technology highway.
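The core issue the paper describes – a plant that passes at 4Gb FC but fails at 8Gb or 16Gb – comes down to link-loss budgets shrinking as speeds rise. The sketch below illustrates the arithmetic with a generic worst-case insertion-loss model; all the dB figures are illustrative assumptions, not values from any FC specification or cable vendor, so treat them as a back-of-the-envelope example only.

```python
def link_loss_db(length_m, connectors, splices,
                 fiber_db_per_km=3.5, connector_db=0.75, splice_db=0.3):
    """Worst-case insertion loss for a multimode fiber link.

    The default per-component losses are generic illustrative
    figures (assumptions), not values from the FC-PI specs.
    """
    return (length_m / 1000.0) * fiber_db_per_km \
        + connectors * connector_db + splices * splice_db

# A hypothetical plant: 100 m of multimode fiber with four connector pairs.
loss = link_loss_db(100, connectors=4, splices=0)   # 3.35 dB worst case

# Budgets tighten at higher speeds (example dB limits, not spec values):
# the same plant that clears 4Gb FC blows the 8Gb and 16Gb budgets.
budgets = {"4GFC": 4.8, "8GFC": 2.0, "16GFC": 1.9}
for speed, budget in sorted(budgets.items()):
    status = "OK" if loss <= budget else "OVER BUDGET"
    print(f"{speed}: {loss:.2f} dB vs {budget} dB budget -> {status}")
```

The point of the exercise: connector count, not raw fiber length, usually dominates the budget, which is why redesigning patch-panel topology matters as much as cable quality when planning an upgrade.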
Optimization For Real-Time Storage: IBM’s SAN Volume Controller
Storage virtualization has come a long way in the past seven years. After a false start in 2001, fraught with inflated expectations and product deficiencies, the category fell into infamy. Several vendors disappeared, many others repositioned themselves to focus on the Small and Medium Business (SMB) space, and yet others reinvented themselves with completely different products. Only one company stayed true to the promise of virtualization from the very beginning: IBM.
IBM’s SAN Volume Controller (SVC) product launched in July, 2003. The company took SVC and nurtured the market, in spite of the fact that many in the market didn’t even want to say the V-word anymore. IBM persisted, fundamentally because the customer could see the potential of storage virtualization and could count on IBM to support them through the early learning cycles.
The payoff for IBM is huge. IBM has now shipped more than 12,000 SVC engines operating behind more than 5,000 SVC systems. SVC is a mature, enterprise-proven product that has delivered proven ROI to its customers. IBM has shown that SVC and its in-band architecture can scale to handle the largest, most stringent enterprise SAN environments. By doing so, IBM has led the market where others have only slowly followed.
The value of storage virtualization is unquestioned. It helps rein in storage capital expenses (CAPEX) and operational expenses (OPEX) that are otherwise running amok. It provides a forum to perform storage management in a consistent fashion even while the underlying physical storage is heterogeneous and possesses its own idiosyncrasies. In our view, it is also a key building block for the next-generation data center that will focus on delivering a variety of services. IBM knew that and held steady. We believe the payoff until now is a shadow of what is to come, as IBM ties storage virtualization to other efforts, such as server blades and server virtualization.
More recently, IBM has taken one more not inconsequential step in defining what value is when it comes to SVC and virtualization. That step is the introduction of Real-time Compression, integrated into the high performance controllers of SVC as an in-line technology that can be used against production data. In this Product Brief, we’ll take a look at SVC and its historical differentiators, and what compression for real-time, primary storage means for SVC customers – the value is no less than tremendous.
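To make the in-line idea concrete: data is compressed on the write path before it ever reaches disk, and decompressed transparently on reads, so the capacity savings apply to live production volumes rather than archives. The sketch below is a generic illustration using zlib; it is emphatically not IBM's Real-time Compression implementation (which is a proprietary technology), just the basic shape of inline block compression.

```python
import zlib

def write_compressed(block: bytes, level: int = 6) -> bytes:
    """Compress a data block on the write path.

    Generic sketch of inline compression -- not IBM's
    Real-time Compression algorithm.
    """
    return zlib.compress(block, level)

def read_decompressed(stored: bytes) -> bytes:
    """Transparently expand the block on the read path."""
    return zlib.decompress(stored)

# Highly redundant data (common in databases and VM images) shrinks a lot.
block = b"customer_record;" * 4096          # 64 KiB logical block
stored = write_compressed(block)
print(f"{len(block)} B -> {len(stored)} B "
      f"(ratio {len(block) / len(stored):.1f}:1)")

assert read_decompressed(stored) == block   # lossless round trip
```

The engineering challenge the brief alludes to is doing this without hurting latency or IOPS, which is why integrating compression into the high-performance SVC controllers, rather than as a downstream post-process, is the notable part of the announcement.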