
Profiles/Reports

Report

HP ConvergedSystem: Altering Business Efficiency and Agility with Integrated Systems

The era of IT infrastructure convergence is upon us. Over the past few years, integrated computing systems – the integration of compute, networking, and storage – have burst onto the scene and have been readily adopted by large enterprise users. The success of these systems has been built by taking well-known IT workloads and combining them with purpose-built integrated computing systems optimized for each particular workload. Example workloads being integrated into such systems today include cloud, big data, virtualization, database, and VDI, or even combinations of two or more.

In the past, putting these workload solutions together meant having or hiring technology experts with knowledge across multiple domains. Integration and validation could take months of on-premises work. Fortunately, technology vendors have matured along with their integrated computing systems approach, and now practically every vendor touts one integrated system or another focused on solving a particular workload problem. The promised business benefits delivered by these new systems fall into these key areas:

· Implementation efficiency that accelerates the time to realizing value from integrated systems

· Operational efficiency through optimized workload density and an ideally right-sized set of infrastructure

· Management efficiency enabled by an integrated management umbrella that ties all of the components of a solution together

· Scale and agility efficiency unlocked through a repeatable building-block approach

· Support efficiency that comes with deeply integrated, pre-configured technologies, overarching support tools, and a single-vendor support approach for the entire set of infrastructure

In late 2013, HP introduced a new portfolio offering called HP ConvergedSystem – a family of systems that includes a specifically designed virtualization offering. ConvergedSystem marked a new offering designed to tackle key customer pain points around infrastructure and software solution deployment, while leveraging HP's expertise in large-scale build-and-integration processes to deliver an entirely new level of agility in speed of ordering and implementation. In this profile, we'll examine how integrated computing systems mark a serious departure from the inefficiencies of traditional order-build-deploy customer processes, and also evaluate HP's latest advancement of these types of systems.

Publish date: 07/08/14
Report

Hybrid Cloud Storage: Extending the Storage Infrastructure for Major Cost Savings

Cloud computing has several clear business models. SaaS delivers software, upgrades, and maintenance as a service, saving customers money by eliminating costs of ownership that the cloud provider now bears. Several technology factors contribute to SaaS's increasing popularity, including protocol standardization, the ubiquity of web browsing, access to broadband networks, and rapid application development. It's not perfect – people have legitimate concerns about data security, governance, vendor lock-in, and data portability – but based on its success, the advantages of SaaS seem to be outweighing its challenges. And the market segment is growing fast.

Another cloud computing model is IaaS, where the customer outsources compute infrastructure to a cloud provider. This model is gaining traction, especially for application development and testing. Application developers can take the capital they would otherwise spend on computing gear and target it at specific development projects underway in Internet data centers. The problem with IaaS is that cloud software development doesn't necessarily translate well into on-premises deployments, and many developers prefer to develop SaaS instead.

Storage in the cloud is yet another business model with different dynamics. While SaaS and IaaS are strongly oriented toward cloud deployments, there are strong pressures driving cloud storage toward on-premises deployments. While storing data in the cloud for SaaS and IaaS computing is certainly important, the vast majority of data still resides on-premises, where its growth is largely unchecked. If cloud storage is going to succeed, it needs to become relevant to the people managing data in corporate on-premises data centers.

Publish date: 11/27/13
Report

QLogic FabricCache: Cost Effective And Non-Disruptively Accelerating Customers’ Key Virtualized…

Storage has long been the tail on the proverbial dog in virtualized environments. The random I/O streams generated by multiple consolidated VMs create an "I/O blender" effect, which overwhelms traditional array-based architectures and compromises application performance. As many customers have learned the hard way, doing storage right in the virtual infrastructure requires a fresh and innovative approach.
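
To make the "I/O blender" concrete, here is a minimal Python sketch of our own (not from the report): several VMs each read their virtual disks sequentially, but once the hypervisor interleaves those streams, the array sees a request pattern that is effectively random.

```python
import random

def sequential_stream(start_lba, length, io_size=8):
    """One VM reading its virtual disk sequentially in 8-block I/Os."""
    return [start_lba + i * io_size for i in range(length)]

# Four VMs, each perfectly sequential within its own region of the datastore.
streams = [sequential_stream(start, 6) for start in (0, 10_000, 20_000, 30_000)]

# The hypervisor interleaves the streams; the array sees one merged queue.
merged = [s.pop(0) for _ in range(6) for s in random.sample(streams, len(streams))]

# Adjacent requests now jump across disk regions instead of advancing by 8 blocks.
gaps = [abs(b - a) for a, b in zip(merged, merged[1:])]
print(f"sequential-looking requests at the array: {sum(g == 8 for g in gaps)} of {len(gaps)}")
```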

These sentiments were echoed in the findings of Taneja Group’s latest research study on storage acceleration and performance. More than half of the 280 buyers and practitioners we surveyed have an immediate need to accelerate one or more applications running in their virtual infrastructures. While three quarters of survey respondents are seriously considering deploying a storage acceleration solution, only a handful are willing to give up or compromise their existing storage capabilities in the process. Customers need better performance, but in most cases can neither afford nor stomach a wholesale upgrade or replacement of their storage infrastructure to achieve it.

Fortunately for performance-challenged mid-sized and enterprise customers, there is a better alternative. QLogic's FabricCache QLE10000 is a server-side SAN caching solution designed to accelerate multi-server virtualized and clustered applications. Based on QLogic's innovative Mt. Rainier technology, the QLE10000 is the industry's first caching SAN adapter offering that enables the cache from individual servers to be pooled and shared across multiple physical servers. This breakthrough functionality is delivered in the form of a combined Fibre Channel and caching host bus adapter (HBA), which plugs into existing HBA slots and is transparent to hypervisors, operating systems, and applications. QLogic's FabricCache QLE10000 adapter cost-effectively boosts the performance of critical applications while enabling customers to preserve their existing storage investments.
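
The pooled-cache behavior described above can be pictured along these lines. This is an illustrative Python sketch of a generic shared read-cache path (local cache, then a peer server's cache, then the backing SAN volume); the class and method names are our own assumptions, not QLogic's implementation.

```python
class CachingNode:
    """One server with a local flash read cache in front of a shared SAN LUN."""
    def __init__(self, name, san):
        self.name, self.san, self.cache, self.peers = name, san, {}, []

    def read(self, lba):
        if lba in self.cache:                      # 1. local flash cache hit
            return self.cache[lba], f"{self.name}: local hit"
        for peer in self.peers:                    # 2. cache pooled across servers
            if lba in peer.cache:
                self.cache[lba] = peer.cache[lba]
                return self.cache[lba], f"{self.name}: hit on {peer.name}"
        data = self.san[lba]                       # 3. fall back to the array
        self.cache[lba] = data
        return data, f"{self.name}: SAN read"

# A shared LUN plus two servers whose caches are pooled.
san = {lba: f"block-{lba}" for lba in range(100)}
a, b = CachingNode("serverA", san), CachingNode("serverB", san)
a.peers, b.peers = [b], [a]

print(a.read(7)[1])   # serverA: SAN read  (warms serverA's cache)
print(b.read(7)[1])   # serverB: hit on serverA  (served from the pooled cache)
```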

Publish date: 09/16/13
Report

Astute Networked Flash ViSX: Application Performance Achieved Cost Effectively

In their quest to achieve better storage performance for their critical applications, mid-market customers often face a difficult quandary. Whether they have maxed out performance on their existing iSCSI arrays, or are deploying storage for a new production application, customers may find that their choices force painful compromises.

When it comes to solving immediate application performance issues, server-side flash storage can be a tempting option. Server-based flash is pragmatic and accessible, and inexpensive enough that most application owners can procure it without IT intervention. But by isolating storage in each server, such an approach breaks a company's data management strategy, and can lead to a patchwork of acceleration band-aids, one per application.

At the other end of the spectrum, customers thinking more strategically may look to a hybrid or all-flash storage array to solve their performance needs. But as many iSCSI customers have learned the hard way, the potential performance gains of flash storage can be encumbered by network speed. In addition to this performance constraint, array-based flash storage offerings tend to touch multiple application teams and involve big dollars, and may only be considered a viable option once pain points have been thoroughly and widely felt.
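
A back-of-the-envelope calculation (our own, using assumed round numbers) shows why the network becomes the ceiling for array-based flash over a single iSCSI link:

```python
# Rough, assumed numbers: a modest all-flash array behind one 1 GbE iSCSI path.
flash_iops   = 200_000          # 4 KiB random read IOPS the flash tier could serve
io_size_kib  = 4
link_gbps    = 1.0              # a single 1 GbE iSCSI link
link_mib_s   = link_gbps * 1000 / 8 / 1.048576   # ~119 MiB/s, ignoring protocol overhead

flash_mib_s    = flash_iops * io_size_kib / 1024   # ~781 MiB/s the flash could deliver
iops_over_link = link_mib_s * 1024 / io_size_kib   # ~30,500 IOPS the link can carry

print(f"flash tier could deliver ~{flash_mib_s:.0f} MiB/s, "
      f"but one 1 GbE link caps throughput near {link_mib_s:.0f} MiB/s "
      f"(~{iops_over_link:,.0f} IOPS at 4 KiB)")
```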

Fortunately for performance-challenged iSCSI customers, there is a better alternative. Astute Networks ViSX sits in the middle, offering a broader solution than flash in the server, yet one that is cost-effective and tactically achievable as well. As an all-flash storage appliance that resides between servers and iSCSI storage arrays, ViSX complements and enhances existing iSCSI SAN environments, delivering wire-speed storage access without disrupting or forcing changes to the server, virtual server, storage, or application layers. Customers can invest in ViSX before their performance pain points get too big, or before they've gone down the road of breaking their infrastructure with a tactical solution.

Publish date: 08/31/13
Report

Unified Data Protection Comes of Age

Traditional data protection is three decades old and is definitely showing its age. Poor management oversight, data growth, virtualization, data silos, and stricter SLAs all conspire to strain traditional backup to the breaking point.

Traditional backup usually follows a set pattern: a full baseline backup, daily incremental backups, and a weekly full backup. When backup volumes were smaller and fewer, this process worked well enough. But a daily operation produces backups that can be missing 20 hours or more of current data, making it impossible to restore to a meaningful RPO. The obvious solution is continuous backup with frequent snapshot recovery points, but this type of backup product can be expensive and resource-intensive, and IT often reserves it for a few Tier 1 transactional applications. So what happens to large and popular business applications such as email, back-office files, and content management systems? Failed backup and recovery can still devastate a business.
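
The recovery-point arithmetic behind that claim is simple; this short Python sketch (our own illustration, with assumed schedules) compares worst-case data loss under different protection intervals:

```python
def worst_case_data_loss(interval_hours):
    """Worst-case exposure: data written just after a protection point completes
    remains unprotected until the next one runs, i.e. for nearly the full interval."""
    return interval_hours

schedules = {
    "weekly full only":    7 * 24,
    "daily incremental":   24,
    "hourly snapshots":    1,
    "15-minute snapshots": 0.25,
}
for name, interval in schedules.items():
    print(f"{name:22s} worst-case exposure: {worst_case_data_loss(interval):6.2f} hours")
```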

This article will look at why traditional backup is so difficult to do well these days, and why the risk and expense are so high. 

Publish date: 06/28/13
Report

Making Better Storage for a Virtual World: IBM SAN Volume Controller

Server virtualization has deeply penetrated IT and now hosts well over half of all server instances, but storage virtualization has been slower to catch on. Yet the main constraint on further server virtualization adoption stems from poorly aligned storage. Perhaps the storage world just moves more slowly due to the "weight" of data, but if so, it will also pick up more momentum. Here at Taneja Group we think that, due to virtualization pressure and the desire for cloud (and now software-defined data center) infrastructures, proven storage virtualization is next on everybody's radar. This is good news for IBM and the IBM SAN Volume Controller (SVC). SVC, first launched in 2003, not only put a firm stake in the ground as to what block storage virtualization could be, but for a decade it has continued to evolve and has for some time been what we think of as the gold standard for block storage virtualization.

In the face of ever-growing data, new processing paradigms, and aggressively evolving applications, storage virtualization provides an ideally adaptive approach by creating optimal logical storage services out of otherwise disparate and inflexible physical storage arrays. Like server virtualization, storage virtualization helps tackle difficult IT challenges in guaranteeing performance at scale, optimizing capacity utilization, taming complexity, increasing availability, and assuring data protection and DR across the enterprise - all while earning significant cost and efficiency benefits.
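
To illustrate the basic mechanism of block storage virtualization described above, here is a minimal Python sketch of our own: a virtual volume is a map of fixed-size extents onto capacity drawn from different physical arrays, so data can be placed or migrated without the host's address space changing. Names and sizes are assumptions for illustration, not IBM SVC internals.

```python
EXTENT_SIZE = 1024  # blocks per extent (assumed for the example)

class VirtualVolume:
    def __init__(self, extent_map):
        # extent_map[i] = (backing array name, starting block on that array)
        self.extent_map = extent_map

    def resolve(self, virtual_block):
        """Translate a host-visible block address to (array, physical block)."""
        extent, offset = divmod(virtual_block, EXTENT_SIZE)
        array, base = self.extent_map[extent]
        return array, base + offset

    def migrate_extent(self, extent, new_array, new_base):
        """Move one extent to another array; the host address space is unchanged."""
        self.extent_map[extent] = (new_array, new_base)

vol = VirtualVolume({0: ("fast_array", 0), 1: ("capacity_array", 4096)})
print(vol.resolve(100))    # ('fast_array', 100)
print(vol.resolve(1100))   # ('capacity_array', 4172)
vol.migrate_extent(1, "fast_array", 2048)
print(vol.resolve(1100))   # ('fast_array', 2124) -- same virtual address, new home
```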

In fact, robust storage virtualization is becoming as necessary as server virtualization. Dynamic architectures require virtualizing all resources – compute, network, and storage. Experience with cloud implementations shows that server and storage virtualization are both necessary and complementary, and lead toward the next-generation data center built on end-to-end consistency, high automation, and "software defined" principles.

While many IT storage strategists have pursued storage consolidation and adopted tiering practices to tame some growth challenges, they should all now look to storage virtualization to achieve higher levels of flexibility, agility, and resilience. However, storage virtualization adoption has lagged behind server virtualization. This is where IBM brings to the table a tremendously proven solution that has succeeded in more than 10,000 shipments to customers of almost every size and storage mix, coupled with world-class support and services to guarantee success.

In this profile, we'll briefly define storage virtualization and the key benefits it brings to the modern data center, and consider what it means when IBM says it is "Making Storage Better". In that light, we'll look in more depth at IBM SVC, its architecture, and the key product features that have helped establish it as the market-leading block storage virtualization solution.

Publish date: 06/28/13