Items Tagged: Automation
Lack of automation doesn’t just hurt the development cycle; it will soon start to hurt your ability to take advantage of the cloud.
EMC Replication Manager
Without any fanfare, data replicas have quietly become the lifeblood of enterprise storage infrastructures. While it is easy enough to create a replica, it is a much harder task to manage and integrate replicas within complex IT operations. As a result, enterprises encounter management overhead that spirals out of control and reduces the power and usefulness of replicas. But with the right replication management products, it is possible to transform replicas into a powerful, automated, smooth-running, policy-driven tool that enhances enterprise storage services, increases protection, and delivers tremendous versatility by improving data re-use.
About five years ago, policy-based orchestration and datacenter automation—two of my favorite topics—were enjoying a renaissance due to rapidly growing virtualized environments. Quickly, most of the interesting start-up technology was acquired either by the big datacenter management gorillas or by the hypervisor makers.
When it comes time to automate IT processes, we recommend taking stock of your existing trusted vendor relationships as well as your most glaring pain points: do you struggle to keep operating systems and servers patched to the right levels to meet compliance?
Automatic failover for disaster recovery is an option for organizations that need a zero downtime environment, and there are a number of approaches available in the market today. Andrew Burton, senior editor of SearchDisasterRecovery.com and Jeff Boles, senior analyst with the Taneja Group, discuss who should consider automatic failover, the vendors that offer products in this space, how the technology works, and the challenges associated with automatic failover.
The full conversation is available as a podcast.
You might think you have good insight into your infrastructure, but for next-generation data centers, it probably isn’t good enough. If you’re responsible for business-wide IT infrastructure strategy, you’re facing a unique array of innovative, appealing and business-changing technology choices. I can’t recall ever having witnessed a selection of big-picture choices to rival what we’re currently seeing from virtualization, automation, the cloud and more. Moreover, nearly everything is better integrated and easier to use. But one element, visibility, remains an unusual sticking point.
Nimble Storage InfoSight: Transforming the Storage Lifecycle Experience with Deep-Data Analytics
The job of a storage administrator can sometimes be a difficult and lonely one. Administrators must handle a broad set of responsibilities, encompassing all aspects of managing their arrays and keeping up with user demands. And yet, flat IT budgets mean administrators are spread thin, with limited time to manage storage through its lifecycle, let alone improve and optimize storage practices and services.
In most organizations, the storage lifecycle is managed manually, as a complex and disjointed set of activities. Maintenance and support tend to be highly reactive, forcing administrators to play “catch up” each time a storage problem occurs. Monitoring and reporting rely on complex tools and large amounts of data that are difficult to interpret and act upon. Forecasting and planning are more art than science, leading administrators to overprovision to be on the safe side. These various lifecycle activities are seldom connected and inherently inefficient, and fail to provide administrators with the insight they need to anticipate issues and develop best practices. This, in turn, can put system availability and performance at risk, while reducing IT productivity.
Fortunately, one innovative vendor—Nimble Storage—has developed a powerful, data sciences-driven approach that promises to transform the storage lifecycle experience. Based on deep data collection, intelligent and predictive analytics, and automation built on storage and application expertise, Nimble InfoSight streamlines the storage lifecycle, providing administrators with the insights needed to optimize their arrays while also increasing their productivity. InfoSight collects and analyzes over 30 million data points each day from every installed Nimble Storage array worldwide, and then makes the resulting intelligence available both to Nimble engineers and customers. InfoSight automated analysis helps to proactively anticipate and prevent technical problems, significantly reducing the support burden on administrators. InfoSight also provides administrators with an intuitive, dashboard-driven portal into the performance, capacity utilization and data protection of their arrays, enabling them to monitor array operations across multiple sites and to better plan for future needs. By streamlining and informing key activities across the storage lifecycle, InfoSight simplifies and enhances day-to-day administrative tasks such as support, monitoring and forecasting, while enabling administrators to focus on more important initiatives.
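As a rough illustration of the kind of automated, predictive analysis a telemetry platform like InfoSight performs, the toy Python sketch below flags telemetry samples that deviate sharply from their recent trend. The function name, window size and threshold here are hypothetical; Nimble’s actual analytics pipeline is proprietary and far more sophisticated.

```python
# Illustrative only: a toy version of the sort of predictive check a
# storage-telemetry platform might run. Names and thresholds are
# hypothetical, not Nimble's actual implementation.
from statistics import mean, stdev

def flag_anomalies(samples, window=7, z_threshold=3.0):
    """Flag telemetry points that deviate sharply from the recent trend."""
    alerts = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # A point more than z_threshold standard deviations from the
        # trailing-window mean is treated as an anomaly worth a proactive alert.
        if sigma > 0 and abs(samples[i] - mu) / sigma > z_threshold:
            alerts.append((i, samples[i]))
    return alerts

# A stable latency series with one sudden spike at index 8.
latency_ms = [1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 1.2, 1.1, 9.5, 1.1]
print(flag_anomalies(latency_ms))  # -> [(8, 9.5)]
```

Run at fleet scale against millions of daily data points, even a simple statistical screen like this can surface a developing problem before an administrator notices it.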
To put a human face on InfoSight intelligence, Nimble Storage has also unveiled a new user community. The community allows users to connect and share ideas and resources via discussion forums, knowledge bases, and social media channels. The Nimble community will enable the company’s large and loyal customer base to write about and share their experiences and insights with each other, as well as with prospective users. Together, InfoSight and the Nimble community will give storage administrators unprecedented access to anonymized installed-base data and peers’ expertise, enabling them to stay on top of their game and get more out of their arrays.
In this profile, we’ll examine the challenges administrators typically face on a day-to-day basis, and then take a closer look at InfoSight capabilities, and how they address these issues. We’ll then learn how two Nimble customers have benefited from InfoSight in several important ways. Finally, we’ll briefly examine the Nimble community, and discuss how these two initiatives together are empowering administrators through a combination of shared user data and insights.
Continuity Software™ today announced it has expanded its award-winning family of service availability risk management solutions with the addition of AvailabilityGuard/SAN™.
The future of data storage will have storage shedding the role of a passive technology player as it integrates more closely with applications and workloads.
- Premiered: 10/31/13
- Author: Mike Matchett
- Published: TechTarget: Search Storage
New products designed from the ground up to specifically serve storage for virtual servers can offer dramatic savings in terms of dollars and the time spent managing storage.
IT departments can benefit from storage vendors eavesdropping on their arrays to help them curb the amount of Internet of Things data inundating their storage shops.
Software-defined Storage and VMware's Virtual SAN Redefining Storage Operations
The massive trend to virtualize servers has brought great benefits to IT data centers everywhere, but other domains of IT infrastructure have been challenged to likewise evolve. In particular, enterprise storage has remained expensively tied to a traditional hardware infrastructure based on antiquated logical constructs that are not well aligned with virtual workloads – ultimately impairing both IT efficiency and organizational agility.
Software-Defined Storage provides a new approach to making better use of storage resources in the virtual environment. Some software-defined solutions are even enabling storage provisioning and management on an object, database or per-VM level instead of struggling with block storage LUNs or file volumes. In particular, VM-centricity, especially when combined with an automatic policy-based approach to management, enables virtual admins to deal with storage in the same mindset and in the same flow as other virtual admin tasks.
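To make the per-VM, policy-driven idea concrete, here is a minimal Python sketch of policy-based placement: each VM carries a storage policy, and the platform picks a datastore whose advertised capabilities satisfy it. All class names and fields are invented for illustration and are not VMware’s actual SPBM API.

```python
# A minimal sketch of VM-centric, policy-based placement. Hypothetical
# names and fields, not VMware's Storage Policy-Based Management API.
from dataclasses import dataclass

@dataclass
class StoragePolicy:
    replicas: int           # failures-to-tolerate expressed as mirror copies
    flash_read_cache: bool  # whether the VM requires an SSD read cache

@dataclass
class Datastore:
    name: str
    max_replicas: int
    has_flash_cache: bool

def place_vm(policy, datastores):
    """Return the name of the first datastore that satisfies the VM's policy."""
    for ds in datastores:
        if ds.max_replicas >= policy.replicas and (
            ds.has_flash_cache or not policy.flash_read_cache
        ):
            return ds.name
    return None  # no compliant datastore available

pool = [Datastore("ds-archive", 1, False), Datastore("vsan-cluster", 3, True)]
gold = StoragePolicy(replicas=2, flash_read_cache=True)
print(place_vm(gold, pool))  # -> vsan-cluster
```

The point of the sketch is the inversion of responsibility: the admin states intent once, per VM, and placement (and later compliance checking) becomes an automated matching problem rather than a manual LUN-carving exercise.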
In this paper, we will look at VMware’s Virtual SAN product and its impact on operations. Virtual SAN brings both virtualized storage infrastructure and VM-centric storage together into one solution that significantly reduces cost compared to a traditional SAN. While this kind of software-defined storage alters the acquisition cost of storage in several big ways (avoiding proprietary storage hardware, dedicated storage adapters and fabrics, and so on), here at Taneja Group what we find more significant is the opportunity for solutions like VMware’s Virtual SAN to fundamentally alter the ongoing operational (or OPEX) costs of storage.
In this report, we will look at how Software-Defined Storage stands to transform the long-term OPEX for storage by examining VMware’s Virtual SAN product. We’ll do this by working through a representative handful of key operational tasks associated with enterprise storage and the virtual infrastructure in our validation lab. We’ll then review the key data points recorded from our comparative hands-on examination, estimating the overall time and effort required for common OPEX tasks on both VMware Virtual SAN and traditional enterprise storage.
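The OPEX comparison described above boils down to simple arithmetic: multiply per-task effort by yearly frequency and sum across tasks. The Python sketch below shows the model; the task names, minutes and frequencies are invented placeholders, not Taneja Group’s measured results.

```python
# A back-of-the-envelope OPEX model. All figures are invented for
# illustration, not measured lab results.
def annual_admin_hours(tasks):
    """tasks: list of (name, minutes_per_occurrence, occurrences_per_year)."""
    return sum(minutes * freq for _, minutes, freq in tasks) / 60

traditional_san = [("provision LUN", 45, 50), ("expand volume", 30, 20)]
vm_centric      = [("apply VM policy", 5, 50), ("expand volume", 5, 20)]

print(round(annual_admin_hours(traditional_san), 1))  # -> 47.5
print(round(annual_admin_hours(vm_centric), 1))       # -> 5.8
```

Even with made-up inputs, the structure of the model shows why per-task timing data is so useful: small per-operation savings compound across a year of routine administration.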
Working on a hybrid cloud project? Mike Matchett explains the steps an organization should take to become an internal hybrid cloud service provider.
- Premiered: 03/31/15
- Author: Mike Matchett
- Published: TechTarget: Search Cloud Storage
VMware and Citrix have acquired UEM tools that leave providers like RES Software with little choice but to take their software to the next level.
Barcelona should be rocking this week, what with VMworld Europe (October 12-15) and the news that Dell intends to acquire EMC, which owns 81% of VMware. So an announcement that vRealize, VMware’s hybrid cloud management platform, is getting a makeover can easily get lost in the noise.
Mike Matchett takes a closer look at the future of data storage technology in 2016 based on research from the Taneja Group.
- Premiered: 01/06/16
- Author: Mike Matchett
- Published: TechTarget: Search Storage
The HPE Solution to Backup Complexity and Scale: HPE Data Protector and StoreOnce
There are a lot of game-changing trends in IT today including mobility, cloud, and big data analytics. As a result, IT architectures, data centers, and data processing are all becoming more complex – increasingly dynamic, heterogeneous, and distributed. For all IT organizations, achieving great success today depends on staying in control of rapidly growing and faster flowing data.
While there are many ways for IT technology and solution providers to help clients depending on their maturity, size, industry, and key business applications, every IT organization has to wrestle with BURA (Backup, Recovery, and Archiving). Protecting and preserving the value of data is a key business requirement even as the types, amounts, and uses of that data evolve and grow.
For IT organizations, BURA is an ever-present, huge, and growing challenge. Unfortunately, implementing a thorough and competent BURA solution often requires piecing and patching together multiple vendor products and solutions. These never quite fully address the many disparate needs of most organizations, nor do they manage to be simple or cost-effective to operate. Here is where we see HPE as a key vendor today with all the right parts coming together to create a significant change in the BURA marketplace.
First, HPE is pulling together its top-notch products into a user-ready “solution” that marries StoreOnce and Data Protector. For those who have worked with either product in the past alongside other vendors’ offerings, it’s no surprise that each competes favorably one-on-one with other products in the market; together, as an integrated joint solution, they beat the best competitor offerings.
But HPE hasn’t just bundled products into solutions; it is undergoing a seismic shift in culture that revitalizes its total approach to market. From product to services to support, HPE people have taken to heart a “customer first” message to provide a truly solution-focused HPE experience. One support call, one ticket, one project manager, addressing the customer’s needs regardless of which internal HPE business unit components are in the “box”. Significantly, this approach elevates HPE from a supplier of best-of-breed products into an enterprise-level trusted solution provider addressing business problems head-on. HPE is perhaps the only company able to deliver a breadth of solutions spanning IT from top to bottom out of its own world-class product lines.
In this report, we’ll first examine why HPE StoreOnce and Data Protector are truly game-changing products in their own right. Then, we will look at why they get even “better together” as a complete BURA solution that can be more flexibly deployed to meet backup challenges than any other solution in the market today.
Jeff Byrne gives us his second industry prediction for 2016.
Disruptive Solution Provides Past, Present and Predictive Analytics to Deliver Visibility for Mission-Critical Applications
- Premiered: 04/12/16
- Author: Taneja Group
- Published: SYS-CON Media
As IT advances, organizations are adopting infrastructures that enhance agility and improve efficiency.
- Premiered: 04/19/16
- Author: Taneja Group
- Published: CDW