Items Tagged: Data Center
The Greening of the Data Center – Technology in Depth
As industry analysts and consultants, the Taneja Group has a single driving mission: to deliver hype-free, accurate, and responsible information about important issues impacting the server and storage industry. This is why we have turned our attention to the green data center, which we believe badly needs insight and clarity in the midst of marketing smokescreens and competing claims. This Technology in Depth represents Taneja Group's take on this important issue. We present the background of the energy crisis, explore important data center trends, and tell you how we believe the industry should respond to a real problem and a real opportunity.
Sepaton: Growing the Green Data Center
Space-saving software technologies like de-duplication and thin provisioning are crucial to achieving the green data center, and SEPATON offers both of them. These capabilities not only serve as the foundation for the energy-efficient data center, but also increase storage-related ROI across the board.
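To make the space-saving idea concrete, here is a minimal sketch of block-level de-duplication: identical blocks are detected by content hash and stored only once, with a "recipe" of hashes used to reconstruct the original data. This is an illustration only; shipping products like SEPATON's use variable-size chunking and far more sophisticated indexing than this fixed-block example assumes.

```python
import hashlib

def deduplicate(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, recipe): store maps a block's SHA-256 digest to its
    contents (kept once), and recipe is the ordered list of digests
    needed to rebuild the original byte stream.
    """
    store = {}    # digest -> block contents, stored only once
    recipe = []   # ordered digests to reconstruct the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # later duplicates are dropped
        recipe.append(digest)
    return store, recipe

def reconstruct(store, recipe):
    """Rebuild the original data from the block store and recipe."""
    return b"".join(store[d] for d in recipe)

# Highly redundant data: 100 identical 4 KiB blocks
data = b"x" * 4096 * 100
store, recipe = deduplicate(data)
print(len(recipe))  # 100 logical blocks referenced
print(len(store))   # only 1 unique block physically stored
assert reconstruct(store, recipe) == data
```

On redundant data like backups, the physical store shrinks dramatically relative to the logical data, which is precisely the effect that reduces spindle count and power draw in the data center.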
In March I published my article on meaningful visibility into the data center. I mentioned several vendors who are doing good work in this challenging field, including Virtual Instruments (VI). There are lots of monitoring, capacity planning and general trending products out there. These vendors market their products to the virtualization and cloud markets, hoping to catch the big wave of investment in those fields. There is nothing wrong with these tools, and IT needs them in discrete settings, but they don’t go nearly far enough in managing new levels of data center complexity. They offer some visibility, but dynamic data centers require not only information but also the ability to correlate that information across multiple systems and to act on it automatically. We call this crucial piece of the puzzle “instrumentation.”
Enterprise IT shops looking to consolidate data centers and virtualize their servers will need to weigh the latest storage networking technology developments as they plot any infrastructure upgrade.
BEAVERTON, Ore. – July 18, 2012 – The InfiniBand® Trade Association (IBTA), a global organization dedicated to maintaining and furthering the InfiniBand™ specification, today released an analyst report from Taneja Group demonstrating the continued market growth of InfiniBand products in HPC and InfiniBand’s emergence as an attractive choice for the core of the enterprise data center.
A "programmable" data center could allow IT administrators to more holistically manage servers, storage, and networking components.
SimpliVity: Transforming the Data Center with Virtualization and Storage Convergence
In merely the past decade, the data center has been rapidly transformed by the emergence and mainstream adoption of virtualization. Virtualization has changed the ability of IT to deploy and manage workloads, and lent tremendous power to the administrator for manipulating those workloads in clever ways. At the onset, virtualization appealed to most users because it promised to homogenize a rather difficult part of the physical data center through a layer of software abstraction. Such abstraction would make the configuration and deployment difficulties associated with physical server hardware and fat operating systems melt away. The rewards from this undertaking were tremendous – untold buckets of operational dollars were saved by avoiding the time- and effort-intensive rack, power, install, and configure cycles that occurred repeatedly to facilitate application development, testing, production deployment, cycle replacements, break/fix, and more. These changes have infused the business with a new ability to leverage IT, and to do so at lower cost and with less risk of disruption.
The transformation is not yet complete, as the full promise of virtualization is inhibited by the underlying physical infrastructure of the data center. Recognition of the need to address this infrastructure complexity problem is driving a flurry of innovation in the market. With an eye on what we call hyperconvergence, we’ll briefly survey innovations emerging in response to ongoing virtualization challenges, evaluate how these technologies will impact the data center over the next few years, and highlight one vendor as an early innovator bringing these changes to market: SimpliVity. Unlike early players who have converged a few aspects of the IT infrastructure stack – rudimentary storage and server functionality – SimpliVity has assimilated all the functionality of the IT infrastructure onto a single platform. Each such unit – OmniCube, as SimpliVity calls it – offers a complete set of data center infrastructure functionality at a fraction of the acquisition and operating costs of separate IT systems. But hyperconvergence will bring about change far bigger than cost savings, and may well transform how IT is done. Let’s take a look.
While several of the largest storage vendors, such as EMC, Hewlett-Packard and NetApp, have entered into agreements with server, software and networking vendors to offer bundled products, purpose-built products are just beginning to emerge.
Closing the Virtual IO Management Gap
Assuring Service Throughout the Data Center with Infrastructure Performance Management
There is a significant and potentially costly management gap in virtualized server environments that rely solely on hypervisor-centric solutions. As organizations virtualize more of their mission-critical applications, they are discovering that the virtual versions of these apps continue to depend on the rock-solid storage availability and top-notch IO performance they had when physically hosted. Assuring great service to virtualized clients still requires deep performance management capabilities along the whole IO infrastructure path down to and including shared storage resources.
Cohesive hypervisor management solutions like VMware’s vCenter Operations Management Suite provide a significant advantage to virtual administration by centralizing and simplifying many traditionally disparate management tasks. However, there is a significant management blind spot in the view of end-to-end IO infrastructure when looking at it from the native virtual server perspective. Enterprises relying more and more on virtualized IT delivery need to address this natural management gap with Infrastructure Performance Management (IPM). A lack of robust IPM will degrade or even prevent the deployment of critical applications into a virtual environment – at best losing out on the benefits of virtualization and the opportunities for cloud, at worst causing severe degradation and service outages for all applications sharing the same virtual infrastructure pools.
In this paper we review the virtual performance management landscape and the management strengths of the most well-known hypervisor management solution – VMware’s vCenter Operations Suite – to understand why both the market perception and the resulting admin reliance on it are so high. We look at how that reliance overlooks a critical gap for IO and storage, and what the implications of that blind spot are for ensuring total performance. Finally, we examine how the unique IO-centric capabilities of Virtual Instruments’ VirtualWisdom close that gap by correlating complete IO path monitoring with both physical and virtual infrastructure, and how using VirtualWisdom with vCenter Ops yields a complete end-to-end picture that enables mission-critical applications to be successfully virtualized.
InfiniBand networking has expanded from its high-performance computing roots to take on emerging, more mainstream use cases in today's enterprise data center.
Microsoft Buys StorSimple - Deepening Integration between Microsoft Azure and the Enterprise Data Center
Microsoft officially acquired StorSimple on November 15, 2012. StorSimple was a relative startup that had been shipping products for about 18 months. Why did Microsoft buy StorSimple? What is the strategy behind the purchase? Where will Microsoft take this newly acquired technology? These are just some of the questions we are being asked at present. Here is our view....
Virtual Instruments Field Study Report
Taneja Group conducted in-depth telephone interviews with six Virtual Instruments (VI) customers. The customers represented enterprises from different industry verticals. The interviews took place over a 3-month period in late 2012 and early 2013. We were pursuing user insights into how VI is bringing new levels of performance monitoring and troubleshooting to customers running large virtualized server and storage infrastructures.
Running large virtualized data centers with hundreds or even thousands of servers, petabytes of data and a large distributed storage network requires a comprehensive management platform. Such a platform must provide insight into performance and enable proactive problem avoidance and troubleshooting to drive both OPEX and CAPEX savings. Our interviewees revealed that they consider VI to be an invaluable partner in helping to manage the performance of the IT infrastructure supporting their mission-critical applications.
VI’s expertise and the VirtualWisdom platform differ significantly from other tools’ monitoring, capacity planning and trending capabilities. Their unique platform approach provides true, real-time, system-wide visibility into performance—and correlates data from multiple layers—for proactive remediation of problems and inefficiencies before they affect application service levels. Other existing tools have their usefulness, but they don’t provide the level of detail required for managing through the layers of abstraction and virtualization that characterize today’s complex enterprise data center.
Most of the representative companies were using storage array-specific or fabric device monitoring tools but not system-wide performance management solutions. They went looking for a more comprehensive platform that would monitor, alert on and remediate the end-to-end compute infrastructure. The customers we interviewed talked about why they needed this level of instrumentation and why they chose VI over other options. Their needs fell into six primary areas:
1. Demonstrably decrease system-wide CAPEX and OPEX while getting more out of existing assets.
2. Align expenditures on server, switch and storage infrastructure with actual requirements.
3. Proactively improve data center performance including mixed workloads and I/O.
4. Manage and monitor multiple data centers and complex computing environments.
5. Troubleshoot performance slowdowns and application failures across the stack.
6. Create customized dashboards and comprehensive reports on the end-to-end environment.
The consensus among these customers is that VI’s VirtualWisdom is by far the best solution for meeting complex data center infrastructure performance challenges, and that the return on investment is unparalleled.
For more information, check out the press release issued by Virtual Instruments.
You can also download this report directly from Virtual Instruments.
Extreme Applications in the Enterprise Drive Parallel File System Adoption
With the advent of big data and cloud-scale delivery, companies are racing to deploy cutting-edge services that include “extreme” applications like massive voice and image processing or complex financial analysis modeling that can push storage systems to their limits. Examples of some high-visibility and big market-impacting solutions include applications based on image pattern recognition at large scale and financial risk management based on decision-making at high speed.
These ground-breaking solutions, made up of very different activities but with similar data storage challenges, create incredible new lines of business representing significant revenue potential. Every day here at Taneja Group we see more and more mainstream enterprises exploring similar “extreme service” opportunities. But when enterprise IT data centers take stock of what is required to host and deliver these new services, it quickly becomes apparent that traditional clustered and even scale-out file systems - of the kind that most enterprise data centers (or cloud providers) have racks and racks of - simply can’t handle the performance requirements.
There are already great enterprise storage solutions for applications that need either raw throughput, high capacity, parallel access, low latency, or high availability – maybe even for two or three of those at a time. But when an “extreme” application needs all of those requirements at the same time, only supercomputing type storage in the form of parallel file systems provides a functional solution. The problem is that most commercial enterprises simply can’t afford or risk basing a line of business on an expensive research project.
The good news is that some storage vendors have been industrializing former supercomputing storage technologies, hardening massively parallel file systems into commercially viable solutions. This opens the door for revolutionary services creation, enabling mainstream enterprise datacenters to support the exploitation of new extreme applications.
Where does VMware's vSOM (short for vSphere with Operations Management) fit in the virtualization company's software-defined data center strategy? And what's the upside for channel partners? The VAR Guy goes searching for answers.
SimpliVity Corporation, developer of OmniCube™, the market’s only globally federated hyperconverged infrastructure system, today announced that it has been selected from thousands of companies as a winner of Red Herring's Top 100 award.
SimpliVity's rapid market adoption continues to provide evidence that a revolution is afoot in IT infrastructure, providing great benefit to IT organizations worldwide that can now capitalize on the superior economics and simplicity that the new era is ushering in.
Nutanix Inc. today expanded its line of converged storage systems by launching an entry-level platform for small enterprises and branch offices, and a data center platform that handles more data-intensive applications than its earlier systems.
Some have said there is nothing new under the sun, that it all comes around in circles. We think server virtualization has taken us to places we've never been before, but there is some truth to that old adage about having seen it all before.
- Premiered: 07/15/13
- Author: Mike Matchett
- Published: Virtualization Review
Cloud security automation provider HyTrust announced financing in the amount of $18.5 million from new investors Intel Capital and Fortinet, as well as existing investors VMware and In-Q-Tel.
Top Performance on Mixed Workloads, Unbeatable for Oracle Databases
There is a storm brewing in IT today that will upset the core ways of doing business with standard data processing platforms. This storm is being fueled by inexorable data growth, competitive pressures to extract maximum value and insight from data, and the inescapable drive to lower costs through unification, convergence, and optimization. The storage market in particular is ripe for disruption. Surprisingly, that storage disruption may just come from a current titan seen by many as primarily an application/database vendor: Oracle.
When Oracle bought Sun in 2009, one of the areas of expertise brought over was in ZFS, a “next generation” file system. While Oracle clearly intended to compete in the enterprise storage market, some in the industry thought that the acquisition would essentially fold any key IP into narrow solutions that would only effectively support Oracle enterprise workloads. And in fact, Oracle ZFS Storage Appliances have been successfully and stealthily moving into more and more data centers as the DBA-selected best option for “database” and “database backup” specific storage.
But the truth is that Oracle has continued aggressive development on all fronts, and its ZFS Storage Appliance is now extremely competitive as scalable enterprise storage, posting impressive benchmarks that top comparable solutions. What happens when support for mixed workloads is also highly competitive? The latest version of Oracle ZFS Storage Appliances, the new ZS3 models, becomes a major contender as a unified, enterprise-featured, and affordable storage platform for today’s data center, positioned to bring Oracle into enterprise storage architectures on a much broader basis going forward.
In this report we will take a look at the new ZS3 Series and examine how it delivers both on its “application engineered” premise and its broader capabilities for unified storage use cases and workloads of all types. We’ll briefly examine the new systems and their enterprise storage features, especially how they achieve high performance across multiple use cases. We’ll then explore some of the key features engineered into the appliance that provide unmatched support for Oracle Database capabilities, such as Automatic Data Optimization (ADO) with Hybrid Columnar Compression (HCC), which provides heat-map-driven storage tiering. Finally, we’ll review some of the key benchmark results and provide an indication of the TCO factors driving its market-leading price/performance.