Taneja Group | scale-up
Trusted Business Advisors, Expert Technology Analysts

Items Tagged: scale-up

Profiles/Reports

Midrange Reinvented - the IBM Storwize V7000

In this Product in Depth, we take a look at the IBM Storwize V7000 and how it harnesses a unique range of sophisticated heterogeneous storage virtualization technologies from the IBM portfolio of intellectual property, turning them into building blocks for a next-generation midrange storage array. The result is a unique foundation for unifying consolidated, virtualized infrastructure.

Publish date: 11/19/10
news

NAS options: Pros and cons of scale-out and scale-up

Enterprise IT shops coping with the explosive growth of unstructured data need to consider their NAS options and decide whether traditional fixed-capacity NAS devices or the emerging scale-out NAS systems will better meet their file-storage needs.

  • Premiered: 11/03/11
  • Author: Taneja Group
  • Published: TechTarget: SearchStorage.com
Topic(s): NAS, scale-out, scale-up
news

Database performance tuning: Five ways for IT to save the day

When database performance takes a turn for the worse, IT can play the hero. There are some new ways for IT pros to tackle slowdown problems. However, one question must be addressed first: Why is it up to IT?

  • Premiered: 04/17/14
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Database, Database Performance, IT, Optimization, SQL, NoSQL, Infrastructure, scale-up, scale-out, Active Archive, Archiving, SSD, Flash, Acceleration, server, Tokutek
Profiles/Reports

Memory is the Hidden Secret to Success with Big Data: GridGain's In-Memory Hadoop Accelerator

Two big trends are driving IT today. One, of course, is big data. The growth in big data IT is tremendous, both in the volume of data and in the number of analytical apps being developed on new architectures like Hadoop. The second is the well-documented long-term trend for critical resources like CPU and memory to get cheaper and denser over time. It seems a happy circumstance that these two trends accommodate each other to some extent; as data sets grow, resources are also growing. It's not surprising to see traditional scale-up databases with new in-memory options coming to the broader market for moderately sized structured databases. What is not so obvious is that today an in-memory scale-out grid can cost-effectively accelerate both larger-scale databases and those new big data analytical applications.

A robust in-memory distributed grid combines the speed of memory with massive horizontal scale-out and enterprise features previously reserved for disk-oriented systems. By transitioning data processing onto what is now effectively an in-memory data management platform, organizations can accelerate performance across the board for all applications and all data types. For example, GridGain's In-Memory Computing Platform can functionally replace slower disk-based SQL databases and accelerate unstructured big data processing to the point where formerly "batch" Hadoop-based apps can handle both streaming data and interactive analysis.

While IT shops may be generally familiar with traditional in-memory databases - and IT resource economics are shifting rapidly in favor of in-memory options - less is known about how an in-memory approach is a game-changing enabler for big data efforts. In this report, we'll first briefly examine Hadoop and its fundamental building blocks to see why high-performance big data projects - those that are more interactive, real-time, streaming, and operationally focused - have needed to keep looking for newer solutions. Then, much like the best in-memory database solutions, we'll see how GridGain's In-Memory Hadoop Accelerator can simply "plug and play" into Hadoop, immediately and transparently accelerating big data analysis by orders of magnitude. We'll finish by evaluating GridGain's enterprise robustness, performance and scalability, and consider how it enables a whole new set of competitive solutions unavailable with native databases and batch-style Hadoop.
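The core idea behind this kind of acceleration can be sketched in a few lines. This is a conceptual illustration only - it is not GridGain's actual API - showing why paying the data-load cost once and answering every subsequent query from RAM beats rescanning persistent storage per query, batch-style:

```python
# Conceptual sketch (not GridGain's API): serving lookups from an
# in-memory structure versus rescanning "disk" data for every query.
import csv
import io

# A small dataset, simulated with an in-memory CSV buffer standing in
# for a file on disk. Keys and values here are made up for illustration.
RAW = "key,value\n" + "\n".join(f"k{i},{i * i}" for i in range(10_000))

def disk_style_lookup(key):
    """Batch-style access: rescan the whole dataset for every query."""
    for row in csv.DictReader(io.StringIO(RAW)):
        if row["key"] == key:
            return row["value"]
    return None

# In-memory style: pay the parse/load cost once, then every lookup is
# a constant-time hit against RAM instead of a full rescan.
grid = {row["key"]: row["value"] for row in csv.DictReader(io.StringIO(RAW))}

assert disk_style_lookup("k9999") == grid["k9999"] == "99980001"
```

A real in-memory data grid adds partitioning, replication, and SQL/compute APIs on top of this idea, but the latency argument is the same: the per-query cost drops from a scan of the dataset to a memory lookup.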

Publish date: 07/08/14
news

New approaches to scalable storage

With all these scalable storage approaches, IT organizations must evaluate the options against their data storage and analytics needs, as well as future architectures.

  • Premiered: 03/16/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): Mike Matchett, TechTarget, Storage, scalable, scalability, analytics, Data Storage, Big Data, Block Storage, File Storage, object storage, scale-out, scale-up, Performance, Capacity, HA, high availability, latency, IOPS, Flash, SSD, File System, Security, NetApp, Data ONTAP, ONTAP, EMC, Isilon, OneFS, Cloud
news

Hyperconverged systems prove viable SAN alternative

Why are IT pros looking to hyperconverged offerings? The architecture and skills required for SANs are some reasons.

  • Premiered: 06/16/15
  • Author: Taneja Group
  • Published: TechTarget: Search Data Center
Topic(s): hyperconverged, hyperconvergence, SAN, Storage, Virtualization, Data Center, scale-up, Fibre Channel, FC, iSCSI, HCI, VSA, VSAN, LUN, DR, Disaster Recovery
news

Can your cluster management tools pass muster?

The right designs and cluster management tools ensure your clusters don't become a cluster, er, failure.

  • Premiered: 11/17/15
  • Author: Mike Matchett
  • Published: TechTarget: Search Data Center
Topic(s): cluster, Cluster Management, Cluster Server, Storage, Cloud, Public Cloud, Private Cloud, Virtual Infrastructure, Virtualization, hyperconvergence, hyper-convergence, software-defined, software-defined storage, SDS, Big Data, scale-up, CAPEX, IT infrastructure, OPEX, Hypervisor, Migration, QoS, Virtual Machine, VM, VMware, VMware VVOLs, VVOLs, Virtual Volumes, cloud infrastructure, OpenStack
Profiles/Reports

Nutanix Versus VCE: Web-Scale Versus Converged Infrastructure in the Real World

This Field Report was created by Taneja Group for Nutanix in late 2014 with updates in 2015. The Taneja Group analyzed the experiences of seven Nutanix Xtreme Computing Platform customers and seven Virtual Computing Environment (VCE) Vblock customers. We did not ‘cherry-pick’ customers for dissatisfaction, delight, or specific use case; we were interested in typical customers’ honest reactions.

As we talked in detail to these customers, we kept seeing the same patterns: 1) VCE users were interested in converged systems; and 2) they chose VCE because VCE partners Cisco, EMC, and/or VMware were embedded in their IT relationships and sales. The VCE process had the advantage of vendor familiarity, but it came at a price: high capital expense, infrastructure and management complexity, expensive support contracts, and concerns over the long-term viability of the VCE partnership (see an opinion on the Dell/EMC merger at the end of this document). VCE customers typically did not research other options for converged infrastructure prior to deploying the VCE Vblock solution.

In contrast, Nutanix users researched several convergence and hyperconvergence vendors to determine the best possible fit. Nutanix’ advanced web-scale framework gave them simplified architecture and management, reasonable acquisition and operating costs, and considerably faster time to value.

Our conclusion, based on the amount of time and effort spent by the teams responsible for managing converged infrastructure, is that VCE Vblock deployments represent an improvement over traditional architectures, but Nutanix hyperconvergence – especially with its web-scale architecture – is a big improvement over VCE.

This Field Report will compare customer experiences with Nutanix hyperconverged, web-scale infrastructure to VCE Vblock in real-world environments. 

Publish date: 01/14/16
news

Four big data and AI trends to keep an eye on

AI is making a comeback - and it's going to affect your data center soon.

  • Premiered: 11/17/16
  • Author: Mike Matchett
  • Published: TechTarget: Search IT Operations
Topic(s): AI, Artificial Intelligence, Big Data, Data Center, Datacenter, Machine Learning, Apache, Apache Spark, Spark, Hadoop, MapReduce, latency, In-Memory, big data analytics, Business Intelligence, Python, Dataiku, Cask, ETL, data flow management, Virtualization, Storage, scale-up, scale-out, scalability, GPU, IBM, NVIDIA, Virtual Machine, VM
Profiles/Reports

The Best All-Flash Array for SAP HANA

These days the world operates in real-time all the time. Whether making airline reservations or getting the best deal from an online retailer, data is expected to be up to date with the best information at your fingertips. Businesses are expected to meet this requirement, whether they sell products or services. Having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments. The world's 24X7 real-time demands cannot wait for legacy ERP and CRM application rewrites. Companies such as SAP devised ways to integrate disparate databases by building a single super-fast uber-database that could operate with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish. These capabilities enable businesses to succeed in the modern age, giving forward-thinking companies a real edge in innovation.

SAP HANA is an example of an application environment that uses in-memory database technology to process massive amounts of real-time data in a short time. The in-memory computing engine allows HANA to process data stored in RAM rather than reading it from disk. At the heart of SAP HANA is a database that operates on both OLAP and OLTP workloads simultaneously. SAP HANA can be deployed on-premises or in the cloud. Originally, on-premises HANA was available only as a dedicated appliance. Recently SAP has expanded support to best-in-class components through its SAP Tailored Datacenter Integration (TDI) program. In this solution profile, Taneja Group examined the storage requirements for HANA TDI environments and evaluated storage alternatives including the HPE 3PAR StoreServ All Flash. We will make a strong case as to why all-flash arrays like the HPE 3PAR version are a great fit for SAP HANA solutions.

Why discuss storage for an in-memory database? The reason is simple: RAM loses its mind when the power goes off. This volatility means that persistent shared storage is at the heart of the HANA architecture for scalability, disaster tolerance, and data protection. The performance attributes of your shared storage dictate how many nodes you can cluster into a SAP HANA environment, which in turn affects your business outcomes. Greater scalability means more real-time information processed. SAP HANA's shared storage workload is write-intensive, demanding low latency for small files and sequential throughput for large files. However, the overall storage capacity required is not extreme, which makes this workload an ideal fit for all-flash arrays that can meet the performance requirements with the smallest quantity of SSDs. Typically you would need 10X the equivalent spinning-media drives just to meet the performance requirements, which then leaves you with a massive amount of capacity that cannot be used for other purposes.
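The drive-count arithmetic behind that kind of comparison is simple to sketch. The per-drive IOPS figures below are illustrative assumptions of ours, not SAP, HPE, or Taneja Group measurements; the point is only that when performance rather than capacity sets the drive count, flash needs far fewer devices:

```python
# Hypothetical sizing sketch: how many drives are needed to hit an IOPS
# target when performance, not capacity, is the binding constraint.
# Per-drive figures below are illustrative assumptions, not vendor specs.
import math

def drives_needed(target_iops, iops_per_drive):
    """Smallest whole number of drives whose combined IOPS meets the target."""
    return math.ceil(target_iops / iops_per_drive)

TARGET_IOPS = 200_000   # assumed workload requirement for illustration
HDD_IOPS = 200          # assumed per 15K RPM spinning drive
SSD_IOPS = 20_000       # assumed per enterprise SSD

hdds = drives_needed(TARGET_IOPS, HDD_IOPS)   # 1000 spinning drives
ssds = drives_needed(TARGET_IOPS, SSD_IOPS)   # 10 SSDs
print(f"HDDs needed: {hdds}, SSDs needed: {ssds}")
```

With these assumed figures the gap is even wider than 10X, and the spinning-disk configuration strands far more raw capacity than the workload needs, which is exactly the inefficiency described above.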

In this study, we examined five leading all-flash arrays including the HPE 3PAR StoreServ 8450 All Flash. We found that the unique architecture of the 3PAR array could meet HANA workload requirements with up to 73% fewer SSDs, 76% less power, and 60% less rack space than the alternative AFAs we evaluated. 

Publish date: 06/07/17
news

Splice Machine seeks to deliver hybrid RDBMS as a service

Splice Machine, which specializes in open source SQL RDBMS for mixed operational and analytical workloads, is aiming to offer that capability as a database-as-a-service on AWS.

  • Premiered: 02/10/17
  • Author: Taneja Group
  • Published: CIO Magazine
Topic(s): Splice Machine, RDBMS, SQL, analytics, AWS, Open Source, hybrid workload, Cloud, Big Data, Storage, ANSI, Optimization, elasticity, scale-up, replication, Backup, Recovery, Mike Matchett, scalability, IoT, Internet of Things, Machine Learning
Profiles/Reports

HPE StoreVirtual 3200: A Look at the Only Entry Array with Scale-out and Scale-up

Innovation in traditional external storage has recently taken a back seat to the current market darlings of all-flash arrays and software-defined scale-out storage. Can there be a better way to redesign the mainstream dual-controller array that has been a popular choice for entry-level shared storage for the last 20 years? Hewlett Packard Enterprise (HPE) claims the answer is a resounding yes.

HPE StoreVirtual 3200 (SV3200) is a new entry storage device that combines HPE's StoreVirtual Software-defined Storage (SDS) technology with an innovative use of the low-cost ARM-based controller technology found in high-end smartphones and tablets. This approach lets HPE deliver a StoreVirtual-based entry array that is more cost-effective than running the same software on a set of commodity x86 servers. Optimizing the cost/performance ratio with ARM technology, rather than the power-hungry processors and memory of x86 computers, yields an attractive SDS product unmatched in affordability. For the first time, an entry storage device can both scale up and scale out efficiently, while remaining compatible with a full complement of hyperconverged and composable infrastructure (based on the same StoreVirtual technology). This unique capability gives businesses the ultimate flexibility and investment protection as they transition to a modern infrastructure based on software-defined technologies. The SV3200 is ideal for SMB on-premises storage and enterprise remote-office deployments. In the future, it will also enable low-cost capacity expansion for HPE's hyperconverged and composable infrastructure offerings.

Taneja Group evaluated the HPE SV3200 to validate its fit as an entry storage device. Ease of use, advanced data services, and supportability were just some of the key attributes we validated with hands-on testing. We found that the SV3200 is an extremely easy-to-use device that can be managed by IT generalists. This simplicity is good news both for new customers that cannot afford dedicated administrators and for HPE customers already accustomed to managing multiple HPE products under the same HPE OneView infrastructure management paradigm. We also validated that the advanced data services of this entry array match those of the field-proven enterprise StoreVirtual products already in the market. The SV3200 supports advanced features such as linear scale-out and multi-site stretch-cluster capability, enabling business continuity techniques rarely found in storage products of this class. In short, HPE has raised the bar for entry arrays, and we recommend that businesses looking at either SDS technology or entry storage give the SV3200 a close look as a product with the flexibility to provide the best of both. A starting price under $10,000 makes this easy, powerful, and flexible array very affordable.

Publish date: 04/11/17
news

The power and benefits of encouraging IT disruption

IT can't remain a reactive cost center and cheerful help desk, but must become a competitive, cutthroat service provider and powerful champion of emerging disruptive technology.

  • Premiered: 09/18/17
  • Author: Mike Matchett
  • Published: TechTarget: Search Storage
Topic(s): Storage, Mike Matchett, Capacity, Performance, Data Center, scale-out, scale-up, storage architecture, flash storage, SSD, Flash, Big Data, big data analytics, data warehouse, Business Intelligence, BI, non-volatile memory, software-defined, software-defined storage, SDS, containerization, container, containers, Security, Data protection, Machine Learning, Google, Facebook, Apple, Cloud