Items Tagged: DRAM
The Kaminario K2: Transforming the Costs and Capabilities of Storage Performance
In this product profile, we take a look at what we think is required to deliver a true enterprise-class storage foundation for improving application performance, and at one vendor's distinctly different approach to high-performance storage. That solution, the Kaminario K2, offers a compelling option for enterprises in need of faster storage to drive greater application performance.
MRAM technology likely choice as post-flash solid-state storage
NAND flash-based storage could be replaced by newer forms of non-volatile memory like MRAM technology. Find out why MRAM's density, cost and form factor could make flash obsolete.
- Premiered: 05/09/13
- Author: Mike Matchett
- Published: TechTarget: Search Solid State Storage
5 Tech Experts Share Their Caching Secrets
It matters where you put your cache. Caching, meaning the use of DRAM or flash to accelerate I/O, should live as close as possible to the application making use of it.
Turn to in-memory processing when performance matters
In-memory processing can improve data mining, analysis and other dynamic data processing uses. When considering in-memory, however, look out for data protection, cost and bottlenecks.
EMC upgrades Isilon NAS for file storage, Hadoop analytics
EMC today launched two new Isilon scale-out network-attached storage arrays and added the ability to use flash as a read cache for all Isilon systems.
- Premiered: 07/08/14
- Author: Taneja Group
- Published: TechTarget: Search Storage
Flash runs past read cache
Just because you can add a cache doesn't mean you should. It is possible to have the wrong kind, so weigh your options before implementing a memory-based cache for a storage boost.
- Premiered: 10/16/14
- Author: Mike Matchett
- Published: TechTarget: Search Data Center
Diablo Technologies launches new DDR4 Memory1 module
Diablo Technologies introduces the all-flash Memory1 DDR4 module, and claims lower-cost, higher-capacity flash can replace expensive DRAM.
- Premiered: 08/05/15
- Author: Taneja Group
- Published: TechTarget: Search Solid State Storage
Kaminario K2 array uses 3D TLC NAND flash
Kaminario unveils 5.5 version of its K2 all-flash array with 3D TLC NAND flash, claims of sub-$1 per GB pricing and support for asynchronous replication.
- Premiered: 08/20/15
- Author: Taneja Group
- Published: TechTarget: Search Solid State Storage
Enterprise Storage that Simply Runs and Runs: Infinidat Infinibox Delivers Incredible New Standard
Storage should be the most reliable thing in the data center, not the least. What data centers today need is enterprise storage that affordably delivers at least 7-9's of availability, at scale. That works out to roughly three seconds of anticipated unavailability per year, a tighter target than most data centers themselves can meet.
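As a quick sanity check on that target, the short sketch below (our own minimal illustration, not taken from the report) converts availability expressed in "nines" into expected downtime per year.

```python
# Convert availability expressed in "nines" into expected downtime per year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

def downtime_seconds_per_year(nines: int) -> float:
    """Expected unavailability, in seconds per year, for an availability of N nines."""
    unavailability = 10 ** (-nines)        # e.g. 7 nines -> 1e-7
    return unavailability * SECONDS_PER_YEAR

for nines in (5, 6, 7):
    print(f"{nines} nines: {downtime_seconds_per_year(nines):8.2f} s/year")
# 5 nines allows about 315 s (over 5 minutes) of downtime per year; 7 nines allows about 3.15 s.
```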
Data availability is the attribute enterprises need most to maximize the value of their storage, especially as data volumes continue to grow. Yet traditional enterprise storage solutions aren't keeping pace with the need for greater than the oft-touted 5-9's of storage reliability, instead deferring to layered-on methods such as additional replication copies that drive up latency and cost, or settling for cold tiering that saps performance and reduces accessibility.
Within the array, as stored data volumes ramp up and disk capacities increase, RAID and related volume/LUN schemes begin to break down: ever-longer disk rebuild times create large windows of vulnerability to unrecoverable data loss. Other vulnerabilities can arise from poor (or at best, default) array designs, software issues, and well-intentioned but sometimes fatal human management and administration. Any new storage solution has to address all of these potential vulnerabilities.
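To put rebuild windows in perspective, here is a back-of-the-envelope sketch (our own illustration, using assumed drive capacities and rebuild rates rather than figures from any vendor): a single drive's rebuild time scales roughly with its capacity divided by the sustained rebuild throughput.

```python
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    """Rough single-drive rebuild time: capacity divided by sustained rebuild rate."""
    capacity_mb = capacity_tb * 1_000_000   # decimal TB -> MB
    seconds = capacity_mb / rebuild_mb_per_s
    return seconds / 3600

# Assumed figures, for illustration only.
print(f"4 TB at 100 MB/s:  {rebuild_hours(4, 100):.0f} hours")   # ~11 hours
print(f"16 TB at 100 MB/s: {rebuild_hours(16, 100):.0f} hours")  # ~44 hours
```

The larger the drive, the longer the array runs degraded and exposed to a second failure before protection is fully restored.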
In this report we will look at what we mean by 7-9’s exactly, and what’s really needed to provide 7-9’s of availability for storage. We’ll then examine how Infinidat in particular is delivering on that demanding requirement for those enterprises that require cost-effective enterprise storage at scale.
Big Data Storage Solutions: Options Abound
Hadoop, Spark and other big data analysis tools all have one thing in common: they need some form of big data storage to hold the vast quantities of data that they crunch through. The good news is that big data storage options are proliferating.
- Premiered: 08/09/16
- Author: Taneja Group
- Published: InfoStor
Oracle cloud storage embraces ZFS Storage Appliance
New Oracle operating system update enables ZFS Storage Appliance to transfer file- and block-based data to Oracle Storage Cloud without an external cloud gateway.
- Premiered: 03/29/17
- Author: Taneja Group
- Published: TechTarget: Search Cloud Storage
HPE and Micro Focus Data Protection for SAP HANA
These days the world operates in real time, all the time. Whether buying tickets or hunting for the best deal from an online retailer, people expect data to be up to date, with the best information at their fingertips. Businesses are expected to meet this requirement whether they sell products or services, and having this real-time, actionable information can dictate whether a business survives or dies. In-memory databases have become popular in these environments, because the world's 24x7 real-time demands cannot wait for legacy ERP and CRM applications to be rewritten. Companies such as SAP devised ways to integrate disparate databases by building a single, very fast uber-database that operates with legacy infrastructure while simultaneously creating a new environment where real-time analytics and applications can flourish.
SAP HANA is a growing and very popular example of an application environment that uses in-memory database technology to process massive amounts of real-time data quickly. Its in-memory computing engine lets HANA work on data held in RAM rather than reading it from disk. At the heart of SAP HANA is a database that handles both OLAP and OLTP workloads simultaneously. To overcome the volatility of server DRAM, the HANA architecture requires persistent shared storage to enable greater scalability, disaster tolerance and data protection.
SAP HANA is available on-premises as a dedicated appliance or built from best-in-class components through the SAP Tailored Datacenter Integration (TDI) program. The TDI approach has become the more popular HANA option because it provides the flexibility to leverage existing resources, such as data protection infrastructure, and enables a level of scalability that the appliance approach lacked. Hewlett Packard Enterprise (HPE) has long been a leader in providing mission-critical infrastructure for SAP environments, and SUSE has long provided the mission-critical Linux operating system those environments run on. Micro Focus has long been a strategic partner of HPE, and together they have leveraged unique hardware and software integrations that enable a complete, end-to-end, robust data protection environment for SAP. One of the key value propositions of SAP HANA is its ability to integrate with legacy databases, so it makes sense to use a flexible data protection solution from Micro Focus and HPE that covers both legacy database environments and modern in-memory HANA environments.
In this solution brief, Taneja Group will explore the data protection requirements for HANA TDI environments. Then we’ll briefly examine what makes Micro Focus Data Protector combined with HPE storage an ideal solution for protecting mission-critical SAP environments.