
HPE Welcomes You To The Machine!

Welcome my son, welcome to the machine
Where have you been?  - Pink Floyd

HPE has publicly rolled out its "The Machine" prototype, showcasing an impressive 160TB of fabric-attached memory, 1,280 ARM cores, and a 100Gb/s fiber interconnect. OK, so this is a whole lot of memory! But it's not just about memory.

In both HPC and big data analytics, and in increasingly converged applications that combine analytics with operational processes at scale, the game is all about increasing data locality to compute. Ten years ago, Hadoop unlocked massive-scale data processing for certain classes of problems by "mapping/reducing" compute and big data across a cluster of commodity servers. We might look at that as the "airy" and "cloudy" kind of approach. Then Spark came along and showed how we really need to tackle big data sets in memory, though still across a cluster architecture.
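To make that contrast concrete, here's a minimal PySpark sketch of the "keep the working set in memory" idea - cache a dataset once, then run multiple passes against memory instead of re-reading storage each time. The dataset path and column names are hypothetical placeholders, not anything from HPE's work.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("in-memory-analytics").getOrCreate()

# Hypothetical dataset path and columns, purely for illustration
events = spark.read.parquet("hdfs:///data/events.parquet")
events.cache()    # keep the working set in executor memory (spills only if it must)
events.count()    # one full pass materializes the cache

# Subsequent passes hit memory rather than going back to storage
by_user = events.groupBy("user_id").agg(F.count("*").alias("n_events"))
by_hour = events.groupBy(F.hour("ts").alias("hour")).agg(F.avg("latency_ms").alias("avg_latency_ms"))

by_user.show()
by_hour.show()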

Today we're seeing the bleeding edge of what aggressive hardware engineering can do to practically cram massive memory and large numbers of cores together - compressing and converging compute and big data as densely as possible - in some ways a throwback nod to the old mainframe. Folks with highly interconnected, HPC-style modeling and simulation needs that haven't been well served by commodity scale-out (i.e., affordable) big data analytics architectures are going to want to look closely at this development. In fact, HPE has modified a version of Spark to make different internal assumptions that match this new architecture, to great effect (at least 15x).
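HPE hasn't published the internals of its modified Spark here, but purely as an illustration, even stock Spark can be leaned toward a "memory is abundant" assumption through ordinary configuration. The values and spill path below are hypothetical, not HPE's settings - just a sketch of which assumptions shift when memory stops being the scarce resource.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("big-memory-assumptions")
    .config("spark.executor.memory", "512g")          # assume huge per-executor heaps
    .config("spark.memory.fraction", "0.9")           # hand nearly everything to execution/storage
    .config("spark.memory.storageFraction", "0.6")    # favor cached data over shuffle buffers
    .config("spark.sql.shuffle.partitions", "64")     # fewer, fatter partitions when locality is cheap
    .config("spark.local.dir", "/mnt/fabric-memory")  # hypothetical spill target on a memory-backed mount
    .getOrCreate()
)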

We can guess at a few possible classes of applications that might find this fertile ground:

  1. Apps and data that are best kept fully in memory for performance. Nothing beats all-in-memory for random-I/O-patterned workloads. And HPE is also working up new classes of persistent memory that only take this ball farther down the field.
  2. Some apps (HPC-style modeling, or IoT/high-volume low-latency streaming apps) might need to communicate state or status between nodes (cores) at small intervals. When all your scratch storage and buffer/message queues live in a shared local memory space, you get massive speed-ups compared to disk-based (or network-based) sharing - see the sketch after this list.
  3. Massive shared-memory "fabrics" can potentially replace whole shared storage subsystems - arrays, SANs, external networks, gateways, etc.
  4. Massive memory pool solutions can enable programmable, composable architectures - think new cloud-like provisioning of any size/shape of virtual resources you need, including massive memory.
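As a toy, single-node illustration of item 2, here's a minimal Python sketch in which two processes exchange status through a shared memory block rather than through a file or socket. The two-field layout is made up, and plain OS shared memory is only a rough stand-in for the fabric-attached memory idea, but it shows the pattern of coordinating through a common memory space.

from multiprocessing import Process, shared_memory
import struct

def producer(name: str) -> None:
    # Attach to the existing block and write a status word and a progress counter
    shm = shared_memory.SharedMemory(name=name)
    struct.pack_into("ii", shm.buf, 0, 1, 42)   # status=1 (ready), progress=42
    shm.close()

def consumer(name: str) -> None:
    # Attach to the same block and read the state back, no disk or network involved
    shm = shared_memory.SharedMemory(name=name)
    status, progress = struct.unpack_from("ii", shm.buf, 0)
    print(f"status={status} progress={progress}")
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=8)
    p = Process(target=producer, args=(shm.name,))
    p.start(); p.join()
    c = Process(target=consumer, args=(shm.name,))
    c.start(); c.join()
    shm.close()
    shm.unlink()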

We've predicted before that the world will need new kinds of application compilers aimed at machines where memory is large, persistent, and fabric-attached (and also for IoT distribution of compute to the edge). HPE has written a lot of new code, and will need to encourage a massive partner (and possibly open source?) ecosystem as well. HPE is calling this brave new space "memory computing", although I think companies like Knowm are more truly defining "memory computing" with their neuron-like memory chips. Still, this kind of memory-intensive platform will no doubt attract those looking for high-performance machine learning/deep learning, big and fast financial modeling, and IoT-scale stream processing today. It's a small step for HPE to add in racks of GPUs or FPGAs, tie this all into a bigger IoT architectural platform, and really show why it's not just a software-defined world - that software has to run somewhere!
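For a loose feel of what programming against large, persistent, byte-addressable memory looks like with today's tools, here's a sketch that uses an ordinary memory-mapped file as a stand-in. The path and layout are invented, and real persistent-memory libraries (e.g., PMDK) add flush and ordering guarantees that a plain mmap does not - this is a conceptual sketch only.

import mmap
import struct

PATH = "/tmp/pmem-sim.bin"   # hypothetical backing file standing in for persistent memory
SIZE = 4096

# Create and size the backing store once
with open(PATH, "a+b") as f:
    f.truncate(SIZE)

with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), SIZE) as pm:
        # Treat the mapping like an in-memory structure that survives restarts
        counter, = struct.unpack_from("q", pm, 0)
        counter += 1
        struct.pack_into("q", pm, 0, counter)
        pm.flush()           # make the update durable (rough analogue of a persistence barrier)
        print(f"run count so far: {counter}")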

  • Premiered: 05/16/17
  • Author: Mike Matchett
Topic(s): HPE, The Machine, Big Data, Spark, IoT, Mike Matchett
