Taneja Group | The+Machine
Trusted Business Advisors, Expert Technology Analysts



HPE Welcomes You To The Machine!

HPE has publicly rolled out its "The Machine" prototype, carrying 160TB of fabric-attached memory, 1,280 ARM cores, and 100Gb/s fiber interconnects. OK, so this is a whole lot of memory! But it's not just about memory. In both HPC and big data analytics, and increasingly in converged applications that combine analytics with operational processing at scale, the game is all about increasing data locality to compute. Ten years ago Hadoop unlocked massive-scale data processing for certain classes of problems by "mapping/reducing" compute and big data across a cluster of commodity servers. We might look at that as the "airy," "cloudy" kind of approach. Then Spark came along and showed how we really need to tackle big data sets in memory, though still across a cluster architecture. Today we see the bleeding edge of what aggressive hardware engineering can do to practically cram massive memory and large numbers of cores together - compressing and converging compute and big data as densely as possible - in some ways a throwback nod to the old mainframe. Folks with highly interconnected, HPC-style modeling and simulation needs that haven't been well served by commodity scale-out (i.e., affordable) big data analytic architectures will want to look closely at this development. In fact, HPE has modified a version of Spark to make different internal assumptions matching this new architecture, to great effect (at least 15x)...
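For readers less familiar with the "mapping/reducing" model mentioned above, here is a minimal, single-process sketch of the idea in plain Python - purely illustrative, not HPE's or Hadoop's actual code. In a real Hadoop cluster, the map and reduce phases run on many commodity nodes with a shuffle step in between; the point of architectures like The Machine is to collapse that distribution by putting compute next to one huge shared memory instead.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit (word, 1) pairs from each input line.
    # On a cluster, this runs in parallel on the node holding each data block.
    for line in records:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: sum the counts per key. This stands in for the distributed
    # shuffle-and-reduce step that Hadoop spreads across the cluster.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big compute", "data locality matters"]
print(reduce_phase(map_phase(lines)))
# → {'big': 2, 'data': 2, 'compute': 1, 'locality': 1, 'matters': 1}
```

Spark keeps the same map/reduce programming model but holds the intermediate data sets in cluster memory between stages, which is why a memory-centric machine like this prototype is such a natural fit for it.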

  • Premiered: 05/16/17
  • Author: Mike Matchett
Topic(s): HPE The Machine Big Data Spark IoT Mike Matchett