HP has partnered with Calxeda to produce early samples of a 4U chassis containing 288 system-on-chip (SoC) servers.
I care about this sort of systems offering because I espouse the symmetrical Massively Parallel Processing (MPP) computing paradigm.
Here’s a nice quote from The Register:
[…] a half rack of Redstone machines and their external switches implementing 1,600 server nodes has 41 cables, burns 9.9 kilowatts, and costs $1.2m.
A more traditional x86-based cluster doing the same amount of work would only require 400 two-socket Xeon servers, but it would take up 10 racks of space, have 1,600 cables, burn 91 kilowatts, and cost $3.3m.
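The quoted figures reduce to striking per-node numbers. The raw numbers below come straight from The Register excerpt; the per-node arithmetic and the ratios are my own back-of-envelope work, sketched here for the curious:

```python
# Figures quoted from The Register; per-node arithmetic is mine.
redstone = {"nodes": 1600, "watts": 9_900,  "cost": 1_200_000, "racks": 0.5, "cables": 41}
xeon     = {"nodes": 1600, "watts": 91_000, "cost": 3_300_000, "racks": 10,  "cables": 1600}
# The Xeon cluster is 400 two-socket servers doing "the same amount of
# work", so both are normalized to 1,600 node-equivalents of that work.

for name, c in (("Redstone", redstone), ("Xeon", xeon)):
    print(f"{name}: {c['watts'] / c['nodes']:.1f} W per node-equivalent, "
          f"${c['cost'] / c['nodes']:,.2f} per node-equivalent")

print(f"Power ratio (Xeon/Redstone): {xeon['watts'] / redstone['watts']:.1f}x")
print(f"Cost ratio  (Xeon/Redstone): {xeon['cost'] / redstone['cost']:.2f}x")
```

Roughly 6 watts per node-equivalent for Redstone versus about 57 for the Xeon cluster—a better than 9x power difference and a 2.75x cost difference for the same stated amount of work.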
Shared Nothing / Shared Everything?
Did you notice I mentioned symmetrical MPP in this post? EMC Greenplum Database is a symmetrical MPP software product. This means all code can, and does, run on all available CPUs. Unlike Exadata, there is no arbitrary cutoff point where some CPUs must run certain code while other CPUs (in dedicated servers) are restricted to other code. That would be an asymmetrical MPP, and it is impossible to handle data flow in a balanced manner if some CPUs must run some code and others cannot. Please allow me to quote myself:
The scalability of an MPP is solely related to whether it is symmetrical or asymmetrical.
So what about shared-disk? The scalability of an MPP has nothing to do with whether the disks are accessed via shared-disk or dedicated (non-shared) plumbing. Oracle Real Application Clusters scales DW/BI/Analytics workloads fantastically and it is a shared-disk architecture. However, coupling Real Application Clusters with Exadata Smart Scan is where the asymmetrical attributes are introduced.
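To make the symmetrical/asymmetrical distinction concrete, here is a toy sketch (my own illustration, not any vendor's implementation, with made-up stage names) of the capacity problem. In a symmetrical MPP every node may run every stage of a query plan; in an asymmetrical one, a fixed subset of nodes is restricted to certain stages, so capacity cannot shift with the workload mix:

```python
# Toy model: which nodes can participate in each stage of a query plan?
# Stage names and node counts are illustrative only.
STAGES = ["scan", "filter", "join", "aggregate"]

# Symmetrical: all 8 nodes may run every stage.
symmetric = [set(STAGES)] * 8

# Asymmetrical: 4 nodes are confined to scan/filter (a "storage tier"),
# 4 nodes are confined to join/aggregate (a "compute tier").
asymmetric = [{"scan", "filter"}] * 4 + [{"join", "aggregate"}] * 4

def nodes_for(stage, cluster):
    """Count the nodes permitted to work on a given plan stage."""
    return sum(1 for capabilities in cluster if stage in capabilities)

for stage in STAGES:
    # Symmetric: 8 nodes available for every stage.
    # Asymmetric: only 4, regardless of where the work actually is.
    print(stage, nodes_for(stage, symmetric), nodes_for(stage, asymmetric))
```

The point of the sketch: if a workload turns out to be join-heavy, the symmetrical cluster throws all 8 nodes at joins, while half the asymmetrical cluster sits idle because its nodes are not allowed to run that code.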
The shared-disk versus shared-nothing argument is old, tired, and irrelevant. Interestingly, I part ways with my colleagues here at EMC Greenplum on that matter. If you see literature that props up the shared-nothing aspect of Greenplum, please bear in mind that it is my personal assertion that shared-disk versus shared-nothing is not a scalability topic for DW/BI/Analytics workloads. To put it another way, it is my personal campaign, and I’ll be blogging soon on the matter. Oh, I forgot to mention that I’m right on the matter (smiley).
Little Things Doth Crabby Make?
I don’t want to draw attention to the lack of care for electrostatic discharge (ESD) in the handling of components in the following video, because I’m too excited at having finally seen things I’ve been anxiously anticipating for quite some time. So, no, I won’t make this an installment in the Little Things Doth Crabby Make series. HP most likely uses ESD wrist straps when they are not producing a video (smiley).
The Disclaimer
Please take a gander at the upper right-hand corner of this page. You’ll see the disclaimer that spells out the fact that these are my personal words and thoughts. I am not blogging about any EMC business in this post. I’m simply blogging about low-wattage general-purpose servers—something I’m very interested in.