Ages ago I blogged about the Intel topology tool and mapping Xeon 5500 (Nehalem EP) processor threads to OS CPUs on Linux. I don't recall whether I ever blogged the same about Xeon 5600 (Westmere EP), but I'll cover that processor and Xeon E5-2600 (Sandy Bridge) in this short post. First, Xeon 5600.
The following two screen shots are socket 0 and socket 1 from a Xeon 5600 server. Socket 0 first:
Now, socket 1:
So, based on the information above, one would have to specify OS CPUs 0,1,2,3,4,5 to get thread 0 from the first three cores on each socket (c0_t0). I never liked that numbering much. That's why I'm glad Sandy Bridge presents itself in a more logical manner. As you can see from the following two screen shots, specifying affinity for thread 0 of the cores on socket 0 is as simple as 0,1,2,3,4,5,6,7. First, socket 0:
And now, socket 1:
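By the way, neither box needs the Intel tool just to see this mapping. The kernel exports it under sysfs, so a quick loop produces the same OS CPU to socket/core table. A minimal sketch, assuming a Linux kernel with /sys/devices/system/cpu/*/topology populated:

```shell
# Print "cpuN socket S core C" for every OS CPU, straight from sysfs.
# (Sketch: assumes Linux exposing /sys/devices/system/cpu/*/topology.)
cpu_map() {
  for d in /sys/devices/system/cpu/cpu[0-9]*; do
    printf 'cpu%s socket %s core %s\n' "${d##*cpu}" \
      "$(cat "$d/topology/physical_package_id")" \
      "$(cat "$d/topology/core_id")"
  done | sort -V
}
cpu_map
```

On the E5-2600 box the socket column comes out contiguous, so pinning thread 0 of every core on socket 0 is simply `taskset -c 0-7 <command>`; on the 5600 box, `taskset -c 0,1,2,3,4,5` gets you thread 0 of the first three cores on each socket instead.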
Lest this come off as simple tomfoolery, allow me to show the 2x difference in siphoning data off a fifo when the data flows socket-local versus socket-remote:
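For the curious, the comparison is easy to reproduce with numactl. A sketch of what I mean, where writer.bin and reader.bin are hypothetical stand-ins for whatever feeds and drains the fifo:

```shell
# Sketch of the local-vs-remote fifo comparison using numactl.
# writer.bin / reader.bin are hypothetical stand-ins for the programs
# that feed and drain the fifo; time each case and compare.
fifo_case() {   # $1 = node for the reader (0 = socket-local, 1 = socket-remote)
  mkfifo /tmp/f
  numactl --cpunodebind=0 --membind=0 ./writer.bin > /tmp/f &
  numactl --cpunodebind="$1" --membind="$1" ./reader.bin < /tmp/f
  wait
  rm -f /tmp/f
}
# fifo_case 0   # both ends on socket 0
# fifo_case 1   # reader forced onto socket 1
```

Binding both CPU and memory to the same node (`--cpunodebind` plus `--membind`) is the point; binding only the CPU still lets the pages land on the remote node.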
Be aware that this level of disparity will not necessarily be realized when a server is booted SUMA, nor when BIOS NUMA is enabled but the grub boot string includes numa=off. I'd test the difference and blog it here, but that would just be tomfoolery :-)
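If you're not sure how your own box booted, the kernel will tell you how many memory nodes it actually sees. On these two-socket servers, one node means SUMA (or numa=off) and two means NUMA is in play:

```shell
# Count the memory nodes the kernel exposes. On a two-socket server:
# 1 node = SUMA or numa=off, 2 nodes = NUMA enabled.
ls -d /sys/devices/system/node/node[0-9]* | wc -l
```

`numactl --hardware` reports the same information, with per-node memory sizes thrown in.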