Background
A thread of execution will typically run until it has used up its quantum (aka time slice), at which point it joins the back of the run queue, waiting to be re-scheduled as soon as a processor core becomes available. While running, the thread will have accumulated a significant amount of state in the processor, including instructions and data in the caches. If the thread can be re-scheduled to run on the same core as last time, it can benefit from all that accumulated state. A thread may equally not run to the end of its quantum because it has been pre-empted, or has blocked on IO or a lock; when it is ready to run again, the same holds true.
There are numerous techniques available for pinning threads to a particular core. In this article I'll illustrate the use of the taskset command on two threads exchanging IP multicast messages via a dummy interface. I've chosen this as the first example because in a low-latency environment multicast is the preferred IP protocol. For simplicity, I've also chosen not to involve the physical network while introducing the concepts. In the next article I'll expand on this example and cover the issues that arise with a real network.
1. Create the dummy interface
$ su -
$ modprobe dummy
$ ifconfig dummy0 172.16.1.1 netmask 255.255.255.0
$ ifconfig dummy0 multicast
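As an aside, on newer distributions where ifconfig is not available, the equivalent setup after the modprobe should be possible with the ip command (a sketch of the same addressing as above):
$ ip addr add 172.16.1.1/24 dev dummy0
$ ip link set dummy0 multicast on
$ ip link set dummy0 up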
2. Get the Java files (Sender and Receiver) and compile them
$ javac *.java
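If the source files are not to hand, the following is a minimal sketch of what the receiver amounts to; the port (4445) and payload size here are assumptions for illustration, not the values from the actual sources.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.MulticastSocket;
import java.net.NetworkInterface;

public class MultiCastReceiver
{
    public static void main(final String[] args) throws Exception
    {
        final InetAddress group = InetAddress.getByName(args[0]);         // e.g. 230.0.0.1
        final NetworkInterface nic = NetworkInterface.getByName(args[1]); // e.g. dummy0

        final MulticastSocket socket = new MulticastSocket(4445);         // port is an assumption
        socket.joinGroup(new InetSocketAddress(group, 4445), nic);

        final byte[] buffer = new byte[64];
        final DatagramPacket packet = new DatagramPacket(buffer, buffer.length);

        long count = 0;
        long lastReport = System.currentTimeMillis();
        while (true)
        {
            socket.receive(packet);                                       // blocks for the next datagram
            ++count;

            final long now = System.currentTimeMillis();
            if (now - lastReport >= 1000)                                 // report throughput once per second
            {
                System.out.println(count + " messages received");
                count = 0;
                lastReport = now;
            }
        }
    }
}

The sender is the mirror image: a socket posting small datagrams to the group address in a tight loop, printing its own count once per second.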
3. Run the tests without CPU pinning
Window 1:
$ java MultiCastReceiver 230.0.0.1 dummy0
Window 2:
$ java MultiCastSender 230.0.0.1 dummy0 20000000
4. Run the tests with CPU pinning
Window 1:
$ taskset -c 2 java MultiCastReceiver 230.0.0.1 dummy0
Window 2:
$ taskset -c 4 java MultiCastSender 230.0.0.1 dummy0 20000000
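To confirm the pinning has taken effect, the current affinity mask of a running process can be queried with taskset (assuming a single java process is running, otherwise substitute the PID directly):
$ taskset -cp $(pidof java)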
Results
Once per second the tests output the number of messages they have managed to send and receive. A typical example run is charted in Figure 1 below.
Figure 1. Messages per second for the pinned and unpinned runs.
The interesting thing I've observed is that the unpinned test follows a step function of unpredictable performance. Across many runs I've seen different patterns, but all share this step-function nature. The pinned tests give consistent throughput with no step pattern, and always the greatest throughput.
This test is not particularly CPU intensive, nor does it access the physical network device, yet it shows how critical processor affinity is, not just to high performance but also to predictable performance. In the next article of this series I'll introduce a network hop and the issues arising from interrupt handling.
I see the same results here. Also, if you make sure to pin the two processes to cores that share the same L2 cache you get double the throughput over two cores on different L2 caches. I presume this is the overhead of the cache interconnect?
Hi Martin.
No doubt you will already have this in mind for a future post, but I am curious about what sort of constraints you may have in place for ensuring that other threads are not utilising the resources of the CPUs that the sender and receiver processes (obviously single-threaded) have affinity to.
When sharing the same L2 cache I'm assuming you are using a pre-Nehalem Intel processor such as Penryn? If so, you are seeing the benefits of exchanging data via the L2 rather than the L3 cache as in my test. This will obviously be faster between two cores, but it does not scale to more cores as well as the Nehalem processors do. Most processors now operate a three-level cache, with only the third level shared if you discount hyper-threading.
taskset is the cheap and cheerful means of setting affinity. Other means exist, such as cgroups, which can be used to contain OS threads and so avoid contention with the cores assigned to specific tasks. I used taskset for a quick illustration of what is possible.
I've used taskset in the past to pin init and everything under it to one core, and then have my "soft-realtime" processes pinned to the other cores on the box. This way the OS shouldn't interfere with any of your application processes. The idea is to always have at least one core dedicated to the OS. Linux containers and cgroups are also well worth investigating...
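For example, a sketch of that first step (PID 1 is init; processes forked afterwards inherit the mask):
$ taskset -pc 0 1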
How about processor affinity for interrupts? Do you think it is good practice to dedicate one CPU to interrupt handling?
Dedicating a CPU for interrupt handling can be a very valid technique for certain types of workload. It is one of the points I plan to cover in the next instalment of this series.
Martin, this is a great post.
You finish by mentioning "In the next article of this series [...]", and as the title suggests, there should be a Part 2. Where is it? Eagerly waiting for it.
Continue the great work!
You have observational evidence that pinning helps, which is good, but you assign the cause to accumulated processor state. How did you reach that conclusion?
I base that question on the following: when the next thread is scheduled to run, all the processor registers, cache lines etc. will be loaded for that thread, effectively flushing all your current thread's state (indeed the OS should save all that state for you). This will continue for subsequent threads until your thread is re-scheduled to run on that processor.
Regards,
Matt
Use the model specific registers (MSRs) for your CPU to get all the data you need on how the process is executing. Cheap way is "perf stat" on Linux. I have seen the OS schedule the thread to execute on another core too readily. This is worst with Linux; Windows, BSD and OSX do much better. Being scheduled to another core is even worse than having another thread partially pollute your warm cache.
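For example, a quick sketch using generic counters (exact event names vary by kernel and CPU):
$ perf stat -e cycles,instructions,cache-misses,cpu-migrations,context-switches taskset -c 2 java MultiCastReceiver 230.0.0.1 dummy0
The cpu-migrations counter in particular shows how often the scheduler has moved the process to another core.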
If you have other threads running on that core they can cause cache pollution, as you point out. For low-latency applications you do not want this to happen. This may mean you are over-provisioning cores.
Hi,
For the dummy interface part, can I just use the lo interface and 127.0.0.1 instead?
Alex
Should be fine if you are connected to a network. Dummy works well even if a network is not connected.
Thanks, I tried running the Java programs with dummy0 but the receiver did not receive anything, even though I turned off SELinux. But after I changed to lo it all worked, thanks.
Hi Martin,
How will this work if a process has more than one thread? Will it pin all threads or will it pin only the main thread?
No simple answer here. You need to consider control groups, isolcpus, and other config options.
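For the basic mechanics: taskset applied at launch sets the affinity mask for the whole process, and threads created afterwards inherit it. Individual threads can then be re-pinned by thread ID, e.g. (a sketch, with <tid> taken from the first command's output):
$ ps -eLo pid,tid,comm | grep java
$ taskset -p -c 2 <tid>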
Hi, great article, thx!
Has part II been released? It sounds like you were going to describe some interesting stuff - interrupt handling.
Cheers,
Michał
Hi Martin,
The links to the source code (Sender and Receiver) are broken.
Could you please update them?
Google Code has been archived so I moved them to GitHub.