Saturday 16 July 2011
Let The Caller Choose
A programming idiom I often see in managed runtime environments, such as Java, is for a method to return a collection or array that it has allocated itself. Having originally come from a C/C++ background, I have always found this a little strange and lazy. In the C world this idiom would likely be a source of memory leaks. I do, however, understand how convenient it can be in garbage collected environments. A good example is the split method on String, as below:
String str = "Some string that I may want to split";
String[] values = str.split(" ");
For this, a new array has to be allocated within the method. If the array is small it can be allocated in the young generation. Because allocation cycles round the young generation, which is typically much larger than the caches, the write will most likely involve a cache miss. In addition, Java requires the memory to be zeroed for security before it is handed over. What’s worse, the size of many arrays cannot be determined until the method completes, so temporary structures may have to be allocated before the final structure is allocated and copied into. Then, after all this, the memory will eventually need to be garbage collected. That is a lot of CPU work to support this idiom.
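To see why temporary structures are needed, consider a simplified sketch of how a split-style method has to work internally. This is not the JDK implementation, just an illustration of the pattern: the token count is unknown until the input has been scanned, so a temporary list is filled and then copied into the final array.

import java.util.ArrayList;
import java.util.List;

public final class SimpleSplit
{
    public static String[] split(final String input, final char delimiter)
    {
        // Temporary structure: the number of tokens is unknown up front
        final List<String> tokens = new ArrayList<String>();

        int start = 0;
        int end;
        while ((end = input.indexOf(delimiter, start)) != -1)
        {
            tokens.add(input.substring(start, end));
            start = end + 1;
        }
        tokens.add(input.substring(start));

        // Only now can the final array be allocated and copied into
        return tokens.toArray(new String[tokens.size()]);
    }
}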
What is the alternative?
Another idiom is to let the caller allocate the structure and pass it into the method, as in the example below:
String str = "Some string that I may want to split";
Collection<String> values = new ArrayList<String>();
str.split(values, " ");
With this approach the caller has more control and flexibility. Not only can the caller reuse the collection across multiple calls, they can also choose the structure most appropriate to the end need: for example, a HashSet for finding distinct words, an ArrayList for cache-friendly iteration, or a LinkedList to avoid allocating one large contiguous backing array. The method can return the collection, allowing a fluent coding style.
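Note that String has no split method taking a collection; the call above is a hypothetical API. A minimal sketch of what such a method might look like as a static helper, with all names my own invention, is:

import java.util.Collection;

public final class Tokeniser
{
    // The caller chooses and supplies the collection; returning it
    // allows a fluent coding style
    public static <C extends Collection<String>> C split(
        final String input, final C dst, final char delimiter)
    {
        int start = 0;
        int end;
        while ((end = input.indexOf(delimiter, start)) != -1)
        {
            dst.add(input.substring(start, end));
            start = end + 1;
        }
        dst.add(input.substring(start));

        return dst;
    }
}

The caller can then clear and reuse the same collection across many calls, so the hot path need not allocate at all.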
In the Java world this idiom becomes critical to performance when large arrays are involved. An array may be too large to be allocated in the young generation and must therefore be allocated in the old generation. Besides old generation allocation being less efficient, and more contended, for large objects, the large array may trigger a compaction of the old generation, resulting in a multi-second stop-the-world pause. Now that is a good topic for another blog post...
I can hear folk cry out, “but short-lived objects are cheap!” Compared to longer living objects they are relatively cheap, but certainly not free. I’d prefer to spend those extra cycles solving the business problem and, in addition, have the flexibility to choose the most appropriate data structure for the task at hand.
Friday 15 July 2011
Write Combining
Modern CPUs employ lots of techniques to counteract the latency cost of going to main memory. These days CPUs can process hundreds of instructions in the time it takes to read or write data to the DRAM memory banks.
The major tool used to hide this latency is multiple layers of SRAM cache. In addition, SMP systems employ message passing protocols to achieve coherence between caches. Unfortunately CPUs are now so fast that even these caches cannot keep up at times. So to further hide this latency a number of less well known buffers are used.
This article explores “write combining buffers” and how we can write code that uses them effectively.
CPU caches are effectively hash maps without chaining, where each bucket is typically 64 bytes and is known as a “cache line”. The cache line is the effective unit of memory transfer. For example, an address A in main memory hashes to map to a given cache line C.
If a CPU needs to work with an address which hashes to a line that is not already in cache, then the existing line matching that hash must be evicted so the new line can take its place. That is, if two addresses both map via the hashing algorithm to the same cache line, the old one must make way for the new.
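To make the hashing concrete, here is a small sketch of the bucket arithmetic. The geometry is an assumption for illustration only: 64-byte lines and 64 sets, as found in a typical 32KB 8-way L1 data cache; real parts vary.

public final class CacheSetExample
{
    private static final int LINE_SHIFT = 6; // 64-byte cache lines
    private static final int SET_MASK = 63;  // 64 sets in this example

    // The "hash": drop the offset-within-line bits, then take the set bits
    public static int setFor(final long address)
    {
        return (int)((address >>> LINE_SHIFT) & SET_MASK);
    }

    public static void main(final String[] args)
    {
        // Addresses exactly 4KB apart map to the same set and so contend
        System.out.println(setFor(0x1040L));        // prints 1
        System.out.println(setFor(0x1040L + 4096)); // prints 1 again
    }
}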
When a CPU executes a store operation it will try to write the data to the L1 cache nearest to the CPU. If a cache miss occurs at this stage, the CPU goes out to the next layer of cache. At this point, on Intel and many other CPUs, a technique known as “write combining” comes into play.
While the request for ownership of the L2 cache line is outstanding, the data to be stored is written to one of a number of cache-line-sized buffers on the processor itself, known as line fill buffers on Intel CPUs. These on-chip buffers allow the CPU to continue processing instructions while the cache sub-system gets ready to receive and process the data. The biggest advantage comes when the data is not present in any of the other cache layers.
These buffers become very interesting when subsequent writes happen to require the same cache line. The subsequent writes can be combined into the buffer before it is committed to the L2 cache. Each 64-byte buffer maintains a 64-bit field with one bit per byte; as a byte is updated its corresponding bit is set, indicating which data is valid when the buffer is transferred to the outer caches.
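Conceptually the valid-byte bookkeeping looks something like the sketch below. This is a software illustration of the idea only, not how the hardware is built:

public final class WriteCombineBufferSketch
{
    private final byte[] line = new byte[64]; // one cache line of data
    private long validMask = 0L;              // one bit per byte of the line

    public void writeByte(final int offset, final byte value)
    {
        line[offset] = value;
        validMask |= 1L << offset; // mark the byte at offset 0-63 as valid
    }

    // When all 64 bits are set the whole line can be transferred in one go
    public boolean isFull()
    {
        return validMask == -1L;
    }
}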
Hang on, I hear you say: what happens if the program wants to read some of the data that has been written to a buffer? Well, our hardware friends have thought of that, and they will snoop the buffers before they read the caches.
What does all this mean for our programs?
If we can fill these buffers before they are transferred to the outer caches then we will greatly improve the effective use of the transfer bus at every level. How do we do this? Well, programs spend most of their time in loops doing work, so loops are the place to focus.
There are a limited number of these buffers, and the number differs by CPU model. For example, on an Intel CPU you are guaranteed to get only 4 of them at any one time. What this means is that, within a loop, you should not write to more than 4 distinct memory locations at a time or you will not benefit from the write combining effect.
What does this look like in code?
import static java.lang.System.out;

public final class WriteCombining
{
    private static final int ITERATIONS = Integer.MAX_VALUE;
    private static final int ITEMS = 1 << 24;
    private static final int MASK = ITEMS - 1;

    private static final byte[] arrayA = new byte[ITEMS];
    private static final byte[] arrayB = new byte[ITEMS];
    private static final byte[] arrayC = new byte[ITEMS];
    private static final byte[] arrayD = new byte[ITEMS];
    private static final byte[] arrayE = new byte[ITEMS];
    private static final byte[] arrayF = new byte[ITEMS];

    public static void main(final String[] args)
    {
        for (int i = 1; i <= 3; i++)
        {
            out.println(i + " SingleLoop duration (ns) = " + runCaseOne());
            out.println(i + " SplitLoop duration (ns) = " + runCaseTwo());
        }

        // Read from the arrays so the writes cannot be optimised away
        int result = arrayA[1] + arrayB[2] + arrayC[3] + arrayD[4] + arrayE[5] + arrayF[6];
        out.println("result = " + result);
    }

    // Writes to 6 distinct memory locations within a single loop
    public static long runCaseOne()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }

    // Writes to only 3 distinct memory locations per loop, run twice over
    public static long runCaseTwo()
    {
        long start = System.nanoTime();

        int i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayA[slot] = b;
            arrayB[slot] = b;
            arrayC[slot] = b;
        }

        i = ITERATIONS;
        while (--i != 0)
        {
            int slot = i & MASK;
            byte b = (byte)i;
            arrayD[slot] = b;
            arrayE[slot] = b;
            arrayF[slot] = b;
        }

        return System.nanoTime() - start;
    }
}

This program on my Windows 7 64-bit Intel Core i7 860 @ 2.8 GHz system produces the following output:
1 SingleLoop duration (ns) = 14019753545
1 SplitLoop duration (ns) = 8972368661
2 SingleLoop duration (ns) = 14162455066
2 SplitLoop duration (ns) = 8887610558
3 SingleLoop duration (ns) = 13800914725
3 SplitLoop duration (ns) = 7271752889
To spell it out: if we write to 6 array locations (memory addresses) inside one loop, the program takes significantly longer than if we split the work up and write first to 3 array locations, then to the other 3 locations, sequentially.
By splitting the loop we do much more work yet the program completes in much less time! Welcome to the magic of “write combining”. By using our knowledge of the CPU architecture to fill those buffers properly we can use the underlying hardware to accelerate our code by a factor of two.
Don’t forget that with hyper-threading you can have 2 threads in competition for these buffers on the same core.
Why Mechanical Sympathy?
A little while ago I discovered this wonderful quote by Henry Petroski:
"The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry."
This observation has been a great source of employment for me over the last two decades. In this series of blog posts I will try to share some of the techniques I've learned for getting the most out of our underlying hardware. The name "Mechanical Sympathy" comes from the great racing driver Jackie Stewart, a three-time Formula 1 world champion. He believed the best drivers had enough understanding of how a machine worked that they could work in harmony with it.
Why does the software we use today not feel any faster than the DOS-based applications we used 20 years ago? It does not have to be this way. As a software developer I want to produce software which does justice to the wonderful achievements of our hardware friends.