Last week I attended a review session of the new JSR166 StampedLock run by Heinz Kabutz at the excellent JCrete unconference. StampedLock is an attempt to address the contention issues that arise in a system when multiple readers concurrently access shared state. StampedLock is designed to perform better than ReentrantReadWriteLock by taking an optimistic read approach.
While attending the session a couple of things occurred to me. Firstly, I thought it was about time I reviewed the current state of Java lock implementations. Secondly, although StampedLock looks like a good addition to the JDK, it seems to overlook the fact that lock-free algorithms are often a better solution to the multiple-reader case.
Test Case
To compare implementations I needed an API test case that would not favour a particular approach. For example, the API should be garbage free and allow the methods to be atomic. A simple test case is to design a spaceship that can be moved around a 2-dimensional space with the coordinates of its position available to be read atomically. At least 2 fields need to be read, or written, per transaction to make the concurrency interesting.
```java
/**
 * Interface to a concurrent representation of a ship that can move around
 * a 2 dimensional space with updates and reads performed concurrently.
 */
public interface Spaceship
{
    /**
     * Read the position of the spaceship into the array of coordinates provided.
     *
     * @param coordinates into which the x and y coordinates should be read.
     * @return the number of attempts made to read the current state.
     */
    int readPosition(final int[] coordinates);

    /**
     * Move the position of the spaceship by a delta to the x and y coordinates.
     *
     * @param xDelta delta by which the spaceship should be moved in the x-axis.
     * @param yDelta delta by which the spaceship should be moved in the y-axis.
     * @return the number of attempts made to write the new coordinates.
     */
    int move(final int xDelta, final int yDelta);
}
```

The above API would be cleaner by factoring out an immutable Position object, but I want to keep it garbage free and create the need to update multiple internal fields with minimal indirection. This API could easily be extended for a 3-dimensional space and require the implementations to be atomic.
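To make the contract concrete, here is a minimal sketch of the simplest possible contender: guarding both fields with the object's monitor via synchronized. This is illustrative only, not the benchmarked code.

```java
// A minimal sketch (not the benchmarked implementation) of a Spaceship
// that guards both coordinates with the object's monitor.
class SynchronizedSpaceship
{
    private int x;
    private int y;

    // Matches Spaceship.readPosition: copies both coordinates atomically.
    public synchronized int readPosition(final int[] coordinates)
    {
        coordinates[0] = x;
        coordinates[1] = y;
        return 1; // the monitor is exclusive, so a single attempt always suffices
    }

    // Matches Spaceship.move: updates both coordinates atomically.
    public synchronized int move(final int xDelta, final int yDelta)
    {
        x += xDelta;
        y += yDelta;
        return 1;
    }
}
```

Because the monitor serialises all access, the attempt counts are always 1; the interesting variation in attempts only appears in the optimistic and lock-free implementations.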
Multiple implementations are built for each spaceship and exercised by a test harness. All the code and results for this blog can be found here.
The test harness will run each of the implementations in turn by using a megamorphic dispatch pattern to try and prevent inlining, lock-coarsening, and loop unrolling when accessing the concurrent methods.
Each implementation is subjected to 4 distinct threading scenarios that result in different contention profiles:
- 1 reader - 1 writer
- 2 readers - 1 writer
- 3 readers - 1 writer
- 2 readers - 2 writers
Note: Other CPUs and operating systems can produce very different results.
Results
Figure 1.
Figure 2.
Figure 3.
Figure 4.
The raw data for the above charts can be found here.
Analysis
The real surprise for me from the results is the performance of ReentrantReadWriteLock. I cannot see a use for this implementation beyond a case where reads hugely outnumber writes. My main takeaways are:
- StampedLock is a major improvement over existing lock implementations especially with increasing numbers of reader threads.
- StampedLock has a complex API. It is very easy to mistakenly call the wrong method for locking actions.
- synchronized is a good general purpose lock implementation when contention comes from only 2 threads.
- ReentrantLock is a good general purpose lock implementation when thread counts grow as previously discovered.
- Choosing to use ReentrantReadWriteLock should be based on careful and appropriate measurement. As with all major decisions, let the data decide.
- Lock-free implementations can offer significant throughput advantages over lock-based algorithms.
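One way to go lock free for this API, sketched below under my own assumptions rather than taken from the benchmarked code, is to pack both 32-bit coordinates into a single AtomicLong so the pair can be read and swapped atomically while staying garbage free.

```java
import java.util.concurrent.atomic.AtomicLong;

// A hedged sketch of one lock-free approach: pack x and y into one
// AtomicLong so reads and CAS-based writes of the pair are atomic.
// This is illustrative; the benchmarked implementation may differ.
class LockFreeSpaceship
{
    private final AtomicLong position = new AtomicLong();

    public int readPosition(final int[] coordinates)
    {
        final long packed = position.get(); // one atomic read of both fields
        coordinates[0] = (int)(packed >>> 32);
        coordinates[1] = (int)packed;
        return 1;
    }

    public int move(final int xDelta, final int yDelta)
    {
        long current;
        long next;
        int attempts = 0;

        do
        {
            ++attempts;
            current = position.get();
            final int x = (int)(current >>> 32) + xDelta;
            final int y = (int)current + yDelta;
            next = ((long)x << 32) | (y & 0xFFFFFFFFL);
        }
        while (!position.compareAndSet(current, next)); // retry if a writer raced us

        return attempts;
    }
}
```

Readers never retry and never block; only concurrent writers loop, which is why the attempt count is returned from move.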
It is nice seeing the influence of lock-free techniques appearing in lock-based algorithms. The optimistic strategy employed on read is effectively a lock-free algorithm at the times when a writer is not updating.
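The optimistic pattern described above can be sketched with JDK 8's StampedLock: readers speculate without acquiring anything and fall back to a pessimistic read lock only if a writer invalidated the stamp mid-read. This is a sketch of the idiom, not the benchmarked implementation.

```java
import java.util.concurrent.locks.StampedLock;

// Sketch of the optimistic-read idiom with StampedLock.
class StampedSpaceship
{
    private final StampedLock lock = new StampedLock();
    private int x;
    private int y;

    public int readPosition(final int[] coordinates)
    {
        int attempts = 1;
        long stamp = lock.tryOptimisticRead(); // lock-free fast path
        coordinates[0] = x;
        coordinates[1] = y;

        if (!lock.validate(stamp)) // a writer intervened: retry pessimistically
        {
            ++attempts;
            stamp = lock.readLock();
            try
            {
                coordinates[0] = x;
                coordinates[1] = y;
            }
            finally
            {
                lock.unlockRead(stamp);
            }
        }

        return attempts;
    }

    public int move(final int xDelta, final int yDelta)
    {
        final long stamp = lock.writeLock();
        try
        {
            x += xDelta;
            y += yDelta;
        }
        finally
        {
            lock.unlockWrite(stamp);
        }
        return 1;
    }
}
```

While no writer holds the lock, readPosition completes without any CAS or blocking, which is exactly the lock-free window being described.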
In my experience of teaching and developing lock-free algorithms, not only do they provide significant throughput advantages, as evidenced here, they also provide much lower latency with less variance.