Saturday, 13 August 2011

False Sharing && Java 7

In my previous post on False Sharing I suggested it can be avoided by padding the cache line with unused long fields.  It seems Java 7 got clever and eliminated or re-ordered the unused fields, thus re-introducing false sharing.  I've experimented with a number of techniques on different platforms and found the following code to be the most reliable.
import java.util.concurrent.atomic.AtomicLong;

public final class FalseSharing
    implements Runnable
{
    public final static int NUM_THREADS = 4; // change to match number of cores
    public final static long ITERATIONS = 500L * 1000L * 1000L;
    private final int arrayIndex;

    private static PaddedAtomicLong[] longs = new PaddedAtomicLong[NUM_THREADS];
    static
    {
        for (int i = 0; i < longs.length; i++)
        {
            longs[i] = new PaddedAtomicLong();
        }
    }

    public FalseSharing(final int arrayIndex)
    {
        this.arrayIndex = arrayIndex;
    }

    public static void main(final String[] args) throws Exception
    {
        final long start = System.nanoTime();
        runTest();
        System.out.println("duration = " + (System.nanoTime() - start));
    }

    private static void runTest() throws InterruptedException
    {
        Thread[] threads = new Thread[NUM_THREADS];

        for (int i = 0; i < threads.length; i++)
        {
            threads[i] = new Thread(new FalseSharing(i));
        }

        for (Thread t : threads)
        {
            t.start();
        }

        for (Thread t : threads)
        {
            t.join();
        }
    }

    public void run()
    {
        long i = ITERATIONS + 1;
        while (0 != --i)
        {
            longs[arrayIndex].set(i);
        }
    }

    public static long sumPaddingToPreventOptimisation(final int index)
    {
        PaddedAtomicLong v = longs[index];
        return v.p1 + v.p2 + v.p3 + v.p4 + v.p5 + v.p6;
    }

    public static class PaddedAtomicLong extends AtomicLong
    {
        public volatile long p1, p2, p3, p4, p5, p6 = 7L;
    }
}

With this code I get similar performance results to those stated in the previous False Sharing article.  The padding in PaddedAtomicLong above can be commented out to see the false sharing effect.
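Rather than commenting the padding in and out between runs, the comparison can be sketched in a single harness. The class and method names below (SharingCompare, Padded, time) are illustrative, not from the article, and the iteration count is cut down so it runs quickly; it times the same hot loop over adjacent plain AtomicLongs and over padded ones.

```java
import java.util.concurrent.atomic.AtomicLong;

// A reduced sketch (not the benchmark above): times threads hammering
// adjacent plain AtomicLongs versus padded ones in one run, so the
// false sharing effect can be seen without editing the source.
public class SharingCompare
{
    public static final int NUM_THREADS = 4;
    public static final long ITERATIONS = 5L * 1000L * 1000L;

    public static class Padded extends AtomicLong
    {
        public volatile long p1, p2, p3, p4, p5, p6 = 7L; // same padding trick as above
    }

    public static long time(final AtomicLong[] counters) throws InterruptedException
    {
        final Thread[] threads = new Thread[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++)
        {
            final AtomicLong counter = counters[i];
            threads[i] = new Thread(new Runnable()
            {
                public void run()
                {
                    for (long n = 0; n < ITERATIONS; n++)
                    {
                        counter.set(n);
                    }
                }
            });
        }

        final long start = System.nanoTime();
        for (final Thread t : threads) t.start();
        for (final Thread t : threads) t.join();
        return System.nanoTime() - start;
    }

    public static void main(final String[] args) throws Exception
    {
        final AtomicLong[] plain = new AtomicLong[NUM_THREADS];
        final AtomicLong[] padded = new AtomicLong[NUM_THREADS];
        for (int i = 0; i < NUM_THREADS; i++)
        {
            plain[i] = new AtomicLong();
            padded[i] = new Padded();
        }
        System.out.println("plain  duration = " + time(plain));
        System.out.println("padded duration = " + time(padded));
    }
}
```

On a multi-core box the plain run should take noticeably longer than the padded one, though the ratio will vary by platform.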

I think we should all lobby the powers that be inside Oracle to have intrinsics added to the language so we can have cache line aligned and padded atomic classes.  This and some other low-level changes would help make Java a real concurrent programming language.  We keep hearing them say multi-core is coming.   I say it is here and Java needs to catch up.

29 comments:

  1. In your prev article you created an array of VolatileLong and here an array of PaddedAtomicLong.

    What is the guarantee that the PaddedAtomicLong or VolatileLong instances will be allocated next to each other?

    Well, yes if you create them in a loop, then it is very likely that they will be co-located. But there is no guarantee that these 4 instances will be next to each other on the heap.

    If they do get spread across the old gen heap, where there is no compaction or re-location (until a major GC), padding might not even be necessary if they are spread out. Correct?

    It would be nice if you made it clear to your readers that we really have no control over where the JVM places instances on the heap.

  2. Ashwin,

    You are totally right in that we have no guarantee how Java objects will be placed on the heap. This is the source of the problem with false sharing. If you have counters or pointers that are used across threads it is vitally important to ensure they are in different cache lines or the application will not scale up with cores. The point of the padding is to protect the counters/pointers to ensure they never get placed in the same cache line.

    Does this make sense?

    Martin...

  3. Yup. Makes sense. Couldn't you have created a large AtomicLongArray and then let each thread update position 8, 16, 24, 32 and so on?

    Instead of 4 AtomicLongArrays and then updating position 0 in each of those 4 arrays?

    Thanks for taking the time to write such things.

  4. If I knew that up front, it would be a possibility. Often when designing large systems we don't know such things, or we have created a library used by another application.

    It is hard to create a small enough example with sufficient context. The above example illustrates how bad things can be when it happens. If you pad such structures then you need not worry about where they get located in memory. Only the highly contended concurrent structures need such padding.

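Ashwin's single-array idea from the exchange above can be sketched roughly as follows (the StridedCounters name and the stride constant are illustrative; a stride of 8 longs assumes a 64-byte cache line): one shared AtomicLongArray where each thread's hot slot is spread out by a full cache line.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of the single-array alternative: one shared AtomicLongArray with
// each thread's slot separated by a stride of 8 longs (64 bytes), so no
// two hot slots can fall in the same cache line (assuming 64-byte lines).
public class StridedCounters
{
    public static final int NUM_THREADS = 4;
    public static final int STRIDE = 8; // 8 longs * 8 bytes = one 64-byte line

    // one stride of lead-in and tail slack so the first and last slots are
    // also padded away from the array header and whatever follows the array
    private static final AtomicLongArray longs =
        new AtomicLongArray((NUM_THREADS + 2) * STRIDE);

    public static void set(final int thread, final long value)
    {
        longs.set((thread + 1) * STRIDE, value);
    }

    public static long get(final int thread)
    {
        return longs.get((thread + 1) * STRIDE);
    }
}
```

As Martin notes, this only works when the number of contended counters is known up front and they can all live behind one structure.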
  5. After a good discussion with Jeff Hain we have discovered the JVM cannot be trusted when it comes to re-ordering code. What happens before the volatile seems to be safe but what happens after is not reliable. We have a better solution by replacing the VolatileLong with the following and using the normal methods of AtomicLong.

    static class PaddedAtomicLong extends AtomicLong
    {
        public volatile long p1, p2, p3, p4, p5, p6, p7 = 7L;
    }

    Oh how I wish the Java powers that be would get serious about this stuff and add some intrinsics that are cache line aligned and padded. This has been the biggest source of performance bugs in the Disruptor.

  6. I've updated this blog based on all the feedback.

  7. Martin,

    Agree that it would be great if there was a way to declare a field such that it is guaranteed to occupy its own cache line, and let the JVM deal with placing the right padding in the object layout. What you do here with artificial padding is the next best thing, but as you know, what actually happens to object layout becomes JVM-implementation specific.

    Being paranoid, I'd add some more stuff to your padding technique to make it harder to get rid of the padding fields through optimization. A hypothetically smart enough JVM would still be able to prove away the need to actually implement the padding fields p1...p7 in your code above: The PaddedAtomicLong class is only visible from the final (and therefore non-extendable) FalseSharing class. The compiler can therefore "know" that it is looking at all the code that will ever see the padding fields, and can prove that no behavior actually depends on p1...p7. The JVM could then implement the above without taking any space for the padding fields...

    You can probably defeat this by making the PaddedAtomicLong class visible outside of FalseSharing, either directly or by adding an accessor method on FalseSharing that would depend on all the p1...p7 values and could theoretically be called by someone outside.

  8. Updated example based on Gil's feedback.

  9. Just use an array (or a direct ByteBuffer, but you pay a full page for it) and place the element in the middle. Java can't reorder that. This is what I do if I want dedicated cache lines.

  10. Stanimir,

    I've often used an array of 15 longs, for example, and put the value at index 7 to achieve the same result. However, this does not work if the value requires volatile semantics. For that you need to use AtomicLongArray or similar. Based on my measurements, the indirection and bounds checking costs for this can be significant in an algorithm.

    I know a bunch of folk are proposing an @Contended annotation for Java to mark fields so they can be cache line aligned and padded. I hope it comes soon. :-)

    Replies
    1. Hi Martin,

      I saw that the current version of Sequence in Disruptor now uses unsafe.compareAndSwapLong(..) to update the 7th index.

      Why is the length of the long array 15 and not another size? Is it because an array of 15 longs exactly fills the Level 2 cache?

      Cheers

      John

    2. Using index 7 (the 8th element) ensures that there will be 56 bytes of padding on either side of the value. 56 bytes plus the 8 for the value ensures nothing else can share the same 64-byte cache line, regardless of the array's starting location in memory.

    3. Hi Martin,
      I was trying your benchmark code with the Java 8 @Contended annotation. The performance was similar to the one without padding... Did you manage to try it?

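The "array of 15 longs with the value at index 7" layout Martin describes above can be sketched with volatile semantics via AtomicLongArray (the PaddedValue name and its methods are illustrative, not from the article): 15 longs is 120 bytes, with 56 bytes of padding on each side of the middle slot.

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of a single padded value inside an AtomicLongArray. Index 7
// leaves 7 longs (56 bytes) either side of the value, so nothing else
// can share its 64-byte cache line wherever the array lands on the heap.
public class PaddedValue
{
    private static final int PADDED_LENGTH = 15; // 7 + 1 + 7 longs = 120 bytes
    private static final int VALUE_INDEX = 7;    // the middle slot

    private final AtomicLongArray slots = new AtomicLongArray(PADDED_LENGTH);

    public void set(final long value)
    {
        slots.set(VALUE_INDEX, value); // volatile store
    }

    public long get()
    {
        return slots.get(VALUE_INDEX); // volatile load
    }

    public boolean compareAndSet(final long expected, final long update)
    {
        return slots.compareAndSet(VALUE_INDEX, expected, update);
    }
}
```

The cost, as noted above, is the extra indirection and bounds checking on every access compared to a plain padded field.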
  11. Martin,
    Yes, I meant AtomicLongArray. If you do not wish to pay for the indirection, Unsafe is (always) an option. No bounds checking with Unsafe either.

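The Unsafe route mentioned above can be sketched as follows. Note that sun.misc.Unsafe is an internal, unsupported API, the theUnsafe reflection hack is JVM-specific, and the UnsafeLanes name is illustrative; this is a sketch of the technique, not production code.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Sketch: addressing a long[] directly by byte offset via Unsafe, giving
// volatile semantics with no array bounds checks and no extra indirection.
public class UnsafeLanes
{
    private static final Unsafe UNSAFE;
    private static final long BASE;  // byte offset of element 0 in a long[]
    private static final long SCALE; // bytes per element (8 for long[])

    static
    {
        try
        {
            final Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe)f.get(null);
            BASE = UNSAFE.arrayBaseOffset(long[].class);
            SCALE = UNSAFE.arrayIndexScale(long[].class);
        }
        catch (final Exception e)
        {
            throw new RuntimeException(e);
        }
    }

    // one 8-long (64-byte) lane per slot, assuming 64-byte cache lines
    private final long[] lanes = new long[4 * 8];

    public void set(final int lane, final long value)
    {
        UNSAFE.putLongVolatile(lanes, BASE + (lane * 8L) * SCALE, value);
    }

    public long get(final int lane)
    {
        return UNSAFE.getLongVolatile(lanes, BASE + (lane * 8L) * SCALE);
    }
}
```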
  12. Are there any simple hardware manuals for multi-core processors that show the key components, like caches? I am after an illustration to understand how caches and cores interact to create false sharing.

  13. Mohan,

    See section 3.4 of the following document:

    http://img.delivery.net/cm50content/intel/ProductLibrary/100412_Parallel_Programming_02.pdf

  14. FWIW - I have a CentOS 5.8 VM running on Hyper-V with Oracle Java 1.6.0_22, and this version of the false sharing test with the padded atomic is about 4 times faster than VolatileLong. A typical run for me with the padded atomic long:

    duration = 7019998000

  15. Your blog is wonderful reading. Thanks. I have 2 questions regarding memory padding:
    1. A long takes 8 bytes and an object header takes 2 words = 16 bytes, but with an impl such as
    public final static class VolatileLong // 16 byte header
    {
        public volatile long value = 0L;    // 8 bytes
        public long p1, p2, p3, p4, p5, p6; // 6 * 8 = 48 bytes
    }
    the total seems to be 16 + 8 + 48 = 72 bytes > the 64 byte cache line.
    2. How did you find this problem? Were you looking at assembly code to see that the cache line is constantly being synced?

    Replies
    1. I do not want the mark word to be in the cache line, because it can be modified by taking out locks or by the garbage collector ageing the object.

      Even in 64-bit mode, compressed pointers are enabled by default; this includes the class pointer in the object header.

      https://wikis.oracle.com/display/HotSpotInternals/CompressedOops

      I found the problem years ago when investigating the reasons why one can get variations in performance runs.

    2. OK, so you found the problem during a perf test run. How did you narrow down the issue? Were you digging into the assembly?

    3. Assembly does not show the problem. You need to look for a lot of unexplained L2 cache misses as a hint that something is going on.

  16. See also @Contended proposal: http://mail.openjdk.java.net/pipermail/hotspot-dev/2012-November/007309.html

    Great blog, thumbs up!

    Replies
    1. I've not tried @Contended as it has not made it into a mainstream JVM yet.

  17. Hi Martin,

    Wonderful blog.

    Can you explain how you came up with the number of long member variables to use in PaddedAtomicLong?

    Eg.
    public static class PaddedAtomicLong extends AtomicLong
    {
        public volatile long p1, p2, p3, p4, p5, p6 = 7L; // 8 * 6 = 48 bytes
    }

    How do we ensure this will fit exactly inside the cache line (which is 64 bytes, I presume)? Does this mean that the PaddedAtomicLong object itself will take up 16 bytes (8 bytes for the object, but what about the remaining 8 bytes)?

    Thanks Martin.

    Replies
    1. Add another 8 bytes for the value itself and a minimum of 8 bytes for the object header.

  18. Hi Martin,

    how will an object of type VolatileLong be stored in this case:

    public final static class VolatileLong // 16 byte header
    {
        public volatile long value = 0L;    // 8 bytes
        public long p1, p2, p3, p4, p5, p6; // 6 * 8 = 48 bytes
    }

    The total seems to be 16 + 8 + 48 = 72 bytes > the 64 byte cache line. As far as I know the header will be 2 words since this is a 64-bit system. Will an object of type VolatileLong be stored across 2 cache lines? And if two objects of type VolatileLong are stored one after another in memory, as in the example above, will the second object be stored from byte 9 of the second cache line, since the first object ends at byte 8 of the second cache line, or is another 7 bytes of padding added so that it starts at byte 16?

    Replies
    1. It does not matter if too much padding is used. It is important that we use enough. To discover object layout, try the following tool:

      http://openjdk.java.net/projects/code-tools/jol/

      I've seen object headers range from 4 - 12 bytes depending on JVM.

  19. Fantastic article! Martin, small question: does Intel, for example, have a tool you could recommend that measures cache hits and misses?

    Replies
    1. Yes, Intel has VTune. There are some other free tools that I discuss in this blog post:

      http://mechanical-sympathy.blogspot.co.uk/2012/08/memory-access-patterns-are-important.html
