page.title=SMP Primer for Android
page.tags=ndk,native
page.article=true
@jd:body
<div id="tb-wrapper">
<div id="tb">
<h2>In this document</h2>
<ol class="nolist">
<li><a href="#theory">Theory</a>
<ol class="nolist">
<li style="margin: 3px 0 0"><a href="#mem_consistency">Memory consistency models</a>
</li>
<li style="margin:3px 0 0"><a href="#racefree">Data-race-free programming</a>
<ol class="nolist">
<li style="margin:0"><a href="#dataraces">What's a "data race"?</a></li>
<li style="margin:0"><a href="#avoiding">Avoiding data races</a></li>
<li style="margin:0"><a href="#reordering">When memory reordering becomes visible</a></li>
</ol>
</li>
</ol>
</li>
<li><a href="#practice">Practice</a>
<ol class="nolist">
<li style="margin:3px 0 0"><a href="#c_dont">What not to do in C</a>
<ol class="nolist">
<li style="margin:0"><a href="#volatile">C/C++ and “volatile”</a></li>
<li style="margin:0"><a href="#examplesc">Examples</a></li>
</ol>
</li>
<li style="margin:3px 0 0"><a href="#j_dont">What not to do in Java</a>
<ol class="nolist">
<li style="margin:0"><a href="#sync_volatile">“synchronized” and “volatile”</a></li>
<li style="margin:0"><a href="#examplesj">Examples</a></li>
</ol>
</li>
<li style="margin:3px 0 0"><a href="#bestpractice">What to do</a>
</li>
</ol>
</li>
<li><a href="#weak">A little more about weak memory orders</a>
<ol class="nolist">
<li style="margin:0"><a href="#nonracing">Non-racing accesses</a></li>
<li style="margin:0"><a href="#hint_only">Result is not relied upon for correctness</a></li>
<li style="margin:0"><a href="#unread">Atomically modified but unread data</a></li>
<li style="margin:0"><a href="#flag">Simple flag communication</a></li>
<li style="margin:0"><a href="#immutable">Immutable fields</a></li>
</ol>
</li>
<li><a href="#closing_notes">Closing Notes</a></li>
<li><a href="#appendix">Appendix</a>
<ol class="nolist">
<li style="margin:0"><a href="#sync_stores">Implementing synchronization stores</a></li>
<li style="margin:0"><a href="#more">Further reading</a></li>
</ol>
</li>
</ol>
</div>
</div>
<p>Android 3.0 and later platform versions are optimized to support
multiprocessor architectures. This document introduces issues that
can arise when writing multithreaded code for symmetric multiprocessor systems in C, C++, and the Java
programming language (hereafter referred to simply as “Java” for the sake of
brevity). It's intended as a primer for Android app developers, not as a complete
discussion on the subject.</p>
<h2 id="intro">Introduction</h2>
<p>SMP is an acronym for “Symmetric Multi-Processor”. It describes a design in
which two or more identical CPU cores share access to main memory. Until
a few years ago, all Android devices were UP (Uni-Processor).</p>
<p>Most &mdash; if not all &mdash; Android devices always had multiple CPUs, but
in the past only one of them was used to run applications while the others managed various bits of device
hardware (for example, the radio). The CPUs may have had different architectures, and the
programs running on them couldn’t use main memory to communicate with each
other.</p>
<p>Most Android devices sold today are built around SMP designs,
making things a bit more complicated for software developers. Race conditions
in a multi-threaded program may not cause visible problems on a uniprocessor,
but may fail regularly when two or more of your threads
are running simultaneously on different cores.
What’s more, code may be more or less prone to failures when run on different
processor architectures, or even on different implementations of the same
architecture. Code that has been thoroughly tested on x86 may break badly on ARM.
Code may start to fail when recompiled with a more modern compiler.</p>
<p>The rest of this document will explain why, and tell you what you need to do
to ensure that your code behaves correctly.</p>
<h2 id="theory">Memory consistency models: Why SMPs are a bit different</h2>
<p>This is a high-speed, glossy overview of a complex subject. Some areas will
be incomplete, but none of it should be misleading or wrong. As you
will see in the next section, the details here are usually not important.</p>
<p>See <a href="#more">Further reading</a> at the end of the document for
pointers to more thorough treatments of the subject.</p>
<p>Memory consistency models, or often just “memory models”, describe the
guarantees the programming language or hardware architecture
makes about memory accesses. For example,
if you write a value to address A, and then write a value to address B, the
model might guarantee that every CPU core sees those writes happen in that
order.</p>
<p>The model most programmers are accustomed to is <em>sequential
consistency</em>, which is described like this <span
style="font-size:.9em;color:#777">(<a href="#more" style="color:#777">Adve &
Gharachorloo</a>)</span>:</p>
<ul>
<li>All memory operations appear to execute one at a time</li>
<li>All operations in a single thread appear to execute in the order described
by that processor's program.</li>
</ul>
<p>Let's assume temporarily that we have a very simple compiler or interpreter
that introduces no surprises: It translates
assignments in the source code to load and store instructions in exactly the
corresponding order, one instruction per access. We'll also assume for
simplicity that each thread executes on its own processor.
<p>If you look at a bit of code and see that it does some reads and writes from
memory, on a sequentially-consistent CPU architecture you know that the code
will do those reads and writes in the expected order. It’s possible that the
CPU is actually reordering instructions and delaying reads and writes, but there
is no way for code running on the device to tell that the CPU is doing anything
other than execute instructions in a straightforward manner. (We’ll ignore
memory-mapped device driver I/O.)</p>
<p>To illustrate these points it’s useful to consider small snippets of code,
commonly referred to as <em>litmus tests</em>.</p>
<p>Here’s a simple example, with code running on two threads:</p>
<table>
<tr>
<th>Thread 1</th>
<th>Thread 2</th>
</tr>
<tr>
<td><code>A = 3<br />
B = 5</code></td>
<td><code>reg0 = B<br />
reg1 = A</code></td>
</tr>
</table>
<p>In this and all future litmus examples, memory locations are represented by
capital letters (A, B, C) and CPU registers start with “reg”. All memory is
initially zero. Instructions are executed from top to bottom. Here, thread 1
stores the value 3 at location A, and then the value 5 at location B. Thread 2
loads the value from location B into reg0, and then loads the value from
location A into reg1. (Note that we’re writing in one order and reading in
another.)</p>
<p>Thread 1 and thread 2 are assumed to execute on different CPU cores. You
should <strong>always</strong> make this assumption when thinking about
multi-threaded code.</p>
<p>Sequential consistency guarantees that, after both threads have finished
executing, the registers will be in one of the following states:</p>
<table>
<tr>
<th>Registers</th>
<th>States</th>
</tr>
<tr>
<td>reg0=5, reg1=3</td>
<td>possible (thread 1 ran first)</td>
</tr>
<tr>
<td>reg0=0, reg1=0</td>
<td>possible (thread 2 ran first)</td>
</tr>
<tr>
<td>reg0=0, reg1=3</td>
<td>possible (concurrent execution)</td>
</tr>
<tr>
<td>reg0=5, reg1=0</td>
<td>never</td>
</tr>
</table>
<p>To get into a situation where we see B=5 before we see the store to A, either
the reads or the writes would have to happen out of order. On a
sequentially-consistent machine, that can’t happen.</p>
<p>Uni-processors, including x86 and ARM, are normally sequentially consistent.
Threads appear to execute in interleaved fashion, as the OS kernel switches
between them. Most SMP systems, including x86 and ARM,
are not sequentially consistent. For example, it is common for
hardware to buffer stores on their way to memory, so that they
don't immediately reach memory and become visible to other cores.</p>
<p>The details vary substantially. For example, x86, though not sequentially
consistent, still guarantees that the outcome reg0 = 5, reg1 = 0 remains impossible.
Stores are buffered, but their order is maintained.
ARM, on the other hand, does not. The order of buffered stores is not
maintained, and stores may not reach all other cores at the same time.
These differences are important to assembly programmers.
However, as we will see below, C, C++, or Java programmers can
and should program in a way that hides such architectural differences.</p>
<p>So far, we've unrealistically assumed that it is only the hardware that
reorders instructions. In reality, the compiler also reorders instructions to
improve performance. In our example, the compiler might decide that some later
code in Thread 2 needed the value of reg1 before it needed reg0, and thus load
reg1 first. Or some prior code may already have loaded A, and the compiler
might decide to reuse that value instead of loading A again. In either case,
the loads to reg0 and reg1 might be reordered.</p>
<p>Reordering accesses to different memory locations,
either in the hardware, or in the compiler, is
allowed, since it doesn't affect the execution of a single thread, and
it can significantly improve performance. As we will see, with a bit of care,
we can also prevent it from affecting the results of multithreaded programs.</p>
<p>Since compilers can also reorder memory accesses, this problem is actually
not new to SMPs. Even on a uniprocessor, a compiler could reorder the loads to
reg0 and reg1 in our example, and Thread 1 could be scheduled between the
reordered instructions. But if our compiler happened to not reorder, we might
never observe this problem. On most ARM SMPs, even without compiler
reordering, the reordering will probably be seen, possibly after a very large
number of successful executions. Unless you're programming in assembly
language, SMPs generally just make it more likely you'll see problems that were
there all along.</p>
<h2 id="racefree">Data-race-free programming</h2>
<p>Fortunately, there is usually an easy way to avoid thinking about any of
these details. If you follow some straightforward rules, it's usually safe
to forget all of the preceding section except the "sequential consistency" part.
Unfortunately, the other complications may become visible if you
accidentally violate those rules.
<p>Modern programming languages encourage what's known as a "data-race-free"
programming style. So long as you promise not to introduce "data races",
and avoid a handful of constructs that tell the compiler otherwise, the compiler
and hardware promise to provide sequentially consistent results. This doesn't
really mean they avoid memory access reordering. It does mean that if you
follow the rules you won't be able to tell that memory accesses are being
reordered. It's a lot like telling you that sausage is a delicious and
appetizing food, so long as you promise not to visit the
sausage factory. Data races are what expose the ugly truth about memory
reordering.</p>
<h3 id="dataraces">What's a "data race"?</h3>
<p>A <i>data race</i> occurs when at least two threads simultaneously access
the same ordinary data, and at least one of them modifies it. By "ordinary
data" we mean something that's not specifically a synchronization object
intended for thread communication. Mutexes, condition variables, Java
volatiles, or C++ atomic objects are not ordinary data, and their accesses
are allowed to race. In fact they are used to prevent data races on other
objects.</p>
<p>In order to determine whether two threads simultaneously access the same
memory location, we can ignore the memory-reordering discussion from above, and
assume sequential consistency. The following program doesn't have a data race
if <code>A</code> and <code>B</code> are ordinary boolean variables that are
initially false:</p>
<table>
<tr>
<th>Thread 1</th>
<th>Thread 2</th>
</tr>
<tr>
<td><code>if (A) B = true</code></td>
<td><code>if (B) A = true</code></td>
</tr>
</table>
<p>Since operations are not reordered, both conditions will evaluate to false, and
neither variable is ever updated. Thus there cannot be a data race. There is
no need to think about what might happen if the load from <code>A</code>
and store to <code>B</code> in
Thread 1 were somehow reordered. The compiler is not allowed to reorder Thread
1 by rewriting it as "<code>B = true; if (!A) B = false</code>". That would be
like making sausage in the middle of town in broad daylight.
<p>Data races are officially defined on basic built-in types like integers and
references or pointers. Assigning to an <code>int</code> while simultaneously
reading it in another thread is clearly a data race. But both the C++
standard library and
the Java Collections libraries are written to allow you to also reason about
data races at the library level. They promise to not introduce data races
unless there are concurrent accesses to the same container, at least one of
which updates it. Updating a <code>set&lt;T&gt;</code> in one thread while
simultaneously reading it in another allows the library to introduce a
data race, and can thus be thought of informally as a "library-level data race".
Conversely, updating one <code>set&lt;T&gt;</code> in one thread, while reading
a different one in another, does not result in a data race, because the
library promises not to introduce a (low-level) data race in that case.
<p>Normally concurrent accesses to different fields in a data structure
cannot introduce a data race. However there is one important exception to
this rule: Contiguous sequences of bit-fields in C or C++ are treated as
a single "memory location". Accessing any bit-field in such a sequence
is treated as accessing all of them for purposes of determining the
existence of a data race. This reflects the inability of common hardware
to update individual bits without also reading and re-writing adjacent bits.
Java programmers have no analogous concerns.</p>
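<p>As a small illustration (a sketch; the struct and field names are
invented), the first two bit-fields below share a single memory location, so
concurrent writes to them from different threads constitute a data race. A
non-bit-field member (or a zero-width bit-field) ends the sequence and starts
a new memory location:</p>
<pre>struct Flags {
    unsigned racy1 : 1;
    unsigned racy2 : 1;  // shares a memory location with racy1
    int separator;       // a non-bit-field member ends the sequence
    unsigned safe : 1;   // separate memory location; no race with racy1/racy2
};</pre>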
<h3 id="avoiding">Avoiding data races</h3>
Modern programming languages provide a number of synchronization
mechanisms to avoid data races. The most basic tools are:
<dl>
<dt>Locks or Mutexes</dt>
<dd>Mutexes (C++11 <code>std::mutex</code>, or <code>pthread_mutex_t</code>), or
<code>synchronized</code> blocks in Java can be used to ensure that certain
sections of code do not run concurrently with other sections of code accessing
the same data. We'll refer to these and other similar facilities generically
as "locks." Consistently acquiring a specific lock before accessing a shared
data structure and releasing it afterwards prevents data races when accessing
the data structure. It also ensures that updates and accesses are atomic, i.e. no
other update to the data structure can run in the middle. This is deservedly
by far the most common tool for preventing data races. The use of Java
<code>synchronized</code> blocks or C++ <code>lock_guard</code>
or <code>unique_lock</code> ensures that locks are properly released in the
event of an exception.
</dd>
<dt>Volatile/atomic variables</dt>
<dd>Java provides <code>volatile</code> fields that support concurrent access
without introducing data races. Since 2011, C and C++ support
<code>atomic</code> variables and fields with similar semantics. These are
typically more difficult to use than locks, since they only ensure that
individual accesses to a single variable are atomic. (In C++ this normally
extends to simple read-modify-write operations, like increments. Java
requires special method calls for that.)
Unlike locks, <code>volatile</code> or <code>atomic</code> variables can't
be used directly to prevent other threads from interfering with longer code sequences.</dd>
</dl>
<p>It's important to note that <code>volatile</code> has very different
meanings in C++ and Java. In C++, <code>volatile</code> does not prevent data
races, though older code often uses it as a workaround for the lack of
<code>atomic</code> objects. This is no longer recommended; in
C++, use <code>atomic&lt;T&gt;</code> for variables that can be concurrently
accessed by multiple threads. C++ <code>volatile</code> is meant for
device registers and the like.</p>
<p>C/C++ <code>atomic</code> variables or Java <code>volatile</code> variables
can be used to prevent data races on other variables. If <code>flag</code> is
declared to have type <code>atomic&lt;bool&gt;</code>
or <code>atomic_bool</code> (C/C++) or <code>volatile boolean</code> (Java),
and is initially false, then the following snippet is data-race-free:</p>
<table>
<tr>
<th>Thread 1</th>
<th>Thread 2</th>
</tr>
<tr>
<td><code>A = ...<br />
&nbsp;&nbsp;flag = true</code>
</td>
<td><code>while (!flag) {}<br />
... = A</code>
</td>
</tr>
</table>
<p>Since Thread 2 waits for <code>flag</code> to be set, the access to
<code>A</code> in Thread 2 must happen after, and not concurrently with, the
assignment to <code>A</code> in Thread 1. Thus there is no data race on
<code>A</code>. The race on <code>flag</code> doesn't count as a data race,
since volatile/atomic accesses are not "ordinary memory accesses".</p>
<p>The implementation is required to prevent or hide memory reordering
sufficiently to make code like the preceding litmus test behave as expected.
This normally makes volatile/atomic memory accesses
substantially more expensive than ordinary accesses.</p>
<p>Although the preceding example is data-race-free, locks together with
<code>Object.wait()</code> in Java or condition variables in C/C++ usually
provide a better solution that does not involve waiting in a loop while
draining battery power.</p>
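<p>As a sketch of that alternative (assuming C++11; the function bodies and
the value 42 are invented for illustration), Thread 2 can block on a
condition variable instead of spinning on <code>flag</code>:</p>
<pre>#include &lt;condition_variable&gt;
#include &lt;mutex&gt;

int A;                      // the data being communicated
std::mutex m;
std::condition_variable cv;
bool flag = false;          // protected by m; no longer needs to be atomic

void thread1() {            // producer
    A = 42;
    {
        std::lock_guard&lt;std::mutex&gt; lg(m);
        flag = true;
    }
    cv.notify_one();
}

void thread2() {            // consumer
    std::unique_lock&lt;std::mutex&gt; ul(m);
    cv.wait(ul, []{ return flag; });  // the predicate form rechecks in a loop,
                                      // handling spurious wakeups
    int result = A;         // happens after the assignment in thread1
}</pre>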
<h3 id="reordering">When memory reordering becomes visible</h3>
Data-race-free programming normally saves us from having to explicitly deal
with memory access reordering issues. However, there are several cases in
which reordering does become visible:
<ol>
<li> If your program has a bug resulting in an unintentional data race,
compiler and hardware transformations can become visible, and the behavior
of your program may be surprising. For example, if we forgot to declare
<code>flag</code> volatile in the preceding example, Thread 2 may see an
uninitialized <code>A</code>. Or the compiler may decide that <code>flag</code> can't
possibly change during Thread 2's loop and transform the program to
<table>
<tr>
<th>Thread 1</th>
<th>Thread 2</th>
</tr>
<tr>
<td><code>A = ...<br />
&nbsp;&nbsp;flag = true</code>
</td>
<td><code>reg0 = flag<br />
while (!reg0) {}<br />
... = A</code>
</td>
</tr>
</table>
When you debug, you may well see the loop continuing forever in spite of
the fact that <code>flag</code> is true.</li>
<li> C++ provides facilities for explicitly relaxing
sequential consistency even if there are no races. Atomic operations
can take explicit <code>memory_order_</code>... arguments. Similarly, the
<code>java.util.concurrent.atomic</code> package provides a more restricted
set of similar facilities, notably <code>lazySet()</code>. And Java
programmers occasionally use intentional data races for similar effect.
All of these provide performance improvements at a large
cost in programming complexity. We discuss them only briefly
<a href="#weak">below</a>.</li>
<li> Some C and C++ code is written in an older style, not entirely
consistent with current language standards, in which <code>volatile</code>
variables are used instead of <code>atomic</code> ones, and memory ordering
is explicitly disallowed by inserting so called <i>fences</i> or
<i>barriers</i>. This requires explicit reasoning about access
reordering and understanding of hardware memory models. A coding style
along these lines is still used in the Linux kernel. It should not
be used in new Android applications, and is also not further discussed here.
</li>
</ol>
<h2 id="practice">Practice</h2>
<p>Debugging memory consistency problems can be very difficult. If a missing
lock, <code>atomic</code> or <code>volatile</code> declaration causes
some code to read stale data, you may not be able to
figure out why by examining memory dumps with a debugger. By the time you can
issue a debugger query, the CPU cores may have all observed the full set of
accesses, and the contents of memory and the CPU registers will appear to be in
an “impossible” state.</p>
<h3 id="c_dont">What not to do in C</h3>
<p>Here we present some examples of incorrect code, along with simple ways to
fix them. Before we do that, we need to discuss the use of a basic language
feature.</p>
<h4 id="volatile">C/C++ and "volatile"</h4>
<p>C and C++ <code>volatile</code> declarations are a very special purpose tool.
They prevent <i>the compiler</i> from reordering or removing <i>volatile</i>
accesses. This can be helpful for code accessing hardware device registers,
memory mapped to more than one location, or in connection with
<code>setjmp</code>. But C and C++ <code>volatile</code>, unlike Java
<code>volatile</code>, is not designed for thread communication.</p>
<p>In C and C++, accesses to <code>volatile</code>
data may be reordered with accesses to non-volatile data, and there are no
atomicity guarantees. Thus <code>volatile</code> can't be used for sharing data between
threads in portable code, even on a uniprocessor. C <code>volatile</code> usually does not
prevent access reordering by the hardware, so by itself it is even less useful in
multi-threaded SMP environments. This is the reason C11 and C++11 support
<code>atomic</code> objects. You should use those instead.</p>
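<p>A minimal C++11 sketch of the replacement (the names are illustrative):</p>
<pre>#include &lt;atomic&gt;

std::atomic&lt;bool&gt; done(false);  // instead of "volatile bool done;"

void finish() {
    done.store(true);   // atomic, sequentially consistent by default
}

bool is_done() {
    return done.load();
}</pre>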
<p>A lot of older C and C++ code still abuses <code>volatile</code> for thread
communication. This often works correctly for data that fits
in a machine register, provided it is used with either explicit fences or in cases
in which memory ordering is not important. But it is not guaranteed to work
correctly with future compilers.</p>
<h4 id="examplesc">Examples</h4>
<p>In most cases you’d be better off with a lock (like a
<code>pthread_mutex_t</code> or C++11 <code>std::mutex</code>) rather than an
atomic operation, but we will employ the latter to illustrate how they would be
used in a practical situation.</p>
<pre>MyThing* gGlobalThing = NULL; // Wrong! See below.

void initGlobalThing() // runs in Thread 1
{
    MyThing* thing = malloc(sizeof(*thing));
    memset(thing, 0, sizeof(*thing));
    thing-&gt;x = 5;
    thing-&gt;y = 10;
    /* initialization complete, publish */
    gGlobalThing = thing;
}

void useGlobalThing() // runs in Thread 2
{
    if (gGlobalThing != NULL) {
        int i = gGlobalThing-&gt;x; // could be 5, 0, or uninitialized data
        ...
    }
}</pre>
<p>The idea here is that we allocate a structure, initialize its fields, and at
the very end we “publish” it by storing it in a global variable. At that point,
any other thread can see it, but that’s fine since it’s fully initialized,
right?</p>
<p>The problem is that the store to <code>gGlobalThing</code> could be observed
before the fields are initialized, typically because either the compiler or the
processor reordered the stores to <code>gGlobalThing</code> and
<code>thing-&gt;x</code>. Another thread reading from <code>thing-&gt;x</code> could
see 5, 0, or even uninitialized data.</p>
<p>The core problem here is a data race on <code>gGlobalThing</code>.
If Thread 1 calls <code>initGlobalThing()</code> while Thread 2
calls <code>useGlobalThing()</code>, <code>gGlobalThing</code> can be
read while being written.
<p>This can be fixed by declaring <code>gGlobalThing</code> as
atomic. In C++11:</p>
<pre>atomic&lt;MyThing*&gt; gGlobalThing(NULL);</pre>
<p>This ensures that the writes will become visible to other threads
in the proper order. It also guarantees to prevent some other failure
modes that are otherwise allowed, but unlikely to occur on real
Android hardware. For example, it ensures that we cannot see a
<code>gGlobalThing</code> pointer that has only been partially written.</p>
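<p>For completeness, here is a sketch of the corrected C++11 code (using
<code>new</code> instead of <code>malloc</code>/<code>memset</code> for
brevity):</p>
<pre>#include &lt;atomic&gt;

struct MyThing {
    int x = 0;
    int y = 0;
};

std::atomic&lt;MyThing*&gt; gGlobalThing(nullptr);

void initGlobalThing() // runs in Thread 1
{
    MyThing* thing = new MyThing();
    thing-&gt;x = 5;
    thing-&gt;y = 10;
    gGlobalThing = thing;  // atomic store; publishes the initialized fields
}

void useGlobalThing() // runs in Thread 2
{
    MyThing* thing = gGlobalThing;  // atomic load
    if (thing != nullptr) {
        int i = thing-&gt;x;  // now guaranteed to be 5
        ...
    }
}</pre>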
<h3 id="j_dont">What not to do in Java</h3>
<p>We haven’t discussed some relevant Java language features, so we’ll take a
quick look at those first.</p>
<p>Java technically does not require code to be data-race-free. And there
is a small amount of very-carefully-written Java code that works correctly
in the presence of data races. However, writing such code is extremely
tricky, and we discuss it only briefly below. To make matters
worse, the experts who specified the meaning of such code no longer believe the
specification is correct. (The specification is fine for data-race-free
code.)
<p>For now we will adhere to the data-race-free model, for which Java provides
essentially the same guarantees as C and C++. Again, the language provides
some primitives that explicitly relax sequential consistency, notably the
<code>lazySet()</code> and <code>weakCompareAndSet()</code> calls
in <code>java.util.concurrent.atomic</code>.
As with C and C++, we will ignore these for now.
<h4 id="sync_volatile">Java's "synchronized" and "volatile" keywords</h4>
<p>The “synchronized” keyword provides the Java language’s in-built locking
mechanism. Every object has an associated “monitor” that can be used to provide
mutually exclusive access. If two threads try to "synchronize" on the
same object, one of them will wait until the other completes.</p>
<p>As we mentioned above, Java's <code>volatile T</code> is the analog of
C++11's <code>atomic&lt;T&gt;</code>. Concurrent accesses to
<code>volatile</code> fields are allowed, and don't result in data races.
Ignoring <code>lazySet()</code> et al. and data races, it is the Java VM's job to
make sure that the result still appears sequentially consistent.
<p>In particular, if thread 1 writes to a <code>volatile</code> field, and
thread 2 subsequently reads from that same field and sees the newly written
value, then thread 2 is also guaranteed to see all writes previously made by
thread 1. In terms of memory effect, writing to
a volatile is analogous to a monitor release, and
reading from a volatile is like a monitor acquire.</p>
<p>There is one notable difference from C++'s <code>atomic</code>:
If we write <code>volatile int x;</code>
in Java, then <code>x++</code> is the same as <code>x = x + 1</code>; it
performs an atomic load, increments the result, and then performs an atomic
store. Unlike C++, the increment as a whole is not atomic.
Atomic increment operations are instead provided by
the <code>java.util.concurrent.atomic</code> package.</p>
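<p>In C++, by contrast, the corresponding increment is a single atomic
read-modify-write, so no lock or special method call is needed (a minimal
sketch):</p>
<pre>#include &lt;atomic&gt;

std::atomic&lt;int&gt; x(0);

void incr() {
    x++;  // a single atomic operation, equivalent to x.fetch_add(1);
          // concurrent increments cannot be lost
}</pre>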
<h4 id="examplesj">Examples</h4>
<p>Here’s a simple, <em>incorrect</em> implementation of a monotonic counter: <span
style="font-size:.9em;color:#777">(<em><a href="#more" style="color:#777">Java
theory and practice: Managing volatility</a></em>)</span>.</p>
<pre>class Counter {
    private int mValue;

    public int get() {
        return mValue;
    }

    public void incr() {
        mValue++;
    }
}</pre>
<p>Assume <code>get()</code> and <code>incr()</code> are called from multiple
threads, and we want to be sure that every thread sees the current count when
<code>get()</code> is called. The most glaring problem is that
<code>mValue++</code> is actually three operations:</p>
<ol>
<li><code>reg = mValue</code></li>
<li><code>reg = reg + 1</code></li>
<li><code>mValue = reg</code></li>
</ol>
<p>If two threads execute in <code>incr()</code> simultaneously, one of the
updates could be lost. To make the increment atomic, we need to declare
<code>incr()</code> “synchronized”.</p>
<p>It’s still broken however, especially on SMP. There is still a data race,
in that <code>get()</code> can access <code>mValue</code> concurrently with
<code>incr()</code>. Under Java rules, the <code>get()</code> call can
appear to be reordered with respect to other code. For example, if we read two
counters in a row, the results might appear to be inconsistent
because the <code>get()</code> calls were reordered, either by the hardware or
compiler. We can correct the problem by declaring <code>get()</code> to be
synchronized. With this change, the code is obviously correct.</p>
<p>Unfortunately, we’ve introduced the possibility of lock contention, which
could hamper performance. Instead of declaring <code>get()</code> to be
synchronized, we could declare <code>mValue</code> with “volatile”. (Note that
<code>incr()</code> must still use <code>synchronized</code>, since
<code>mValue++</code> is otherwise not a single atomic operation.)
This also avoids all data races, so sequential consistency is preserved.
<code>incr()</code> will be somewhat slower, since it incurs both monitor entry/exit
overhead, and the overhead associated with a volatile store, but
<code>get()</code> will be faster, so even in the absence of contention this is
a win if reads greatly outnumber writes. (See also {@link
java.util.concurrent.atomic.AtomicInteger} for a way to completely
remove the synchronized block.)</p>
<p>Here’s another example, similar in form to the earlier C examples:</p>
<pre>class MyGoodies {
    public int x, y;
}

class MyClass {
    static MyGoodies sGoodies;

    void initGoodies() { // runs in thread 1
        MyGoodies goods = new MyGoodies();
        goods.x = 5;
        goods.y = 10;
        sGoodies = goods;
    }

    void useGoodies() { // runs in thread 2
        if (sGoodies != null) {
            int i = sGoodies.x; // could be 5 or 0
            ....
        }
    }
}</pre>
<p>This has the same problem as the C code, namely that there is
a data race on <code>sGoodies</code>. Thus the assignment
<code>sGoodies = goods</code> might be observed before the initialization of the
fields in <code>goods</code>. If you declare <code>sGoodies</code> with the
<code>volatile</code> keyword, sequential consistency is restored, and things will work
as expected.
<p>Note that only the <code>sGoodies</code> reference itself is volatile. The
accesses to the fields inside it are not. Once <code>sGoodies</code> is
<code>volatile</code>, and memory ordering is properly preserved, the fields
cannot be concurrently accessed. The statement <code>z =
sGoodies.x</code> will perform a volatile load of <code>MyClass.sGoodies</code>
followed by a non-volatile load of <code>sGoodies.x</code>. If you make a local
reference <code>MyGoodies localGoods = sGoodies</code>, then a subsequent <code>z =
localGoods.x</code> will not perform any volatile loads.</p>
<p>A more common idiom in Java programming is the infamous “double-checked
locking”:</p>
<pre>class MyClass {
    private Helper helper = null;

    public Helper getHelper() {
        if (helper == null) {
            synchronized (this) {
                if (helper == null) {
                    helper = new Helper();
                }
            }
        }
        return helper;
    }
}</pre>
<p>The idea is that we want to have a single instance of a <code>Helper</code>
object associated with an instance of <code>MyClass</code>. We must only create
it once, so we create and return it through a dedicated <code>getHelper()</code>
function. To avoid a race in which two threads create the instance, we need to
synchronize the object creation. However, we don’t want to pay the overhead for
the “synchronized” block on every call, so we only do that part if
<code>helper</code> is currently null.</p>
<p>This has a data race on the <code>helper</code> field. It can be
set concurrently with the <code>helper == null</code> in another thread.
</p>
<p>To see how this can fail, consider
the same code rewritten slightly, as if it were compiled into a C-like language
(I’ve added a couple of integer fields to represent <code>Helper’s</code>
constructor activity):</p>
<pre>if (helper == null) {
    synchronized() {
        if (helper == null) {
            newHelper = malloc(sizeof(Helper));
            newHelper-&gt;x = 5;
            newHelper-&gt;y = 10;
            helper = newHelper;
        }
    }
}
return helper;</pre>
<p>There is nothing to prevent either the hardware or the compiler
from reordering the store to <code>helper</code> with those to the
<code>x</code>/<code>y</code> fields. Another thread could find
<code>helper</code> non-null but its fields not yet set and ready to use.
For more details and more failure modes, see the “‘Double Checked
Locking is Broken’ Declaration” link in the appendix, or Item
71 (“Use lazy initialization judiciously”) in Josh Bloch’s <em>Effective Java,
2nd Edition</em>.</p>
<p>There are two ways to fix this:</p>
<ol>
<li>Do the simple thing and delete the outer check. This ensures that we never
examine the value of <code>helper</code> outside a synchronized block.</li>
<li>Declare <code>helper</code> volatile. With this one small change, the code
above will work correctly on Java 1.5 and later. (You may want to take
a minute to convince yourself that this is true.)</li>
</ol>
<p>Here is another illustration of <code>volatile</code> behavior:</p>
<pre>class MyClass {
    int data1, data2;
    volatile int vol1, vol2;

    void setValues() { // runs in Thread 1
        data1 = 1;
        vol1 = 2;
        data2 = 3;
    }

    void useValues() { // runs in Thread 2
        if (vol1 == 2) {
            int l1 = data1; // okay
            int l2 = data2; // wrong
        }
    }
}</pre>
<p>Looking at <code>useValues()</code>, if Thread 2 hasn’t yet observed the
update to <code>vol1</code>, then it can’t know if <code>data1</code> or
<code>data2</code> has been set yet. Once it sees the update to
<code>vol1</code>, it knows that <code>data1</code> can be safely accessed
and correctly read without introducing a data race. However,
it can’t make any assumptions about <code>data2</code>, because that store was
performed after the volatile store.</p>
<p>Note that <code>volatile</code> cannot be used to prevent reordering
of other memory accesses that race with each other. It is not guaranteed to
generate a machine memory fence instruction. It can be used to prevent
data races by executing code only when another thread has satisfied a
certain condition.
<h3 id="bestpractice">What to do</h3>
<p>In C/C++, prefer C++11
synchronization classes, such as <code>std::mutex</code>. If those are unavailable, use
the corresponding <code>pthread</code> operations.
These include the proper memory fences, providing correct (sequentially consistent
unless otherwise specified)
and efficient behavior on all Android platform versions. Be sure to use them
correctly. For example, remember that condition variable waits may spuriously
return without being signaled, and should thus appear in a loop.</p>
<p>It's best to avoid using atomic functions directly, unless the data structure
you are implementing is extremely simple, like a counter. Locking and
unlocking a pthread mutex require a single atomic operation each,
and often cost less than a single cache miss, if there’s no
contention, so you’re not going to save much by replacing mutex calls with
atomic ops. Lock-free designs for non-trivial data structures require
much more care to ensure that higher level operations on the data structure
appear atomic (as a whole, not just their explicitly atomic pieces).</p>
<p>If you do use atomic operations, relaxing ordering with
<code>memory_order</code>... or <code>lazySet()</code> may provide performance
advantages, but requires deeper understanding than we have conveyed so far.
A large fraction of existing code using
these is discovered to have bugs after the fact. Avoid these if possible.
If your use case doesn't exactly fit one of those in the next section,
make sure you either are an expert, or have consulted one.
<p>Avoid using <code>volatile</code> for thread communication in C/C++.</p>
<p>In Java, concurrency problems are often best solved by
using an appropriate utility class from
the {@link java.util.concurrent} package. The code is well written and well
tested on SMP.</p>
<p>Perhaps the safest thing you can do is make your objects immutable. Objects
from classes like Java's String and Integer hold data that cannot be changed once an
object is created, avoiding all potential for data races on those objects.
The book <em>Effective
Java, 2nd Ed.</em> has specific instructions in “Item 15: Minimize Mutability”. Note in
particular the importance of declaring Java fields “final" <span
style="font-size:.9em;color:#777">(<a href="#more" style="color:#777">Bloch</a>)</span>.</p>
<p>Even if an object is immutable, remember that communicating it to another
thread without any kind of synchronization is a data race. This can occasionally
be acceptable in Java (see below), but requires great care, and is likely to result in
brittle code. If it's not extremely performance critical, add a
<code>volatile</code> declaration. In C++, communicating a pointer or
reference to an immutable object without proper synchronization,
like any data race, is a bug.
In this case, it is reasonably likely to result in intermittent crashes since,
for example, the receiving thread may see an uninitialized method table
pointer due to store reordering.</p>
<p>If neither an existing library class, nor an immutable class is
appropriate, the Java <code>synchronized</code> statement or C++
<code>lock_guard</code> / <code>unique_lock</code> should be used to guard
accesses to any field that can be accessed by more than one thread. If mutexes won’t
work for your situation, you should declare shared fields
<code>volatile</code> or <code>atomic</code>, but you must take great care to
understand the interactions between threads. These declarations won’t
save you from common concurrent programming mistakes, but they will help you
avoid the mysterious failures associated with optimizing compilers and SMP
mishaps.</p>
<p>You should avoid
"publishing" a reference to an object, i.e. making it available to other
threads, in its constructor. This is less critical in C++ or if you stick to
our "no data races" advice in Java. But it's always good advice, and becomes
critical if your Java code is
run in other contexts in which the Java security model matters, and untrusted
code may introduce a data race by accessing that "leaked" object reference.
It's also critical if you choose to ignore our warnings and use some of the techniques
in the next section.
See <span style="font-size:.9em;color:#777">(<a href="#more"
style="color:#777">Safe Construction Techniques in Java</a>)</span> for
details.</p>
<h2 id="weak">A little more about weak memory orders</h2>
<p>C++11 and later provide explicit mechanisms for relaxing the sequential
consistency guarantees for data-race-free programs. Explicit
<code>memory_order_relaxed</code>, <code>memory_order_acquire</code> (loads
only), and <code>memory_order_release</code> (stores only) arguments for atomic
operations each provide strictly weaker guarantees than the default, typically
implicit, <code>memory_order_seq_cst</code>. <code>memory_order_acq_rel</code>
provides both <code>memory_order_acquire</code> and
<code>memory_order_release</code> guarantees for atomic read-modify write
operations. <code>memory_order_consume</code> is not yet sufficiently
well specified or implemented to be useful, and should be ignored for now.</p>
<p>The <code>lazySet</code> methods in <code>java.util.concurrent.atomic</code>
are similar to C++ <code>memory_order_release</code> stores. Java's
ordinary variables are sometimes used as a replacement for
<code>memory_order_relaxed</code> accesses, though they are actually
even weaker. Unlike C++, there is no real mechanism for unordered
accesses to variables that are declared as <code>volatile</code>.</p>
<p>You should generally avoid these unless there are pressing performance reasons to
use them. On weakly ordered machine architectures like ARM, using them will
commonly save on the order of a few dozen machine cycles for each atomic operation.
On x86, the performance win is limited to stores, and likely to be less
noticeable.
Somewhat counter-intuitively, the benefit may decrease with larger core counts,
as the memory system becomes more of a limiting factor.</p>
<p>The full semantics of weakly ordered atomics are complicated.
In general they require
precise understanding of the language rules, which we will
not go into here. For example:
<ul>
<li> The compiler or hardware can move <code>memory_order_relaxed</code>
accesses into (but not out of) a critical section bounded by a lock
acquisition and release. This means that two
<code>memory_order_relaxed</code> stores may become visible out of order,
even if they are separated by a critical section.
<li> An ordinary Java variable, when abused as a shared counter, may appear
to another thread to decrease, even though it is only incremented by a single
other thread. But this is not true for C++ atomic
<code>memory_order_relaxed</code>.
</ul>
With that as a warning,
here we give a small number of idioms that seem to cover many of the use
cases for weakly ordered atomics. Many of these are applicable only to C++:
<h3 id="nonracing">Non-racing accesses</h3>
<p>It is fairly common that a variable is atomic because it is <em>sometimes</em>
read concurrently with a write, but not all accesses have this issue.
For example a variable
may need to be atomic because it is read outside a critical section, but all
updates are protected by a lock. In that case, a read that happens to be
protected by the same lock
cannot race, since there cannot be concurrent writes. In such a case, the
non-racing access (load in this case), can be annotated with
<code>memory_order_relaxed</code> without changing the correctness of C++ code.
The lock implementation already enforces the required memory ordering
with respect to access by other threads, and <code>memory_order_relaxed</code>
specifies that essentially no additional ordering constraints need to be
enforced for the atomic access.</p>
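<p>A sketch of this pattern (the counter and lock are invented for
illustration; all writers are assumed to hold the mutex):</p>
<pre>#include &lt;atomic&gt;
#include &lt;mutex&gt;

std::mutex m;               // protects all updates to value
std::atomic&lt;int&gt; value(0);  // atomic only because of lock-free readers

void update() {
    std::lock_guard&lt;std::mutex&gt; lg(m);
    // All writers hold m, so this load cannot race with any write:
    int v = value.load(std::memory_order_relaxed);
    value.store(v + 1);     // this store can race with readers; keep the default order
}

int read_without_lock() {   // may run in any thread, without the lock
    return value.load();    // racing load; keeps the default order
}</pre>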
<p>There is no real analog to this in Java.</p>
<h3 id="hint_only">Result is not relied upon for correctness</h3>
<p>When we use a racing load only to generate a hint, it's generally also OK
to not enforce any memory ordering for the load. If the value is
not reliable, we also can't reliably use the result to infer anything about
other variables. Thus it's OK
if memory ordering is not guaranteed, and the load is
supplied with a <code>memory_order_relaxed</code> argument.</p>
<p>A common
instance of this is the use of C++ <code>compare_exchange</code>
to atomically replace <code>x</code> by <code>f(x)</code>.
The initial load of <code>x</code> to compute <code>f(x)</code>
does not need to be reliable. If we get it wrong, the
<code>compare_exchange</code> will fail and we will retry.
It is fine for the initial load of <code>x</code> to use
a <code>memory_order_relaxed</code> argument; only memory ordering
for the actual <code>compare_exchange</code> matters.</p>
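<p>A sketch of that idiom (assuming <code>f</code> is some pure function we
want to apply atomically):</p>
<pre>#include &lt;atomic&gt;

std::atomic&lt;int&gt; x(0);

int f(int v) { return 3 * v + 1; }  // illustrative

void update_x() {
    int old = x.load(std::memory_order_relaxed);  // just a hint; may be stale
    while (!x.compare_exchange_weak(old, f(old))) {
        // On failure, old is refreshed with the current value; retry.
    }
}</pre>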
<h3 id="unread">Atomically modified but unread data</h3>
<p>Occasionally data is modified in parallel by multiple threads, but
not examined until the parallel computation is complete. A good
example of this is a counter that is atomically incremented (e.g.
using <code>fetch_add()</code> in C++ or
<code>atomic_fetch_add_explicit()</code>
in C) by multiple threads in parallel, but the result of these calls
is always ignored. The resulting value is only read at the end,
after all updates are complete.</p>
<p>In this case, there is no way to tell whether accesses to this data
were reordered, and hence C++ code may use a <code>memory_order_relaxed</code>
argument.</p>
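<p>A sketch of such a counter (the loop body is illustrative; the final value
is read only after all threads have been joined):</p>
<pre>#include &lt;atomic&gt;

std::atomic&lt;long&gt; hits(0);

void worker() {  // runs in many threads in parallel
    for (int i = 0; i &lt; 1000000; i++) {
        // The result is ignored and nothing is published through it,
        // so relaxed ordering suffices:
        hits.fetch_add(1, std::memory_order_relaxed);
    }
}

long total() {   // called only after joining all the workers
    return hits.load(std::memory_order_relaxed);
}</pre>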
<p>Simple event counters are a common example of this. Since it is
so common, it is worth making some observations about this case:</p>
<ul>
<li> Use of <code>memory_order_relaxed</code> improves performance,
but may not address the most important performance issue: Every update
requires exclusive access to the cache line holding the counter. This
results in a cache miss every time a new thread accesses the counter.
If updates are frequent and alternate between threads, it is much faster
to avoid updating the shared counter every time by,
for example, using thread-local counters and summing them at the end.
<li> This technique can be combined with the previous one: It is possible to
concurrently read approximate and unreliable values while they are being updated,
with all operations using <code>memory_order_relaxed</code>.
But it is important to treat the resulting values as completely unreliable.
Just because the count appears to have been incremented once does not
mean another thread can be counted on to have reached the point
at which the increment has been performed. The increment may instead have
been reordered with earlier code. (As for the similar case we mentioned
earlier, C++ does guarantee that a second load of such a counter will not
return a value less than an earlier load in the same thread. Unless of
course the counter overflowed.)
<li> It is common to find code that tries to compute approximate
counter values by performing individual atomic (or not) reads and writes, but
not making the increment as a whole atomic. The usual argument is that
this is "close enough" for performance counters or the like.
It's typically not.
When updates are sufficiently frequent (a case
you probably care about), a large fraction of the counts are typically
lost. On a quad core device, more than half the counts may commonly be lost.
(Easy exercise: construct a two thread scenario in which the counter is
updated a million times, but the final counter value is one.)
</ul>
<h3 id="flag">Simple Flag communication</h3>
<p>A <code>memory_order_release</code> store (or read-modify-write operation)
ensures that if subsequently a <code>memory_order_acquire</code> load
(or read-modify-write operation) reads the written value, then it will
also observe any stores (ordinary or atomic) that preceded the
<code>memory_order_release</code> store. Conversely, any loads
preceding the <code>memory_order_release</code> will not observe any
stores that followed the <code>memory_order_acquire</code> load.
Unlike <code>memory_order_relaxed</code>, this allows such atomic operations
to be used to communicate the progress of one thread to another.</p>
<p>For example, we can rewrite the double-checked locking example
from above in C++ as</p>
<pre>
class MyClass {
  private:
    atomic&lt;Helper*&gt; helper {nullptr};
    mutex mtx;

  public:
    Helper* getHelper() {
        Helper* myHelper = helper.load(memory_order_acquire);
        if (myHelper == nullptr) {
            lock_guard&lt;mutex&gt; lg(mtx);
            myHelper = helper.load(memory_order_relaxed);
            if (myHelper == nullptr) {
                myHelper = new Helper();
                helper.store(myHelper, memory_order_release);
            }
        }
        return myHelper;
    }
};
</pre>
<p>The acquire load and release store ensure that if we see a non-null
<code>helper</code>, then we will also see its fields correctly initialized.
We've also incorporated the prior observation that non-racing loads
can use <code>memory_order_relaxed</code>.</p>
<p>A Java programmer could conceivably represent <code>helper</code> as a
<code>java.util.concurrent.atomic.AtomicReference&lt;Helper&gt;</code>
and use <code>lazySet()</code> as the release store. The load
operations would continue to use plain <code>get()</code> calls.</p>
<p>In both cases, our performance tweaking concentrated on the initialization
path, which is unlikely to be performance critical.
A more readable compromise might be:</p>
<pre>
Helper* getHelper() {
    Helper* myHelper = helper.load(memory_order_acquire);
    if (myHelper != nullptr) {
        return myHelper;
    }
    lock_guard&lt;mutex&gt; lg(mtx);
    if (helper == nullptr) {
        helper = new Helper();
    }
    return helper;
}
</pre>
<p>This provides the same fast path, but resorts to default,
sequentially-consistent, operations on the non-performance-critical slow
path.</p>
<p>Even here, <code>helper.load(memory_order_acquire)</code> is
likely to generate the same code on current Android-supported
architectures as a plain (sequentially-consistent) reference to
<code>helper</code>. Really the most beneficial optimization here
may be the introduction of <code>myHelper</code> to eliminate a
second load, though a future compiler might do that automatically.</p>
<p>Acquire/release ordering does not prevent stores from getting visibly
delayed, and does not ensure that stores become visible to other threads
in a consistent order. As a result, it does not support a tricky,
but fairly common coding pattern exemplified by Dekker's mutual exclusion
algorithm: All threads first set a flag indicating that they want to do
something; if a thread <i>t</i> then notices that no other thread is
trying to do something, it can safely proceed, knowing that there
will be no interference. No other thread will be
able to proceed, since <i>t</i>'s flag is still set. This fails
if the flag is accessed using acquire/release ordering, since that doesn't
prevent making a thread's flag visible to others late, after they have
erroneously proceeded. Default <code>memory_order_seq_cst</code>
does prevent it.</p>
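<p>A sketch of that pattern with the default ordering (a simplified,
one-shot fragment of Dekker's algorithm; both flags start out false):</p>
<pre>#include &lt;atomic&gt;

std::atomic&lt;bool&gt; flag1(false), flag2(false);

void thread1() {
    flag1.store(true);        // memory_order_seq_cst by default
    if (!flag2.load()) {
        // critical-stuff: at most one thread can get here
    }
}

void thread2() {
    flag2.store(true);
    if (!flag1.load()) {
        // critical-stuff
    }
}</pre>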
<h3 id="immutable">Immutable fields</h3>
<p> If an object field is initialized on first use and then never changed,
it may be possible to initialize and subsequently read it using weakly
ordered accesses. In C++, it could be declared as <code>atomic</code>
and accessed using <code>memory_order_relaxed</code> or in Java, it
could be declared without <code>volatile</code> and accessed without
special measures. This requires that all of the following hold:</p>
<ul>
<li>It should be possible to tell from the value of the field itself
whether it has already been initialized. To access the field,
the fast path test-and-return value should read the field only once.
In Java the latter is essential. Even if the field tests as initialized,
a second load may read the earlier uninitialized value. In C++
the "read once" rule is merely good practice.
<li>Both initialization and subsequent loads must be atomic,
in that partial updates should not be visible. For Java, the field
should not be a <code>long</code> or <code>double</code>. For C++,
an atomic assignment is required; constructing it in place will not work, since
construction of an <code>atomic</code> is not atomic.
<li>Repeated initializations must be safe, since multiple threads
may read the uninitialized value concurrently. In C++, this generally
follows from the "trivially copyable" requirement imposed for all
atomic types; types with nested owned pointers would require
deallocation in the
copy constructor, and would not be trivially copyable. For Java,
certain reference types are acceptable:
<li>Java references are limited to immutable types containing only final
fields. The constructor of the immutable type should not publish
a reference to the object. In this case the Java final field rules
ensure that if a reader sees the reference, it will also see the
initialized final fields. C++ has no analog to these rules and
pointers to owned objects are unacceptable for this reason as well (in
addition to violating the "trivially copyable" requirements).
</ul>
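<p>A C++ sketch that satisfies all of these conditions (the class and the
hash function are invented; zero means "not yet initialized", and
<code>computeHash()</code> never returns zero):</p>
<pre>#include &lt;atomic&gt;

class Key {
    std::atomic&lt;unsigned&gt; mHash {0};  // assigned atomically, never constructed in place later
    int mId;
    unsigned computeHash() const { return (unsigned)mId * 2654435761u | 1u; }

  public:
    explicit Key(int id) : mId(id) {}

    unsigned hash() {
        unsigned h = mHash.load(std::memory_order_relaxed);  // read the field only once
        if (h == 0) {
            h = computeHash();  // harmless if several threads repeat this
            mHash.store(h, std::memory_order_relaxed);
        }
        return h;
    }
};</pre>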
<h2 id="closing_notes">Closing Notes</h2>
<p>While this document does more than merely scratch the surface, it doesn’t
manage more than a shallow gouge. This is a very broad and deep topic. Some
areas for further exploration:</p>
<ul>
<li>The actual Java and C++ memory models are expressed in terms of a
<em>happens-before</em> relation that specifies when two actions are guaranteed
to occur in a certain order. When we defined a data race, we informally
talked about two memory accesses happening "simultaneously".
Officially this is defined as neither one happening before the other.
It is instructive to learn the actual definitions of <em>happens-before</em>
and <em>synchronizes-with</em> in the Java or C++ Memory Model.
Although the intuitive notion of "simultaneously" is generally good
enough, these definitions are instructive, particularly if you
are contemplating using weakly ordered atomic operations in C++.
(The current Java specification only defines <code>lazySet()</code>
very informally.)</li>
<li>Explore what compilers are and aren’t allowed to do when reordering code.
(The JSR-133 spec has some great examples of legal transformations that lead to
unexpected results.)</li>
<li>Find out how to write immutable classes in Java and C++. (There’s more to
it than just “don’t change anything after construction”.)</li>
<li>Internalize the recommendations in the Concurrency section of <em>Effective
Java, 2nd Edition.</em> (For example, you should avoid calling methods that are
meant to be overridden while inside a synchronized block.)</li>
<li>Read through the {@link java.util.concurrent} and {@link
java.util.concurrent.atomic} APIs to see what's available. Consider using
concurrency annotations like <code>@ThreadSafe</code> and
<code>@GuardedBy</code> (from net.jcip.annotations).</li>
</ul>
<p>The <a href="#more">Further Reading</a> section in the appendix has links to
documents and web sites that will better illuminate these topics.</p>
<h2 id="appendix">Appendix</h2>
<h3 id="sync_stores">Implementing synchronization stores</h3>
<p><em>(This isn’t something most programmers will find themselves implementing,
but the discussion is illuminating.)</em></p>
<p> For small built-in types like <code>int</code>, and hardware supported by
Android, ordinary load and store instructions ensure that a store
will be made visible either in its entirety, or not at all, to another
processor loading the same location. Thus some basic notion
of "atomicity" is provided for free.</p>
<p> As we saw before, this does not suffice. In order to ensure sequential
consistency we also need to prevent reordering of operations, and to ensure
that memory operations become visible to other cores in a consistent
order. It turns out that the latter is automatic on Android-supported
hardware, provided we make judicious choices for enforcing the former,
so we largely ignore it here.</p>
<p> Order of memory operations is preserved by both preventing reordering
by the compiler, and preventing reordering by the hardware. Here we focus
on the latter.
<p> Memory ordering on current hardware is generally enforced with
"fence" instructions that
roughly prevent instructions following the fence from becoming visible
before instructions preceding the fence. (These are also commonly
called "barrier" instructions, but that risks confusion with
<code>pthread_barrier</code>-style barriers, which do much more
than this.) The precise meaning of
fence instructions is a fairly complicated topic that has to address
the way in which guarantees provided by multiple different kinds of fences
interact, and how these combine with other ordering guarantees usually
provided by the hardware. This is a high level overview, so we will
gloss over these details.</p>
<p> The most basic kind of ordering guarantee is that provided by C++
<code>memory_order_acquire</code> and <code>memory_order_release</code>
atomic operations: Memory operations preceding a release store
should be visible following an acquire load. On ARM, this is
enforced by:</p>
<ul>
<li>Preceding the store instruction with a suitable fence instruction.
This prevents all prior memory accesses from being reordered with the
store instruction. (It also unnecessarily prevents reordering with
later store instruction. ARMv8 provides an alternative that doesn't share
this problem.)
<li>Following the load instruction with a suitable fence instruction,
preventing the load from being reordered with subsequent accesses.
(And once again providing unneeded ordering with at least earlier loads.)
</ul>
<p> Together these suffice for C++ acquire/release ordering.
They are necessary, but not sufficient, for Java <code>volatile</code>
or C++ sequentially consistent <code>atomic</code>.</p>
<p>To see what else we need, consider the fragment of Dekker’s algorithm
we briefly mentioned earlier.
<code>flag1</code> and <code>flag2</code> are C++ <code>atomic</code>
or Java <code>volatile</code> variables, both initially false.</p>
<table>
<tr>
<th>Thread 1</th>
<th>Thread 2</th>
</tr>
<tr>
<td><code>flag1 = true<br />
if (flag2 == false)<br />
&nbsp;&nbsp;&nbsp;&nbsp;<em>critical-stuff</em></code></td>
<td><code>flag2 = true<br />
if (flag1 == false)<br />
&nbsp;&nbsp;&nbsp;&nbsp;<em>critical-stuff</em></code></td>
</tr>
</table>
<p>Sequential consistency implies that one of the assignments to
<code>flag</code><i>n</i> must be executed first, and be seen by the
test in the other thread. Thus, we will never see
these threads executing the “critical-stuff” simultaneously.</p>
<p>But the fencing required for acquire-release ordering only adds
fences at the beginning and end of each thread, which doesn't help
here. We additionally need to ensure that if a
<code>volatile</code>/<code>atomic</code> store is followed by
a <code>volatile</code>/<code>atomic</code> load, the two are not reordered.
This is normally enforced by adding a fence not just before a
sequentially consistent store, but also after it.
(This is again much stronger than required, since this fence typically orders
all earlier memory accesses with respect to all later ones. Again ARMv8
offers a more targeted solution.)</p>
<p>We could instead associate the extra fence with sequentially
consistent loads. Since stores are less frequent, the convention
we described is more common and used on Android.</p>
<p>As we saw in an earlier section, we need to insert a store/load barrier
between the two operations. The code executed in the VM for a volatile access
will look something like this:</p>
<table>
<tr>
<th>volatile load</th>
<th>volatile store</th>
</tr>
<tr>
<td><code>reg = A<br />
<em>fence for "acquire" (1)</em></code></td>
<td><code><em>fence for "release" (2)</em><br />
A = reg<br />
<em>fence for later atomic load (3)</em></code></td>
</tr>
</table>
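<p>In C++, roughly the same recipe can be spelled out with explicit
fences. This is only a conceptual sketch of the hardware-level mapping,
not code you should write; in portable programs, simply use
<code>std::atomic</code> with its default ordering:</p>
<pre>
#include &lt;atomic&gt;

std::atomic&lt;int&gt; A(0);

// Conceptual "volatile load": plain (relaxed) load, then fence (1).
int load_A() {
    int reg = A.load(std::memory_order_relaxed);          // reg = A
    std::atomic_thread_fence(std::memory_order_acquire);  // fence (1)
    return reg;
}

// Conceptual "volatile store": fence (2), plain (relaxed) store,
// then a full fence (3) to prevent store/load reordering.
void store_A(int reg) {
    std::atomic_thread_fence(std::memory_order_release);  // fence (2)
    A.store(reg, std::memory_order_relaxed);              // A = reg
    std::atomic_thread_fence(std::memory_order_seq_cst);  // fence (3)
}
</pre>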
<p>Real machine architectures commonly provide multiple types of
fences, which order different types of accesses and may have
different costs. The choice between these is subtle, and influenced
by the need to ensure that stores are made visible to other cores in
a consistent order, and that the memory ordering imposed by the
combination of multiple fences composes correctly. For more details,
please see the University of Cambridge page with <a href="#more">
collected mappings of atomics to actual processors</a>.</p>
<p>On some architectures, notably x86, the "acquire" and "release"
barriers are unnecessary, since the hardware always implicitly
enforces sufficient ordering. Thus on x86 only the last fence (3)
is actually generated. Similarly on x86, atomic read-modify-write
operations implicitly include a strong fence, so they never
require additional fences. On ARM, all the fences we discussed above
are required.</p>
<h3 id="more">Further reading</h3>
<p>Web pages and documents that provide greater depth or breadth. The more generally useful articles are nearer the top of the list.</p>
<dl>
<dt>Shared Memory Consistency Models: A Tutorial</dt>
<dd>Written in 1995 by Adve &amp; Gharachorloo, this is a good place to start if you want to dive more deeply into memory consistency models.
<br /><a href="http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf">http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf</a></dd>
<dt>Memory Barriers</dt>
<dd>Nice little article summarizing the issues.
<br /><a href="http://en.wikipedia.org/wiki/Memory_barrier">http://en.wikipedia.org/wiki/Memory_barrier</a></dd>
<dt>Threads Basics</dt>
<dd>An introduction to multi-threaded programming in C++ and Java, by Hans Boehm. Excellent discussion of data races and basic synchronization methods.
<br /><a href="http://www.hpl.hp.com/personal/Hans_Boehm/c++mm/threadsintro.html">http://www.hboehm.info/c++mm/threadsintro.html</a></dd>
<dt>Java Concurrency In Practice</dt>
<dd>Published in 2006, this book covers a wide range of topics in great detail. Highly recommended for anyone writing multi-threaded code in Java.
<br /><a href="http://www.javaconcurrencyinpractice.com">http://www.javaconcurrencyinpractice.com</a></dd>
<dt>JSR-133 (Java Memory Model) FAQ</dt>
<dd>A gentle introduction to the Java memory model, including an explanation of synchronization, volatile variables, and construction of final fields.
(A bit dated, particularly when it discusses other languages.)
<br /><a href="http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html">http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html</a></dd>
<dt>Validity of Program Transformations in the Java Memory Model</dt>
<dd>A rather technical explanation of remaining problems with the
Java memory model. These issues do not apply to data-race-free
programs.
<br /><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.1790&type=pdf">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.1790&type=pdf</a></dd>
<dt>Overview of package java.util.concurrent</dt>
<dd>The documentation for the <code>java.util.concurrent</code> package. Near the bottom of the page is a section entitled “Memory Consistency Properties” that explains the guarantees made by the various classes.
<br />{@link java.util.concurrent java.util.concurrent} Package Summary</dd>
<dt>Java Theory and Practice: Safe Construction Techniques in Java</dt>
<dd>This article examines in detail the perils of references escaping during object construction, and provides guidelines for thread-safe constructors.
<br /><a href="http://www.ibm.com/developerworks/java/library/j-jtp0618.html">http://www.ibm.com/developerworks/java/library/j-jtp0618.html</a></dd>
<dt>Java Theory and Practice: Managing Volatility</dt>
<dd>A nice article describing what you can and can’t accomplish with volatile fields in Java.
<br /><a href="http://www.ibm.com/developerworks/java/library/j-jtp06197.html">http://www.ibm.com/developerworks/java/library/j-jtp06197.html</a></dd>
<dt>The “Double-Checked Locking is Broken” Declaration</dt>
<dd>Bill Pugh’s detailed explanation of the various ways in which double-checked locking is broken without <code>volatile</code> or <code>atomic</code>.
Includes C/C++ and Java.
<br /><a href="http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html">http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html</a></dd>
<dt>[ARM] Barrier Litmus Tests and Cookbook</dt>
<dd>A discussion of ARM SMP issues, illuminated with short snippets of ARM code. If you found the examples in this document too abstract, or want to read the formal description of the DMB instruction, read this. Also describes the instructions used for memory barriers on executable code (possibly useful if you’re generating code on the fly). Note that this predates ARMv8, which also
supports an additional set of memory ordering instructions.
<br /><a href="http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/Barrier_Litmus_Tests_and_Cookbook_A08.pdf">http://infocenter.arm.com/help/topic/com.arm.doc.genc007826/Barrier_Litmus_Tests_and_Cookbook_A08.pdf</a></dd>
<dt>Linux Kernel Memory Barriers</dt>
<dd>Documentation for Linux kernel memory barriers. Includes some useful examples and ASCII art.
<br/><a href="http://www.kernel.org/doc/Documentation/memory-barriers.txt">http://www.kernel.org/doc/Documentation/memory-barriers.txt</a></dd>
<dt>ISO/IEC JTC1 SC22 WG21 (C++ standards) 14882 (C++ programming language), section 1.10 and clause 29 (“Atomic operations library”)</dt>
<dd>Draft standard for C++ atomic operation features. This version is
close to the C++14 standard, which includes minor changes in this area
from C++11.
<br /><a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4527.pdf">http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4527.pdf</a>
<br />(intro: <a href="http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf">http://www.hpl.hp.com/techreports/2008/HPL-2008-56.pdf</a>)</dd>
<dt>ISO/IEC JTC1 SC22 WG14 (C standards) 9899 (C programming language) chapter 7.16 (“Atomics &lt;stdatomic.h&gt;”)</dt>
<dd>Draft standard for ISO/IEC 9899-201x C atomic operation features.
For details, also check later defect reports.
<br /><a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf">http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf</a></dd>
<dt>C/C++11 mappings to processors (University of Cambridge)</dt>
<dd>Jaroslav Sevcik and Peter Sewell's collection of translations
of C++ atomics to various common processor instruction sets.
<br /><a href="http://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html">
http://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html</a></dd>
<dt>Dekker’s algorithm</dt>
<dd>The “first known correct solution to the mutual exclusion problem in concurrent programming”. The Wikipedia article has the full algorithm, along with a discussion of how it would need to be updated to work with modern optimizing compilers and SMP hardware.
<br /><a href="http://en.wikipedia.org/wiki/Dekker's_algorithm">http://en.wikipedia.org/wiki/Dekker's_algorithm</a></dd>
<dt>Comments on ARM vs. Alpha and address dependencies</dt>
<dd>An e-mail on the arm-kernel mailing list from Catalin Marinas. Includes a nice summary of address and control dependencies.
<br /><a href="http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-05/msg11811.html">http://linux.derkeiler.com/Mailing-Lists/Kernel/2009-05/msg11811.html</a></dd>
<dt>What Every Programmer Should Know About Memory</dt>
<dd>A very long and detailed article about different types of memory, particularly CPU caches, by Ulrich Drepper.
<br /><a href="http://www.akkadia.org/drepper/cpumemory.pdf">http://www.akkadia.org/drepper/cpumemory.pdf</a></dd>
<dt>Reasoning about the ARM weakly consistent memory model</dt>
<dd>This paper was written by Chong &amp; Ishtiaq of ARM, Ltd. It attempts to describe the ARM SMP memory model in a rigorous but accessible fashion. The definition of “observability” used here comes from this paper.
<br /><a href="http://portal.acm.org/ft_gateway.cfm?id=1353528&amp;type=pdf&amp;coll=&amp;dl=&amp;CFID=96099715&amp;CFTOKEN=57505711">http://portal.acm.org/ft_gateway.cfm?id=1353528&amp;type=pdf&amp;coll=&amp;dl=&amp;CFID=96099715&amp;CFTOKEN=57505711</a></dd>
<dt>The JSR-133 Cookbook for Compiler Writers</dt>
<dd>Doug Lea wrote this as a companion to the JSR-133 (Java Memory Model) documentation. It contains the initial set of implementation guidelines
for the Java memory model that was used by many compiler writers, and is
still widely cited and likely to provide insight.
Unfortunately, the four fence varieties discussed here are not a good
match for Android-supported architectures, and the above C++11 mappings
are now a better source of precise recipes, even for Java.
<br /><a href="http://g.oswego.edu/dl/jmm/cookbook.html">http://g.oswego.edu/dl/jmm/cookbook.html</a></dd>
<dt>x86-TSO: A Rigorous and Usable Programmer’s Model for x86 Multiprocessors</dt>
<dd>A precise description of the x86 memory model. Precise descriptions of
the ARM memory model are unfortunately significantly more complicated.
<br /><a href="http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf">http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf</a></dd>
</dl>