Happens-before relationships with volatile fields and synchronized blocks in Java - and their impact on non-volatile variables?
I am still pretty new to the concept of threading and am trying to understand more about it. Recently, I came across a blog post on What Volatile Means in Java by Jeremy Manson, where he writes:
When one thread writes to a volatile variable, and another thread sees that write, the first thread is telling the second about all of the contents of memory up until it performed the write to that volatile variable. [...] all of the memory contents seen by Thread 1, before it wrote to [volatile] ready, must be visible to Thread 2, after it reads the value true for ready. [emphasis added by myself]
Now, does that mean that all variables (volatile or not) held in Thread 1's memory at the time of the write to the volatile variable will become visible to Thread 2 after it reads that volatile variable? If so, is it possible to puzzle that statement together from the official Java documentation/Oracle sources? And from which version of Java onwards will this work?
In particular, if all Threads share the following class variables:
private String s = "running";
private volatile boolean b = false;
And Thread 1 executes the following first:
s = "done";
b = true;
And Thread 2 then executes afterwards (after Thread 1 wrote to the volatile field):
boolean flag = b; //read from volatile
System.out.println(s);
Would this be guaranteed to print "done"?
What would happen if, instead of declaring b as volatile, I put the write and read into a synchronized block?
Additionally, in a discussion entitled "Are static variables shared between threads?", @TREE writes:
Don't use volatile to protect more than one piece of shared state.
Why? (Sorry; I can't comment yet on other questions, or I would have asked there...)
Yes, it is guaranteed that Thread 2 will print "done". Of course, that is if the write to b in Thread 1 actually happens before the read from b in Thread 2, rather than happening at the same time, or earlier!
The heart of the reasoning here is the happens-before relationship. Multithreaded program executions are seen as being made of events. Events can be related by happens-before relationships, which say that one event happens before another. Even if two events are not directly related, if you can trace a chain of happens-before relationships from one event to another, then you can say that one happens before the other.
In your case, you have the following events:
- Thread 1 writes to s
- Thread 1 writes to b
- Thread 2 reads from b
- Thread 2 reads from s
And the following rules come into play:
- "If x and y are actions of the same thread and x comes before y in program order, then hb(x, y)." (the program order rule)
- "A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field." (the volatile rule)
The following happens-before relationships therefore exist:
- Thread 1 writes to s happens before Thread 1 writes to b (program order rule)
- Thread 1 writes to b happens before Thread 2 reads from b (volatile rule)
- Thread 2 reads from b happens before Thread 2 reads from s (program order rule)
If you follow that chain, you can see that as a result:
- Thread 1 writes to s happens before Thread 2 reads from s
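To make the chain concrete, here is the question's code gathered into a single class (just a sketch; the class and method names are placeholders I chose), with the happens-before edges noted in comments:

class Example {
    private String s = "running";
    private volatile boolean b = false;

    void writer() {                // runs on Thread 1
        s = "done";                // (1) write to s
        b = true;                  // (2) volatile write; hb(1, 2) by program order
    }

    void reader() {                // runs on Thread 2
        boolean flag = b;          // (3) volatile read; hb(2, 3) once it sees true
        if (flag) {
            System.out.println(s); // (4) read of s; hb(3, 4) by program order,
        }                          //     so hb(1, 4): prints "done"
    }
}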
What would happen if instead of declaring b as volatile I put the write and read into a synchronized block?
If and only if you protect all such synchronized blocks with the same lock will you have the same guarantee of visibility as with your volatile example. In addition, you will have mutual exclusion of the execution of such synchronized blocks.
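A minimal sketch of such a synchronized version, assuming both the write and the read guard their blocks with the same shared lock object (the lock field name is mine):

class SyncExample {
    private final Object lock = new Object();
    private String s = "running";
    private boolean b = false;     // no longer volatile

    void writer() {
        synchronized (lock) {      // releasing `lock` here ...
            s = "done";
            b = true;
        }
    }

    void reader() {
        synchronized (lock) {      // ... happens-before a later acquisition of the same lock
            if (b) {
                System.out.println(s); // guaranteed "done" whenever b is seen as true
            }
        }
    }
}

If the two blocks synchronized on different lock objects, no happens-before edge would connect them, and the visibility guarantee would be lost.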
Don't use volatile to protect more than one piece of shared state.
Why?
volatile does not guarantee atomicity: in your example the s variable may also have been mutated by other threads after the write you are showing; the reading thread won't have any guarantee as to which value it sees. The same goes for writes to s occurring after your read of the volatile, but before the read of s.
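As a hypothetical illustration (the class and the third writer are invented for this answer), consider another thread that also mutates s:

class NotASnapshot {
    private String s = "running";
    private volatile boolean b = false;

    void writerA() {               // the writer from the question
        s = "done";
        b = true;
    }

    void writerC() {               // some other thread touching the same state
        s = "something else";
    }

    void reader() {
        if (b) {
            System.out.println(s); // may print "done" or "something else":
                                   // the volatile read guarantees visibility,
                                   // not a consistent snapshot of s and b
        }
    }
}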
What is safe to do, and done in practice, is sharing immutable state transitively accessible from the reference written to a volatile variable. So maybe that's the meaning intended by "one piece of shared state".
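A sketch of that pattern, assuming the published object is immutable; the Config and Publisher names are purely illustrative:

final class Config {                  // immutable: final fields set only in the constructor
    final String status;
    final int version;
    Config(String status, int version) {
        this.status = status;
        this.version = version;
    }
}

class Publisher {
    private volatile Config current;  // the single volatile piece of shared state

    void update() {
        current = new Config("done", 2);  // the constructor runs before the volatile write,
                                          // which happens-before any read that sees it
    }

    void read() {
        Config c = current;               // read the reference exactly once
        if (c != null) {
            System.out.println(c.status + " v" + c.version);  // a consistent pair of values
        }
    }
}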
is it possible to puzzle that statement together from the official Java documentation/Oracle sources?
Quotes from the spec:
17.4.4. Synchronization Order
A write to a volatile variable v (§8.3.1.4) synchronizes-with all subsequent reads of v by any thread (where "subsequent" is defined according to the synchronization order).
17.4.5. Happens-before Order
If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
If an action x synchronizes-with a following action y, then we also have hb(x, y).
This should be enough.
And from which version of Java onwards will this work?
The Java Language Specification, 3rd Edition, introduced the rewritten Memory Model specification (JSR 133, which shipped with Java 5) that is the key to the above guarantees. NB: most earlier implementations acted as if the guarantees were there, and many lines of code actually depended on them; people were surprised when they found out that the guarantees had in fact not been there.
Would this be guaranteed to print "done"?
As said in Java Concurrency in Practice:
When thread A writes to a volatile variable and subsequently thread B reads that same variable, the values of all variables that were visible to A prior to writing to the volatile variable become visible to B after reading the volatile variable.
So yes, this is guaranteed to print "done".
What would happen if instead of declaring b as volatile I put the write and read into a synchronized block?
This, too, gives the same guarantee, provided that the write and the read are protected by the same lock.
Don't use volatile to protect more than one piece of shared state.
Why?
Because volatile guarantees only visibility; it doesn't guarantee atomicity. If a method performs two volatile writes and thread A is executing that method while another thread B reads those volatile variables, thread A may be preempted by thread B in the middle of the operation (e.g. after the first volatile write but before the second one), so B can observe an inconsistent intermediate state. To guarantee atomicity of the compound operation, synchronization is the most feasible way out.
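As an invented illustration of that interleaving, the following sketch shows a pair of volatile writes that a reader can observe mid-update, next to a synchronized variant in which writer and reader share the same lock:

class Range {
    private volatile int lower = 0;
    private volatile int upper = 10;

    void shiftUnsafe(int delta) {              // two volatile writes, no atomicity:
        lower += delta;                        // another thread reading here sees the new lower
        upper += delta;                        // combined with the old upper
    }

    synchronized void shiftSafe(int delta) {   // writer and reader use the same intrinsic lock,
        lower += delta;                        // so the pair is updated atomically
        upper += delta;                        // with respect to snapshotSafe()
    }

    synchronized int[] snapshotSafe() {
        return new int[] { lower, upper };     // always a consistent pair
    }
}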