
[Fighting Java Concurrency]-----Analysis of Java Memory Model Volatile

黄舟
Release: 2017-02-24 10:07:47

The previous post, [Fighting Java Concurrency]-----In-depth Analysis of the Implementation Principle of volatile, already explained the characteristics of volatile:

  1. Volatile visibility: a read of a volatile variable always sees the most recent write to that variable by any thread;

  2. Volatile atomicity: a single read or write of a volatile variable is atomic (including long and double, whose plain reads and writes need not be atomic on 32-bit JVMs), but compound operations such as i++ are not (see the sketch after this list);

  3. At the lowest level, the JVM implements volatile semantics with "memory barriers".
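To illustrate point 2, here is a minimal sketch (not from the original post; the class name and iteration counts are made up) showing that volatile does not make the compound operation count++ atomic:

public class VolatileNotAtomic {
    static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int k = 0; k < 10_000; k++) {
                count++;   // read-modify-write: three steps, not atomic even though count is volatile
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Typically prints a value below 20000 because some increments are lost.
        System.out.println("count = " + count);
    }
}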

Below, the author introduces volatile from two angles: the happens-before principle and the memory semantics of volatile.

volatile and happens-before

In the post [Fighting Java Concurrency]-----Java Memory Model: Happens-Before, the author explained that happens-before is the main basis for deciding whether a data race exists and whether a thread is safe; it is what guarantees visibility in a multi-threaded environment. Let's use the classic example below to analyze the happens-before relationships established by reading and writing a volatile variable.

public class VolatileTest {

    int i = 0;
    volatile boolean flag = false;

    // Thread A
    public void write() {
        i = 2;              // 1
        flag = true;        // 2
    }

    // Thread B
    public void read() {
        if (flag) {                                 // 3
            System.out.println("---i = " + i);      // 4
        }
    }
}

According to the happens-before rules, we obtain the following relationships for the program above:

  • According to the program-order rule of happens-before: 1 happens-before 2, and 3 happens-before 4;

  • According to the volatile-variable rule of happens-before: 2 happens-before 3;

  • According to the transitivity of happens-before: 1 happens-before 4.

Operations 1 and 4 have a happens-before relationship, so 1 must be visible to 4. Some readers may ask: couldn't operations 1 and 2 be reordered? If you have read the author's earlier post, you will know that volatile not only guarantees visibility but also forbids such reordering. Therefore, all shared variables that were visible to thread A before it wrote the volatile variable become visible to thread B immediately after thread B reads the same volatile variable. A hypothetical driver for the two threads is sketched below.
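As a purely illustrative sketch (the driver class below is hypothetical, not part of the original post, and assumes it sits in the same package as VolatileTest so that it can see the package-private flag field), thread A and thread B could exercise write() and read() like this; thread B spins until it observes the volatile write and is then guaranteed to see i == 2:

public class VolatileTestDriver {
    public static void main(String[] args) throws InterruptedException {
        VolatileTest vt = new VolatileTest();
        Thread threadB = new Thread(() -> {
            while (!vt.flag) { /* spin until the volatile write becomes visible */ }
            vt.read();   // prints ---i = 2, because 1 happens-before 4
        });
        Thread threadA = new Thread(vt::write);
        threadB.start();
        threadA.start();
        threadA.join();
        threadB.join();
    }
}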

volatile's memory semantics and their implementation

In the JMM, threads communicate through shared memory. The memory semantics of volatile are:

When a volatile variable is written, the JMM immediately flushes the value of the shared variable from the writing thread's local memory to main memory.
When a volatile variable is read, the JMM invalidates the reading thread's local memory, so the shared variable is read directly from main memory.

In other words, the write semantics of volatile are to flush directly to main memory, and the read semantics are to read directly from main memory.
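A minimal sketch of these two semantics in action (not from the original post; the class name is made up): the main thread's volatile write is flushed to main memory, and the worker's next volatile read fetches the fresh value from main memory, so the loop terminates:

public class VolatileStopFlag {
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy work; each iteration re-reads the volatile flag from main memory
            }
            System.out.println("worker observed running == false and stopped");
        });
        worker.start();
        Thread.sleep(100);   // let the worker enter the loop
        running = false;     // volatile write: flushed to main memory
        worker.join();
    }
}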
So how are volatile's memory semantics implemented? Ordinary variables may be reordered, but a volatile variable must not be reordered in a way that changes its memory semantics, so the JMM restricts compiler and processor reordering around volatile accesses. The reordering rules are as follows:

  1. If the first operation is a volatile read, it cannot be reordered no matter what the second operation is. This ensures that operations after a volatile read are not reordered by the compiler to before the volatile read;

  2. If the second operation is a volatile write, it cannot be reordered no matter what the first operation is. This ensures that operations before a volatile write are not reordered by the compiler to after the volatile write;

  3. When the first operation is a volatile write and the second operation is a volatile read, they cannot be reordered.

The underlying implementation of volatile inserts memory barriers, but it is almost impossible for the compiler to find an optimal arrangement that minimizes the total number of inserted barriers, so the JMM adopts a conservative strategy, as follows:

  • Insert a StoreStore barrier before each volatile write operation

  • Insert a StoreLoad barrier after each volatile write operation

  • Insert a LoadLoad barrier after each volatile read operation

  • Insert a LoadStore barrier after each volatile read operation

The StoreStore barrier ensures that all ordinary writes before it have been flushed to main memory before the volatile write.

The StoreLoad barrier prevents the volatile write from being reordered with subsequent volatile read/write operations.

The LoadLoad barrier prevents the processor from reordering the volatile read above it with ordinary reads below it.

The LoadStore barrier prevents the processor from reordering the volatile read above it with ordinary writes below it.

Let’s analyze the VolatileTest example above:

public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    public void write() {
        i = 2;
        flag = true;
    }

    public void read() {
        if (flag) {
            System.out.println("---i = " + i);
        }
    }
}

[Figure: memory barriers inserted around the volatile write and read in VolatileTest]

The figure above illustrates, using this example, where memory barriers are inserted for the volatile accesses.
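Since the figure itself is not reproduced here, the annotated sketch below reconstructs it from the conservative insertion strategy listed above; the barrier positions are inferred, not copied from the original diagram, and the local variable f is introduced only to show where the read-side barriers sit:

public class VolatileTest {
    int i = 0;
    volatile boolean flag = false;

    // Thread A
    public void write() {
        i = 2;                  // ordinary write
        // StoreStore barrier: the ordinary write above is flushed before the volatile write
        flag = true;            // volatile write
        // StoreLoad barrier: the volatile write cannot be reordered with later reads/writes
    }

    // Thread B
    public void read() {
        boolean f = flag;       // volatile read
        // LoadLoad barrier: later ordinary reads cannot move above the volatile read
        // LoadStore barrier: later ordinary writes cannot move above the volatile read
        if (f) {
            System.out.println("---i = " + i);   // ordinary read of i
        }
    }
}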

volatile's memory-barrier insertion strategy is very conservative. In practice, as long as the write-read memory semantics of volatile are not changed, the compiler can optimize for the specific situation and omit unnecessary barriers, as in the following example (excerpted from Fang Tengfei's "The Art of Java Concurrent Programming"):

public class VolatileBarrierExample {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;     // volatile read
        int j = v2;     // volatile read
        a = i + j;      // ordinary write
        v1 = i + 1;     // volatile write
        v2 = j * 2;     // volatile write
    }
}

The un-optimized barrier insertion is shown below:

[Figure: conservative (unoptimized) memory-barrier insertion for readAndWrite(), barriers numbered 1–8]
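Since the figure is not reproduced here, the numbering 1–8 below is reconstructed from the conservative strategy (LoadLoad and LoadStore after each volatile read, StoreStore before and StoreLoad after each volatile write); it is an inference, not a copy of the original diagram, and the class name is made up:

public class VolatileBarrierExampleConservative {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;     // first volatile read
        // 1: LoadLoad barrier
        // 2: LoadStore barrier
        int j = v2;     // second volatile read
        // 3: LoadLoad barrier
        // 4: LoadStore barrier
        a = i + j;      // ordinary write
        // 5: StoreStore barrier (before the first volatile write)
        v1 = i + 1;     // first volatile write
        // 6: StoreLoad barrier
        // 7: StoreStore barrier (before the second volatile write)
        v2 = j * 2;     // second volatile write
        // 8: StoreLoad barrier
    }
}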

Let's analyze which of the memory barriers in the figure above are redundant:

1: This one must be retained.

2: It prevents the ordinary writes below from being reordered with the volatile read above; but because of the second volatile read and its barrier 4, the ordinary write a = i + j cannot move above the first volatile read anyway, so it can be omitted.

3: There are no ordinary reads below it, so it can be omitted.

4: Retained.

5: Retained.

6: It is followed by a volatile write, so it can be omitted.

7: Retained.

8: Retained.

So barriers 2, 3, and 6 can be omitted. The resulting diagram is as follows:

[Figure: optimized memory-barrier insertion for readAndWrite(), with barriers 2, 3, and 6 removed]
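Likewise, the optimized figure is not reproduced here; the sketch below is inferred from the analysis above, with barriers 2, 3, and 6 dropped (class name made up):

public class VolatileBarrierExampleOptimized {
    int a = 0;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;     // first volatile read
        // 1: LoadLoad barrier
        int j = v2;     // second volatile read
        // 4: LoadStore barrier
        a = i + j;      // ordinary write
        // 5: StoreStore barrier
        v1 = i + 1;     // first volatile write
        // 7: StoreStore barrier
        v2 = j * 2;     // second volatile write
        // 8: StoreLoad barrier
    }
}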




