This article introduces the implementation principle of volatile in Java high-concurrency programming. In multi-threaded concurrent programming, both synchronized and volatile play an important role: volatile can be seen as a lightweight synchronized that guarantees the "visibility" of shared variables in multi-processor development.
The implementation principle of volatile in Java high concurrency
Abstract: In multi-threaded concurrent programming, both synchronized and volatile play an important role. volatile is a lightweight synchronized that ensures the "visibility" of shared variables in multi-processor development. Visibility means that when one thread modifies a shared variable, other threads can read the modified value. In some scenarios, volatile has lower overhead than synchronized.
1. Definition:
The Java programming language allows threads to access shared variables. To ensure that shared variables are updated accurately and consistently, a thread ordinarily has to acquire an exclusive lock on the variable. The Java language also provides volatile, which in some situations is more convenient than a lock: if a field is declared volatile, the Java memory model ensures that all threads see a consistent value for that variable.
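As a concrete illustration of the visibility guarantee, here is a minimal sketch (class and field names are my own, not from the original article): one thread spins on a volatile flag that the main thread later clears.

```java
public class VolatileVisibility {
    // Without volatile, the JIT compiler could hoist the read of 'running'
    // out of the loop, and the reader thread might spin forever.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait until the writer's update becomes visible
            }
            System.out.println("reader observed running == false");
        });
        reader.start();

        Thread.sleep(100);   // let the reader enter its loop
        running = false;     // volatile write: guaranteed to become visible
        reader.join(2000);   // the reader should terminate promptly
        System.out.println("done");
    }
}
```

Because the field is volatile, the reader is guaranteed to eventually observe the write and exit the loop; with a plain boolean, the program is allowed to hang.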
2. The implementation principle of volatile
So how does volatile ensure visibility? On an x86 processor, we can use a tool to dump the assembly instructions generated by the JIT compiler and observe what the CPU does when a volatile variable is written.
Java code:

    instance = new Singleton(); // instance is a volatile variable

The corresponding assembly:

    0x01a3de1d: movb $0x0,0x1104800(%esi)
    0x01a3de24: lock addl $0x0,(%esp)
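The `instance = new Singleton()` line above is the volatile write from the double-checked locking idiom. Here is a sketch of that idiom (the `main` method is added for illustration only):

```java
public class Singleton {
    // The volatile write to 'instance' is what emits the lock-prefixed
    // instruction shown above; it also forbids reordering the object's
    // construction with the publication of its reference.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // volatile write publishes safely
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        // both calls must return the same object
        System.out.println(getInstance() == getInstance());
    }
}
```

Without volatile, double-checked locking is broken: another thread could observe a non-null but not-yet-fully-constructed Singleton.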
When a shared variable declared volatile is written, a second line of assembly code (the lock addl) is emitted. Consulting the IA-32 Architecture Software Developer's Manual shows that a lock-prefixed instruction does two things on a multi-core processor:

1. The data in the current processor's cache line is written back to system memory.
2. This write-back invalidates the data cached at that memory address in other CPUs.
To improve processing speed, the processor does not communicate with memory directly. Instead, it first reads data from system memory into its internal cache (L1, L2, or other) before operating on it, and it is not known when the result will be written back to memory. If a write is performed on a variable declared volatile, however, the JVM sends a lock-prefixed instruction to the processor, which writes the cache line containing the variable back to system memory.

But even after the write-back, the values cached by other processors may still be stale, which would cause problems in later computations. Therefore, on multi-processor systems, a cache-coherence protocol is used to keep each processor's cache consistent: each processor sniffs the data propagated on the bus to check whether its cached value has expired. When a processor detects that the memory address corresponding to one of its cache lines has been modified, it sets that cache line to the invalid state; when it later wants to operate on that data, it is forced to read it from system memory into its cache again.

A lock-prefixed instruction causes the processor's cache to be written back to memory. It also asserts the processor's LOCK# signal for the duration of the instruction. In a multiprocessor environment, the asserted LOCK# signal ensures that the processor has exclusive use of any shared memory (because it locks the bus, other CPUs cannot access the bus, and a CPU that cannot access the bus cannot access system memory). In recent processors, however, the LOCK# signal generally locks the cache rather than the bus, since locking the bus is relatively expensive.
The impact of locked operations on processor caches is detailed in Section 8.1.4 of the Intel manual. For Intel486 and Pentium processors, the LOCK# signal is always asserted on the bus during a locked operation. On P6 and more recent processors, however, if the memory area being accessed is already cached within the processor, the LOCK# signal is not asserted. Instead, the processor locks the cache line holding that memory area and writes it back to memory, relying on the cache-coherence mechanism to guarantee the atomicity of the modification. This operation is called "cache locking". The cache-coherence mechanism prevents a memory area cached by two or more processors from being modified simultaneously.
Writing one processor's cache back to memory invalidates the corresponding caches of other processors. IA-32 and Intel 64 processors use the MESI (Modified, Exclusive, Shared, Invalid) protocol to maintain coherence between their internal caches and the caches of other processors. On multi-core systems, IA-32 and Intel 64 processors can sniff other processors' accesses to system memory and to their internal caches, and they use this sniffing to keep their internal caches, system memory, and other processors' caches consistent on the bus. For example, on Pentium and P6 family processors, if one processor sniffs that another processor intends to write to a memory location that it currently caches in the shared state, the sniffing processor invalidates its cache line, forcing a cache line fill on its next access to the same memory address.
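To make the MESI states concrete, here is a deliberately simplified toy model of the state transitions described above. It is illustrative only; real hardware handles many more events and bus transactions.

```java
public class MesiSketch {
    // The four MESI states of a single cache line.
    enum State { MODIFIED, EXCLUSIVE, SHARED, INVALID }

    // A write by ANOTHER processor is sniffed on the bus:
    // our copy of the line becomes stale and is invalidated.
    static State onRemoteWrite(State s) {
        return State.INVALID;
    }

    // A read by another processor: a dirty or exclusive line
    // is downgraded to shared (after any needed write-back).
    static State onRemoteRead(State s) {
        switch (s) {
            case MODIFIED:
            case EXCLUSIVE:
                return State.SHARED;
            default:
                return s; // SHARED stays SHARED, INVALID stays INVALID
        }
    }

    // A local write always leaves the line MODIFIED (after the bus
    // transaction that invalidates other processors' copies).
    static State onLocalWrite(State s) {
        return State.MODIFIED;
    }

    public static void main(String[] args) {
        State line = State.EXCLUSIVE;
        line = onLocalWrite(line);
        System.out.println("after local write: " + line);
        line = onRemoteRead(line);
        System.out.println("after remote read: " + line);
        line = onRemoteWrite(line);
        System.out.println("after remote write: " + line);
    }
}
```

The last transition is exactly the Pentium/P6 example above: sniffing a remote write to a shared line invalidates the local copy, forcing a cache line fill on the next access.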
The above is a detailed introduction to the implementation principle of volatile in Java high concurrency. For more related articles, please follow the PHP Chinese website.