# What is concurrency?
Concurrency refers to the alternating execution of multiple tasks within a given period of time. When multiple threads run, the CPU's running time is divided into short time slices that are allocated to the threads in turn; while one thread's code is running, the other threads are suspended.
In a concurrent environment, programs no longer execute in isolation, and the following characteristics appear:
● Concurrent programs constrain one another. Direct constraints arise when one program requires the computed result of another; indirect constraints arise when multiple programs compete for shared resources such as processors and buffers.
● Concurrent programs execute intermittently. A program must remember its execution context (its instructions and execution point) so that it can resume later.
● When the degree of concurrency is set appropriately and the CPU has sufficient processing power, concurrency improves the running efficiency of the program.
In a concurrent environment, when an object can be accessed by multiple threads, any of those threads may modify it, which can leave its data in an inconsistent state. This is why the concept of thread safety was introduced.
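As a minimal sketch of that problem (the class and variable names here are purely illustrative), two threads incrementing a shared counter without synchronization will usually lose updates, because `count++` is a non-atomic read-modify-write:

```java
// Two threads increment a shared counter without synchronization;
// count++ is not atomic, so the final value is usually less than 20000.
public class UnsafeCounter {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count++; // read-modify-write: not thread-safe
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Expected 20000, got " + count);
    }
}
```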
# Concurrency and parallelism
Concurrency and parallelism are easily confused concepts. Concurrency refers to the alternating execution of multiple tasks, while parallelism means "executing at the same time" in the literal sense. If the system has only one CPU, multi-threaded tasks cannot truly run in parallel; they can only run concurrently by switching between time slices. True parallelism occurs only on systems with multiple CPUs or cores.
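A quick way to see how much parallelism a machine can actually offer is to ask the JVM for its logical core count; with one core, threads can only interleave, while with several they can truly run in parallel:

```java
// Parallelism is bounded by the number of cores the JVM can see.
public class CoreCount {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical cores available: " + cores);
        // With cores == 1, threads can only run concurrently (time-sliced);
        // with cores > 1, they can also run in parallel.
    }
}
```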
# Why should we use concurrency?
Concurrent programming is, to some extent, inseparable from the rise of multi-core CPUs. Single-core CPU development could no longer keep pace with Moore's Law (Moore's Law is an observational law about hardware development; there is also an "Anti-Moore's Law" in the software field derived from it, which interested readers can look up on their own). To further improve computing speed, hardware engineers stopped pursuing faster individual computing units and instead integrated multiple computing units together, forming multi-core CPUs. Within roughly a dozen years, home CPUs such as the Intel i7 reached 4 or even 8 cores, and professional servers commonly carry several independent CPUs, each with 8 or more cores.
Thus, Moore's Law seems to live on in the expansion of CPU cores. Against this multi-core background, the trend toward concurrent programming was born: through concurrent programming, the computing power of a multi-core CPU can be exploited to the fullest and performance improved.
Some business scenarios are inherently suited to concurrent programming. In image processing, for example, a 1024×768-pixel picture contains more than 786,000 pixels, and traversing all of them in a single thread takes a long time. Computation this heavy needs to make full use of every core.
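As a hedged sketch of that idea (the `transform` filter below is a placeholder, not a real image operation), a parallel stream can spread per-pixel work across all available cores:

```java
import java.util.stream.IntStream;

// Processing every pixel of a 1024x768 image in parallel. The int[]
// "pixels" stands in for real image data; parallel() lets the common
// ForkJoinPool distribute the per-pixel work across all cores.
public class ParallelPixels {
    public static void main(String[] args) {
        int width = 1024, height = 768;
        int[] pixels = new int[width * height]; // 786,432 pixels

        IntStream.range(0, pixels.length)
                 .parallel()
                 .forEach(i -> pixels[i] = transform(pixels[i]));
    }

    // Placeholder per-pixel operation; a real filter would go here.
    private static int transform(int pixel) {
        return pixel + 1;
    }
}
```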
In addition, when developing a shopping platform, operations such as order splitting, inventory deduction, and order generation can be broken apart and completed with multi-threading to improve response speed. Faced with complex business models, parallel programs adapt to business needs better than serial ones, and concurrent programming fits naturally with this kind of business decomposition (see the sketch after the list below). It is precisely because of these advantages that multi-threading technology is valued, and it is something every CS learner should master:
● It makes full use of the computing power of multi-core CPUs;
● It makes business decomposition convenient and improves application performance.
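Here is the kind of business split described above as a sketch. The method names `deductStock` and `createOrder` are hypothetical stand-ins for real services, and a real system may impose ordering constraints that this toy ignores; the point is that independent steps can run on separate threads and be joined at the end:

```java
import java.util.concurrent.CompletableFuture;

// Independent order-processing steps run on separate threads and are
// joined at the end, shortening the overall response time.
public class OrderPipeline {
    public static void main(String[] args) {
        CompletableFuture<Void> stock =
            CompletableFuture.runAsync(OrderPipeline::deductStock);
        CompletableFuture<Void> order =
            CompletableFuture.runAsync(OrderPipeline::createOrder);

        // Wait for both independent steps to finish.
        CompletableFuture.allOf(stock, order).join();
        System.out.println("Order processed");
    }

    // Hypothetical placeholders for real service calls.
    private static void deductStock() { System.out.println("stock deducted"); }
    private static void createOrder() { System.out.println("order created"); }
}
```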
# What are the disadvantages of concurrent programming?
1. Frequent context switching
A time slice is the amount of CPU time allocated to each thread. Because each slice is very short, generally tens of milliseconds, the CPU switches between threads constantly, giving the impression that multiple threads are executing simultaneously.
Every switch requires saving the current state so that the previous state can later be restored, and this switching is expensive. Switching too frequently erodes the advantage of multi-threaded programming. Context switching can generally be reduced through lock-free concurrent programming, CAS algorithms, using as few threads as necessary, and using coroutines.
Lock-free concurrent programming: borrow the lock-segmentation idea of ConcurrentHashMap and have different threads process different segments of the data, so that under multi-thread contention, context-switching time is reduced.
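Two standard-library examples of this "different threads touch different data" idea are ConcurrentHashMap, whose table is partitioned so that writers to different bins rarely block each other, and LongAdder, which stripes a counter across internal cells for the same reason. A small usage sketch:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Contending threads mostly update different internal cells instead of
// fighting over a single lock, reducing blocking and context switches.
public class LockFreeSketch {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, LongAdder> hits = new ConcurrentHashMap<>();

        Runnable worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                hits.computeIfAbsent("page", k -> new LongAdder()).increment();
            }
        };
        Thread a = new Thread(worker), b = new Thread(worker);
        a.start(); b.start();
        a.join(); b.join();

        System.out.println(hits.get("page").sum()); // 2000
    }
}
```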
CAS algorithm: use a CAS (compare-and-swap) algorithm to update data atomically. It applies optimistic locking, which effectively reduces some of the context switches caused by unnecessary lock contention.
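A minimal CAS sketch using AtomicInteger: compareAndSet writes the new value only if the current value still matches what was read, and retries on conflict instead of blocking on a lock:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic, lock-free increment: retry on conflict rather than block.
public class CasExample {
    private static final AtomicInteger value = new AtomicInteger(0);

    static void increment() {
        int current;
        do {
            current = value.get();          // read the current value
        } while (!value.compareAndSet(current, current + 1)); // retry on conflict
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(value.get()); // always 20000, no locks involved
    }
}
```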
Use the fewest threads: avoid creating unnecessary threads. If there are few tasks but many threads are created, large numbers of threads sit idle in a waiting state. Coroutines go further by multiplexing multiple tasks within a single thread.
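A sketch of the "fewest threads" advice: a fixed pool sized to the core count multiplexes many small tasks over a handful of threads, instead of creating one mostly idle thread per task:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// 100 tasks share a pool of ~core-count threads rather than 100 threads.
public class PoolSketch {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (int i = 0; i < 100; i++) {
            final int taskId = i;
            pool.submit(() -> System.out.println(
                Thread.currentThread().getName() + " ran task " + taskId));
        }
        pool.shutdown(); // stop accepting tasks; let queued ones finish
    }
}
```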
Since context switching is a relatively expensive operation, the book "The Art of Concurrent Programming in Java" describes an experiment showing that concurrent accumulation is not necessarily faster than serial accumulation.
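The following is a rough sketch of that kind of experiment, not the book's exact code. Timings vary by machine, and for small loop counts the cost of creating and coordinating a thread can outweigh the parallel gain:

```java
// Compare a plain serial sum with a two-thread split of the same work.
public class AccumulateBench {
    static long concurrentSum; // written by the helper thread, read after join()

    public static void main(String[] args) throws InterruptedException {
        final long n = 1_000_000;

        long start = System.nanoTime();
        long serial = 0;
        for (long i = 0; i < n; i++) serial += i;
        long serialTime = System.nanoTime() - start;

        start = System.nanoTime();
        Thread t = new Thread(() -> {
            long s = 0;
            for (long i = 0; i < n / 2; i++) s += i;
            concurrentSum = s;
        });
        t.start();
        long s2 = 0;
        for (long i = n / 2; i < n; i++) s2 += i;
        t.join();
        long concurrentTime = System.nanoTime() - start;

        System.out.println("serial:     " + serialTime / 1_000 + " us, sum=" + serial);
        System.out.println("concurrent: " + concurrentTime / 1_000 + " us, sum=" + (concurrentSum + s2));
    }
}
```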
2. Thread safety issues
The hardest part of multi-threaded programming to get right is thread safety in critical sections. A careless mistake can produce a deadlock, and once a deadlock occurs, the affected system functionality becomes unavailable.
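The classic deadlock shape looks like this: each thread holds one lock while waiting for the other, so neither can ever proceed (this demo intentionally never terminates). Acquiring locks in a consistent global order is the usual fix:

```java
// Each thread grabs one lock, then waits forever for the lock the other holds.
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                sleep(100); // give the other thread time to grab lockB
                synchronized (lockB) {
                    System.out.println("thread 1 got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                sleep(100);
                synchronized (lockA) { // waits forever: thread 1 holds lockA
                    System.out.println("thread 2 got both locks");
                }
            }
        }).start();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```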