Concurrent systems can be implemented using different concurrency models. A concurrency model specifies how the threads in the system collaborate to complete the tasks they are given. Different concurrency models split up the work in different ways, and the threads may communicate and cooperate with each other in different ways. This concurrency model tutorial provides an in-depth explanation of the most widely used concurrency models at the time of writing.
Concurrency models are similar to distributed systems
The concurrency models described in this text are similar to the architectures used in distributed systems. In a concurrent system, different threads communicate with each other. In a distributed system, different processes communicate with each other (possibly running on different computers). Threads and processes are quite similar in nature. That is why different concurrency models often resemble different distributed system architectures.
Of course, distributed systems have additional challenges, such as network failures, remote computers and processes crashing, and so on. But a concurrent system running on a large server may encounter similar problems if a CPU fails, a network card fails, a disk fails, etc. Although the probability of such a failure may be low, it is theoretically possible.
Because concurrency models are similar to distributed system architectures, they can often borrow ideas from each other. For example, models for distributing work among workers (threads) resemble load balancing in distributed systems. The same goes for techniques for handling failures, such as logging, failover, and so on.
Parallel Workers
The first concurrency model is what I call the parallel worker model. Incoming tasks are assigned to different workers. Here is a diagram illustrating it:
In the parallel worker concurrency model, a delegator distributes incoming tasks to different workers. Each worker completes the full task on its own. The workers run in parallel, in different threads, and possibly on different CPUs.
If the parallel worker model were implemented in a car factory, each car would be produced by a single worker. That worker would receive the build specification and build the whole car from start to finish.
The parallel worker concurrency model is the most widely used concurrency model in Java applications (although that is changing). Many of the concurrency utilities in the java.util.concurrent package are designed for use with this model. You can also see traces of this model in the design of Java enterprise applications.
Advantages of parallel workers
The advantage of the parallel worker concurrency model is that it is relatively simple to understand. To increase the parallelism of your application, you simply add more workers.
For example, if you are implementing a web crawler, you could try crawling a certain number of pages with different numbers of workers and see which number gives the shortest total crawl time (meaning the highest performance). Since web crawling is an IO-intensive job, you will probably end up with a few threads per CPU/core in your computer. One thread per CPU would be too few, since it would be idle a lot of the time while waiting for data to download.
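To make this concrete, here is a minimal sketch of the parallel worker model using a plain ExecutorService from java.util.concurrent. The fetchPage method is a hypothetical placeholder for the actual download logic:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelWorkerCrawler {

    public static void main(String[] args) {
        List<String> urls = List.of(
            "https://example.com/a",
            "https://example.com/b",
            "https://example.com/c");

        // More workers than CPUs, since crawling is IO intensive and
        // workers spend much of their time waiting on downloads.
        ExecutorService workers = Executors.newFixedThreadPool(8);

        for (String url : urls) {
            // The delegator: each incoming task is handed to a worker,
            // and that worker performs the entire task on its own.
            workers.submit(() -> fetchPage(url));
        }

        workers.shutdown();
    }

    // Hypothetical placeholder for the actual download logic.
    private static void fetchPage(String url) {
        System.out.println(Thread.currentThread().getName() + " crawling " + url);
    }
}
```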
Disadvantages of Parallel Workers
The parallel worker concurrency model has some disadvantages lurking under the surface. I'll explain most of the disadvantages in the sections below.
Shared State Can Get Complicated
In reality, the parallel worker concurrency model is more complicated than illustrated above. The workers often need access to some kind of shared data, either in memory or in a shared database. The diagram below shows how shared state complicates the parallel worker concurrency model:
Some of this shared state is in communication mechanisms like task queues. But some of it is business data, data caches, database connection pools, etc.
As soon as shared state creeps into the parallel worker concurrency model, things start to get complicated. The threads need to access the shared data in a way that ensures changes made by one thread are visible to the others (pushed to main memory, and not just stuck in the CPU cache of the CPU executing the thread). Threads also need to avoid race conditions, deadlocks, and many other shared-state concurrency problems.
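As an illustration, here is a minimal sketch of the race-condition problem: four workers increment a plain shared counter and an AtomicLong side by side. The plain counter typically loses updates, while the atomic one does not:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SharedStateDemo {

    private static long unsafeCount = 0;                          // shared state, unsynchronized
    private static final AtomicLong safeCount = new AtomicLong(); // shared state, thread safe

    public static void main(String[] args) throws InterruptedException {
        ExecutorService workers = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                for (int j = 0; j < 100_000; j++) {
                    unsafeCount++;               // read-modify-write race: updates can be lost
                    safeCount.incrementAndGet(); // atomic update, visible to all threads
                }
            });
        }

        workers.shutdown();
        workers.awaitTermination(1, TimeUnit.MINUTES);

        // unsafeCount is typically less than 400000; safeCount is always 400000.
        System.out.println("unsafe: " + unsafeCount + ", safe: " + safeCount.get());
    }
}
```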
Additionally, part of the parallelization is lost when threads wait for each other while accessing the shared data structures. Many concurrent data structures are blocking, meaning only one or a limited set of threads can access them at any given time. This can lead to contention on these shared data structures, and high contention essentially results in a degree of serialization of the parts of the code that access them.
Modern non-blocking concurrency algorithms can reduce contention and improve performance, but non-blocking algorithms are difficult to implement.
Persistent data structures are another option. A persistent data structure always preserves its previous version when modified. So if multiple threads point to the same persistent data structure and one of them modifies it, the modifying thread gets a reference to the new structure, while all other threads keep their references to the old structure, which remains unchanged. The Scala programming language contains several persistent data structures.
While persistent data structures are an elegant and concise solution to concurrent modification of shared data structures, they tend not to perform that well.
For example, a persistent list adds new elements to the head of the list and returns a reference to the newly added element (which then points to the rest of the list). All other threads still keep a reference to the previously first element in the list, and to those threads the list appears unchanged: they cannot see the newly added element.
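Here is a minimal sketch of such a persistent list, written from scratch for illustration (not taken from any particular library):

```java
// A minimal persistent singly linked list: prepending creates a new
// head node, while every existing reference still sees the old list.
public final class PersistentList<T> {

    public final T head;                 // element at the front, null for the empty list
    public final PersistentList<T> tail; // rest of the list

    private PersistentList(T head, PersistentList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    public static <T> PersistentList<T> empty() {
        return new PersistentList<>(null, null);
    }

    // Returns a NEW list; the receiver is never modified.
    public PersistentList<T> prepend(T element) {
        return new PersistentList<>(element, this);
    }

    public static void main(String[] args) {
        PersistentList<String> old = PersistentList.<String>empty().prepend("a");
        PersistentList<String> updated = old.prepend("b");

        // A thread holding 'old' still sees only "a"; it cannot observe
        // the element prepended by the thread that built 'updated'.
        System.out.println(old.head);     // a
        System.out.println(updated.head); // b
    }
}
```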
Such a persistent list is typically implemented as a linked list. Unfortunately, linked lists do not perform well on modern hardware. Each element in the list is a separate object, and these objects can be spread throughout the computer's memory. Modern CPUs are much faster at accessing data sequentially, so an implementation on top of an array instead of a linked list gives higher performance on modern hardware. An array stores data sequentially, so the CPU caches can load large chunks at a time, and once loaded the data can be accessed directly from the CPU cache. This is not possible with a linked list, whose elements are scattered all over RAM.
Stateless workers
Shared state can be modified by other threads in the system. Therefore a worker must re-read the state every time it needs it, to make sure it is working on the latest copy. This is true whether the shared state is kept in memory or in an external database. A worker that does not keep state internally (but re-reads it every time it is needed) is called stateless.
Re-reading data every time you need it can get slow, especially if the state is stored in an external database.
Task ordering is nondeterministic
Another disadvantage of the parallel worker model is that the task execution order is nondeterministic. There is no way to guarantee which tasks are executed first and which last. Task A may be given to a worker before task B, yet task B may execute before task A.
The nondeterministic nature of the parallel worker model makes it hard to reason about the state of the system at any given point in time. It also makes it harder (if not impossible) to guarantee that one task happens before another.
Assembly Line
The second concurrency model is what I call the assembly line concurrency model. I chose that name just to fit with the "parallel worker" metaphor from earlier. Other developers use other names (e.g. reactive systems, or event-driven systems) depending on the platform or community. Here is a diagram illustrating it:
The workers are like the workers on an assembly line in a factory. Each worker performs only part of the total job. When that part is done, the worker forwards the task to the next worker.
Each worker runs in its own thread and shares no state with the other workers. That is why this is sometimes also referred to as a shared-nothing concurrency model.
Systems using the pipeline concurrency model are usually designed to use non-blocking IO. Non-blocking IO means that when a worker starts an IO operation (such as reading a file, or data from a network connection), the worker does not wait for the IO call to finish. IO operations are slow, so waiting for them to complete is a waste of CPU time; the CPU could be doing other things in the meantime. When the IO operation finishes, the result of the IO operation (e.g. the data read, or the status of the data written) is passed on to another worker.
With non-blocking IO, the IO operations determine the boundaries between workers. A worker does as much as it can until it has to start an IO operation. Then it gives up control over the task. When the IO operation finishes, the next worker in the pipeline continues working on the task, until that worker too has to start an IO operation, and so on.
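As a sketch of how such IO boundaries can look in code, here is a small pipeline built with Java's CompletableFuture and the JDK 11 HttpClient. It performs a real request to example.com, and the parse and store steps are hypothetical placeholders for real workers:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;

public class AssemblyLineDemo {

    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();

        // Worker 1 starts a non-blocking IO operation and gives up control.
        CompletableFuture<Void> pipeline =
            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                  // Worker 2 takes over when the IO operation completes.
                  .thenApply(response -> parse(response.body()))
                  // Worker 3 handles the parsed result.
                  .thenAccept(AssemblyLineDemo::store);

        pipeline.join(); // wait here only so the demo does not exit early
    }

    // Hypothetical pipeline stages, standing in for real workers.
    private static String parse(String body) { return body.substring(0, Math.min(body.length(), 80)); }
    private static void store(String summary) { System.out.println("stored: " + summary); }
}
```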
In reality, tasks do not necessarily flow along a single assembly line. Since most systems do more than perform a single kind of task, how a task flows between workers depends on what needs to be done. In practice, several different virtual assembly lines run at the same time. The diagram below shows how tasks may flow through a real pipeline system:
Tasks may even be forwarded to more than one worker for concurrent processing. For example, a task might be forwarded to both a task executor and a task logger. This diagram illustrates how all three assembly lines finish by forwarding their tasks to the same worker (the last worker on the middle assembly line):
Pipelines can even get more complex than this.
Reactive systems, event-driven systems
Systems using the pipeline concurrency model are sometimes also called reactive systems or event-driven systems. The workers in such a system react to events occurring in the system, either received from the outside world or emitted by other workers. Examples of events could be an incoming HTTP request, or a file having finished loading into memory.
At the time of writing, there are many interesting reactive/event-driven platforms available, and more will come. Some of the more popular ones are:
Vert.x
Akka
Node.JS (JavaScript)
Personally, I find Vert.x the most interesting (especially for an old-school Java/JVM developer like me).
Actors vs. Channels
Actors and channels are two similar examples of pipeline (or reactive/event-driven) models.
In the actor model, each worker is called an actor. Actors can send messages directly to each other. Messages are sent and processed asynchronously. Actors can be used to implement one or more task-processing pipelines, as described earlier. Here is a diagram illustrating the actor model:
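Here is a deliberately minimal, hand-rolled actor sketch (a real actor library such as Akka adds supervision, routing, and much more): one thread, one private mailbox, asynchronous sends:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A minimal hand-rolled actor: one thread, one private mailbox.
// Other actors communicate with it only by sending messages.
public class SimpleActor {

    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

    public SimpleActor(String name) {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    String message = mailbox.take(); // wait for the next message
                    System.out.println(name + " processing: " + message);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down when interrupted
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Sending is asynchronous: the caller never blocks on processing.
    public void send(String message) {
        mailbox.add(message);
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleActor actor = new SimpleActor("actor-1");
        actor.send("hello");
        actor.send("world");
        Thread.sleep(100); // give the daemon thread time to drain the mailbox
    }
}
```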
In the channel model, workers do not communicate with each other directly. Instead, they publish their messages (events) on different channels. Other workers can listen for messages on these channels, but the sender does not know who is listening. Here is a diagram illustrating the channel model:
At the time of writing, the channel model seems more flexible to me. A worker does not need to know which workers will process the task later in the pipeline. It just needs to know which channel to forward the task to (or where to send the message). Listeners on a channel can subscribe and unsubscribe without affecting the workers writing to the channel. This allows for looser coupling between workers.
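A channel can be sketched just as minimally; the point is that the publisher only knows the channel, never the listeners. Note that this toy version delivers events synchronously, whereas real channel implementations usually deliver asynchronously:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// A minimal in-process channel: senders publish without knowing
// who (if anyone) is listening; listeners come and go freely.
public class Channel {

    private final List<Consumer<String>> listeners = new CopyOnWriteArrayList<>();

    public void subscribe(Consumer<String> listener) {
        listeners.add(listener);
    }

    public void publish(String event) {
        for (Consumer<String> listener : listeners) {
            listener.accept(event); // synchronous delivery in this toy version
        }
    }

    public static void main(String[] args) {
        Channel orders = new Channel();
        orders.subscribe(event -> System.out.println("executor got: " + event));
        orders.subscribe(event -> System.out.println("logger got:   " + event));

        // The publisher only knows the channel, not the listeners.
        orders.publish("order-42");
    }
}
```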
Advantages of pipelines
This pipeline concurrency model has several advantages over the parallel worker model. In the following sections I will cover the biggest advantages.
No shared state
The fact that workers share no state with other workers means that they can be implemented without having to account for all the concurrency problems that shared state causes. This makes it much easier to implement workers. You implement a worker as if it were the only thread performing that work, essentially a single-threaded implementation.
Stateful Workers
Since workers know that no other threads modify their data, the workers can be stateful. Stateful means they can keep the data they need to operate on in memory, only writing the final changes back to external storage systems. A stateful worker can therefore often be faster than a stateless worker.
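For illustration, here is a minimal sketch of a stateful worker; the handle method and the write-back policy are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// A stateful worker: only this worker's thread ever touches 'cache',
// so it needs no locking and only writes final results back out.
public class StatefulWorker {

    private final Map<String, Integer> cache = new HashMap<>(); // in-memory state

    // Hypothetical handler invoked for each task this worker receives.
    public void handle(String key) {
        int count = cache.merge(key, 1, Integer::sum); // fast in-memory update
        if (count % 100 == 0) {
            writeBack(key, count); // persist only occasionally
        }
    }

    // Placeholder for writing to an external storage system.
    private void writeBack(String key, int count) {
        System.out.println("persisting " + key + " = " + count);
    }
}
```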
Better hardware conformity
Single-threaded code has the advantage that it often conforms better to the underlying hardware. First, you can usually create more optimized data structures and algorithms when you can assume the code executes in single-threaded mode.
Second, single-threaded stateful workers can cache data in memory, as mentioned above. When data is cached in memory, there is also a high probability that the data is cached in the CPU cache of the CPU executing the thread. This makes accessing the data even faster.
I call it hardware conformity when code is written in a way that benefits from how the underlying hardware works. Some developers call this mechanical sympathy. I prefer the term hardware conformity because computers have very few mechanical parts, and the word "sympathy" is used in that term as a metaphor for "fitting better", which I think the word "conform" conveys more directly.
Anyway, this is nitpicking. Just use the words you like.
Task ordering is possible
It is possible to implement a concurrent system based on the pipeline concurrency model in a way that guarantees task ordering to some extent. Ordered tasks make it much easier to reason about the state of the system at any given point in time. Furthermore, you could write all incoming tasks to a log. This log can then be used to rebuild the state of the system from the point of failure, should any part of the system fail. Tasks are written to the log in a certain order, and this order becomes the guaranteed task order. Here is a diagram of such a design:
Implementing a guaranteed task order is certainly not simple, but it is often possible. If it is, it greatly simplifies tasks like backup, restoring data, replicating data, etc., since these can all be done via the log files. A rough sketch of the idea follows.
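The fragment below appends each incoming task to a log file before processing it; the order of lines in the file then becomes the fixed task order that a recovery can replay. This is only an illustrative sketch, not a production-grade log:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

// Every incoming task is appended to a log file before it is processed.
// The line order in the file becomes the guaranteed task order.
public class TaskLog {

    private final PrintWriter out;

    public TaskLog(Path file) throws IOException {
        this.out = new PrintWriter(Files.newBufferedWriter(
                file, StandardOpenOption.CREATE, StandardOpenOption.APPEND));
    }

    public void append(String task) {
        out.println(task); // log first ...
        out.flush();       // ... and push it out of the writer's buffer before processing
    }

    // After a failure, re-read the log and re-process the tasks in order.
    public static List<String> replay(Path file) throws IOException {
        return Files.readAllLines(file);
    }
}
```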
Disadvantages of Pipelines
The main disadvantage of the pipeline concurrency model is that the execution of a task is often spread across multiple workers, and thus across multiple classes in your project. It therefore becomes harder to see exactly what code is being executed for a given task.
Writing the code may also become harder. Worker code is often written as callback handlers. Code with many nested callbacks may result in what some developers call callback hell. Callback hell simply means that it becomes difficult to keep track of what the code is actually doing, as well as to make sure each callback has access to the data it needs.
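The following compilable toy example illustrates the shape of the problem; the fetchUser, fetchOrders, and fetchInvoice operations are made up purely for illustration:

```java
import java.util.function.Consumer;

public class CallbackHell {

    // Hypothetical async operations, each reporting its result via a callback.
    static void fetchUser(int id, Consumer<String> cb)          { cb.accept("user-" + id); }
    static void fetchOrders(String user, Consumer<String> cb)   { cb.accept(user + "/order-1"); }
    static void fetchInvoice(String order, Consumer<String> cb) { cb.accept(order + "/invoice"); }

    public static void main(String[] args) {
        // Each step can only run inside the previous step's callback,
        // so the logic drifts rightwards and state gets hard to follow.
        fetchUser(42, user ->
            fetchOrders(user, orders ->
                fetchInvoice(orders, invoice ->
                    System.out.println("send " + invoice))));
    }
}
```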
With the parallel worker concurrency model this tends to be easier. You can open the worker code and read the executed code pretty much from start to finish. Of course, parallel worker code may also be spread over many classes, but the execution sequence is often easier to read from the code.
Functional Parallelism
Functional parallelism is a third concurrency model, which has been talked about a lot in recent years.
The basic idea of functional parallelism is to implement your program using function calls. Functions can be seen as "agents" or "actors" that send messages to each other, much like in the pipeline concurrency model (a.k.a. reactive or event-driven systems). When one function calls another, it is similar to sending a message.
All parameters passed to a function are copied, so no entity outside the receiving function can manipulate the data. This copying is essential to avoid race conditions on shared data. It makes the function execution similar to an atomic operation. Each function call can be executed independently of any other function call.
When each function call can execute independently, each function call can also be executed on its own CPU. That means an algorithm implemented functionally can be executed in parallel on multiple CPUs.
With Java 7 we got the ForkJoinPool in the java.util.concurrent package, which can help you implement something similar to functional parallelism. With Java 8 we got parallel streams, which help you parallelize the iteration of large collections. Keep in mind that there are developers who are critical of the ForkJoinPool (you can find links to some criticism in my ForkJoinPool tutorial).
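Here is a minimal example of what that looks like with a Java 8 parallel stream; counting primes is used only because each isPrime call is a pure function of its argument, so the calls can run on different CPUs with no coordination beyond splitting and merging the stream:

```java
import java.util.stream.LongStream;

public class FunctionalParallelism {

    public static void main(String[] args) {
        // Each isPrime() call depends only on its argument, so the calls
        // can be spread across CPUs by the underlying ForkJoinPool.
        long primes = LongStream.rangeClosed(2, 1_000_000)
                                .parallel()
                                .filter(FunctionalParallelism::isPrime)
                                .count();
        System.out.println(primes + " primes found");
    }

    private static boolean isPrime(long n) {
        for (long i = 2; i * i <= n; i++) {
            if (n % i == 0) return false;
        }
        return true;
    }
}
```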
The hard part of functional parallelism is knowing which function calls to parallelize. Coordinating function calls across CPUs comes with an overhead, so the unit of work completed by a function needs to be large enough to be worth that overhead. If the function calls are very small, attempting to parallelize them may actually be slower than single-threaded, single-CPU execution.
As I understand it (which is far from perfectly), you can implement an algorithm using a reactive, event-driven model and achieve a breakdown of the work similar to what functional parallelism achieves. With an event-driven model, you just get more control over exactly how much to parallelize, and how (in my opinion).
Additionally, splitting a task over multiple CPUs comes with the overhead of coordination, which only makes sense if that task is currently the only task being executed by the program. However, if the system is concurrently executing multiple other tasks (as web servers, database servers, and many other systems do), there is no point in parallelizing a single task. The other CPUs in the computer are busy with other tasks anyway, so there is no reason to disturb them with a slower, functionally parallel task. You are most likely better off with a pipeline concurrency model, since it has less overhead (it executes sequentially in single-threaded mode) and conforms better to the underlying hardware.
Which concurrency model is the best
So, which concurrency model is the best?
As is often the case, the answer is that it depends on what your system is supposed to do. If your tasks are naturally parallel, independent, and require no shared state, you may be able to implement your system using the parallel worker model.
Many tasks, however, are not naturally parallel and independent. For those kinds of systems, I believe the pipeline concurrency model has more advantages than disadvantages, and more advantages than the parallel worker model.
You don't even have to write the pipeline infrastructure yourself. Modern platforms like Vert.x have already implemented much of it for you. Personally, I will be exploring designs running on top of platforms like Vert.x in my next project. Java EE, I feel, no longer has the edge.