Concurrent programming skills: Advanced usage of Go WaitGroup
In concurrent programming, coordinating and managing multiple concurrent tasks is a central concern. The Go language provides a very practical concurrency primitive, WaitGroup, which helps us implement this coordination elegantly. This article introduces the basic usage of WaitGroup and then focuses on its more advanced patterns, using concrete code examples to help readers understand and apply them.
WaitGroup is a concurrency primitive from the Go standard library's sync package that lets us wait for a collection of concurrent tasks to finish. It provides three methods: Add, Done and Wait. Add increments the counter by the number of tasks to wait for, Done decrements the counter by one when a task finishes, and Wait blocks the calling goroutine until the counter drops to zero.
The following is a simple example showing the basic usage of WaitGroup:
package main import ( "fmt" "sync" "time" ) func main() { var wg sync.WaitGroup for i := 0; i < 5; i++ { wg.Add(1) go func(num int) { defer wg.Done() time.Sleep(time.Second) fmt.Println("Task", num, "done") }(i) } wg.Wait() fmt.Println("All tasks done") }
In the above code, we create a WaitGroup object wg and launch 5 concurrent tasks in a loop. Before starting each goroutine, we call Add to increment the counter, and when the task finishes, the deferred Done call decrements it. Finally, we call Wait to block the main goroutine until all tasks have completed.
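One detail worth highlighting is that Add should be called in the parent goroutine before the worker goroutine is started; if Add runs inside the worker, Wait may be reached while the counter is still zero and return too early. The following minimal sketch (an added illustration, not part of the original example) shows the safe pattern, with the risky variant left as a comment:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Correct: Add runs in the parent goroutine before the worker starts,
	// so Wait is guaranteed to observe the incremented counter.
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(num int) {
			defer wg.Done()
			fmt.Println("worker", num, "finished")
		}(i)
	}

	// Risky (do not do this): calling Add inside the goroutine races with
	// Wait; if Wait runs first, it returns before the work is counted.
	// go func() {
	// 	wg.Add(1)
	// 	defer wg.Done()
	// }()

	wg.Wait()
	fmt.Println("all workers finished")
}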
Beyond this basic usage, WaitGroup can be combined with other primitives to control the execution of concurrent tasks more flexibly. Below we introduce several commonly used advanced patterns in detail.
If we need to run a set of tasks concurrently but want to cap how many run at the same time, we can combine a buffered channel with WaitGroup. The code below runs a set of tasks concurrently while allowing at most 3 of them to execute at once:
package main import ( "fmt" "sync" "time" ) func main() { var wg sync.WaitGroup maxConcurrency := 3 tasks := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10} sem := make(chan struct{}, maxConcurrency) for _, task := range tasks { wg.Add(1) sem <- struct{}{} // 获取令牌,控制最大并发数 go func(num int) { defer wg.Done() time.Sleep(time.Second) fmt.Println("Task", num, "done") <-sem // 释放令牌,允许新的任务执行 }(task) } wg.Wait() fmt.Println("All tasks done") }
In the above code, we create a buffered channel sem whose capacity equals the maximum concurrency. Before each task starts, we acquire a token with the sem <- struct{}{} statement, and when the task completes we release it with <-sem. By controlling how tokens are acquired and released, we cap the number of tasks running at the same time.
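The same limit can also be enforced with a fixed worker pool instead of a semaphore channel. The sketch below is an alternative illustration rather than part of the original example: it starts exactly maxConcurrency worker goroutines that drain a shared task channel.

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	maxConcurrency := 3
	tasks := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}

	// A channel feeds tasks to a fixed pool of worker goroutines.
	taskCh := make(chan int)

	// Start exactly maxConcurrency workers; each one processes tasks
	// until the channel is closed.
	for i := 0; i < maxConcurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for num := range taskCh {
				time.Sleep(time.Second) // simulate work
				fmt.Println("Task", num, "done")
			}
		}()
	}

	// Send all tasks, then close the channel so the workers exit.
	for _, task := range tasks {
		taskCh <- task
	}
	close(taskCh)

	wg.Wait()
	fmt.Println("All tasks done")
}

The trade-off between the two approaches: the semaphore version still spawns one goroutine per task and only limits how many run at once, while the worker pool caps the total number of goroutines created, which can matter when the task list is very large.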
Sometimes we want to bound the execution time of concurrent tasks and stop waiting for a task once it exceeds a deadline. By combining WaitGroup with a signalling channel, a timer and a select statement, we can implement this easily. The following code shows how to apply a 3-second timeout to a group of concurrent tasks:
package main import ( "fmt" "sync" "time" ) func main() { var wg sync.WaitGroup tasks := []int{1, 2, 3, 4, 5, 6, 7} timeout := 3 * time.Second done := make(chan struct{}) for _, task := range tasks { wg.Add(1) go func(num int) { defer wg.Done() // 模拟任务执行时间不定 time.Sleep(time.Duration(num) * time.Second) fmt.Println("Task", num, "done") // 判断任务是否超时 select { case <-done: // 任务在超时前完成,正常退出 return default: // 任务超时,向通道发送信号 close(done) } }(task) } wg.Wait() fmt.Println("All tasks done") }
In the above code, we create a channel done and close it with time.AfterFunc once the timeout elapses. Each task uses a select statement to wait either for its own work to finish (simulated here with time.After) or for done to be closed. Whichever case becomes ready first wins, so a task that exceeds the 3-second limit stops waiting and reports a timeout instead of running to completion.
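In modern Go code, the standard context package is commonly used for the same purpose: context.WithTimeout produces a Done channel that is closed automatically when the deadline passes. The following sketch mirrors the example above rather than reproducing the original code, and shows one way to combine WaitGroup with a context:

package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

func main() {
	var wg sync.WaitGroup
	tasks := []int{1, 2, 3, 4, 5, 6, 7}

	// The context is cancelled automatically after 3 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	for _, task := range tasks {
		wg.Add(1)
		go func(num int) {
			defer wg.Done()
			select {
			case <-time.After(time.Duration(num) * time.Second): // simulate work
				fmt.Println("Task", num, "done")
			case <-ctx.Done():
				fmt.Println("Task", num, "cancelled:", ctx.Err())
			}
		}(task)
	}

	wg.Wait()
	fmt.Println("All tasks finished or cancelled")
}

Using a context has the added benefit that the same cancellation signal can be passed down to functions and libraries that accept a context.Context, so deep call chains observe the timeout too.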
The sample code above shows that these advanced WaitGroup patterns are very practical in real-world concurrent programming. By mastering them, we can better control the execution of concurrent tasks and improve both the performance and the maintainability of our code. I hope the explanations and examples in this article give readers a solid understanding of WaitGroup that they can apply in their own projects.