Goroutines are a cornerstone of Go's design, providing a powerful mechanism for concurrent programming. As lightweight coroutines, they make it easy to run tasks concurrently. Launching a goroutine is straightforward: prefix a function call with the go keyword and it starts executing asynchronously; the main program continues without waiting for the goroutine to complete.
<code class="language-go">go func() { // Launch a goroutine using the 'go' keyword // ... code to be executed concurrently ... }()</code>
Concurrency: The ability to manage multiple tasks seemingly simultaneously on a single CPU. The CPU rapidly switches between tasks, creating the illusion of parallel execution. While microscopically sequential, macroscopically it appears concurrent.
Parallelism: True simultaneous execution of multiple tasks across multiple CPUs, eliminating CPU resource contention.
Process: A self-contained execution environment with its own resources (memory, files, etc.). Switching between processes is resource-intensive, requiring kernel-level intervention.
Thread: A lightweight unit of execution within a process, sharing the process's resources. Switching between threads incurs less overhead than switching between processes.
Coroutines maintain their own register context and stack. Switching between coroutines involves saving and restoring this state, allowing them to resume execution from where they left off. Unlike processes and threads, coroutine management is handled within the user program, not the operating system. Goroutines are a specific type of coroutine.
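As a rough illustration of how lightweight this is, here is a minimal sketch (not part of the original article) that parks 100,000 goroutines on a channel; channels are covered later in this article, and runtime.NumGoroutine() simply reports how many goroutines currently exist:
<code class="language-go">package main

import (
    "fmt"
    "runtime"
)

func main() {
    done := make(chan struct{})
    // Each goroutine starts with a small, growable stack (a few KB),
    // so creating 100,000 of them is cheap compared to OS threads.
    for i := 0; i < 100000; i++ {
        go func() {
            <-done // park here; a parked goroutine consumes no CPU
        }()
    }
    fmt.Println("live goroutines:", runtime.NumGoroutine()) // roughly 100,001
    close(done) // release all parked goroutines; main then exits
}</code>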
Go's efficient concurrency relies on the GPM scheduling model. Four key components are involved: M, P, G, and Sched (Sched is not depicted in the diagrams).
M (Machine): A kernel-level thread. Goroutines run on Ms.
G (Goroutine): A single goroutine. Each G has its own stack, instruction pointer, and other scheduling-related information (e.g., channels it's waiting on).
P (Processor): A logical processor that manages and executes goroutines. It maintains a run queue of ready goroutines.
Sched (Scheduler): The central scheduler, managing M and G queues and ensuring efficient resource allocation.
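Some of these quantities can be inspected from a running program. A minimal sketch using the runtime package (GOMAXPROCS(0) reads the current value without changing it):
<code class="language-go">package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("NumCPU:      ", runtime.NumCPU())       // logical CPUs available to the process
    fmt.Println("GOMAXPROCS:  ", runtime.GOMAXPROCS(0))  // current number of Ps
    fmt.Println("NumGoroutine:", runtime.NumGoroutine()) // live Gs at this moment
}</code>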
The diagram shows two OS threads (M), each attached to a processor (P) that is executing one goroutine (G).
GOMAXPROCS() controls the number of Ps, and therefore the degree of true parallelism.
The gray Gs are ready but not yet running; each P manages such a run queue.
Launching a goroutine adds it to a P's run queue.
If the goroutine running on M0 blocks (for example, in a system call), P detaches and moves to another thread, M1, which may be taken from a thread cache, so the goroutines in its run queue keep running.
If a P drains its run queue early, it steals work from other Ps so that all Ms stay busy.
GOMAXPROCS sets how many CPUs may execute goroutines simultaneously (since Go 1.5 it defaults to the number of logical CPUs, which is usually sufficient):
<code class="language-go">go func() { // Launch a goroutine using the 'go' keyword // ... code to be executed concurrently ... }()</code>
<code class="language-go">num := runtime.NumCPU() // Get the number of logical CPUs runtime.GOMAXPROCS(num) // Set the maximum number of concurrently running goroutines</code>
A basic example that launches several goroutines in a loop:
<code class="language-go">package main

import (
    "fmt"
    "runtime"
)

func cal(a, b int) {
    c := a + b
    fmt.Printf("%d + %d = %d\n", a, b, c)
}

func main() {
    runtime.GOMAXPROCS(runtime.NumCPU())
    for i := 0; i < 10; i++ {
        go cal(i, i+1)
    }
    // Note: main exits before the goroutines complete in this example.
    // See the synchronization section below.
}</code>
Unhandled panics in a goroutine terminate the entire program. Use recover() inside a deferred function to handle them:
<code class="language-go">package main

import (
    "fmt"
    "time"
)

func addele(a []int, i int) {
    defer func() {
        if r := recover(); r != nil {
            fmt.Println("Error in addele:", r)
        }
    }()
    a[i] = i // Out-of-bounds write panics when i >= len(a)
    fmt.Println(a)
}

func main() {
    a := make([]int, 4)
    for i := 0; i < 5; i++ {
        go addele(a, i)
    }
    time.Sleep(time.Second) // Crude wait; see the synchronization section below
}</code>
<code class="language-go">package main import ( "fmt" "sync" ) func cal(a, b int, wg *sync.WaitGroup) { defer wg.Done() c := a + b fmt.Printf("%d + %d = %d\n", a, b, c) } func main() { var wg sync.WaitGroup for i := 0; i < 10; i++ { wg.Add(1) go cal(i, i+1, &wg) } wg.Wait() }</code>
Channels facilitate communication and data sharing between goroutines. Global variables can also be used, but channels are generally preferred for better concurrency control.
<code class="language-go">package main import ( "fmt" ) func cal(a, b int, ch chan bool) { c := a + b fmt.Printf("%d + %d = %d\n", a, b, c) ch <- true // Signal completion } func main() { ch := make(chan bool, 10) // Buffered channel to avoid blocking for i := 0; i < 10; i++ { go cal(i, i+1, ch) } for i := 0; i < 10; i++ { <-ch // Wait for each goroutine to finish } }</code>