Goroutines Intermediate¶
Introduction¶
Goroutines are Go's lightweight concurrency primitive. They are not OS threads -- they're multiplexed onto a small pool of OS threads by the Go runtime scheduler. Starting a goroutine costs ~2 KB of stack space (which grows dynamically), so creating thousands or even millions of goroutines is practical. The go keyword is all it takes to launch one. However, goroutines that aren't properly managed become goroutine leaks -- one of the most common production bugs in Go. Understanding goroutine lifecycle, the scheduler model, closure pitfalls, and when to (and when not to) use goroutines is critical for interviews and production code.
Launching Goroutines¶
```go
func main() {
	// Launch a goroutine with the go keyword
	go sayHello("Alice")

	// Launch with an anonymous function
	go func() {
		fmt.Println("anonymous goroutine")
	}()

	// Without synchronization, main may exit before goroutines complete
	time.Sleep(100 * time.Millisecond)
}

func sayHello(name string) {
	fmt.Printf("Hello, %s!\n", name)
}
```
Don't Use time.Sleep for Synchronization
time.Sleep is a race condition waiting to happen. Use sync.WaitGroup, channels, or context for proper goroutine coordination.
Proper Goroutine Synchronization¶
Using sync.WaitGroup¶
```go
func main() {
	var wg sync.WaitGroup
	urls := []string{
		"https://api.example.com/users",
		"https://api.example.com/products",
		"https://api.example.com/orders",
	}

	for _, url := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				log.Printf("error fetching %s: %v", u, err)
				return
			}
			defer resp.Body.Close()
			log.Printf("%s: status %d", u, resp.StatusCode)
		}(url)
	}

	wg.Wait() // blocks until all goroutines call Done()
}
```
Using Channels for Synchronization¶
```go
func main() {
	done := make(chan struct{})

	go func() {
		defer close(done)
		// ... do work ...
		fmt.Println("work complete")
	}()

	<-done // blocks until the goroutine closes the channel
}
```
Goroutine Lifecycle¶
A goroutine runs until its function returns. There's no way to forcefully kill a goroutine from outside -- you must use cooperative cancellation via channels or context.
```go
func worker(ctx context.Context, id int) {
	for {
		select {
		case <-ctx.Done():
			fmt.Printf("worker %d: shutting down\n", id)
			return
		default:
			// do work
			fmt.Printf("worker %d: working\n", id)
			time.Sleep(500 * time.Millisecond)
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	go worker(ctx, 1)
	go worker(ctx, 2)

	<-ctx.Done()
	time.Sleep(100 * time.Millisecond) // brief grace period for cleanup output
	fmt.Println("all workers stopped")
}
```
The GMP Scheduling Model¶
Go uses an M:N scheduler that multiplexes goroutines (G) onto OS threads (M) using logical processors (P).
```
┌─────────────────────────────────────────────────┐
│                  Go Scheduler                   │
│                                                 │
│  G = Goroutine   M = OS Thread   P = Processor  │
│                                                 │
│  ┌───┐  ┌───┐  ┌───┐                            │
│  │ G │  │ G │  │ G │  ← Local run queue (per P) │
│  └─┬─┘  └─┬─┘  └─┬─┘                            │
│    └──────┼──────┘                              │
│           ▼                                     │
│        ┌─────┐                                  │
│        │  P  │  ← Logical processor             │
│        └──┬──┘    (GOMAXPROCS controls count)   │
│           ▼                                     │
│        ┌─────┐                                  │
│        │  M  │  ← OS thread                     │
│        └─────┘                                  │
│                                                 │
│  Global run queue (overflow / stolen from)      │
│  ┌───┐ ┌───┐ ┌───┐ ┌───┐                        │
│  │ G │ │ G │ │ G │ │ G │                        │
│  └───┘ └───┘ └───┘ └───┘                        │
└─────────────────────────────────────────────────┘
```
| Component | Role |
|---|---|
| G (Goroutine) | Lightweight unit of execution (~2 KB initial stack) |
| M (Machine/Thread) | OS thread that executes goroutines |
| P (Processor) | Logical processor; holds a local run queue. Count = GOMAXPROCS (default: number of CPU cores) |
Key scheduling behaviors:
- Work stealing: idle P steals goroutines from other P's local queues
- Preemption: since Go 1.14, goroutines are preempted asynchronously (no more infinite loops blocking the scheduler)
- Syscall handling: when a goroutine makes a blocking syscall, the M is parked and a new M picks up the P
- Goroutine yielding: goroutines yield at function calls, channel operations, and other scheduling points
Goroutine Leaks¶
A goroutine leak occurs when a goroutine is started but never terminates. These accumulate over time, consuming memory and potentially causing deadlocks.
```go
// LEAK: goroutine blocks forever on channel send -- nobody receives
func leakyFunction() {
	ch := make(chan int)
	go func() {
		result := expensiveComputation()
		ch <- result // blocks forever if nobody reads from ch
	}()
	// function returns without reading from ch -- goroutine leaks
}

// FIXED: use context for cancellation
func safeFunction(ctx context.Context) (int, error) {
	ch := make(chan int, 1) // buffered so goroutine can send even if we return early
	go func() {
		ch <- expensiveComputation()
	}()
	select {
	case result := <-ch:
		return result, nil
	case <-ctx.Done():
		return 0, ctx.Err()
	}
}
```
Common Leak Patterns¶
| Pattern | Cause | Fix |
|---|---|---|
| Blocked channel send | No receiver | Use buffered channel or select with ctx.Done() |
| Blocked channel receive | No sender, channel never closed | Always close channels from the sender side |
| Infinite loop | No exit condition | Check ctx.Done() or use a done channel |
| Forgotten goroutine | Launched but never waited on | Use sync.WaitGroup or errgroup.Group |
| HTTP handler leak | Long-lived goroutine per request | Tie goroutine to request context |
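The "blocked channel receive" row deserves a concrete illustration: closing the channel from the sender side lets a receiving range loop terminate. A small self-contained sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan int)
	done := make(chan struct{})

	// Receiver: the range loop exits only when ch is closed.
	go func() {
		defer close(done)
		for v := range ch {
			fmt.Println("got", v)
		}
	}()

	for i := 1; i <= 3; i++ {
		ch <- i
	}
	close(ch) // without this close, the receiver goroutine would leak
	<-done
}
```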
Detecting Leaks¶
```go
// In tests: check goroutine count before and after
func TestNoLeak(t *testing.T) {
	before := runtime.NumGoroutine()
	// ... run code under test ...
	time.Sleep(100 * time.Millisecond) // allow goroutines to exit
	after := runtime.NumGoroutine()
	if after > before {
		t.Errorf("goroutine leak: before=%d after=%d", before, after)
	}
}
```

```go
// In production: expose goroutine count as a metric
http.HandleFunc("/debug/goroutines", func(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "goroutines: %d\n", runtime.NumGoroutine())
})
```
Use goleak
Uber's goleak package automatically detects goroutine leaks in tests:
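A minimal sketch of how goleak is typically wired in (the package path and VerifyTestMain / VerifyNone functions are goleak's documented API; the module must be added to go.mod):

```go
package mypkg_test

import (
	"testing"

	"go.uber.org/goleak"
)

// Option 1: verify that no goroutines are leaked by any test in the package.
func TestMain(m *testing.M) {
	goleak.VerifyTestMain(m)
}

// Option 2: verify a single test.
func TestSomething(t *testing.T) {
	defer goleak.VerifyNone(t)
	// ... run code under test ...
}
```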
Goroutines and Closures¶
Goroutines frequently use closures. The biggest trap is capturing loop variables.
Pre-Go 1.22: The Loop Variable Gotcha¶
```go
// BUG (before Go 1.22): all goroutines see the same final value of i
for i := 0; i < 5; i++ {
	go func() {
		fmt.Println(i) // prints "5" five times (or whatever i is when goroutine runs)
	}()
}

// FIX 1: pass as argument (works in all Go versions)
for i := 0; i < 5; i++ {
	go func(n int) {
		fmt.Println(n) // prints 0, 1, 2, 3, 4 (in some order)
	}(i)
}

// FIX 2: shadow the variable (works in all Go versions)
for i := 0; i < 5; i++ {
	i := i // new variable per iteration
	go func() {
		fmt.Println(i)
	}()
}
```
Go 1.22+ Fix
Starting with Go 1.22, loop variables are per-iteration by default. The closure gotcha no longer applies for for loops in Go 1.22+ modules. However, you'll still encounter pre-1.22 code in interviews and legacy codebases.
Cost of Goroutines¶
| Resource | Cost |
|---|---|
| Initial stack | ~2 KB (grows dynamically up to 1 GB default max) |
| Scheduling overhead | ~300 ns to create and schedule |
| Context switch | ~100-200 ns (vs ~1-10 μs for OS thread context switch) |
| Memory per goroutine | ~4-8 KB in practice (stack + runtime metadata) |
```go
// You can easily run hundreds of thousands of goroutines
func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100_000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(time.Second)
		}()
	}
	fmt.Printf("goroutines: %d\n", runtime.NumGoroutine()) // ~100001
	wg.Wait()
}
```
Goroutine-Per-Request Pattern¶
The standard Go HTTP server spawns a goroutine per incoming request. This is idiomatic and scales well.
```go
func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/users", handleUsers)

	srv := &http.Server{
		Addr:    ":8080",
		Handler: mux,
	}
	// Each incoming request is handled in its own goroutine automatically
	log.Fatal(srv.ListenAndServe())
}

func handleUsers(w http.ResponseWriter, r *http.Request) {
	ctx := r.Context()

	// Fan out to fetch data concurrently
	var wg sync.WaitGroup
	var users []User
	var orders []Order
	var mu sync.Mutex

	wg.Add(2)
	go func() {
		defer wg.Done()
		u, _ := fetchUsers(ctx)
		mu.Lock()
		users = u
		mu.Unlock()
	}()
	go func() {
		defer wg.Done()
		o, _ := fetchOrders(ctx)
		mu.Lock()
		orders = o
		mu.Unlock()
	}()
	wg.Wait()

	json.NewEncoder(w).Encode(map[string]any{
		"users":  users,
		"orders": orders,
	})
}
```
When to Use Goroutines (and When Not To)¶
| Use Goroutines | Avoid Goroutines |
|---|---|
| I/O-bound work (HTTP calls, DB queries, file I/O) | Pure CPU-bound work where parallelism isn't needed |
| Handling concurrent requests | Simple sequential logic |
| Fan-out / fan-in patterns | When the overhead of synchronization exceeds the benefit |
| Background tasks (monitoring, heartbeats) | When you can't guarantee the goroutine will terminate |
| Pipeline processing | When shared mutable state makes reasoning difficult |
Quick Reference¶
| Concept | Syntax / API | Notes |
|---|---|---|
| Launch goroutine | go f() | Starts f concurrently |
| Anonymous goroutine | go func() { ... }() | Common for inline work |
| Wait for goroutines | sync.WaitGroup | Add, Done, Wait |
| Cancellation | context.WithCancel / WithTimeout | Cooperative -- goroutines must check ctx.Done() |
| Count goroutines | runtime.NumGoroutine() | Useful for leak detection |
| Set parallelism | runtime.GOMAXPROCS(n) | Default: number of CPU cores |
| Goroutine stack | ~2 KB initial, grows dynamically | Up to 1 GB default max |
Best Practices¶
- Always ensure goroutines can terminate -- use context, done channels, or sync.WaitGroup
- Never launch a goroutine without knowing how it stops -- document the exit condition
- Use errgroup.Group from golang.org/x/sync for goroutines that return errors
- Pass context.Context to goroutines for cancellation and deadline propagation
- Pass loop variables as function arguments (pre-1.22) to avoid closure capture bugs
- Monitor runtime.NumGoroutine() in production -- a steadily increasing count indicates leaks
- Don't communicate by sharing memory; share memory by communicating -- prefer channels over mutexes when practical
- Limit concurrency with semaphore patterns (buffered channels) to avoid overwhelming resources
Common Pitfalls¶
Goroutine Leak
Always ensure someone reads from the channel, or use a buffered channel.

Loop Variable Capture (Pre-Go 1.22)

```go
for _, item := range items {
	go func() {
		process(item) // all goroutines process the LAST item
	}()
}
```

Fix: pass item as a parameter: go func(it Item) { process(it) }(item).

Missing WaitGroup
Use sync.WaitGroup or a channel to wait for goroutines to complete.

WaitGroup Add Inside Goroutine

```go
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
	go func() {
		wg.Add(1) // RACE: Add might run after Wait
		defer wg.Done()
		// ...
	}()
}
wg.Wait()
```

Fix: call wg.Add(1) before launching the goroutine.
Performance Considerations¶
| Scenario | Recommendation |
|---|---|
| Many short-lived goroutines | Fine -- goroutines are cheap. Use worker pools only if profiling shows scheduler overhead |
| CPU-bound parallelism | Limit goroutines to GOMAXPROCS to avoid excessive context switching |
| I/O-bound concurrency | Goroutines shine here -- thousands of goroutines waiting on I/O is fine |
| Goroutine creation overhead | ~300 ns per goroutine creation. If creating millions per second, consider a worker pool |
| Stack growth | Initial 2 KB stack grows via copy (causes brief pause). Pre-allocate large stacks only if profiling shows stack growth as bottleneck |
| Goroutine-local data | Go has no goroutine-local storage by design. Use context.Context to pass request-scoped data |
Interview Tips¶
Interview Tip
When asked "What are goroutines?", clarify: they are not threads. They're lightweight, user-space coroutines managed by the Go runtime scheduler. The Go scheduler multiplexes thousands of goroutines onto a small pool of OS threads using the GMP model (Goroutines, Machine threads, Processors).
Interview Tip
The goroutine leak question is extremely common. Explain: a goroutine that blocks forever (on a channel, mutex, or I/O) without a way to be cancelled is a leak. Prevention: always use context.Context for cancellation, and in tests, use runtime.NumGoroutine() or goleak to detect leaks.
Interview Tip
Know the loop variable gotcha. Before Go 1.22, the loop variable was shared across iterations, so closures in goroutines captured the same variable. The fix: pass the variable as a function argument or shadow it. Go 1.22 changed loop variables to be per-iteration.
Interview Tip
If asked about GOMAXPROCS, explain: it controls the number of OS threads that can execute goroutines simultaneously (the P count in GMP). Default is the number of CPU cores. Increasing it beyond the core count rarely helps and can increase context switching overhead.
Key Takeaways¶
- Goroutines are lightweight (~2 KB stack) and cheap to create -- use them liberally for concurrent work
- The GMP scheduler multiplexes goroutines onto OS threads efficiently
- Always ensure goroutines terminate -- use context, channels, or WaitGroup
- Goroutine leaks are the #1 concurrency bug in Go -- blocked goroutines that can't exit consume memory indefinitely
- The loop variable capture gotcha (pre-Go 1.22) is a top interview question
- Call wg.Add() before go func(), never inside the goroutine
- The goroutine-per-request pattern is idiomatic for HTTP servers
- Go has no way to forcefully kill a goroutine -- cooperative cancellation via context is the only approach
- Monitor runtime.NumGoroutine() in production to detect leaks