Sync Package Intermediate

Introduction

Go's sync package provides low-level synchronization primitives for coordinating goroutines that share memory. While channels are Go's preferred communication mechanism, mutexes and friends are essential when you need to protect shared state, wait for goroutine completion, perform one-time initialization, or pool expensive objects.

The golden rule: never copy a sync primitive after first use. If a struct contains a mutex, always pass it by pointer or use pointer receivers.

Syntax & Usage

sync.Mutex — Mutual Exclusion Lock

A Mutex ensures only one goroutine accesses a critical section at a time.

type SafeCounter struct {
    mu     sync.Mutex
    counts map[string]int // initialize with make(map[string]int) before use
}

func (c *SafeCounter) Increment(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.counts[key]++
}

func (c *SafeCounter) Get(key string) int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.counts[key]
}

Place the mutex directly above the fields it protects — this is a widely followed convention.

sync.RWMutex — Reader/Writer Lock

Allows multiple concurrent readers but only one writer. Use when reads far outnumber writes.

type UserCache struct {
    mu    sync.RWMutex
    users map[int]*User
}

func (c *UserCache) Get(id int) (*User, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    u, ok := c.users[id]
    return u, ok
}

func (c *UserCache) Set(id int, user *User) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.users[id] = user
}

Method    Who blocks   Who proceeds
RLock()   Writers      Other readers
Lock()    Everyone     Nobody else

sync.WaitGroup — Waiting for Goroutines

Waits for a collection of goroutines to finish.

func fetchAll(urls []string) []Result {
    results := make([]Result, len(urls))
    var wg sync.WaitGroup

    for i, url := range urls {
        wg.Add(1)
        go func(i int, url string) {
            defer wg.Done()
            results[i] = fetch(url)
        }(i, url)
    }

    wg.Wait() // blocks until counter reaches 0
    return results
}

The three methods:

  • Add(n) — increments counter by n (call before launching goroutine)
  • Done() — decrements counter by 1 (equivalent to Add(-1))
  • Wait() — blocks until counter is 0

sync.Once — One-Time Initialization

Guarantees a function runs exactly once, regardless of how many goroutines call it. Perfect for lazy singleton initialization.

type DBPool struct {
    once sync.Once
    pool *sql.DB
}

func (d *DBPool) GetConnection() *sql.DB {
    d.once.Do(func() {
        var err error
        d.pool, err = sql.Open("postgres", connStr)
        if err != nil {
            log.Fatal(err)
        }
    })
    return d.pool
}

Once.Do is goroutine-safe

If multiple goroutines call Do simultaneously, only one executes the function. All others block until it completes, then return immediately. The function runs exactly once even under contention.
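A quick demonstration of that guarantee (runOnceDemo is an illustrative helper, separate from the DBPool example):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runOnceDemo launches 10 goroutines that all call once.Do and
// returns how many times the function body actually ran.
func runOnceDemo() int32 {
	var once sync.Once
	var calls atomic.Int32
	var wg sync.WaitGroup

	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			once.Do(func() { calls.Add(1) })
		}()
	}
	wg.Wait()
	return calls.Load()
}

func main() {
	fmt.Println(runOnceDemo()) // always 1
}
```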

sync.Pool — Object Reuse

Reduces GC pressure by recycling temporary objects. The pool may be cleared at any GC cycle — never rely on objects persisting.

var bufPool = sync.Pool{
    New: func() any {
        return new(bytes.Buffer)
    },
}

func processRequest(data []byte) string {
    buf := bufPool.Get().(*bytes.Buffer)
    defer func() {
        buf.Reset()
        bufPool.Put(buf)
    }()

    buf.Write(data)
    // ... process buffer ...
    return buf.String()
}

Real-world usage: encoding/json, fmt, and net/http all use sync.Pool internally to reuse buffers.

sync.Map — Concurrent Map

A map safe for concurrent use without external locking. Optimized for two specific patterns:

  1. Write-once, read-many (e.g., caches that grow but rarely update)
  2. Disjoint key sets per goroutine (each goroutine reads/writes different keys)

var cache sync.Map

// Store a value
cache.Store("user:42", &User{Name: "Alice"})

// Load a value
val, ok := cache.Load("user:42")
if ok {
    user := val.(*User) // type assertion required — no generics
    fmt.Println(user.Name)
}

// Load or store atomically
actual, loaded := cache.LoadOrStore("user:42", &User{Name: "Bob"})
// loaded=true means key existed, actual is the existing value

// Delete
cache.Delete("user:42")

// Iterate
cache.Range(func(key, value any) bool {
    fmt.Printf("%s: %v\n", key, value)
    return true // return false to stop iteration
})

sync.Map vs Regular Map + Mutex

Criteria                  sync.Map                        map + sync.RWMutex
Type safety               No (uses any)                   Yes (typed keys/values)
Read-heavy, stable keys   Faster                          Slower
Frequent writes           Slower                          Faster
Key iteration             Range() only                    Standard for range
Memory overhead           Higher (dual maps internally)   Lower
General purpose           No                              Yes — default choice

When to use sync.Map

Use sync.Map only when profiling shows lock contention on a regular map, and your access pattern matches its optimized cases. For most use cases, a regular map with sync.RWMutex is simpler, type-safe, and fast enough.
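For comparison, the default choice can be made type-safe with generics (Go 1.18+). The Cache type here is illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a minimal type-safe concurrent map: no type assertions needed.
type Cache[K comparable, V any] struct {
	mu sync.RWMutex
	m  map[K]V
}

func NewCache[K comparable, V any]() *Cache[K, V] {
	return &Cache[K, V]{m: make(map[K]V)}
}

func (c *Cache[K, V]) Get(k K) (V, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[k]
	return v, ok
}

func (c *Cache[K, V]) Set(k K, v V) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[k] = v
}

func main() {
	c := NewCache[string, int]()
	c.Set("hits", 42)
	v, ok := c.Get("hits") // v is an int, not an any
	fmt.Println(v, ok)
}
```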

sync.Cond — Condition Variable

Allows goroutines to wait for or announce a condition change. Rarely used in Go because channels usually serve this purpose better.

type Queue struct {
    mu    sync.Mutex
    cond  *sync.Cond
    items []int
}

func NewQueue() *Queue {
    q := &Queue{}
    q.cond = sync.NewCond(&q.mu)
    return q
}

func (q *Queue) Enqueue(item int) {
    q.mu.Lock()
    q.items = append(q.items, item)
    q.mu.Unlock()
    q.cond.Signal() // wake one waiting goroutine
}

func (q *Queue) Dequeue() int {
    q.mu.Lock()
    defer q.mu.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait() // releases lock, suspends, reacquires lock on wake
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}

Method        Purpose
Wait()        Releases lock, blocks until signaled, reacquires lock
Signal()      Wakes one waiting goroutine
Broadcast()   Wakes all waiting goroutines
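Broadcast() matters when the condition change affects every waiter at once. A minimal sketch (the Gate type and its methods are illustrative, not part of the sync package):

```go
package main

import (
	"fmt"
	"sync"
)

// Gate blocks callers until it is opened; Broadcast wakes every waiter.
type Gate struct {
	mu   sync.Mutex
	cond *sync.Cond
	open bool
}

func NewGate() *Gate {
	g := &Gate{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

func (g *Gate) Wait() {
	g.mu.Lock()
	defer g.mu.Unlock()
	for !g.open {
		g.cond.Wait()
	}
}

func (g *Gate) Open() {
	g.mu.Lock()
	g.open = true
	g.mu.Unlock()
	g.cond.Broadcast() // Signal() here would release only one waiter
}

func main() {
	g := NewGate()
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			g.Wait()
			fmt.Println("worker", i, "released")
		}(i)
	}
	g.Open()
	wg.Wait()
}
```

Goroutines that call Wait() after Open() see open == true and return immediately, so the release is safe regardless of arrival order.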

Quick Reference

Primitive Purpose Key Methods
sync.Mutex Exclusive access Lock(), Unlock()
sync.RWMutex Multiple readers, single writer RLock(), RUnlock(), Lock(), Unlock()
sync.WaitGroup Wait for goroutines to finish Add(n), Done(), Wait()
sync.Once Run function exactly once Do(func())
sync.Pool Reuse temporary objects Get(), Put(x)
sync.Map Concurrent map (specific patterns) Store(), Load(), Delete(), Range()
sync.Cond Wait for / signal conditions Wait(), Signal(), Broadcast()

Best Practices

  1. Always use defer for Unlock — prevents forgetting to unlock on early returns or panics.
  2. Keep critical sections small — lock, do the minimum work, unlock. Never do I/O or network calls while holding a lock.
  3. Place mutex above the fields it protects — add a comment if it's not obvious which fields are guarded.
  4. Use pointer receivers on any struct containing sync primitives — copying a mutex is a bug.
  5. Call wg.Add() before go func() — never inside the goroutine, or Wait() might return too early.
  6. Reset pooled objects before Put — stale data from a previous use can cause subtle bugs.
  7. Default to map + RWMutex — reach for sync.Map only when profiling justifies it.
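Point 2 above in practice: hold the lock only long enough to copy what you need, then do the slow work (I/O, network) on the copy. A sketch with an illustrative Store type:

```go
package main

import (
	"fmt"
	"sync"
)

type Store struct {
	mu   sync.Mutex
	data map[string]string
}

// Snapshot copies the map under the lock so callers can iterate,
// log, or write to disk without holding the mutex.
func (s *Store) Snapshot() map[string]string {
	s.mu.Lock()
	defer s.mu.Unlock()
	cp := make(map[string]string, len(s.data))
	for k, v := range s.data {
		cp[k] = v
	}
	return cp
}

func main() {
	s := &Store{data: map[string]string{"a": "1"}}
	snap := s.Snapshot() // lock held only for the copy
	fmt.Println(snap)    // slow work happens here, lock-free
}
```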

Common Pitfalls

Copying sync primitives

type Counter struct {
    mu sync.Mutex
    n  int
}

// BUG: c is a copy — its mutex is independent of the original
func (c Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.n
}

Always use a pointer receiver: func (c *Counter) Value() int. The go vet tool catches this — run it regularly.

Deadlock from double-locking a Mutex

func (c *SafeCounter) IncrementAndLog(key string) {
    c.mu.Lock()
    c.counts[key]++
    c.log(key) // if log() also calls Lock() → deadlock!
    c.mu.Unlock()
}

sync.Mutex is not reentrant — the same goroutine cannot lock it twice. Factor out the inner locking or restructure to use a single lock acquisition.
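One common restructuring (a sketch; logLocked is an illustrative name): keep an unexported helper that assumes the caller already holds the lock, and call it from the methods that do the locking.

```go
package main

import (
	"fmt"
	"sync"
)

type SafeCounter struct {
	mu     sync.Mutex
	counts map[string]int
}

// logLocked assumes the caller already holds c.mu.
func (c *SafeCounter) logLocked(key string) {
	fmt.Printf("%s = %d\n", key, c.counts[key])
}

func (c *SafeCounter) IncrementAndLog(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[key]++
	c.logLocked(key) // no second Lock(), no deadlock
}

func main() {
	c := &SafeCounter{counts: make(map[string]int)}
	c.IncrementAndLog("requests")
}
```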

WaitGroup Add inside the goroutine

// BUG: Add() might execute after Wait()
for _, url := range urls {
    go func(url string) {
        wg.Add(1)  // WRONG — race with wg.Wait()
        defer wg.Done()
        fetch(url)
    }(url)
}
wg.Wait()

Call wg.Add(1) in the launching goroutine, before the go statement.
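The corrected loop, as a self-contained sketch (fetch here is a stand-in for a real network call):

```go
package main

import (
	"fmt"
	"sync"
)

// fetch is a stand-in for a real HTTP call.
func fetch(url string) string { return "fetched " + url }

func fetchAll(urls []string) []string {
	results := make([]string, len(urls))
	var wg sync.WaitGroup
	for i, url := range urls {
		wg.Add(1) // before the go statement, so Wait() sees the count
		go func(i int, url string) {
			defer wg.Done()
			results[i] = fetch(url)
		}(i, url)
	}
	wg.Wait()
	return results
}

func main() {
	fmt.Println(fetchAll([]string{"a", "b"}))
}
```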

Forgetting to Reset pooled objects

buf := bufPool.Get().(*bytes.Buffer)
buf.WriteString("secret data")
bufPool.Put(buf) // BUG: next Get() returns buffer with "secret data" still in it

// FIX: always reset before returning to pool
buf.Reset()
bufPool.Put(buf)

Using sync.Map as a general-purpose concurrent map

sync.Map has no type safety (everything is any), higher memory overhead, and is slower than a plain map with mutex under write-heavy workloads. Only use it when you match its optimized access patterns.

Performance Considerations

  • Mutex vs RWMutex: RWMutex has slightly higher overhead per operation. Only use it when you have a measurably high read-to-write ratio (typically 10:1 or more). Profile first.
  • Lock granularity: Fine-grained locks (one per field) increase parallelism but add complexity. Coarse locks (one per struct) are simpler but can become bottlenecks. Start coarse, split when profiling shows contention.
  • sync.Pool and GC: Pool contents can be cleared at any GC cycle. This is by design — don't use Pool as a cache. It reduces allocation pressure in high-throughput paths (HTTP handlers, encoders).
  • Atomic operations: For simple counters, sync/atomic is faster than a mutex. Use atomic.Int64 (Go 1.19+) instead of Mutex + int64 for counters and flags.
  • Channel vs Mutex: Channels have higher overhead per operation than mutexes. Use mutexes when protecting shared state; use channels when transferring ownership or signaling events.
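The atomic-counter point, sketched (assumes Go 1.19+ for atomic.Int64; countHits is an illustrative helper):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countHits increments an atomic counter from n goroutines; no mutex needed.
func countHits(n int) int64 {
	var hits atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			hits.Add(1)
		}()
	}
	wg.Wait()
	return hits.Load()
}

func main() {
	fmt.Println(countHits(100)) // 100
}
```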

Interview Tips

Interview Tip

"When would you use a Mutex vs a channel?" Mutexes protect shared state — multiple goroutines access the same data. Channels transfer data ownership — one goroutine produces, another consumes. The Go proverb is: "Don't communicate by sharing memory; share memory by communicating." But when the natural model is shared state (e.g., a cache), use a mutex.

Interview Tip

"What happens if you copy a Mutex?" The copy gets its own independent lock state. Two goroutines locking the original and the copy don't synchronize at all. This is a silent data race — go vet detects it. Always pass structs containing sync primitives by pointer.

Interview Tip

"How does sync.Once work internally?" It uses an atomic flag plus a mutex. The fast path checks the atomic flag (no locking). If the function hasn't run, it acquires the mutex, double-checks the flag, runs the function, and sets the flag. Subsequent calls see the flag and return immediately.
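That double-checked pattern can be sketched as follows (an illustration of the idea, not the actual standard-library source, which differs in details):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// onceSketch mimics sync.Once's fast-path/slow-path split.
type onceSketch struct {
	done atomic.Uint32
	mu   sync.Mutex
}

func (o *onceSketch) Do(f func()) {
	if o.done.Load() == 1 { // fast path: atomic read, no lock
		return
	}
	o.mu.Lock()
	defer o.mu.Unlock()
	if o.done.Load() == 0 { // double-check under the lock
		defer o.done.Store(1) // mark done even if f panics
		f()
	}
}

func main() {
	var o onceSketch
	n := 0
	for i := 0; i < 3; i++ {
		o.Do(func() { n++ })
	}
	fmt.Println(n) // 1
}
```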

Interview Tip

"Why is sync.Map not the default concurrent map?" It trades type safety and general-purpose performance for optimization in two narrow patterns: write-once/read-many and disjoint key access. For most workloads, a regular map with RWMutex is faster, simpler, and type-safe.

Key Takeaways

  • sync.Mutex provides exclusive access; sync.RWMutex allows concurrent reads.
  • sync.WaitGroup waits for goroutines — call Add() before launching, Done() inside, Wait() to block.
  • sync.Once guarantees exactly-one execution — ideal for lazy singleton initialization.
  • sync.Pool recycles temporary objects to reduce GC pressure — always reset before returning to pool.
  • sync.Map is only for specific access patterns — default to map + RWMutex.
  • Never copy a sync primitive after first use — use pointer receivers on structs that embed them.
  • Go's Mutex is not reentrant — double-locking from the same goroutine deadlocks.