Advanced Channel Patterns and Idioms¶
Introduction¶
Channels are Go's primary mechanism for communication between goroutines, but their true power lies in composition. Individual channel patterns are simple; combining them lets you build complex, cancellable, concurrent data-flow systems without shared mutable state.
These patterns originate from CSP (Communicating Sequential Processes) theory and have become the idiomatic vocabulary of Go concurrency. Mastering them is essential for writing production concurrent code and acing senior-level interviews.
Syntax & Usage¶
Done Channel Pattern¶
The foundational pattern for signaling goroutine termination. A done channel (or context.Done()) tells downstream goroutines to stop.
func doWork(done <-chan struct{}, input <-chan int) <-chan int {
output := make(chan int)
go func() {
defer close(output)
for {
select {
case <-done:
return
case v, ok := <-input:
if !ok {
return
}
// Process and send, but also check done
select {
case output <- v * 2:
case <-done:
return
}
}
}
}()
return output
}
func main() {
done := make(chan struct{})
defer close(done) // Signal all goroutines to stop on exit
input := make(chan int)
go func() {
defer close(input)
for i := 0; i < 100; i++ {
select {
case input <- i:
case <-done:
return
}
}
}()
output := doWork(done, input)
for v := range output {
fmt.Println(v)
if v > 10 {
return // done channel is closed by defer, stopping doWork
}
}
}
Context vs Done Channel
Modern Go code uses context.Context instead of bare done channels. Context provides the same cancellation signal via ctx.Done() plus deadline/timeout and value propagation. Use context in new code; understand the done channel pattern to read older codebases.
Or-Channel (First to Complete)¶
Combines multiple done channels — returns a channel that closes when any input channel closes. Useful for combining cancellation signals.
func or(channels ...<-chan struct{}) <-chan struct{} {
switch len(channels) {
case 0:
return nil // receiving from a nil channel blocks forever: there is nothing to wait on
case 1:
return channels[0]
}
orDone := make(chan struct{})
go func() {
defer close(orDone)
switch len(channels) {
case 2:
select {
case <-channels[0]:
case <-channels[1]:
}
default:
select {
case <-channels[0]:
case <-channels[1]:
case <-channels[2]:
case <-or(append(channels[3:], orDone)...):
}
}
}()
return orDone
}
// Usage: cancel when ANY signal fires
func main() {
sig := func(after time.Duration) <-chan struct{} {
ch := make(chan struct{})
go func() {
defer close(ch)
time.Sleep(after)
}()
return ch
}
start := time.Now()
// Whichever completes first wins
<-or(
sig(2*time.Hour),
sig(5*time.Minute),
sig(1*time.Second), // This one fires first
sig(1*time.Hour),
)
fmt.Printf("Done after %v\n", time.Since(start)) // ~1 second
}
Or-Done Channel (Cancellable Stream)¶
Wraps a channel read so it respects a done/context signal. Prevents goroutine leaks when you want to read from a channel that may outlive your interest in it.
func orDone[T any](ctx context.Context, ch <-chan T) <-chan T {
out := make(chan T)
go func() {
defer close(out)
for {
select {
case <-ctx.Done():
return
case v, ok := <-ch:
if !ok {
return
}
select {
case out <- v:
case <-ctx.Done():
return
}
}
}
}()
return out
}
// Usage: safely iterate a channel with cancellation
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
// Some long-lived channel from an external source
dataStream := generateInfiniteData()
// orDone ensures we stop reading when context expires
for v := range orDone(ctx, dataStream) {
fmt.Println(v)
}
// No goroutine leak — orDone's goroutine exits when ctx is cancelled
}
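generateInfiniteData isn't defined in this chapter; a minimal sketch (the name and shape are assumptions) that counts upward forever:

```go
package main

import "fmt"

// generateInfiniteData emits an unbounded stream of ints.
// With no cancellation input of its own, its goroutine leaks as soon as
// the consumer stops reading — which is exactly why it's wrapped in orDone.
func generateInfiniteData() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i
		}
	}()
	return ch
}

func main() {
	data := generateInfiniteData()
	for i := 0; i < 3; i++ {
		fmt.Println(<-data) // 0, 1, 2
	}
}
```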
Tee Channel (Split a Stream)¶
Sends every value from one channel to two output channels. Like the Unix tee command.
func tee[T any](ctx context.Context, in <-chan T) (<-chan T, <-chan T) {
out1 := make(chan T)
out2 := make(chan T)
go func() {
defer close(out1)
defer close(out2)
for val := range orDone(ctx, in) {
// Use local copies for select — we need to send to both
o1, o2 := out1, out2
for i := 0; i < 2; i++ {
select {
case o1 <- val:
o1 = nil // Disable this case after sending
case o2 <- val:
o2 = nil
case <-ctx.Done():
return
}
}
}
}()
return out1, out2
}
// Usage: split a data stream for logging and processing
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
data := generate(ctx, 1, 2, 3, 4, 5)
logStream, processStream := tee(ctx, data)
go func() {
for v := range logStream {
log.Printf("Audit: received %v", v)
}
}()
for v := range processStream {
fmt.Printf("Processing: %v\n", v)
}
}
Fan-In (Merge Multiple Channels)¶
Combines multiple channels into a single channel. Values from different inputs are interleaved with no ordering guarantee across inputs.
func fanIn[T any](ctx context.Context, channels ...<-chan T) <-chan T {
merged := make(chan T)
var wg sync.WaitGroup
wg.Add(len(channels))
for _, ch := range channels {
go func(c <-chan T) {
defer wg.Done()
for v := range orDone(ctx, c) {
select {
case merged <- v:
case <-ctx.Done():
return
}
}
}(ch)
}
go func() {
wg.Wait()
close(merged)
}()
return merged
}
// Usage
func main() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ch1 := generate(ctx, "a", "b", "c")
ch2 := generate(ctx, "1", "2", "3")
ch3 := generate(ctx, "x", "y", "z")
for v := range fanIn(ctx, ch1, ch2, ch3) {
fmt.Println(v) // Interleaved output from all three
}
}
Bridge Channel (Channel of Channels)¶
Flattens a <-chan <-chan T into a single <-chan T. Useful when pipeline stages produce channels of results.
func bridge[T any](ctx context.Context, chanStream <-chan <-chan T) <-chan T {
out := make(chan T)
go func() {
defer close(out)
for {
var stream <-chan T
select {
case maybeStream, ok := <-chanStream:
if !ok {
return
}
stream = maybeStream
case <-ctx.Done():
return
}
for val := range orDone(ctx, stream) {
select {
case out <- val:
case <-ctx.Done():
return
}
}
}
}()
return out
}
// Usage: flatten paginated results
func fetchPages(ctx context.Context) <-chan <-chan Record {
chanStream := make(chan (<-chan Record))
go func() {
defer close(chanStream)
for page := 1; ; page++ {
records, hasMore := fetchPage(page)
ch := make(chan Record)
go func() {
defer close(ch)
for _, r := range records {
select {
case ch <- r:
case <-ctx.Done():
return
}
}
}()
select {
case chanStream <- ch:
case <-ctx.Done():
return
}
if !hasMore {
return
}
}
}()
return chanStream
}
// Consumer sees a flat stream of records
for record := range bridge(ctx, fetchPages(ctx)) {
process(record)
}
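fetchPage and Record aren't defined above; a hypothetical stub, useful for running the bridge example locally (a real implementation would call a paginated HTTP API):

```go
package main

import "fmt"

// Record is a hypothetical stand-in for one row of API output.
type Record struct {
	Page, ID int
}

// fetchPage returns the records for one page and whether more pages exist.
func fetchPage(page int) ([]Record, bool) {
	const totalPages = 3
	records := []Record{{Page: page, ID: 1}, {Page: page, ID: 2}}
	return records, page < totalPages
}

func main() {
	for page, more := 1, true; more; page++ {
		var records []Record
		records, more = fetchPage(page)
		fmt.Println(records)
	}
}
```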
Queuing with Buffered Channels¶
A buffered channel doubles as a bounded FIFO queue; a select with a default case makes enqueue and dequeue non-blocking.
// Buffered channel as a fixed-size queue
func newQueue[T any](size int) (enqueue func(T) bool, dequeue func() (T, bool)) {
ch := make(chan T, size)
enqueue = func(v T) bool {
select {
case ch <- v:
return true
default:
return false // Queue full
}
}
dequeue = func() (T, bool) {
select {
case v := <-ch:
return v, true
default:
var zero T
return zero, false // Queue empty
}
}
return
}
// Usage
enq, deq := newQueue[string](100)
enq("task-1") // true
enq("task-2") // true
val, ok := deq() // "task-1", true
Ring Buffer with Channels¶
A channel-based ring buffer that drops old items when full — useful for metrics, recent events, etc.
type RingBuffer[T any] struct {
in chan T
out chan T
}
func NewRingBuffer[T any](size int) *RingBuffer[T] {
rb := &RingBuffer[T]{
in: make(chan T),
out: make(chan T),
}
go func() {
defer close(rb.out)
var buf []T
for {
if len(buf) == 0 {
val, ok := <-rb.in
if !ok {
return
}
buf = append(buf, val)
continue
}
select {
case val, ok := <-rb.in:
if !ok {
// Drain remaining buffer
for _, v := range buf {
rb.out <- v
}
return
}
if len(buf) >= size {
buf = buf[1:] // Drop oldest
}
buf = append(buf, val)
case rb.out <- buf[0]:
buf = buf[1:]
}
}
}()
return rb
}
func (rb *RingBuffer[T]) In() chan<- T { return rb.in }
func (rb *RingBuffer[T]) Out() <-chan T { return rb.out }
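A usage sketch (the type is repeated so the example runs standalone): writing five values into a size-3 buffer before anyone reads leaves only the newest three.

```go
package main

import "fmt"

type RingBuffer[T any] struct {
	in  chan T
	out chan T
}

func NewRingBuffer[T any](size int) *RingBuffer[T] {
	rb := &RingBuffer[T]{in: make(chan T), out: make(chan T)}
	go func() {
		defer close(rb.out)
		var buf []T
		for {
			if len(buf) == 0 {
				val, ok := <-rb.in
				if !ok {
					return
				}
				buf = append(buf, val)
				continue
			}
			select {
			case val, ok := <-rb.in:
				if !ok {
					for _, v := range buf {
						rb.out <- v // drain remaining buffer
					}
					return
				}
				if len(buf) >= size {
					buf = buf[1:] // drop oldest
				}
				buf = append(buf, val)
			case rb.out <- buf[0]:
				buf = buf[1:]
			}
		}
	}()
	return rb
}

func (rb *RingBuffer[T]) In() chan<- T  { return rb.in }
func (rb *RingBuffer[T]) Out() <-chan T { return rb.out }

func main() {
	rb := NewRingBuffer[int](3)
	in := rb.In()
	for i := 1; i <= 5; i++ {
		in <- i // nobody is reading yet, so 1 and 2 get dropped
	}
	close(in)
	for v := range rb.Out() {
		fmt.Println(v) // 3, 4, 5
	}
}
```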
Nil Channel Trick (Disable Select Cases)¶
Setting a channel variable to nil permanently blocks that select case, effectively disabling it. This is incredibly useful for dynamic control flow.
func mergeTwo[T any](ctx context.Context, ch1, ch2 <-chan T) <-chan T {
out := make(chan T)
go func() {
defer close(out)
for ch1 != nil || ch2 != nil {
select {
case v, ok := <-ch1:
if !ok {
ch1 = nil // ch1 is exhausted — disable this case
continue
}
out <- v
case v, ok := <-ch2:
if !ok {
ch2 = nil // ch2 is exhausted — disable this case
continue
}
out <- v
case <-ctx.Done():
return
}
}
}()
return out
}
Another powerful use — conditional channel operations:
func conditionalSend(ctx context.Context, data int, enabled bool) {
var ch chan int
if enabled {
ch = make(chan int, 1)
go func() { fmt.Println("received:", <-ch) }()
}
// ch is nil if not enabled — this select case is permanently blocked
select {
case ch <- data: // Only attempted if ch != nil
fmt.Println("sent")
case <-ctx.Done():
fmt.Println("cancelled")
}
}
Channel-Based Pub/Sub¶
Each subscriber gets its own buffered channel; publishes are non-blocking, so a slow subscriber misses messages instead of stalling the publisher.
type PubSub[T any] struct {
mu sync.RWMutex
subscribers map[string]chan T
closed bool
}
func NewPubSub[T any]() *PubSub[T] {
return &PubSub[T]{
subscribers: make(map[string]chan T),
}
}
func (ps *PubSub[T]) Subscribe(id string, bufSize int) <-chan T {
ps.mu.Lock()
defer ps.mu.Unlock()
ch := make(chan T, bufSize)
if ps.closed {
close(ch) // subscribing after Close yields an already-closed channel
return ch
}
ps.subscribers[id] = ch
return ch
}
func (ps *PubSub[T]) Unsubscribe(id string) {
ps.mu.Lock()
defer ps.mu.Unlock()
if ch, ok := ps.subscribers[id]; ok {
close(ch)
delete(ps.subscribers, id)
}
}
func (ps *PubSub[T]) Publish(msg T) {
ps.mu.RLock()
defer ps.mu.RUnlock()
if ps.closed {
return
}
for _, ch := range ps.subscribers {
select {
case ch <- msg:
default:
// Subscriber is slow — drop message (or log, or buffer)
}
}
}
func (ps *PubSub[T]) Close() {
ps.mu.Lock()
defer ps.mu.Unlock()
if ps.closed {
return
}
ps.closed = true
for id, ch := range ps.subscribers {
close(ch)
delete(ps.subscribers, id)
}
}
// Usage
func main() {
ps := NewPubSub[string]()
defer ps.Close()
sub1 := ps.Subscribe("logger", 10)
sub2 := ps.Subscribe("metrics", 10)
go func() {
for msg := range sub1 {
log.Printf("[LOG] %s", msg)
}
}()
go func() {
for msg := range sub2 {
recordMetric(msg)
}
}()
ps.Publish("user.login")
ps.Publish("order.created")
}
Composing Patterns¶
These patterns compose to build complex systems. Here's a data processing pipeline with fan-out, rate limiting, and graceful shutdown:
graph LR
A[Source] --> B[orDone]
B --> C1[Worker 1]
B --> C2[Worker 2]
B --> C3[Worker 3]
C1 --> D[fanIn]
C2 --> D
C3 --> D
D --> E[tee]
E --> F[Logger]
E --> G[Output]
func processingPipeline(ctx context.Context, source <-chan Event) <-chan Result {
// Stage 1: Make source cancellable
events := orDone(ctx, source)
// Stage 2: Fan-out to 3 workers
workers := make([]<-chan Result, 3)
for i := range workers {
workers[i] = processEvents(ctx, events)
}
// Stage 3: Fan-in results
merged := fanIn(ctx, workers...)
// Stage 4: Tee for logging
logStream, output := tee(ctx, merged)
go logResults(ctx, logStream)
return output
}
Quick Reference¶
| Pattern | Purpose | Key Insight |
|---|---|---|
| Done channel | Goroutine cancellation | close(done) broadcasts to all listeners |
| Or-channel | First-to-fire cancellation | Recursive select merging |
| Or-done | Cancellable stream read | Wraps channel reads with done/context check |
| Tee | Split stream into two | Send each value to both outputs via nil-channel trick |
| Fan-in | Merge N channels into 1 | One goroutine per input + WaitGroup to close output |
| Bridge | Flatten channel-of-channels | Sequential draining of inner channels |
| Nil channel | Disable select cases | ch = nil permanently blocks that case in select |
| Ring buffer | Bounded buffer, drop old | Channel-backed circular buffer |
| Pub/Sub | Broadcast to subscribers | Map of subscriber channels with non-blocking send |
Best Practices¶
- Always pair channel creation with a closure plan — who closes this channel, and when?
- Use generics (Go 1.18+) for reusable channel utilities like orDone, fanIn, and tee.
- Prefer context over bare done channels in new code — context composes better and carries deadlines/values.
- Use non-blocking sends in pub/sub — a slow subscriber should not block all publishers. Decide on a policy: drop, buffer, or backpressure.
- Test channel code with -race — always run go test -race on concurrent code. The race detector catches most synchronization bugs.
- Keep channel direction in function signatures — use <-chan T (receive-only) and chan<- T (send-only) to enforce intent at compile time.
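To make the last point concrete, directional channel types in a signature are enforced by the compiler; this self-contained sketch uses hypothetical produce/consume helpers:

```go
package main

import "fmt"

// produce may only send; the compiler rejects a receive from out here.
func produce(out chan<- int, n int) {
	defer close(out)
	for i := 0; i < n; i++ {
		out <- i
	}
}

// consume may only receive; the compiler rejects a send to in here.
func consume(in <-chan int) int {
	sum := 0
	for v := range in {
		sum += v
	}
	return sum
}

func main() {
	ch := make(chan int)
	go produce(ch, 5)
	fmt.Println(consume(ch)) // 0+1+2+3+4 = 10
}
```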
Common Pitfalls¶
Forgetting the Second Select in Send Operations
When sending to a channel inside a select, you need a nested select to also check for cancellation on the send. Otherwise, the goroutine blocks if the receiver is gone.
// ❌ If output's consumer is cancelled, this goroutine blocks forever
select {
case v, ok := <-input:
if !ok { return }
output <- v // BLOCKS if nobody reads output
case <-ctx.Done():
return
}
// ✅ Double select — check cancellation on both receive AND send
select {
case v, ok := <-input:
if !ok { return }
select {
case output <- v:
case <-ctx.Done():
return
}
case <-ctx.Done():
return
}
Reading from a Nil Channel Blocks Forever
A nil channel blocks on both send and receive. This is useful for the nil channel trick but dangerous if accidental.
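A minimal sketch of the accidental case: a lookup in a map of channels returns a nil channel for a missing key, and the select case silently never fires:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	channels := map[string]chan int{"a": make(chan int, 1)}
	channels["a"] <- 42

	ch := channels["b"] // missing key: ch is nil, not an error

	select {
	case v := <-ch:
		fmt.Println("received", v) // never happens: receive from nil blocks forever
	case <-time.After(100 * time.Millisecond):
		fmt.Println("timed out waiting on a nil channel")
	}
}
```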
Ranging Over a Channel That's Never Closed
for v := range ch blocks until ch is closed. If the sender never closes ch, the consumer goroutine leaks.
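A minimal sketch of the leak and the fix (closing the channel is the sender's job):

```go
package main

import "fmt"

func main() {
	// Leak: this consumer ranges forever because nobody closes leakyCh.
	leakyCh := make(chan int, 1)
	go func() {
		for range leakyCh {
		} // blocks here for the life of the program once sends stop
	}()
	leakyCh <- 1

	// Fix: the sender closes the channel when done, so range terminates.
	ch := make(chan int, 2)
	ch <- 3
	ch <- 4
	close(ch)
	sum := 0
	for v := range ch {
		sum += v
	}
	fmt.Println("sum:", sum) // range exited cleanly because ch was closed
}
```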
Buffered Channel Size Assumptions
Don't rely on buffered channel size for correctness — only for performance tuning. If your program only works with a specific buffer size, the design has a synchronization bug.
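A sketch of the smell: the send loop below only works because the buffer capacity happens to match the number of sends; with an unbuffered channel the same code deadlocks on the first send. The real fix is a concurrent receiver, not a bigger buffer.

```go
package main

import "fmt"

func main() {
	results := make(chan int, 3) // works only because cap >= number of sends
	for i := 0; i < 3; i++ {
		results <- i // with make(chan int), this blocks forever on the first send
	}
	close(results)
	for v := range results {
		fmt.Println(v) // 0, 1, 2
	}
}
```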
Performance Considerations¶
- Unbuffered channels: ~50–100 ns per send/receive operation due to goroutine scheduling. Use them when synchronization between producer and consumer is required.
- Buffered channels: faster throughput when producer and consumer run at different speeds. A buffer size of 1 often suffices to decouple; larger buffers (100–1000) help with bursty workloads.
- Channel overhead vs mutex: for simple state protection, sync.Mutex is typically 2–5x faster than a channel. Use channels for communication, mutexes for state protection.
- Reflect-based select: reflect.Select (used to select over a dynamic number of channels) is significantly slower than a static select statement. Prefer fan-in with goroutines over reflect-based solutions.
- Contention: many goroutines sending to the same channel contend on the channel's internal mutex. Consider sharding across multiple channels for high-throughput scenarios.
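As a rough, machine-dependent illustration of the mutex-vs-channel point, this sketch times both approaches for a serialized counter (treat the numbers as relative, not absolute):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

const iters = 100_000

func main() {
	// Approach 1: mutex-protected counter.
	var mu sync.Mutex
	n := 0
	start := time.Now()
	for i := 0; i < iters; i++ {
		mu.Lock()
		n++
		mu.Unlock()
	}
	mutexTime := time.Since(start)

	// Approach 2: channel-serialized counter — one goroutine owns the state.
	ch := make(chan struct{})
	done := make(chan int)
	go func() {
		m := 0
		for range ch {
			m++
		}
		done <- m
	}()
	start = time.Now()
	for i := 0; i < iters; i++ {
		ch <- struct{}{}
	}
	close(ch)
	m := <-done
	chanTime := time.Since(start)

	fmt.Printf("mutex: %v  channel: %v  (counts: %d, %d)\n", mutexTime, chanTime, n, m)
}
```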
Interview Tips¶
Interview Tip
When asked to design a concurrent system, start with the done/or-done pattern and build up. Interviewers want to see that you think about goroutine lifecycle and cancellation from the start, not as an afterthought.
Interview Tip
The nil channel trick is an advanced idiom that impresses interviewers. Explain it as: "Setting a channel to nil in a select case disables that case permanently, which is useful for handling channels that close at different times in a merge operation."
Interview Tip
If asked "How would you implement pub/sub in Go?", show the channel-based approach but discuss its trade-offs: no persistence, no guaranteed delivery, slow subscribers can lose messages. Then mention that for production pub/sub, you'd use NATS, Redis Pub/Sub, or Kafka — channels work for in-process communication.
Interview Tip
Know the bridge pattern — it's less common but demonstrates deep channel fluency. The scenario is: "You have a paginated API that returns a page at a time, and you want consumers to see a flat stream of results." Bridge + orDone is the answer.
Key Takeaways¶
- Channel patterns are composable primitives — combine them to build complex concurrent systems.
- The or-done wrapper is the most important pattern — it makes any channel read cancellable.
- The nil channel trick is the key to dynamic control flow in select statements.
- Always think about channel ownership — who creates, who sends, who closes.
- Prefer context.Context over bare done channels for cancellation in modern Go code.
- These patterns are the vocabulary of Go concurrency — knowing them lets you express concurrent designs fluently.