Real-Time Bidding Engine Simulator¶
Repository: `go-rtb-engine/` (sibling directory)
What This Project Demonstrates¶
This project simulates the core system described in the job posting: a high-throughput, low-latency bidding platform that connects real-world intelligence with real-time decisioning.
| Skill Area | How It's Demonstrated |
|---|---|
| Low-latency systems | Context deadlines enforce 100ms bid SLA |
| Concurrent architecture | Goroutine fan-out to multiple bidders simultaneously |
| Real-time decisioning | Second-price (Vickrey) auction logic |
| Observability | Prometheus metrics for latency, bid counts, win rates |
| Resilience | Per-bidder circuit breakers prevent cascading failures |
| Production patterns | Graceful shutdown, structured logging, health checks |
Architecture¶
```mermaid
graph LR
    Client["Ad Exchange\n(HTTP Client)"] -->|"POST /auction"| AuctionServer["Auction Server"]
    AuctionServer -->|"Fan-out\n(goroutines)"| Bidder1["Bidder 1"]
    AuctionServer -->|"Fan-out\n(goroutines)"| Bidder2["Bidder 2"]
    AuctionServer -->|"Fan-out\n(goroutines)"| BidderN["Bidder N"]
    Bidder1 -->|"BidResponse"| AuctionServer
    Bidder2 -->|"BidResponse"| AuctionServer
    BidderN -->|"BidResponse"| AuctionServer
    AuctionServer -->|"Second-price\nauction"| Result["AuctionResult\n(winner pays 2nd price)"]
    AuctionServer -->|"Metrics"| Prometheus["Prometheus\n/metrics"]
```
How an Auction Works¶
- Bid request: arrives via HTTP POST to `/auction`
- Fan-out: the auction server sends the request to all registered bidders concurrently using goroutines
- Deadline enforcement: each bidder must respond within the configured SLA (default: 100ms); a context with a deadline ensures slow bidders are cut off
- Collect responses: all valid responses received within the deadline are collected
- Second-price auction: the winner is the highest bidder but pays the second-highest bid + $0.01 (a Vickrey auction -- this is how real ad exchanges work)
- Return result: the winning bid, all bids, and the auction duration are returned
Project Structure¶
```
go-rtb-engine/
├── cmd/
│   ├── auction-server/
│   │   └── main.go          # HTTP server: /auction, /health, /metrics
│   └── bidder/
│       └── main.go          # Sample bidder service with realistic latency
├── internal/
│   ├── auction/
│   │   ├── auction.go       # Core auction logic (second-price)
│   │   └── auction_test.go  # Table-driven tests + benchmarks
│   ├── bidder/
│   │   ├── client.go        # BidderClient interface + HTTP + circuit breaker
│   │   └── client_test.go   # Tests
│   └── metrics/
│       └── metrics.go       # Prometheus counters + histograms
├── pkg/
│   └── openrtb/
│       └── types.go         # BidRequest, BidResponse, AuctionResult
├── go.mod
├── Makefile
├── Dockerfile
└── README.md
```
Key Design Decisions¶
Why Second-Price Auction?¶
Programmatic ad exchanges have historically used second-price auctions (Google Ad Manager did until its 2019 move to first-price; OpenRTB supports both) because they incentivize truthful bidding -- bidders bid their true valuation since they only pay the second-highest price. This demonstrates domain knowledge.
Why Circuit Breakers Per Bidder?¶
In production RTB, a single slow or failing bidder shouldn't degrade the entire auction. Circuit breakers:
- Closed: Normal operation, requests pass through
- Open: After N consecutive failures, requests are immediately rejected (fail fast)
- Half-open: After a timeout, one test request is allowed through
Why Context Deadlines Instead of Timeouts?¶
Context propagation is the Go-idiomatic way to enforce SLAs across service boundaries:
```go
ctx, cancel := context.WithTimeout(r.Context(), 100*time.Millisecond)
defer cancel()
result := auctioneer.RunAuction(ctx, bidRequest, bidders)
```
The deadline propagates to all downstream goroutines, ensuring clean cancellation.
Why Goroutine Fan-Out?¶
All bidders are queried simultaneously (not sequentially). Total auction latency is determined by the slowest bidder (bounded by the deadline), not the sum of all bidder latencies.
Go Concepts Showcased¶
| Concept | Where It's Used |
|---|---|
| Goroutines + WaitGroup | RunAuction fans out to all bidders concurrently |
| Context with deadlines | SLA enforcement across the auction lifecycle |
| Interfaces | BidderClient interface enables testing with stubs |
| Channels + select | Could extend for streaming results (current: mutex + WaitGroup) |
| Circuit breaker pattern | HTTPBidderClient wraps calls with failure tracking |
| Prometheus metrics | Histograms for latency, counters for bids and wins |
| Structured logging | log/slog with JSON output throughout |
| Table-driven tests | auction_test.go tests all edge cases |
| Benchmarks | Performance testing of auction logic |
| Graceful shutdown | SIGINT/SIGTERM handling in main |
| Multi-stage Docker | ~15MB final image |
How to Talk About This in an Interview¶
Interview Talking Points
- Start with the domain: "I built an RTB engine simulator because the ad-tech bidding domain requires specific latency and concurrency patterns that are Go's sweet spot."
- Explain the architecture: walk through the auction flow, emphasizing concurrent fan-out and deadline enforcement.
- Highlight trade-offs: "I chose second-price auctions to match real exchanges. Circuit breakers prevent cascading failures from slow bidders. Context deadlines ensure we never exceed our SLA."
- Discuss observability: "In production, you can't improve what you can't measure. Every auction records latency histograms and bid counts to Prometheus."
- Testing strategy: "Table-driven tests cover edge cases like ties, floor-price filtering, all-bidders-timeout, and single-bidder scenarios."
Running the Project¶
```bash
cd go-rtb-engine

# Install dependencies
go mod tidy

# Run tests
make test

# Start a bidder on port 8081
BIDDER_ID=bidder-1 PORT=8081 make run-bidder

# Start auction server (in another terminal)
PORT=8080 BID_TIMEOUT_MS=100 make run-auction

# Send a test auction request
curl -X POST http://localhost:8080/auction \
  -H "Content-Type: application/json" \
  -d '{"id":"test-1","impressions":[{"id":"imp-1","min_bid":0.5,"max_bid":5.0}],"floor_price":0.5}'
```