gRPC and Protocol Buffers Advanced¶
Introduction¶
gRPC is the de facto standard for high-performance microservice communication. It uses Protocol Buffers (protobuf) for serialization — compact, strongly typed, and backward-compatible. In ad-tech and other high-throughput systems, gRPC's binary format and HTTP/2 multiplexing give it significant advantages over REST/JSON in payload size, serialization cost, and connection efficiency.
Why This Matters
gRPC is the backbone of modern microservices at Google, Netflix, and most ad-tech platforms. Interviewers for backend/distributed systems roles expect you to understand service definitions, streaming, interceptors, and error handling — not just "I've used gRPC before."
Protocol Buffer Syntax¶
Basic .proto File¶
syntax = "proto3";
package userservice;
option go_package = "github.com/myorg/myapp/proto/userpb";
import "google/protobuf/timestamp.proto";
import "google/protobuf/empty.proto";
message User {
  string id = 1;
  string name = 2;
  string email = 3;
  UserRole role = 4;
  google.protobuf.Timestamp created_at = 5;
  repeated string tags = 6;          // list
  map<string, string> metadata = 7;  // map
  optional string phone = 8;         // explicit optional
}

enum UserRole {
  USER_ROLE_UNSPECIFIED = 0;
  USER_ROLE_ADMIN = 1;
  USER_ROLE_MEMBER = 2;
}

message GetUserRequest {
  string id = 1;
}

message ListUsersRequest {
  int32 page_size = 1;
  string page_token = 2;
}

message ListUsersResponse {
  repeated User users = 1;
  string next_page_token = 2;
}
Service Definition with All RPC Types¶
service UserService {
  // Unary
  rpc GetUser(GetUserRequest) returns (User);
  rpc CreateUser(User) returns (User);
  rpc DeleteUser(GetUserRequest) returns (google.protobuf.Empty);

  // Server streaming
  rpc ListUsers(ListUsersRequest) returns (stream User);

  // Client streaming
  rpc BatchCreateUsers(stream User) returns (BatchCreateResponse);

  // Bidirectional streaming
  rpc SyncUsers(stream SyncRequest) returns (stream SyncResponse);
}
Code Generation¶
# Install protoc plugins
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest
# Generate Go code
protoc \
  --go_out=. --go_opt=paths=source_relative \
  --go-grpc_out=. --go-grpc_opt=paths=source_relative \
  proto/user.proto
This generates two files:
- `user.pb.go` — message types and serialization
- `user_grpc.pb.go` — client/server interfaces and stubs
gRPC Service Types¶
graph TB
subgraph "Unary"
C1[Client] -->|"1 request"| S1[Server]
S1 -->|"1 response"| C1
end
subgraph "Server Streaming"
C2[Client] -->|"1 request"| S2[Server]
S2 -->|"N responses"| C2
end
subgraph "Client Streaming"
C3[Client] -->|"N requests"| S3[Server]
S3 -->|"1 response"| C3
end
subgraph "Bidirectional"
C4[Client] <-->|"N messages"| S4[Server]
end
| Type | Client Sends | Server Sends | Use Case |
|---|---|---|---|
| Unary | 1 request | 1 response | CRUD, standard RPC |
| Server streaming | 1 request | N responses | Real-time feeds, large result sets |
| Client streaming | N requests | 1 response | File upload, batch ingestion |
| Bidirectional | N messages | N messages | Chat, live sync, gaming |
Implementing a gRPC Server¶
type userServer struct {
    userpb.UnimplementedUserServiceServer // forward compatibility
    repo UserRepository
}

func NewUserServer(repo UserRepository) userpb.UserServiceServer {
    return &userServer{repo: repo}
}

// Unary RPC
func (s *userServer) GetUser(ctx context.Context, req *userpb.GetUserRequest) (*userpb.User, error) {
    if req.GetId() == "" {
        return nil, status.Errorf(codes.InvalidArgument, "id is required")
    }
    user, err := s.repo.FindByID(ctx, req.GetId())
    if err != nil {
        if errors.Is(err, ErrNotFound) {
            return nil, status.Errorf(codes.NotFound, "user %s not found", req.GetId())
        }
        return nil, status.Errorf(codes.Internal, "failed to get user: %v", err)
    }
    return toProtoUser(user), nil
}

// Server streaming RPC
func (s *userServer) ListUsers(req *userpb.ListUsersRequest, stream userpb.UserService_ListUsersServer) error {
    users, err := s.repo.List(stream.Context(), 0, int(req.GetPageSize()))
    if err != nil {
        return status.Errorf(codes.Internal, "failed to list users: %v", err)
    }
    for _, u := range users {
        if err := stream.Send(toProtoUser(u)); err != nil {
            return err
        }
    }
    return nil
}

// Client streaming RPC
func (s *userServer) BatchCreateUsers(stream userpb.UserService_BatchCreateUsersServer) error {
    var count int32
    for {
        user, err := stream.Recv()
        if err == io.EOF {
            return stream.SendAndClose(&userpb.BatchCreateResponse{
                CreatedCount: count,
            })
        }
        if err != nil {
            return err
        }
        if err := s.repo.Save(stream.Context(), fromProtoUser(user)); err != nil {
            return status.Errorf(codes.Internal, "save failed: %v", err)
        }
        count++
    }
}
Starting the Server¶
func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }

    grpcServer := grpc.NewServer(
        grpc.ChainUnaryInterceptor(
            LoggingUnaryInterceptor(),
            RecoveryUnaryInterceptor(),
        ),
        grpc.ChainStreamInterceptor(
            LoggingStreamInterceptor(),
            RecoveryStreamInterceptor(),
        ),
    )

    userpb.RegisterUserServiceServer(grpcServer, NewUserServer(repo))

    // Enable reflection for grpcurl and debugging
    reflection.Register(grpcServer)

    // Health checking
    healthServer := health.NewServer()
    healthpb.RegisterHealthServer(grpcServer, healthServer)
    healthServer.SetServingStatus("userservice", healthpb.HealthCheckResponse_SERVING)

    log.Printf("gRPC server listening on :50051")
    if err := grpcServer.Serve(lis); err != nil {
        log.Fatalf("failed to serve: %v", err)
    }
}
Implementing a gRPC Client¶
func main() {
    conn, err := grpc.NewClient("localhost:50051",
        grpc.WithTransportCredentials(insecure.NewCredentials()),
        grpc.WithChainUnaryInterceptor(
            TimeoutUnaryInterceptor(5*time.Second),
            RetryUnaryInterceptor(3),
        ),
    )
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()

    client := userpb.NewUserServiceClient(conn)

    // Unary call with deadline
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    user, err := client.GetUser(ctx, &userpb.GetUserRequest{Id: "123"})
    if err != nil {
        st, ok := status.FromError(err)
        if ok {
            log.Printf("gRPC error: code=%s msg=%s", st.Code(), st.Message())
        }
        log.Fatal(err)
    }
    fmt.Printf("User: %s\n", user.GetName())
}
Consuming a Server Stream¶
stream, err := client.ListUsers(ctx, &userpb.ListUsersRequest{PageSize: 100})
if err != nil {
    log.Fatal(err)
}
for {
    user, err := stream.Recv()
    if err == io.EOF {
        break
    }
    if err != nil {
        log.Fatalf("stream recv: %v", err)
    }
    fmt.Printf("User: %s\n", user.GetName())
}
Interceptors (Middleware for gRPC)¶
Unary Server Interceptor¶
func LoggingUnaryInterceptor() grpc.UnaryServerInterceptor {
    return func(
        ctx context.Context,
        req any,
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (any, error) {
        start := time.Now()
        resp, err := handler(ctx, req)
        st, _ := status.FromError(err)
        slog.Info("gRPC request",
            "method", info.FullMethod,
            "code", st.Code().String(),
            "duration", time.Since(start),
        )
        return resp, err
    }
}
Recovery Interceptor¶
func RecoveryUnaryInterceptor() grpc.UnaryServerInterceptor {
    return func(
        ctx context.Context,
        req any,
        info *grpc.UnaryServerInfo,
        handler grpc.UnaryHandler,
    ) (resp any, err error) {
        defer func() {
            if r := recover(); r != nil {
                slog.Error("panic recovered",
                    "method", info.FullMethod,
                    "panic", r,
                    "stack", string(debug.Stack()),
                )
                err = status.Errorf(codes.Internal, "internal server error")
            }
        }()
        return handler(ctx, req)
    }
}
Stream Interceptor¶
func LoggingStreamInterceptor() grpc.StreamServerInterceptor {
    return func(
        srv any,
        ss grpc.ServerStream,
        info *grpc.StreamServerInfo,
        handler grpc.StreamHandler,
    ) error {
        start := time.Now()
        err := handler(srv, ss)
        slog.Info("gRPC stream",
            "method", info.FullMethod,
            "duration", time.Since(start),
            "error", err,
        )
        return err
    }
}
Client Interceptor (Timeout)¶
func TimeoutUnaryInterceptor(timeout time.Duration) grpc.UnaryClientInterceptor {
    return func(
        ctx context.Context,
        method string,
        req, reply any,
        cc *grpc.ClientConn,
        invoker grpc.UnaryInvoker,
        opts ...grpc.CallOption,
    ) error {
        if _, ok := ctx.Deadline(); !ok {
            var cancel context.CancelFunc
            ctx, cancel = context.WithTimeout(ctx, timeout)
            defer cancel()
        }
        return invoker(ctx, method, req, reply, cc, opts...)
    }
}
Error Handling with Status Codes¶
import (
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// Returning errors with status codes
func (s *userServer) GetUser(ctx context.Context, req *userpb.GetUserRequest) (*userpb.User, error) {
    if req.GetId() == "" {
        return nil, status.Errorf(codes.InvalidArgument, "id is required")
    }
    user, err := s.repo.FindByID(ctx, req.GetId())
    if err != nil {
        switch {
        case errors.Is(err, ErrNotFound):
            return nil, status.Errorf(codes.NotFound, "user %q not found", req.GetId())
        case errors.Is(err, context.DeadlineExceeded):
            return nil, status.Errorf(codes.DeadlineExceeded, "database timeout")
        default:
            return nil, status.Errorf(codes.Internal, "internal error")
        }
    }
    return toProtoUser(user), nil
}

// Rich error details
import "google.golang.org/genproto/googleapis/rpc/errdetails"

func validationError(field, desc string) error {
    st := status.New(codes.InvalidArgument, "validation failed")
    detailed, err := st.WithDetails(&errdetails.BadRequest{
        FieldViolations: []*errdetails.BadRequest_FieldViolation{
            {Field: field, Description: desc},
        },
    })
    if err != nil {
        return st.Err()
    }
    return detailed.Err()
}
gRPC to HTTP Status Code Mapping¶
| gRPC Code | HTTP Status | When to Use |
|---|---|---|
| `OK` | 200 | Success |
| `InvalidArgument` | 400 | Client sent bad data |
| `NotFound` | 404 | Resource doesn't exist |
| `AlreadyExists` | 409 | Duplicate creation |
| `PermissionDenied` | 403 | Not authorized |
| `Unauthenticated` | 401 | No/invalid credentials |
| `ResourceExhausted` | 429 | Rate limited |
| `DeadlineExceeded` | 504 | Timeout |
| `Unavailable` | 503 | Service down (retryable) |
| `Internal` | 500 | Bug or unexpected failure |
Metadata (Headers for gRPC)¶
import "google.golang.org/grpc/metadata"
// Client: send metadata
md := metadata.Pairs(
    "authorization", "Bearer "+token,
    "x-request-id", uuid.NewString(),
)
ctx := metadata.NewOutgoingContext(ctx, md)
user, err := client.GetUser(ctx, req)

// Server: read metadata
func (s *userServer) GetUser(ctx context.Context, req *userpb.GetUserRequest) (*userpb.User, error) {
    md, ok := metadata.FromIncomingContext(ctx)
    if !ok {
        return nil, status.Errorf(codes.Unauthenticated, "missing metadata")
    }
    tokens := md.Get("authorization")
    if len(tokens) == 0 {
        return nil, status.Errorf(codes.Unauthenticated, "missing auth token")
    }

    // Server: send response metadata (headers + trailers)
    header := metadata.Pairs("x-served-by", "node-1")
    grpc.SendHeader(ctx, header)
    trailer := metadata.Pairs("x-request-duration", "42ms")
    grpc.SetTrailer(ctx, trailer)
    // ...
}
Deadlines and Timeouts¶
// Client sets deadline — it propagates through the entire call chain
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()

user, err := client.GetUser(ctx, &userpb.GetUserRequest{Id: "123"})
if err != nil {
    st, _ := status.FromError(err)
    if st.Code() == codes.DeadlineExceeded {
        log.Println("request timed out")
    }
}

// Server checks remaining deadline
func (s *userServer) GetUser(ctx context.Context, req *userpb.GetUserRequest) (*userpb.User, error) {
    deadline, ok := ctx.Deadline()
    if ok && time.Until(deadline) < 100*time.Millisecond {
        return nil, status.Errorf(codes.DeadlineExceeded, "not enough time remaining")
    }
    // Pass context to downstream calls — deadline propagates
    user, err := s.repo.FindByID(ctx, req.GetId())
    // ...
}
Interview Tip
"Deadlines propagate through the entire call chain via context. When service A calls B with a 5s deadline, and B calls C, C inherits whatever time remains. This prevents cascading timeouts. I always set deadlines on client calls and check remaining time in servers before starting expensive operations."
TLS Configuration¶
// Server with TLS
creds, err := credentials.NewServerTLSFromFile("server.crt", "server.key")
if err != nil {
    log.Fatal(err)
}
grpcServer := grpc.NewServer(grpc.Creds(creds))

// Client with TLS
creds, err := credentials.NewClientTLSFromFile("ca.crt", "")
if err != nil {
    log.Fatal(err)
}
conn, err := grpc.NewClient("api.example.com:443",
    grpc.WithTransportCredentials(creds),
)

// Mutual TLS (mTLS)
cert, _ := tls.LoadX509KeyPair("client.crt", "client.key")
caCert, _ := os.ReadFile("ca.crt")
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(caCert)
creds := credentials.NewTLS(&tls.Config{
    Certificates: []tls.Certificate{cert},
    RootCAs:      pool,
})
conn, err := grpc.NewClient("api.example.com:443",
    grpc.WithTransportCredentials(creds),
)
gRPC-Gateway (REST + gRPC)¶
gRPC-Gateway generates a reverse proxy that translates RESTful JSON to gRPC.
import "google/api/annotations.proto";
service UserService {
  rpc GetUser(GetUserRequest) returns (User) {
    option (google.api.http) = {
      get: "/api/v1/users/{id}"
    };
  }
  rpc CreateUser(User) returns (User) {
    option (google.api.http) = {
      post: "/api/v1/users"
      body: "*"
    };
  }
}

func runGateway() error {
    ctx := context.Background()
    mux := runtime.NewServeMux()
    opts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}
    err := userpb.RegisterUserServiceHandlerFromEndpoint(ctx, mux, "localhost:50051", opts)
    if err != nil {
        return err
    }
    // REST clients hit :8080, requests are proxied to gRPC on :50051
    return http.ListenAndServe(":8080", mux)
}
graph LR
REST[REST Client] -->|"HTTP/JSON"| GW[gRPC-Gateway :8080]
GW -->|"gRPC/Protobuf"| SVC[gRPC Server :50051]
GRPC[gRPC Client] -->|"gRPC/Protobuf"| SVC
Health Checking and Reflection¶
import (
    "google.golang.org/grpc/health"
    healthpb "google.golang.org/grpc/health/grpc_health_v1"
    "google.golang.org/grpc/reflection"
)

func main() {
    grpcServer := grpc.NewServer()

    // Register health service
    healthServer := health.NewServer()
    healthpb.RegisterHealthServer(grpcServer, healthServer)

    // Set per-service health
    healthServer.SetServingStatus("myservice.UserService",
        healthpb.HealthCheckResponse_SERVING)

    // Enable reflection (for grpcurl, grpc_cli)
    reflection.Register(grpcServer)
}
# With reflection enabled, use grpcurl to test
grpcurl -plaintext localhost:50051 list
grpcurl -plaintext localhost:50051 describe userservice.UserService
grpcurl -plaintext -d '{"id": "123"}' localhost:50051 userservice.UserService/GetUser
# Health check
grpcurl -plaintext localhost:50051 grpc.health.v1.Health/Check
Quick Reference¶
| Concept | Key Type/Package | Notes |
|---|---|---|
| Service definition | `.proto` file | Source of truth for API contract |
| Code generation | `protoc-gen-go`, `protoc-gen-go-grpc` | Generates message types + client/server stubs |
| Unary interceptor | `grpc.UnaryServerInterceptor` | Like HTTP middleware |
| Stream interceptor | `grpc.StreamServerInterceptor` | For streaming RPCs |
| Error handling | `status.Errorf(codes.X, ...)` | Always use gRPC status codes |
| Metadata | `metadata.FromIncomingContext` | gRPC equivalent of HTTP headers |
| Deadlines | `context.WithTimeout` | Propagate through call chain |
| Health check | `grpc/health` package | Kubernetes liveness/readiness |
| Reflection | `grpc/reflection` package | Enables grpcurl debugging |
| gRPC-Gateway | `grpc-ecosystem/grpc-gateway` | REST + gRPC from same `.proto` |
Best Practices¶
- Define APIs in `.proto` files first — they are the contract between teams
- Always embed `Unimplemented*Server` — ensures forward compatibility when new RPCs are added
- Use status codes correctly — `NotFound` for missing resources, `InvalidArgument` for bad input, `Internal` for bugs
- Set deadlines on every client call — never make an RPC without a timeout
- Use interceptors for cross-cutting concerns — logging, metrics, auth, tracing
- Enable reflection in dev/staging — makes debugging with `grpcurl` trivial
- Version your protos — use package versioning (`v1`, `v2`) for breaking changes
- Never reuse field numbers in proto messages — mark removed fields as `reserved`
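The last point looks like this in a `.proto` file — a sketch of a hypothetical message after two fields were deleted:

```protobuf
message User {
  reserved 3, 5;              // deleted field numbers can never be reused
  reserved "email", "phone";  // optionally reserve the old names too

  string id = 1;
  string name = 2;
}
```

With the `reserved` statements in place, `protoc` rejects any later attempt to redefine those numbers or names, which prevents old serialized data from being decoded into the wrong field.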
Common Pitfalls¶
Missing Deadline Propagation
If you create a new context.Background() inside an RPC handler instead of using the incoming ctx, you break deadline propagation. The downstream call won't respect the client's timeout.
// BAD: breaks deadline chain
func (s *server) GetUser(ctx context.Context, req *pb.GetUserRequest) (*pb.User, error) {
    user, err := s.repo.FindByID(context.Background(), req.GetId()) // WRONG
    // ...
}

// GOOD: propagate context
func (s *server) GetUser(ctx context.Context, req *pb.GetUserRequest) (*pb.User, error) {
    user, err := s.repo.FindByID(ctx, req.GetId()) // deadline propagates
    // ...
}
Forgetting to Close Streams
Client streams must call CloseAndRecv() and bidirectional streams must call CloseSend() to signal completion. Missing this causes the server to hang waiting for more messages.
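A minimal client-streaming sketch of the correct pattern, reusing this page's hypothetical `UserService` client (assumes `client`, `ctx`, and a `users` slice already exist) — the `CloseAndRecv` call at the end is the part that is easy to forget:

```go
stream, err := client.BatchCreateUsers(ctx)
if err != nil {
    log.Fatal(err)
}
for _, u := range users {
    if err := stream.Send(u); err != nil {
        break // server may have closed early; CloseAndRecv reports its status
    }
}
// This half-closes the stream (the server's Recv gets io.EOF) and waits
// for the single response. Without it, the RPC hangs on both sides.
resp, err := stream.CloseAndRecv()
```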
Large Messages
gRPC has a default 4MB message size limit. For large payloads, either increase the limit with grpc.MaxRecvMsgSize() or use streaming to send data in chunks.
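Both knobs look roughly like this in grpc-go (the 16 MB limit is an arbitrary illustration, not a recommendation):

```go
// Server side: raise the per-message receive limit
grpcServer := grpc.NewServer(
    grpc.MaxRecvMsgSize(16 * 1024 * 1024),
)

// Client side: set a default call option on the connection
conn, err := grpc.NewClient("localhost:50051",
    grpc.WithTransportCredentials(insecure.NewCredentials()),
    grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(16*1024*1024)),
)
```

Note the limit applies per direction: the receiver enforces it, so a large response needs the client-side option and a large request needs the server-side one.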
Exposing Internal Errors
Never return raw Go errors to clients. Always wrap in status.Errorf() with appropriate codes. Raw error messages may leak internal details.
Performance Considerations¶
- Connection reuse — `grpc.ClientConn` multiplexes RPCs over a single HTTP/2 connection; create one per target and reuse it
- Streaming reduces per-message overhead — use it for bulk operations instead of repeated unary calls
- Keepalive — configure keepalive pings to detect dead connections: `grpc.KeepaliveParams()`
- Load balancing — use client-side `round_robin` or an external LB; gRPC connections are long-lived, so connection-level LB can cause hotspots
- Protobuf is 3-10x smaller than JSON and 2-5x faster to serialize — this matters at scale
- Connection pooling — for very high throughput, create multiple `ClientConn` instances to a single server to utilize multiple HTTP/2 connections
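The keepalive and load-balancing bullets name concrete knobs; a hedged sketch of what that client configuration looks like in grpc-go (the target name and timing values are illustrative only):

```go
import (
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    "google.golang.org/grpc/keepalive"
)

conn, err := grpc.NewClient("dns:///user-service:50051",
    // Client-side round-robin across all addresses the resolver returns
    grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
    // Ping an idle connection every 30s; drop it if no ack within 10s
    grpc.WithKeepaliveParams(keepalive.ClientParameters{
        Time:    30 * time.Second,
        Timeout: 10 * time.Second,
    }),
    grpc.WithTransportCredentials(insecure.NewCredentials()),
)
```

Keepalive settings must be coordinated with the server's `keepalive.EnforcementPolicy` — pinging more often than the server permits gets the connection closed with `ENHANCE_YOUR_CALM`.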
Interview Tips¶
Interview Tip
"gRPC is my default for service-to-service communication. I use unary for CRUD, server streaming for real-time feeds, and bidirectional for chat-like patterns. The protobuf contract acts as documentation and ensures backward compatibility — I never break existing field numbers."
Interview Tip
"Interceptors are the middleware pattern applied to gRPC. I chain them for logging, metrics, auth, and tracing. The key insight is that interceptors compose just like HTTP middleware — each wraps the next handler."
Interview Tip
"For services that need both internal gRPC and external REST, I use gRPC-Gateway. The .proto file is the single source of truth, and the REST endpoints are auto-generated. This is common in ad-tech where internal services speak gRPC but external partners need REST."
Key Takeaways¶
- Protobuf + gRPC = strongly typed, high-performance, backward-compatible service APIs
- Four RPC types — unary, server streaming, client streaming, bidirectional — each for different data flow patterns
- Interceptors are gRPC middleware — use them for all cross-cutting concerns
- Status codes map cleanly to HTTP semantics — use them correctly for proper error handling
- Deadlines propagate via context — always set them on clients, always pass context on servers
- gRPC-Gateway bridges REST and gRPC from a single proto definition
- Reflection + grpcurl is the go-to debugging workflow for gRPC services
- Embed `Unimplemented*Server` for forward compatibility when proto definitions evolve