TCP and HTTP are the unsung heroes of the internet, but gRPC is quietly taking over the backend.

Let’s see how it all shakes out. Imagine you’re building a distributed system. You’ve got microservices, databases, caches – all talking to each other. How do they do it?

The Foundation: TCP

TCP (Transmission Control Protocol) is the reliable workhorse. It’s the one that guarantees your data arrives, in the right order, and without errors. Think of it like a registered letter service.

Here’s a TCP handshake in action (simplified, of course):

  • Client: SYN (Synchronize sequence numbers) – "Hey, I want to talk."
  • Server: SYN-ACK (Synchronize-Acknowledge) – "Okay, I hear you. Let’s set this up."
  • Client: ACK (Acknowledge) – "Got it. We’re good to go."

Once established, TCP manages flow control (don’t overwhelm the receiver) and congestion control (don’t choke the network).
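In practice you rarely see the handshake itself – it happens inside the operating system the moment you open a socket. A minimal sketch using Python's standard socket module (a toy echo server on loopback; the three-way handshake completes inside create_connection, and from then on both sides just read and write bytes):

```python
import socket
import threading

# A tiny echo server on a loopback port chosen by the OS.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def handle_one():
    conn, _ = server.accept()      # completes the server side of the handshake
    conn.sendall(conn.recv(1024))  # echo whatever arrives, in order, intact
    conn.close()

threading.Thread(target=handle_one).start()

# SYN, SYN-ACK, and ACK all happen inside this one call.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'hello over TCP'
```

Everything TCP promises – ordering, retransmission, flow control – happens below this API; application code only ever sees a reliable byte stream.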

The Application Layer: HTTP

HTTP (Hypertext Transfer Protocol) sits on top of TCP. It’s what browsers use to fetch web pages, and what APIs use to communicate. It’s text-based and human-readable, which is great for debugging.

A basic HTTP request looks like this:

GET /users/123 HTTP/1.1
Host: api.example.com
User-Agent: MyClient/1.0
Accept: application/json

And a response:

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 45

{
  "id": 123,
  "name": "Alice"
}
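Because HTTP/1.x is plain text, you can assemble and pick apart a message like the one above with nothing but string handling. A deliberately naive sketch (real clients should use a proper library such as http.client or requests):

```python
import json

body = '{"id": 123, "name": "Alice"}'
raw_response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: application/json\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n"
    + body
)

# Split headers from body at the blank line, then take the status line apart.
head, payload = raw_response.split("\r\n\r\n", 1)
status_line, *header_lines = head.split("\r\n")
version, status, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)

user = json.loads(payload)
print(version, status, headers["Content-Type"], user["name"])
# HTTP/1.1 200 application/json Alice
```

The human-readability is exactly why debugging HTTP with curl or a proxy is so pleasant – and, as we'll see, exactly what gRPC trades away for speed.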

HTTP/1.1 introduced keep-alive connections, meaning the TCP connection stays open for multiple requests, avoiding the handshake overhead. HTTP/2 then brought multiplexing (multiple requests/responses over a single TCP connection) and header compression, making it much more efficient.

The Modern Backend: gRPC

gRPC is a high-performance, open-source RPC (Remote Procedure Call) framework developed by Google. It uses HTTP/2 for transport and Protocol Buffers (protobuf) for message serialization. This is where things get interesting for backend services.

Instead of text-based request and response bodies, gRPC exchanges binary messages whose structure is defined in .proto files. This is incredibly compact and fast to parse.

Here’s a proto definition for a simple user service:

syntax = "proto3";

package users;

service UserService {
  rpc GetUser (GetUserRequest) returns (User);
}

message GetUserRequest {
  int32 user_id = 1;
}

message User {
  int32 id = 1;
  string name = 2;
  string email = 3;
}

With this, you define your services and messages once, and gRPC can generate client and server code in many languages. The actual communication happens over HTTP/2.

Consider a gRPC call. The client doesn’t send GET /users/123. Instead, it serializes the GetUserRequest (containing user_id: 123) into binary protobuf and sends it as the body of an HTTP/2 POST to a method path like /users.UserService/GetUser; the server deserializes it, and the response is serialized and sent back the same way.
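You can see how compact that binary form is by hand-encoding the request above. Protobuf's wire format tags each field with (field_number << 3) | wire_type and writes integers as variable-length "varints". A minimal sketch in plain Python – no protobuf library, just the encoding rules applied to this one message:

```python
def encode_varint(n: int) -> bytes:
    """Protobuf varint: 7 bits per byte, high bit set on all but the last."""
    out = bytearray()
    while True:
        n, low = n >> 7, n & 0x7F
        if n:
            out.append(low | 0x80)  # more bytes follow
        else:
            out.append(low)         # final byte
            return bytes(out)

def encode_get_user_request(user_id: int) -> bytes:
    # Field 1 (user_id), wire type 0 (varint): tag byte = (1 << 3) | 0 = 0x08.
    return bytes([0x08]) + encode_varint(user_id)

wire = encode_get_user_request(123)
print(wire.hex())  # 087b
```

Two bytes on the wire, versus dozens of bytes of text headers and URL path for the equivalent REST call – and the generated code does all of this for you.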

What’s surprising about gRPC is how completely it abstracts away the network layer. You define the interface (the .proto file), and gRPC handles the messy details of serialization, deserialization, network transport, and error handling. It feels like calling a local function, but it’s a network call.

This binary serialization and HTTP/2’s multiplexing mean gRPC is significantly faster and more efficient than REST over HTTP/1.1 for inter-service communication. It’s particularly good for high-throughput, low-latency scenarios common in microservices.

The real advantage for system design is the strong contract enforced by Protobuf. When you define a service, you define its API precisely. This reduces ambiguity and makes it easier to evolve services over time, as long as you follow Protobuf’s backward compatibility rules.
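Concretely, adding a field under a fresh tag number is backward compatible – old clients simply skip the unknown field – while reusing or renumbering an existing tag is not. A hypothetical evolution of the User message above (the phone field and the reserved statement are illustrative, not part of the original service):

```protobuf
message User {
  int32 id = 1;
  string name = 2;
  string email = 3;
  string phone = 4;  // new field, new tag: old clients ignore it safely

  // If a field is ever removed, reserve its tag so it can't be reused
  // with a different meaning later:
  // reserved 4;
}
```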

The next frontier you’ll likely explore is how gRPC handles streaming – bi-directional, client-side, and server-side streaming – and its implications for real-time applications.

Want structured learning?

Take the full System Design course →