ZeroMQ’s transport types are less about how messages get from A to B, and more about where A and B are relative to each other.

Let’s watch ZeroMQ in action. We’ll set up a simple request-reply pattern. First, a server that binds to a transport and waits for requests.

# server.py
import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.REP)
# For TCP (the active option here; works across machines or between local processes):
socket.bind("tcp://*:5555")
# For IPC (separate processes on the same machine):
# socket.bind("ipc:///tmp/my_ipc_socket.ipc")
# For inproc (threads inside ONE process; both ends must share the same
# Context, so it cannot link separate server.py and client.py processes):
# socket.bind("inproc://my_inproc_socket")

print("Server bound and ready.")

while True:
    message = socket.recv()
    print(f"Received request: {message.decode()}")
    time.sleep(1)
    socket.send(b"World")

And a client that connects to that same transport and sends a request.

# client.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.REQ)
# For TCP (the active option here; matches the server above):
socket.connect("tcp://localhost:5555")
# For IPC (separate processes on the same machine):
# socket.connect("ipc:///tmp/my_ipc_socket.ipc")
# For inproc (only reachable from threads sharing the server's Context):
# socket.connect("inproc://my_inproc_socket")

print("Client connected.")

for request in range(5):
    print(f"Sending request {request}...")
    socket.send(b"Hello")
    message = socket.recv()
    print(f"Received reply {request}: {message.decode()}")

Now, imagine you’re running these on the same machine. The core problem ZeroMQ solves here is providing a consistent messaging API regardless of how your processes are communicating. You write your code once, and then you can swap out the transport mechanism by changing just the bind and connect addresses.

The system is built around the idea of sockets that look like standard network sockets but are actually high-performance, message-oriented abstractions. When you bind or connect, you’re not opening a raw network connection in the traditional sense. You’re telling ZeroMQ which underlying transport mechanism to use to establish a communication channel between your ZeroMQ endpoints.

Here’s how the three main transport types work:

TCP (tcp://host:port): This is your standard network transport. It uses the familiar TCP/IP protocol.

  • When to use: When your processes need to communicate across different machines on a network, or on the same machine when you want to simulate network conditions or run the processes under different users.
  • Internal workings: ZeroMQ opens a standard TCP socket and frames messages over it using its wire protocol (ZMTP). Reliability, retransmission, and flow control come from TCP itself.
  • Levers: You control the IP address and port. tcp://*:5555 binds to all available network interfaces on port 5555. tcp://192.168.1.100:5555 binds to a specific interface. Clients connect using tcp://<server_ip>:5555.

IPC (ipc:///path/to/socket.ipc): Inter-Process Communication. This is for processes running on the same machine.

  • When to use: For high-performance, low-latency communication between processes on the same physical or virtual machine. It bypasses the network stack, making it faster than TCP for local communication.
  • Internal workings: ZeroMQ uses a Unix domain socket to establish a direct communication channel between processes (Windows support for ipc:// is limited and version-dependent). This bypasses TCP/IP framing and the loopback network stack.
  • Levers: You specify a file path. This path is used to create the IPC endpoint. Both the server and client must agree on this exact path. ipc:///tmp/my_app.ipc is a common pattern. The file itself is usually created by the binding process.

Inproc (inproc://<name>): Intra-process communication. This is for different threads within the same process.

  • When to use: For extreme low-latency communication between threads that share the same memory space. This is the fastest possible transport ZeroMQ offers.
  • Internal workings: ZeroMQ manages message passing directly between threads within the same process’s memory. No kernel involvement, no socket creation in the OS sense. It’s a direct, in-memory queue.
  • Levers: You provide a unique name for the inproc endpoint. All threads within the process that want to communicate must use the same name and, crucially, the same zmq.Context object; inproc names are scoped to a context. inproc://my_thread_comm is a typical name. (In libzmq versions before 4.0, the bind side also had to exist before any connect.)

The most surprising true thing about inproc is that it’s not just faster; it fundamentally changes the error model. Because it’s within a single process, the concept of a "connection refused" or network partition doesn’t exist in the same way. If your inproc socket is bound and another thread tries to connect, it will connect. Failures are almost exclusively logic errors in your application’s thread synchronization or message handling, not transport issues.

The next concept you’ll want to explore is how ZeroMQ handles message patterns beyond request-reply, like publish-subscribe.
