ZeroMQ's proxy devices aren't just simple bridges; they're active intermediaries that sit between sockets, buffering messages in each socket's queues and moving them according to socket type, readiness, and high-water marks.

Let’s see a FORWARDER device in action. Imagine you have a publisher on tcp://127.0.0.1:5555 that sends out stock ticks, and you want to relay these ticks to multiple subscribers.

# publisher.py
import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://127.0.0.1:5555")
print("Publisher started on tcp://127.0.0.1:5555")

while True:
    symbol = "AAPL"
    price = 170.50 + (time.time() % 10)
    message = f"{symbol} {price:.2f}"
    socket.send_string(message)
    print(f"Sent: {message}")
    time.sleep(1)

Now, the FORWARDER. This is the simplest proxy: it reads messages from one socket and writes them to the other, with no logic beyond socket readiness and each socket's own buffering. One caveat: because the frontend here is a plain SUB socket subscribed to everything, downstream subscriptions are not relayed to the publisher; an XSUB/XPUB pair would forward them upstream.

# forwarder.py
import zmq

context = zmq.Context()
frontend = context.socket(zmq.SUB)
frontend.connect("tcp://127.0.0.1:5555")
frontend.setsockopt_string(zmq.SUBSCRIBE, "") # Subscribe to all

backend = context.socket(zmq.PUB)
backend.bind("tcp://127.0.0.1:5556")

print("FORWARDER: Connecting frontend (SUB) to tcp://127.0.0.1:5555")
print("FORWARDER: Binding backend (PUB) to tcp://127.0.0.1:5556")

# zmq.proxy() blocks here, pumping messages until the context terminates
try:
    zmq.proxy(frontend, backend)
except zmq.ContextTerminated:
    pass

print("FORWARDER: Shutting down")
frontend.close()
backend.close()
context.term()

And a subscriber:

# subscriber.py
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://127.0.0.1:5556") # Connect to the forwarder's backend
socket.setsockopt_string(zmq.SUBSCRIBE, "") # Subscribe to all

print("Subscriber connected to tcp://127.0.0.1:5556")

while True:
    message = socket.recv_string()
    print(f"Received: {message}")

When you run publisher.py, then forwarder.py, then subscriber.py, you’ll see the stock ticks flowing from the publisher, through the forwarder, and to the subscriber. The forwarder isn’t doing much here; it’s just passing messages along as soon as the SUB socket receives them and the PUB socket is ready to send them.
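The same relay can be built from an XSUB/XPUB pair, which additionally forwards subscription frames upstream so the publisher only sends what someone has asked for. Here is a minimal self-contained sketch over inproc, runnable in one process; the endpoint names and the tick message are illustrative, not from the example above:

```python
import threading
import time
import zmq

ctx = zmq.Context()

# Forwarder: XSUB faces publishers, XPUB faces subscribers. Unlike a
# plain SUB/PUB pair, this relays subscription frames upstream.
frontend = ctx.socket(zmq.XSUB)
frontend.bind("inproc://pub-side")
backend = ctx.socket(zmq.XPUB)
backend.bind("inproc://sub-side")
threading.Thread(target=zmq.proxy, args=(frontend, backend), daemon=True).start()

pub = ctx.socket(zmq.PUB)
pub.connect("inproc://pub-side")
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://sub-side")
sub.setsockopt_string(zmq.SUBSCRIBE, "")

time.sleep(0.3)  # give the subscription time to travel up to the publisher
pub.send_string("AAPL 171.23")
tick = sub.recv_string()
print(f"Relayed: {tick}")
```

The short sleep matters: with PUB/SUB, a message published before the subscription has propagated through the proxy is silently dropped.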

The STREAMER device is the pipeline counterpart of the FORWARDER. It's designed for unidirectional message flow, typically from task producers to a pool of workers: a PULL frontend collects messages from any number of PUSH clients, and a PUSH backend fair-distributes them to the connected PULL workers. It offers no delivery guarantees beyond what PUSH/PULL already provide; it's a dispatcher, not a broker. A common use case is fanning tasks out from many producers to many consumers through one well-known pair of endpoints.

# streamer_device.py
import zmq

context = zmq.Context()
# Frontend: PULL socket that collects tasks from PUSH clients
frontend = context.socket(zmq.PULL)
frontend.bind("tcp://127.0.0.1:5557")

# Backend: PUSH socket that fair-distributes tasks to PULL workers
backend = context.socket(zmq.PUSH)
backend.bind("tcp://127.0.0.1:5558")

print("STREAMER DEVICE: Binding frontend (PULL) to tcp://127.0.0.1:5557")
print("STREAMER DEVICE: Binding backend (PUSH) to tcp://127.0.0.1:5558")

# zmq.proxy() blocks here until the context terminates
try:
    zmq.proxy(frontend, backend)
except zmq.ContextTerminated:
    pass

print("STREAMER DEVICE: Shutting down")
frontend.close()
backend.close()
context.term()

# streamer_client.py
import zmq

context = zmq.Context()
# Push tasks into the device's frontend
client = context.socket(zmq.PUSH)
client.connect("tcp://127.0.0.1:5557")

print("STREAMER CLIENT: Connecting (PUSH) to tcp://127.0.0.1:5557")

for task_num in range(10):
    client.send_string(f"task-{task_num}")

client.close()
context.term()

# streamer_worker.py
import zmq

context = zmq.Context()
# Pull tasks from the device's backend
worker = context.socket(zmq.PULL)
worker.connect("tcp://127.0.0.1:5558")

print("STREAMER WORKER: Connecting (PULL) to tcp://127.0.0.1:5558")

while True:
    task = worker.recv_string()
    print(f"Worker received: {task}")

In this STREAMER example, the device binds both endpoints, so clients and workers only ever connect: producers PUSH into tcp://127.0.0.1:5557, and workers PULL from tcp://127.0.0.1:5558. zmq.proxy handles the actual forwarding. This is useful when you have a pool of workers behind a single entry point and want incoming tasks spread across them: the PULL frontend fair-queues messages from all producers, and the PUSH backend round-robins them to whichever workers are connected. It's a strictly unidirectional pipeline; there is no reply path.
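The canonical streamer pipeline (a PULL frontend fed by PUSH producers, a PUSH backend feeding PULL workers) can be exercised in a single process over inproc, which makes the direction of flow easy to see. Endpoint names here are illustrative:

```python
import threading
import zmq

ctx = zmq.Context()

# The streamer device: PULL faces the producers, PUSH faces the workers.
frontend = ctx.socket(zmq.PULL)
frontend.bind("inproc://tasks-in")
backend = ctx.socket(zmq.PUSH)
backend.bind("inproc://tasks-out")
threading.Thread(target=zmq.proxy, args=(frontend, backend), daemon=True).start()

producer = ctx.socket(zmq.PUSH)
producer.connect("inproc://tasks-in")
worker = ctx.socket(zmq.PULL)
worker.connect("inproc://tasks-out")

for i in range(3):
    producer.send_string(f"task-{i}")

# A single worker receives everything, in order
received = [worker.recv_string() for _ in range(3)]
print(received)
```

With several workers connected, the backend PUSH socket would round-robin the tasks among them instead.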

The QUEUE device is the most complex, designed for load balancing and message queuing. It uses a ROUTER socket to accept incoming messages and a DEALER socket to distribute them to a pool of workers. When a worker finishes a task, it sends a reply back through the DEALER to the QUEUE device, which then routes it back to the original client via the ROUTER.

# queue_worker.py
import zmq
import time

context = zmq.Context()
worker = context.socket(zmq.DEALER)
# Connect to the QUEUE device's backend
worker.connect("tcp://127.0.0.1:5559")

print("Worker started, connecting to QUEUE at tcp://127.0.0.1:5559")

while True:
    # Each request arrives as [client identity, empty delimiter, payload]:
    # the ROUTER frontend prepends the identity frame, and the REQ client
    # contributes the empty delimiter.
    identity, empty, message = worker.recv_multipart()
    print(f"Worker received: {message.decode()}")

    # Simulate work
    time.sleep(1)
    reply = f"Processed: {message.decode()}".encode()

    # Echo the envelope back unchanged so the ROUTER can route the reply
    worker.send_multipart([identity, empty, reply])
    print(f"Worker sent reply for {message.decode()}")

# queue_device.py
import zmq

context = zmq.Context()
# Frontend: ROUTER socket to receive client requests
frontend = context.socket(zmq.ROUTER)
frontend.bind("tcp://127.0.0.1:5555")

# Backend: DEALER socket to send requests to workers
backend = context.socket(zmq.DEALER)
backend.bind("tcp://127.0.0.1:5559")

print("QUEUE DEVICE: Binding frontend (ROUTER) to tcp://127.0.0.1:5555")
print("QUEUE DEVICE: Binding backend (DEALER) to tcp://127.0.0.1:5559")

# Start the proxy; zmq.proxy() blocks here until the context terminates
try:
    zmq.proxy(frontend, backend)
except zmq.ContextTerminated:
    pass

print("QUEUE DEVICE: Shutting down")
frontend.close()
backend.close()
context.term()

# queue_client.py
import zmq
import time
import random

context = zmq.Context()
# Client sends requests to the QUEUE device's frontend
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

print("Client connected to QUEUE at tcp://127.0.0.1:5555")

for request_num in range(10):
    request_id = f"Req-{request_num}"
    print(f"Client sending: {request_id}")
    client.send_string(request_id)

    # Wait for reply
    reply = client.recv_string()
    print(f"Client received: {reply}")
    time.sleep(random.uniform(0.1, 0.5)) # Stagger requests

print("Client finished.")
client.close()
context.term()

When you run queue_device.py, then a few instances of queue_worker.py, and finally queue_client.py, you'll observe the client sending requests. The device receives each request on its ROUTER socket and forwards it to an available worker via its DEALER socket; when the worker replies, the device routes the reply back to the original client through the ROUTER, preserving the request-reply correlation. In effect, it distributes the workload among the workers, acting as a central dispatcher.
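zmq.proxy also accepts an optional third capture socket that receives a copy of every message flowing through the device, in both directions, which is handy for watching the identity envelopes at work. A self-contained sketch over inproc; all endpoint names are illustrative:

```python
import threading
import zmq

ctx = zmq.Context()

# A miniature QUEUE device, with a capture socket attached.
frontend = ctx.socket(zmq.ROUTER)
frontend.bind("inproc://clients")
backend = ctx.socket(zmq.DEALER)
backend.bind("inproc://workers")

# Every message the proxy forwards is also sent to this PAIR socket.
capture = ctx.socket(zmq.PAIR)
capture.bind("inproc://capture")
tap = ctx.socket(zmq.PAIR)
tap.connect("inproc://capture")

threading.Thread(target=zmq.proxy, args=(frontend, backend, capture),
                 daemon=True).start()

client = ctx.socket(zmq.REQ)
client.connect("inproc://clients")
worker = ctx.socket(zmq.REP)
worker.connect("inproc://workers")

client.send_string("ping")
request_seen = worker.recv_string()   # payload as the worker sees it
worker.send_string("pong")
reply_seen = client.recv_string()     # reply routed back to the client

# First captured message: [client identity, empty delimiter, payload]
captured = tap.recv_multipart()
print(request_seen, reply_seen, captured[-1])
```

The captured frames make the envelope visible: the ROUTER's identity frame and the REQ socket's empty delimiter travel in front of the payload.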

A crucial, often overlooked, detail is where the request-reply correlation actually lives. The QUEUE device doesn't keep a table of outstanding requests: the ROUTER socket prepends the client's identity as a frame on each incoming message, that envelope travels with the request to the worker, and the worker must echo it back with the reply. The device simply routes whatever identity frame comes back. If a worker crashes before replying, the envelope, and with it the request, is lost; the device has no inherent mechanism to detect the failure, recover the message, or reassign the task. It's fire-and-forget distribution until a reply happens to arrive.
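That means timeout handling belongs in the application. One common mitigation, sketched here against the queue frontend endpoint from the example (the 1-second timeout is an arbitrary choice), is to poll for the reply instead of blocking in recv:

```python
import zmq

ctx = zmq.Context()
client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")  # the QUEUE device's frontend
client.send_string("Req-0")             # queued even if nothing answers

poller = zmq.Poller()
poller.register(client, zmq.POLLIN)

# Wait up to 1 second for the reply; if the worker handling the request
# died, give up instead of blocking forever in recv_string().
events = dict(poller.poll(timeout=1000))
got_reply = client in events
if got_reply:
    print("Reply:", client.recv_string())
else:
    print("No reply; abandoning request")
    client.close(linger=0)  # the REQ socket is stuck mid-cycle; recreate it to retry
```

Note the last line: after an abandoned request, a REQ socket is still waiting for a reply and will refuse to send again, so the usual recovery is to close it and open a fresh one.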

The next logical step after mastering these proxy devices is understanding how to implement custom routing logic, often by writing your own proxy in application code rather than relying on the built-in devices.
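As a first taste of that, here is a minimal hand-rolled sketch: the same ROUTER/DEALER pump that zmq.proxy implements, but written with zmq.Poller so your own logic can sit between recv and send. The "[seen]" stamp stands in for whatever custom routing or rewriting you need, and the inproc endpoints are illustrative:

```python
import threading
import zmq

ctx = zmq.Context()
frontend = ctx.socket(zmq.ROUTER)
frontend.bind("inproc://fe")
backend = ctx.socket(zmq.DEALER)
backend.bind("inproc://be")

def custom_proxy():
    # Equivalent of zmq.proxy(frontend, backend), with a hook for
    # application logic between receiving and forwarding.
    poller = zmq.Poller()
    poller.register(frontend, zmq.POLLIN)
    poller.register(backend, zmq.POLLIN)
    while True:
        events = dict(poller.poll())
        if frontend in events:
            frames = frontend.recv_multipart()
            frames[-1] = b"[seen] " + frames[-1]  # custom logic goes here
            backend.send_multipart(frames)
        if backend in events:
            frontend.send_multipart(backend.recv_multipart())

threading.Thread(target=custom_proxy, daemon=True).start()

client = ctx.socket(zmq.REQ)
client.connect("inproc://fe")
worker = ctx.socket(zmq.REP)
worker.connect("inproc://be")

client.send_string("hello")
task = worker.recv_string()     # payload stamped by the proxy
worker.send_string("ok")
reply = client.recv_string()    # reply routed back untouched
print(task, "/", reply)
```

Only the payload frame is touched; the identity and delimiter frames are forwarded unchanged, which is what keeps the ROUTER able to route replies.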

Want structured learning?

Take the full ZeroMQ course →