Fair queuing in ZeroMQ is a receive-side behavior: a socket with multiple connected peers reads their incoming messages in a round-robin fashion, so no single peer can starve the others. Its send-side counterpart is round-robin distribution of outgoing messages, and together these two behaviors are what let a broker spread work evenly across connected workers.

Let’s see this in action with a simple setup. We’ll have one publisher sending messages and two subscribers that will share the load.

Publisher (pub.py):

import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5555")

print("Publisher started, sending messages...")

for i in range(100):
    message = f"Message {i}"
    print(f"Sending: {message}")
    socket.send_string(message)
    time.sleep(0.1)

print("Finished sending messages.")
socket.close()
context.term()

Subscriber 1 (sub1.py):

import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5555")
socket.setsockopt_string(zmq.SUBSCRIBE, "") # Subscribe to all messages

print("Subscriber 1 connected, waiting for messages...")

while True:
    message = socket.recv_string()
    print(f"Subscriber 1 received: {message}")

Subscriber 2 (sub2.py):

import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:5555")
socket.setsockopt_string(zmq.SUBSCRIBE, "") # Subscribe to all messages

print("Subscriber 2 connected, waiting for messages...")

while True:
    message = socket.recv_string()
    print(f"Subscriber 2 received: {message}")

If you run pub.py and then sub1.py and sub2.py in separate terminals, you'll observe that both subscribers print every message: each one receives its own copy. That is ZeroMQ's default behavior when multiple subscribers connect to a single publisher with the PUB/SUB pattern. It broadcasts rather than distributes, which isn't fair queuing at all.

The true power of fair queuing in ZeroMQ isn't in the PUB/SUB pattern itself, but in how you structure your application with different socket types. To achieve true distribution of work, where each message goes to exactly one consumer, you'd typically use PUSH/PULL for a simple pipeline, or ROUTER/DEALER when replies need to find their way back to the original sender.
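For a quick, self-contained sketch of what one-message-to-one-consumer distribution looks like, here is a minimal PUSH/PULL example. It uses the inproc transport so everything runs in one process; the endpoint name is illustrative.

```python
import zmq

context = zmq.Context()

# One PUSH sender; its outgoing messages are round-robined
# across all connected PULL peers.
sender = context.socket(zmq.PUSH)
sender.bind("inproc://tasks")

# Two PULL receivers sharing the load.
receivers = [context.socket(zmq.PULL) for _ in range(2)]
for r in receivers:
    r.setsockopt(zmq.RCVTIMEO, 1000)  # fail fast instead of blocking forever
    r.connect("inproc://tasks")

for i in range(4):
    sender.send_string(f"Task {i}")

# Each receiver ends up with exactly half of the messages.
received = [[r.recv_string() for _ in range(2)] for r in receivers]
print(received)

sender.close()
for r in receivers:
    r.close()
context.term()
```

Unlike PUB/SUB, no receiver sees all four messages; each message is consumed exactly once.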

Consider the ROUTER/DEALER pattern. A ROUTER socket acts as the client-facing dispatch point of the broker, a DEALER socket faces the worker pool, and the workers themselves connect with DEALER sockets to consume the messages.

Broker (broker.py):

import zmq

context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.DEALER)

frontend.bind("tcp://*:5559") # For clients to send requests
backend.bind("tcp://*:5560") # For workers to connect

print("Broker started, waiting for clients and workers...")

zmq.proxy(frontend, backend) # This is the magic!

frontend.close()
backend.close()
context.term()

Worker (worker.py):

import zmq
import time

context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect("tcp://localhost:5560") # Connect to the backend of the broker

print("Worker connected, ready to process tasks...")

while True:
    # The frontend ROUTER prepended the client's routing identity;
    # keep it so the reply can be routed back to the right client.
    identity, message = socket.recv_multipart()
    print(f"Worker received task: {message.decode()}")
    time.sleep(1) # Simulate work
    response = f"Processed: {message.decode()}".encode()
    socket.send_multipart([identity, response])
    print(f"Worker sent response for: {message.decode()}")

Client (client.py):

import zmq

context = zmq.Context()
client_socket = context.socket(zmq.DEALER)
client_socket.connect("tcp://localhost:5559") # Connect to the frontend of the broker

print("Client connected, sending tasks...")

for i in range(10):
    task = f"Task {i}".encode()
    # No explicit identity frame is needed: the broker's ROUTER socket
    # tags every incoming message with this socket's routing identity
    # and uses that tag to deliver the reply.
    client_socket.send_multipart([task])
    print(f"Client sent: {task.decode()}")
    # Optionally, receive the reply; the ROUTER strips the identity
    # frame before delivery, so only the payload arrives:
    # reply, = client_socket.recv_multipart()
    # print(f"Client received reply: {reply.decode()}")

print("Finished sending tasks.")
client_socket.close()
context.term()

When you run broker.py, then multiple worker.py instances, and finally client.py, you'll see the tasks being distributed among the workers. The zmq.proxy(frontend, backend) call in the broker is the core of this pattern. It receives messages from the ROUTER (frontend) and passes them to the backend DEALER, which hands them to the connected workers in a round-robin fashion. When a worker finishes a task and sends a reply back, the ROUTER knows which client to deliver it to because the worker echoes back the identity frame that the ROUTER attached to the original request.
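To make the identity mechanics concrete, here is a minimal ROUTER/DEALER pair over inproc. The endpoint name and the fixed b"worker-1" identity are illustrative; normally ZeroMQ assigns a random identity to each connecting peer.

```python
import zmq

context = zmq.Context()

router = context.socket(zmq.ROUTER)
router.bind("inproc://demo")

dealer = context.socket(zmq.DEALER)
dealer.setsockopt(zmq.IDENTITY, b"worker-1")  # fixed identity, for clarity
dealer.connect("inproc://demo")

dealer.send(b"hello")
# The ROUTER prepends the sender's identity frame to every message:
frames = router.recv_multipart()
print(frames)  # [b'worker-1', b'hello']

# To reply, put the identity frame back in front; the ROUTER strips
# it and routes the payload to that specific peer.
router.send_multipart([b"worker-1", b"reply"])
reply = dealer.recv()
print(reply)  # b'reply'

router.close()
dealer.close()
context.term()
```

This identity envelope is exactly what travels through the broker: the worker never interprets it, it just echoes it back.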

The zmq.proxy function is a highly optimized, low-level operation that performs the receive-and-send loop for you. It manages the internal queues of the ROUTER and DEALER sockets, dispatching messages from the ROUTER's incoming queue out through the backend DEALER, which cycles through the connected workers one by one. This prevents any single worker from monopolizing the task stream and ensures that tasks are spread evenly across the pool.
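The forwarding loop can be sketched by hand with a Poller. This is illustrative only: the real zmq.proxy runs forever and is implemented in C, while the max_messages cutoff here is an assumption added so the sketch can terminate.

```python
import zmq

def simple_proxy(frontend: zmq.Socket, backend: zmq.Socket, max_messages: int) -> None:
    """Shuttle whole multipart messages between two sockets,
    roughly what zmq.proxy(frontend, backend) does internally."""
    poller = zmq.Poller()
    poller.register(frontend, zmq.POLLIN)
    poller.register(backend, zmq.POLLIN)
    forwarded = 0
    while forwarded < max_messages:
        events = dict(poller.poll())
        if frontend in events:
            # Client -> worker direction: the ROUTER has already
            # prepended the client identity; just pass frames along.
            backend.send_multipart(frontend.recv_multipart())
            forwarded += 1
        if backend in events:
            # Worker -> client direction: the ROUTER uses the echoed
            # identity frame to route the reply.
            frontend.send_multipart(backend.recv_multipart())
            forwarded += 1
```

In real code, prefer the built-in zmq.proxy; it also handles things like capture sockets and steerable shutdown variants.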

The most surprising aspect of zmq.proxy is that it handles the fairness implicitly. The frontend ROUTER reads from its connected clients fairly and queues their messages; zmq.proxy pulls from that queue and writes to the backend DEALER; and the DEALER, by its nature, round-robins its outgoing messages among the pool of connected workers. Nothing in zmq.proxy is explicitly configured as "round-robin"; the fairness falls out of how ROUTER and DEALER sockets behave.
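That outgoing round-robin of a DEALER socket can be observed directly. In this sketch, a bound DEALER stands in for the broker's backend and two connected DEALERs stand in for workers; the inproc endpoint name is illustrative.

```python
import zmq

context = zmq.Context()

# Stand-in for the broker's backend socket.
backend = context.socket(zmq.DEALER)
backend.bind("inproc://pool")

# Two stand-in workers connected to it.
workers = [context.socket(zmq.DEALER) for _ in range(2)]
for w in workers:
    w.setsockopt(zmq.RCVTIMEO, 1000)  # fail fast instead of blocking forever
    w.connect("inproc://pool")

# The DEALER alternates its outgoing messages among connected peers.
for i in range(4):
    backend.send_string(f"task {i}")

distributed = [[w.recv_string() for _ in range(2)] for w in workers]
print(distributed)

backend.close()
for w in workers:
    w.close()
context.term()
```

Each stand-in worker receives exactly two of the four tasks, with no routing logic written anywhere.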

The next logical step is exploring how to handle message priorities or different types of tasks within this fair queuing mechanism.
