Spring Boot makes setting up Kafka producers and consumers surprisingly straightforward. Most of the underlying complexity (connection management, polling loops, serialization, offset commits) is abstracted away, letting you focus on your application logic rather than intricate network configuration and offset bookkeeping.

Let’s get a Kafka producer and consumer up and running in a Spring Boot application.

First, ensure you have a Kafka broker running. For local development, docker-compose is your best friend.

# docker-compose.yml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://kafka:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
    depends_on:
      - zookeeper

With this docker-compose.yml in place, run docker-compose up -d. This starts Zookeeper and Kafka. Note that no topic is pre-created here: with auto-topic creation enabled (Kafka's default), my-topic will be created automatically, with one partition and a replication factor of one (the broker defaults in this setup), the first time the application produces to or consumes from it.
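Relying on broker-side auto-creation is fine for local experiments, but you can also have the application declare the topic itself: Spring Boot's auto-configured KafkaAdmin creates any NewTopic beans it finds at application startup. A minimal sketch (the class name is illustrative):

```java
// TopicConfig.java -- illustrative name
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfig {

    // Spring Boot's auto-configured KafkaAdmin picks up NewTopic beans
    // and creates the corresponding topics on the broker at startup.
    @Bean
    public NewTopic myTopic() {
        return TopicBuilder.name("my-topic")
                .partitions(1)
                .replicas(1)
                .build();
    }
}
```

This makes the partition count and replication factor explicit in code instead of depending on broker defaults.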

Now, for our Spring Boot application. We need the spring-kafka dependency, which pulls in kafka-clients transitively; the explicit kafka-clients entry below is optional and only needed if you want to pin its version. In your pom.xml (for Maven):

<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.9</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.5.1</version>
</dependency>

In your application.properties (or application.yml):

# application.properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=my-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

spring.kafka.bootstrap-servers tells Spring where to find your Kafka broker. spring.kafka.consumer.group-id is crucial for consumer scaling; all consumers in the same group share the load for a given topic. auto-offset-reset=earliest means that if a consumer starts up and has no committed offset, it will begin reading from the very first message in the topic. The serializers and deserializers define how your messages (keys and values) are converted to bytes for transport and back again.
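Those String serializers are nothing exotic: they simply convert between a String and its UTF-8 bytes. A plain-Java sketch of the round trip (no Kafka classes involved; the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;

public class SerdeDemo {

    // Mirrors what StringSerializer does on the producer side: String -> UTF-8 bytes.
    static byte[] serialize(String value) {
        return value == null ? null : value.getBytes(StandardCharsets.UTF_8);
    }

    // Mirrors what StringDeserializer does on the consumer side: UTF-8 bytes -> String.
    static String deserialize(byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] wire = serialize("HelloKafka");
        System.out.println(wire.length);        // bytes on the wire
        System.out.println(deserialize(wire));  // back to a String
    }
}
```

Swapping in a JSON serializer later changes only this byte-conversion step; the rest of the pipeline is unaffected.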

Here’s a simple Kafka producer:

// KafkaProducer.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class KafkaProducer {

    private static final String TOPIC = "my-topic";

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String message) {
        kafkaTemplate.send(TOPIC, message);
        System.out.println("Sent message: " + message);
    }
}

The @Service annotation makes this a Spring bean. KafkaTemplate is the core Spring Kafka abstraction for sending messages. When you call send(TOPIC, message), Spring serializes the message and hands it to the underlying producer, which batches and sends it to the topic asynchronously; the println above therefore runs before the broker has actually acknowledged the record.
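Because send() is asynchronous (in spring-kafka 3.x it returns a CompletableFuture&lt;SendResult&lt;K, V&gt;&gt;), the reliable pattern is to attach a completion callback rather than assume success. A plain-Java sketch of that pattern, using a CompletableFuture as a stand-in for the one send() returns:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncSendDemo {

    // Attach a completion callback, as you would to the future returned by
    // kafkaTemplate.send(...), then wait for the ack (blocking is demo-only).
    static String awaitAck(CompletableFuture<String> ack) {
        return ack.whenComplete((result, ex) -> {
            if (ex == null) {
                System.out.println("Sent, broker ack: " + result);
            } else {
                System.err.println("Send failed: " + ex.getMessage());
            }
        }).join();
    }

    public static void main(String[] args) {
        // Stand-in for the real send: completes asynchronously with record metadata.
        String meta = awaitAck(CompletableFuture.supplyAsync(() -> "partition=0, offset=0"));
        System.out.println("main sees: " + meta);
    }
}
```

In the real producer you would attach whenComplete to kafkaTemplate.send(TOPIC, message) and log only inside the callback.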

And here’s the corresponding Kafka consumer:

// KafkaConsumer.java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    @KafkaListener(topics = "my-topic", groupId = "my-group")
    public void listen(String message) {
        System.out.println("Received message: " + message);
    }
}

The @KafkaListener annotation is the magic here. It tells Spring to create a Kafka consumer for the my-topic topic, belonging to the my-group consumer group. When a message arrives on my-topic, the listen method will be invoked with the message content. Spring handles the polling, deserialization, and commits the offset automatically after the listen method completes successfully.

To test this, you can create a simple REST controller that uses the KafkaProducer:

// ProducerController.java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProducerController {

    @Autowired
    private KafkaProducer producer;

    @GetMapping("/publish")
    public void publish(@RequestParam("message") String message) {
        producer.sendMessage(message);
    }
}

Start your Spring Boot application. Then, make a GET request to http://localhost:8080/publish?message=HelloKafka. You should see "Sent message: HelloKafka" and, shortly after, "Received message: HelloKafka" in your Spring Boot application’s logs, confirming the message flowed from the producer through Kafka to the consumer.

The true power comes when you run multiple instances of your Spring Boot application. Because they share the same group-id, Kafka will automatically distribute the partitions of my-topic among them, ensuring load balancing and fault tolerance. If one instance goes down, Kafka will rebalance the partitions to the remaining instances.
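You can picture that distribution as each partition being owned by exactly one consumer in the group. A simplified, range-style sketch of the idea in plain Java (Kafka's real assignors are more sophisticated, and this models only the assignment, not the rebalance protocol):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AssignmentDemo {

    // Assign partitions 0..numPartitions-1 across consumers, range style:
    // each consumer gets a contiguous block; earlier consumers absorb the remainder.
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        int perConsumer = numPartitions / consumers.size();
        int extra = numPartitions % consumers.size();
        int partition = 0;
        for (int i = 0; i < consumers.size(); i++) {
            int count = perConsumer + (i < extra ? 1 : 0);
            List<Integer> block = new ArrayList<>();
            for (int j = 0; j < count; j++) {
                block.add(partition++);
            }
            assignment.put(consumers.get(i), block);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Three app instances in "my-group" consuming a six-partition topic.
        System.out.println(assign(List.of("app-1", "app-2", "app-3"), 6));
        // If app-3 dies, a rebalance redistributes its partitions.
        System.out.println(assign(List.of("app-1", "app-2"), 6));
    }
}
```

This also shows why a one-partition topic like my-topic caps out at one active consumer per group: there is only one partition to hand out.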

The most misunderstood aspect of this setup is the automatic offset commit. By default, spring-kafka commits offsets after the listener method successfully returns. This means that if your listener method throws an exception, the offset for that message will not be committed, and the message will be redelivered to another consumer (or the same one after a rebalance). This is a form of at-least-once delivery. If you need exactly-once processing, it requires more advanced configurations like Kafka transactions.
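That redelivery behaviour is easy to model in a toy consumer loop (no Kafka involved): the offset only advances when the handler succeeds, so a message whose first delivery fails partway through its side effects ends up processed twice:

```java
import java.util.ArrayList;
import java.util.List;

public class AtLeastOnceDemo {

    // Deliver messages starting from the first uncommitted offset; the offset
    // advances (is "committed") only when the handler completes successfully.
    static List<String> consume(List<String> log, int failOnFirstDeliveryAt) {
        List<String> processed = new ArrayList<>();
        int committedOffset = 0;
        boolean failedOnce = false;
        while (committedOffset < log.size()) {
            String message = log.get(committedOffset);
            try {
                if (committedOffset == failOnFirstDeliveryAt && !failedOnce) {
                    failedOnce = true;
                    processed.add(message); // side effect happened before the crash ...
                    throw new RuntimeException("transient failure");
                }
                processed.add(message);     // handler succeeded ...
                committedOffset++;          // ... so the offset is committed
            } catch (RuntimeException e) {
                // offset NOT committed -> the same message is delivered again
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        // The message at offset 1 fails on first delivery and is redelivered,
        // so its side effect occurs twice: at-least-once in action.
        System.out.println(consume(List.of("m0", "m1", "m2"), 1));
    }
}
```

This is why at-least-once consumers should make their handlers idempotent: a redelivered message must be safe to process again.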

The next natural step is to explore handling different message formats, like JSON, and implementing more robust error handling strategies for your consumers.
