Creating Kafka Topic in Spring Boot Application

In this tutorial, you will learn how to create a new Kafka Topic in a Spring Boot application. Kafka topics can also be created automatically when a new message is sent to them. However, in production environments, topic auto-creation is usually turned off. This is why creating Kafka topics before sending messages to them is considered a good practice.

If you need help creating a new Spring Boot application, check out this tutorial: How to create a simple Spring Boot project.

If you are interested in video lessons then check my video course Apache Kafka for Event-Driven Spring Boot Microservices.

Create Configuration Class

The first step in creating a new Kafka topic in a Spring Boot application is to create a new configuration class. This is a simple Java class annotated with the @Configuration annotation.

import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaConfig {

}

Creating Kafka Topic

The second step is to create a new Java method that will actually create the new Kafka topic.

import java.util.Map;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaConfig {

    @Bean
    NewTopic createTopic() {
        return TopicBuilder.name("product-created-topic")
                .partitions(3)
                .replicas(3)
                .configs(Map.of("min.insync.replicas", "2"))
                .build();
    }
}

Let me walk you through this method line by line.

  1. Notice that this method is annotated with the @Bean annotation. This adds the NewTopic object that is created and returned from this method to the Spring application context, making it available to other application services and components. At application startup, Spring Boot's auto-configured KafkaAdmin picks up NewTopic beans like this one and creates the corresponding topics on the broker.
  2. The topic is created using the TopicBuilder class provided by Spring Kafka. The topic is given the name “product-created-topic” using the .name() method.
  3. The .partitions(3) method sets the number of partitions for the topic to 3. In Kafka, a topic can be divided into multiple partitions, which allows for parallel processing and scalability. By increasing the number of partitions, you can increase the throughput and parallelism of your Kafka system. If I create a topic with 3 partitions, like I do here, I will be able to start up to 3 instances of a consuming microservice in the same consumer group, each reading from its own partition.
  4. The .replicas(3) method sets the number of replicas for each partition to 3. In Kafka, a replica is a copy of a topic’s data that is stored on a different broker. So if I configure 3 replicas here, this topic’s data will be stored in 3 copies, which improves its durability. But this also means that I will need three brokers running in the Kafka cluster. If you are testing this code on a local computer with just one Kafka server, set the number of replicas to 1. It is also important to note that increasing the number of replicas increases the storage requirements and the network traffic between brokers, so it is good to keep this number reasonable. For example, suppose the data is so critical that you never want to lose it, so you set this number not to 3 but to three thousand. That would clearly be too much. It is good to find a balance between fault tolerance and resource utilization when deciding on the number of replicas for a topic.
  5. The .configs(Map.of("min.insync.replicas", "2")) method sets additional configuration options for the topic. In this case, it sets the "min.insync.replicas" configuration to 2, which means that at least 2 replicas must acknowledge a write for it to be considered successful.
    This topic has 3 replicas in total. With this configuration, when we send a message to this topic, at least 2 replicas must acknowledge that the data was stored successfully. If the number of in-sync replicas drops below this value, the producer will receive an exception. Because we have to wait for replicas to acknowledge the write, this makes writing to the topic a little slower, but it also guarantees that our data is stored more reliably.
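It is worth knowing that the broker enforces min.insync.replicas only for producers that send with acks=all. Here is a minimal sketch of how the two settings relate; the class name and the printed summary are just for illustration, while bootstrap.servers and acks are real Kafka producer property keys:

```java
import java.util.Map;

public class ProducerSettingsExample {

    public static void main(String[] args) {
        // Topic-level settings from this tutorial.
        int replicas = 3;
        int minInsyncReplicas = 2;

        // Hypothetical producer configuration. With acks=all, the broker
        // rejects a write unless at least min.insync.replicas replicas
        // are in sync and acknowledge it.
        Map<String, String> producerProps = Map.of(
                "bootstrap.servers", "localhost:9092",
                "acks", "all");

        // With these values the topic stays writable as long as no more
        // than (replicas - minInsyncReplicas) brokers are down.
        int toleratedFailures = replicas - minInsyncReplicas;
        System.out.println("acks=" + producerProps.get("acks"));
        System.out.println("tolerated broker failures: " + toleratedFailures);
    }
}
```

In other words, 3 replicas with min.insync.replicas set to 2 lets the topic survive one broker failure without refusing writes.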

For the latest documentation on how to work with Kafka in Spring, check out the Spring for Kafka official documentation.

Checking If Kafka Topic Exists

To check if the Kafka topic was successfully created, I can use the Kafka CLI. To do that:

  1. Open a new terminal window,
  2. Change directory to the Kafka folder, then
  3. Run the following command:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic product-created-topic

The --describe parameter prints detailed information about a Kafka topic: the number of partitions, the replication factor, and other configuration details.

Final words

I hope this tutorial was helpful to you. To learn more about Apache Kafka, check out my other Apache Kafka tutorials for beginners.