How To Set Up a Multi-Node Kafka Cluster using KRaft

Setting up a multi-node Kafka cluster using KRaft (Kafka Raft) mode involves several steps. KRaft mode enables Kafka to operate without the need for Apache ZooKeeper, streamlining the architecture and improving management. Here’s a comprehensive guide to setting up a multi-node Kafka cluster using KRaft.

Prerequisites
  1. Java Development Kit (JDK): Ensure you have JDK 8 or later installed on all nodes.
  2. Kafka Distribution: Download Kafka from the official Kafka website.
  3. Network Configuration: Ensure all nodes can communicate with each other.
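Before proceeding, a quick preflight check on each node can save debugging time later. A minimal sketch that only verifies a Java runtime is on the PATH; extend it with connectivity checks (e.g. reaching the other nodes on ports 9092/9093) for your own hostnames:

```shell
#!/bin/sh
# Preflight check: confirm a Java runtime is available before installing Kafka.
# A minimal sketch; add network checks for your own node hostnames.
if command -v java >/dev/null 2>&1; then
  echo "java: found"
  java -version 2>&1 | head -n 1
else
  echo "java: missing - install JDK 8 or later"
fi
```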
Step-by-Step Setup

1. Download and Extract Kafka

Download Kafka on each node and extract it.

        
            wget https://downloads.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
            tar -xzf kafka_2.13-3.0.0.tgz
            cd kafka_2.13-3.0.0            
        
    

2. Configure Each Broker for KRaft

On each node, edit the server.properties file in the config directory.

        
            # Roles this node performs (combined broker and controller)
            process.roles=broker,controller
            
            # Unique ID for this node (use 2 and 3 on the other nodes).
            # Note: comments must be on their own line; a trailing "# ..." after
            # a value becomes part of the value in a Java properties file.
            node.id=1
            
            # Controller quorum voters (identical on all nodes); adjust hostnames to your environment
            controller.quorum.voters=1@node1:9093,2@node2:9093,3@node3:9093
            
            # Metadata and data log directory (use a persistent path in production, not /tmp)
            log.dirs=/tmp/kraft-combined-logs
            
            # Listeners: client/broker traffic on 9092, controller traffic on 9093
            # (replace node1 with this node's hostname)
            listeners=PLAINTEXT://node1:9092,CONTROLLER://node1:9093
            advertised.listeners=PLAINTEXT://node1:9092
            controller.listener.names=CONTROLLER
            listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
            inter.broker.listener.name=PLAINTEXT
            
            # Other configurations
            num.network.threads=3
            num.io.threads=8
            socket.send.buffer.bytes=102400
            socket.receive.buffer.bytes=102400
            socket.request.max.bytes=104857600
            log.retention.hours=168
            log.segment.bytes=1073741824
            log.retention.check.interval.ms=300000
            
            # Do not set zookeeper.connect: KRaft mode replaces ZooKeeper entirely            
        
    

Ensure node.id and the hostnames in listeners and advertised.listeners are unique and correct on each node, while controller.quorum.voters must be identical everywhere.
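Since only node.id and the hostname differ between nodes, it can help to generate the per-node files from one template instead of editing each by hand. A minimal sketch, assuming three nodes named node1 through node3 (the host list and output directory are placeholders; adjust them to your environment):

```shell
#!/bin/sh
# Generate one server.properties per node from a shared template.
# Assumption: three nodes named node1..node3 with node IDs 1..3;
# edit the host list below to match your environment.
set -e

VOTERS="1@node1:9093,2@node2:9093,3@node3:9093"
OUTDIR="generated-configs"
mkdir -p "$OUTDIR"

id=1
for host in node1 node2 node3; do
  # The unquoted EOF heredoc expands $id, $host, and $VOTERS per node.
  cat > "$OUTDIR/server-$id.properties" <<EOF
process.roles=broker,controller
node.id=$id
controller.quorum.voters=$VOTERS
log.dirs=/tmp/kraft-combined-logs
listeners=PLAINTEXT://$host:9092,CONTROLLER://$host:9093
advertised.listeners=PLAINTEXT://$host:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
inter.broker.listener.name=PLAINTEXT
EOF
  id=$((id + 1))
done

ls "$OUTDIR"
```

Copy each generated file to the matching node as config/server.properties before continuing.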

3. Initialize the KRaft Metadata

Generate a cluster UUID once, then format the storage directory on every node using that same UUID.

        
            # Generate the cluster UUID (run once, on any node)
            bin/kafka-storage.sh random-uuid
            
            # Format the storage directory (run on every node, with the same UUID)
            bin/kafka-storage.sh format -t <uuid> -c config/server.properties
        
    

Replace <uuid> with the value printed by the random-uuid command.
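If you administer the nodes over SSH, the format step can be fanned out from one machine so every node is guaranteed to use the same cluster UUID. A sketch under stated assumptions: passwordless SSH to hypothetical hosts node1..node3 and Kafka unpacked at $KAFKA_HOME on each of them; it defaults to a dry run that only records the planned commands in format-commands.txt:

```shell
#!/bin/sh
# Fan out the storage-format step to every node with one shared cluster UUID.
# Assumptions (hypothetical values): passwordless ssh to node1..node3 and
# Kafka unpacked at $KAFKA_HOME on each node.
CLUSTER_ID="${CLUSTER_ID:-REPLACE_WITH_UUID}"
KAFKA_HOME="${KAFKA_HOME:-/opt/kafka}"
DRY_RUN="${DRY_RUN:-1}"   # default: only record the commands

: > format-commands.txt
for host in node1 node2 node3; do
  cmd="$KAFKA_HOME/bin/kafka-storage.sh format -t $CLUSTER_ID -c $KAFKA_HOME/config/server.properties"
  echo "ssh $host $cmd" >> format-commands.txt
  if [ "$DRY_RUN" != "1" ]; then
    ssh "$host" "$cmd"
  fi
done

echo "planned $(grep -c '^ssh' format-commands.txt) format commands"
```

Review format-commands.txt, then rerun with DRY_RUN=0 and CLUSTER_ID set to the UUID you generated.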

4. Start the Kafka Brokers

Start the Kafka broker on each node.

        
            bin/kafka-server-start.sh config/server.properties
        
    

Repeat this step for all nodes.
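For anything beyond a quick test, running the broker under a process supervisor is more robust than a foreground shell. A minimal systemd unit sketch (the /opt/kafka paths and the kafka service user are assumptions; adjust them to your layout):

```ini
[Unit]
Description=Apache Kafka broker (KRaft mode)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Install it as /etc/systemd/system/kafka.service, then enable it with systemctl enable --now kafka on each node.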

5. Verify Cluster Setup

Check the logs to ensure all brokers have started successfully and have formed a cluster.

        
            tail -f logs/server.log
        
    

You should see messages indicating that the brokers have registered with the quorum and that a controller has been elected.

6. Create Topics and Produce/Consume Messages

With the cluster set up, you can create topics and start producing/consuming messages.

        
            # Create a topic
            bin/kafka-topics.sh --create --topic test-topic --bootstrap-server node1:9092 --partitions 3 --replication-factor 3
            
            # Produce messages
            bin/kafka-console-producer.sh --topic test-topic --bootstrap-server node1:9092
            
            # Consume messages
            bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server node1:9092            
        
    

Summary

By following these steps, you’ve set up a multi-node Kafka cluster using KRaft mode. The key aspects include configuring unique node.id and listeners for each broker, initializing the KRaft metadata, and verifying the cluster formation. This setup ensures Kafka can run without ZooKeeper, simplifying management and enhancing scalability.
