Setting up a multi-node Kafka cluster using KRaft (Kafka Raft) mode involves several steps. KRaft mode enables Kafka to operate without the need for Apache ZooKeeper, streamlining the architecture and improving management. Here’s a comprehensive guide to setting up a multi-node Kafka cluster using KRaft.
Prerequisites
- Java Development Kit (JDK): Ensure you have JDK 8 or later installed on all nodes (Java 11 or 17 is recommended, since Java 8 is deprecated as of Kafka 3.0).
- Kafka Distribution: Download Kafka from the official Apache Kafka website.
- Network Configuration: Ensure all nodes can resolve one another's hostnames and reach each other on the broker and controller ports (9092 and 9093 in this guide).
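Before starting, it can help to confirm basic connectivity between the hosts. The hostnames node1, node2, and node3 are assumed throughout this guide; substitute your own:
# Run on each node against the other two
ping -c 3 node2
ping -c 3 node3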
1. Download and Extract Kafka
Download Kafka on each node and extract it. Note that KRaft was still a preview feature in Kafka 3.0.0; for production clusters, prefer a 3.3 or later release, where KRaft is production-ready, and adjust the version in the commands below accordingly.
wget https://downloads.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz
tar -xzf kafka_2.13-3.0.0.tgz
cd kafka_2.13-3.0.0
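As a quick sanity check that the extraction worked, you can print the version of the bundled tooling:
bin/kafka-topics.sh --version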
2. Configure the KRaft Cluster
Broker and Controller Configuration
On each node, edit the server.properties file in the config/kraft directory (the KRaft sample configuration that ships with Kafka 3.x).
# Roles this node plays; each node acts as both broker and controller
process.roles=broker,controller

# Unique node ID (use 2 and 3 on the other nodes)
node.id=1

# Controller quorum voters in the form id@host:port (identical on all nodes)
controller.quorum.voters=1@node1:9093,2@node2:9093,3@node3:9093

# KRaft metadata and data log directory (use a persistent path in production, not /tmp)
log.dirs=/tmp/kraft-combined-logs

# Listeners: one for clients and inter-broker traffic, one for the controller quorum
# (adjust the hostname on each node)
listeners=PLAINTEXT://node1:9092,CONTROLLER://node1:9093
advertised.listeners=PLAINTEXT://node1:9092
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT

# Other configurations
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

# Note: do not set zookeeper.connect; ZooKeeper is not used in KRaft mode
Ensure node.id, listeners, and advertised.listeners are unique and correctly configured on each node, and that controller.quorum.voters is identical on all nodes.
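For illustration, and assuming the second host is reachable as node2, its configuration would differ only in these lines:
node.id=2
listeners=PLAINTEXT://node2:9092,CONTROLLER://node2:9093
advertised.listeners=PLAINTEXT://node2:9092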
3. Format Storage and Start the Brokers
Generate a cluster ID once, then format the storage directory on every node using that same ID.
bin/kafka-storage.sh format -t <uuid> -c config/kraft/server.properties
Replace <uuid> with a cluster ID generated by bin/kafka-storage.sh random-uuid, which produces an ID in the base64 format Kafka expects (a plain uuidgen UUID may be rejected).
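A minimal sketch of this step, assuming you copy the generated ID to the other nodes (for example via a shell variable):
# On node1: generate the cluster ID once
CLUSTER_ID=$(bin/kafka-storage.sh random-uuid)
echo $CLUSTER_ID
# On every node (node1, node2, node3): format storage with the same ID
bin/kafka-storage.sh format -t $CLUSTER_ID -c config/kraft/server.properties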
Then start the Kafka broker on each node.
bin/kafka-server-start.sh config/kraft/server.properties
Repeat this step for all nodes.
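If you prefer not to keep a terminal attached to each broker, kafka-server-start.sh also accepts a -daemon flag that runs it in the background and writes output to the logs directory:
bin/kafka-server-start.sh -daemon config/kraft/server.properties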
4. Verify Cluster Setup
Check the logs to ensure all brokers have started successfully and formed a cluster.
tail -f logs/server.log
You should see messages indicating that the brokers are communicating and that a controller has been elected.
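You can also query the cluster directly. The first command below lists every broker that has joined; the second, available in Kafka 3.3 and later, reports the state of the controller quorum:
bin/kafka-broker-api-versions.sh --bootstrap-server node1:9092
bin/kafka-metadata-quorum.sh --bootstrap-server node1:9092 describe --status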
5. Create Topics and Produce/Consume Messages
With the cluster set up, you can create topics and start producing and consuming messages.
# Create a topic
bin/kafka-topics.sh --create --topic test-topic --bootstrap-server node1:9092 --partitions 3 --replication-factor 3
# Produce messages
bin/kafka-console-producer.sh --topic test-topic --bootstrap-server node1:9092
# Consume messages
bin/kafka-console-consumer.sh --topic test-topic --from-beginning --bootstrap-server node1:9092
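To confirm that partitions are replicated across all three brokers, describe the topic; each partition should show three replicas and a matching in-sync replica (ISR) set:
bin/kafka-topics.sh --describe --topic test-topic --bootstrap-server node1:9092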
Summary
By following these steps, you’ve set up a multi-node Kafka cluster using KRaft mode. The key aspects include configuring a unique node.id and listeners for each node, formatting the KRaft metadata storage on every node with a shared cluster ID, and verifying that the cluster has formed. This setup lets Kafka run without ZooKeeper, simplifying management and enhancing scalability.