A hands-on exploration of Apache Kafka fundamentals using Java, demonstrating producer-consumer patterns, message streaming, and event-driven architecture concepts.
## Table of Contents
- Overview
- Features
- Tech Stack
- Project Structure
- Getting Started
- Usage Examples
- Key Concepts Explored
- Kafka UI
- Learning Outcomes
## Overview
This R&D project explores Apache Kafka's core capabilities through practical Java implementations. It demonstrates fundamental messaging patterns including:
- Producer-Consumer Architecture: Basic message publishing and consumption
- Keyed Messages: Understanding partitioning and message ordering
- Asynchronous Processing: Callback-based message handling
- Containerized Infrastructure: Docker-based Kafka cluster setup
Perfect for developers learning distributed messaging systems or exploring event-driven architecture patterns.
## Features
- ✅ Basic Producer - Simple message publishing without keys
- ✅ Keyed Producer - Partition-aware message routing using keys
- ✅ Async Callbacks - Non-blocking message delivery with acknowledgments
- ✅ Batch Processing - Multiple message production in loops
- ✅ Continuous Polling - Real-time message consumption
- ✅ Consumer Groups - Scalable message processing
- ✅ Offset Tracking - Message position monitoring
- ✅ Partition Awareness - Understanding message distribution
- ✅ Docker Compose Setup - One-command Kafka cluster deployment
- ✅ Kafka UI Dashboard - Visual monitoring and management
- ✅ Zookeeper Integration - Cluster coordination
## Tech Stack
| Technology | Version | Purpose |
|---|---|---|
| Java | 17 | Core programming language |
| Apache Kafka | 4.0.0 | Distributed streaming platform |
| Maven | - | Dependency management & build tool |
| Docker | - | Containerization |
| Kafka UI | Latest | Web-based Kafka management |
| SLF4J | 2.0.17 | Logging framework |
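The stack above maps to a `pom.xml` dependency block along these lines (a sketch consistent with the versions in the table; the choice of `slf4j-simple` as the logging binding is an assumption):

```xml
<dependencies>
    <!-- Kafka producer/consumer client API -->
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>4.0.0</version>
    </dependency>
    <!-- Simple SLF4J binding so client logs print to the console -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-simple</artifactId>
        <version>2.0.17</version>
    </dependency>
</dependencies>
```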
## Project Structure

```
apche/
├── src/
│   └── main/
│       └── java/
│           ├── Producer/
│           │   ├── ProducerInIt.java      # Basic producer implementation
│           │   └── ProducerWithKey.java   # Keyed message producer
│           ├── Consumer/
│           │   └── ConsumerInIt.java      # Consumer implementation
│           └── com/apche/
│               └── Main.java              # Entry point
├── compose.yaml                           # Docker Compose configuration
├── pom.xml                                # Maven dependencies
└── README.md                              # This file
```
## Getting Started

### Prerequisites
- Java 17 or higher
- Maven 3.6+
- Docker & Docker Compose
1. **Clone the repository**

   ```shell
   git clone <repository-url>
   cd apche
   ```

2. **Start Kafka infrastructure**

   ```shell
   docker compose up -d
   ```

   This starts:
   - Zookeeper (port 2181)
   - Kafka broker (ports 9092, 29092)
   - Kafka UI (port 8080)

3. **Verify services are running**

   ```shell
   docker compose ps
   ```

4. **Build the project**

   ```shell
   mvn clean install
   ```
## Usage Examples

### Basic Producer

Sends 10 messages to `Test_Topic` without keys:

```shell
mvn exec:java -Dexec.mainClass="Producer.ProducerInIt"
```

Expected output:

```
✅ Sent to topic Test_Topic partition 0 offset 42
```
### Keyed Producer

Sends 10 keyed messages (same key → same partition):

```shell
mvn exec:java -Dexec.mainClass="Producer.ProducerWithKey"
```

**Key concept:** Messages with the same key always go to the same partition, preserving their order.
### Consumer

Continuously polls and displays messages from `Test_Topic`:

```shell
mvn exec:java -Dexec.mainClass="Consumer.ConsumerInIt"
```

Expected output:

```
KEY:ID_5, VALUE:VAL_5
Partitions:1, Offset:23
```
## Key Concepts Explored

### Messages Without Keys

```java
ProducerRecord<String, String> record =
        new ProducerRecord<>("Test_Topic", "hello world");
```

- Messages are distributed across partitions in round-robin fashion
- No ordering guarantees
### Keyed Messages

```java
ProducerRecord<String, String> record =
        new ProducerRecord<>("Test_Topic", "ID_5", "VAL_5");
```

- Same key → same partition
- Ordering guaranteed within a partition
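The key-to-partition mapping can be sketched in plain Java. Kafka's default partitioner actually applies a murmur2 hash to the *serialized* key bytes; `String.hashCode()` is used here only to keep the example dependency-free, but the principle — deterministic hash modulo partition count — is the same:

```java
// Sketch of key-based partition selection (NOT Kafka's real implementation:
// Kafka hashes the serialized key with murmur2, not String.hashCode()).
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the hash is non-negative, then map it
        // onto one of the available partitions.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always lands on the same partition.
        System.out.println("ID_5 -> partition " + partitionFor("ID_5", 3));
        System.out.println("ID_5 -> partition " + partitionFor("ID_5", 3));
    }
}
```

Because the mapping is deterministic, all messages keyed `ID_5` share one partition, which is what gives per-key ordering.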
### Async Callbacks

```java
producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        logger.error("❌ Error: " + exception.getMessage());
    } else {
        logger.info("✅ Sent to partition " + metadata.partition());
    }
});
```

### Consumer Groups

- Multiple consumers can share the workload
- Each partition is consumed by only one consumer in a group
- Enables horizontal scaling
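The "each partition goes to exactly one group member" rule can be sketched with a simple round-robin assignment (Kafka's real assignors — range, round-robin, sticky — are more nuanced, so treat this as an illustration only):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of consumer-group partition assignment: every partition is owned
// by exactly one consumer, and adding consumers spreads the load.
public class GroupAssignmentSketch {
    static Map<String, List<Integer>> assign(List<String> consumers, int numPartitions) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String c : consumers) assignment.put(c, new ArrayList<>());
        for (int p = 0; p < numPartitions; p++) {
            // Partition p belongs to exactly one group member.
            assignment.get(consumers.get(p % consumers.size())).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Two consumers splitting three partitions.
        System.out.println(assign(List.of("consumer-A", "consumer-B"), 3));
    }
}
```

With more consumers than partitions, the extra consumers simply receive no partitions — which is why partition count caps a group's parallelism.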
### Offset Tracking

- Tracks the message position within a partition
- Enables replay and fault tolerance
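A minimal sketch of the idea, using a plain list as a stand-in for one partition's log (not the Kafka API): the consumer remembers the next offset to read, so it can resume after a restart or rewind to replay messages.

```java
import java.util.List;

// Toy model of offset tracking on a single partition.
public class OffsetSketch {
    private final List<String> log;    // stand-in for one partition's messages
    private long committedOffset = 0;  // next position the consumer will read

    OffsetSketch(List<String> log) { this.log = log; }

    // Read everything from the committed offset onward, then commit.
    List<String> pollAndCommit() {
        List<String> batch = log.subList((int) committedOffset, log.size());
        committedOffset = log.size();
        return List.copyOf(batch);
    }

    // Rewinding the offset replays already-seen messages.
    void seek(long offset) { committedOffset = offset; }

    public static void main(String[] args) {
        OffsetSketch consumer = new OffsetSketch(List.of("VAL_0", "VAL_1", "VAL_2"));
        System.out.println(consumer.pollAndCommit()); // [VAL_0, VAL_1, VAL_2]
        consumer.seek(1);                             // rewind to offset 1
        System.out.println(consumer.pollAndCommit()); // [VAL_1, VAL_2]
    }
}
```

The real client does the same bookkeeping via committed offsets in the `__consumer_offsets` topic and `KafkaConsumer.seek(...)`.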
## Kafka UI

Access the Kafka UI dashboard at http://localhost:8080.
Features:
- 📊 View topics, partitions, and messages
- 🔍 Search and filter messages
- 📈 Monitor consumer lag
- ⚙️ Manage broker configurations
## Learning Outcomes
Through this project, I explored:
- ✅ Kafka Architecture - Brokers, topics, partitions, and replicas
- ✅ Producer API - Synchronous vs. asynchronous sending
- ✅ Consumer API - Polling, offsets, and consumer groups
- ✅ Message Ordering - Key-based partitioning strategies
- ✅ Docker Orchestration - Multi-container Kafka setup
- ✅ Monitoring - Using Kafka UI for operational insights
## Configuration

Key broker settings from `compose.yaml`:

| Property | Value | Description |
|---|---|---|
| `KAFKA_BROKER_ID` | `1` | Unique broker identifier |
| `KAFKA_ZOOKEEPER_CONNECT` | `zookeeper:2181` | Zookeeper connection |
| `KAFKA_ADVERTISED_LISTENERS` | `PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092` | Internal & external listeners |
| `KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR` | `1` | Offset topic replication |
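These settings correspond to a `compose.yaml` along these lines (a sketch, not the project's actual file: the image names, the Kafka UI service details, and the extra listener-map settings are assumptions beyond what the table states):

```yaml
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest   # image name is an assumption
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - "2181:2181"

  kafka:
    image: confluentinc/cp-kafka:latest       # image name is an assumption
    depends_on: [zookeeper]
    ports:
      - "9092:9092"
      - "29092:29092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      # The listener map and inter-broker listener name are required for
      # a dual-listener setup like this, though they are not in the table.
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  kafka-ui:
    image: provectuslabs/kafka-ui:latest      # image name is an assumption
    depends_on: [kafka]
    ports:
      - "8080:8080"
    environment:
      KAFKA_CLUSTERS_0_BOOTSTRAP_SERVERS: kafka:9092
```

The `PLAINTEXT_HOST` listener on `localhost:29092` is what the Java clients below connect to from outside the Docker network, while services inside the network use `kafka:9092`.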
Producer configuration:

```java
properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
```

Consumer configuration:

```java
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:29092");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
// group.id names the consumer group ("test-group" here is illustrative);
// "earliest" belongs to auto.offset.reset, not group.id.
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
```

To stop the Kafka infrastructure:

```shell
docker compose down
```

To remove volumes as well:

```shell
docker compose down -v
```

This is an R&D project for learning purposes. Feel free to fork and experiment!
This project is open source and available for educational purposes.
Built with ☕ and curiosity
