HowTo: Kafka with Vert.x in Docker

The following article deals with Apache Kafka and how to use it with Docker and Vert.x. We will develop a small example and send data from a REST endpoint over a queue to a listener endpoint.

What’s Kafka

Apache Kafka is an open-source distributed streaming platform that provides data streams and various interfaces for writing to and reading from a Kafka cluster. It was originally developed at LinkedIn as a message broker and spread rapidly. It is now used in many large systems because it can process and make available large amounts of data in real time. The biggest advantage is that the data is persisted, so the queues can hold it for a configurable, potentially unlimited time. If, for example, several microservices communicate via Kafka queues, one microservice can fail without the data being lost, because it is held in the queue until the microservice has processed it. The same holds under load: a slower service can process the data over time without slowing down the other services. More detailed articles that deal with Kafka can be found here.

Take Kafka to Docker

To start Kafka with Docker, we can either use the widely used wurstmeister/kafka image, or we can write our own. For practice purposes, we will write our own Docker image that contains Kafka.

To use Kafka, we also need ZooKeeper. Kafka’s download archive conveniently contains both ZooKeeper and Kafka. A description of how to use Kafka is here.
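Started by hand, ZooKeeper comes up first and then Kafka, using the two scripts shipped in the archive (the paths are from the official quickstart):

```sh
# Run from inside the unpacked Kafka directory: ZooKeeper first, then the broker.
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
```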

So we can build the image now.
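A minimal sketch of such a Dockerfile; the base image, the Kafka version and the download URL are assumptions and may need adjusting:

```dockerfile
FROM ubuntu:20.04

# Install wget and Java (a JRE is enough to run Kafka).
RUN apt-get update && \
    apt-get install -y wget openjdk-11-jre-headless && \
    rm -rf /var/lib/apt/lists/*

# Download and unpack Kafka; version and mirror are assumptions.
RUN wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.13-2.8.0.tgz && \
    tar -xzf kafka_2.13-2.8.0.tgz && \
    mv kafka_2.13-2.8.0 /kafka && \
    rm kafka_2.13-2.8.0.tgz

# Copy our adapted configuration into the image.
COPY server.properties /kafka/config/server.properties

EXPOSE 9092
CMD ["/kafka/bin/kafka-server-start.sh", "/kafka/config/server.properties"]
```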

We use a default Ubuntu image, install wget and Java, then download the Kafka archive and unpack it. We also copy the configuration file for Kafka into the image. Here we adapt the following things:
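The relevant lines in the copied server.properties could look like this (the port is the Kafka default):

```properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka:9092
zookeeper.connect=zookeeper:2181
```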

Here we set the listener to 0.0.0.0 so that the broker listens on all interfaces inside the Docker container. The advertised.listeners is set to kafka, the hostname under which the other containers will reach the broker. Furthermore, we set the zookeeper.connect property to zookeeper:2181, which is the address we later specify in the docker-compose file for ZooKeeper.

Now we create the docker-compose.yml to tie everything together, and we can already start Kafka. It looks like this:
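A sketch of the compose file, assuming the two Dockerfiles live in ./kafka and ./server; for ZooKeeper we simply use the official image:

```yaml
version: '3'
services:
  zookeeper:
    image: zookeeper
    ports:
      - "2181:2181"

  kafka:
    build: ./kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"

  server:
    build: ./server
    depends_on:
      - kafka
    ports:
      - "8080:8080"
```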

Here we define Kafka, ZooKeeper and our server. How we build the server is shown in the following section.

Build Vert.x with Docker

We can easily build the Vert.x application with Maven. So we write an image containing Maven and Java to build and run it. This will look like this:
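A sketch of such an image, assuming a standard Maven project that produces a runnable fat jar (the jar name is an assumption and depends on the pom.xml):

```dockerfile
FROM ubuntu:20.04

# Install Java and Maven to build and run the application.
RUN apt-get update && \
    apt-get install -y openjdk-11-jdk maven && \
    rm -rf /var/lib/apt/lists/*

# Copy the project files and build the service.
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package

EXPOSE 8080
# Start the built jar; the name depends on the project's pom.xml.
CMD ["java", "-jar", "target/server-fat.jar"]
```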

We install Java again and then Maven. Then we copy all necessary files and build the service with Maven. As the start command we simply define a java -jar with the built jar.

So that’s it.

Server

The server in this example is developed with Kotlin and Vert.x. You can find the detailed source code, as always, on my GitHub.

To register on a queue, all we need is the configuration and the Kafka dependency (io.vertx:vertx-kafka-client). Since Vert.x already provides a library for Kafka, this is made very easy for us:
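A minimal sketch of the consuming side; the broker address matches the docker-compose service, while the topic name my-queue and the group id are assumptions:

```kotlin
import io.vertx.core.AbstractVerticle
import io.vertx.kafka.client.consumer.KafkaConsumer

class ConsumerVerticle : AbstractVerticle() {

    override fun start() {
        // Where the Kafka service is located and how to deserialize the records.
        val config = mapOf(
            "bootstrap.servers" to "kafka:9092",
            "key.deserializer" to "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" to "org.apache.kafka.common.serialization.StringDeserializer",
            "group.id" to "my-group",
            "auto.offset.reset" to "earliest"
        )

        val consumer = KafkaConsumer.create<String, String>(vertx, config)
        // The handler is called for every new record in the queue.
        consumer.handler { record -> println("Received: ${record.value()}") }
        consumer.subscribe("my-queue")
    }
}
```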

Here we pass the parameters for where the Kafka service is located and which queue we want to subscribe to. Then we can already listen, and the handler receives the events whenever there is something new in the queue.

If we want to write into the queue, it looks quite similar:
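The producing side, under the same assumptions about broker address and topic name:

```kotlin
import io.vertx.core.AbstractVerticle
import io.vertx.kafka.client.producer.KafkaProducer
import io.vertx.kafka.client.producer.KafkaProducerRecord

class ProducerVerticle : AbstractVerticle() {

    override fun start() {
        // Where the Kafka service is located and how to serialize the records.
        val config = mapOf(
            "bootstrap.servers" to "kafka:9092",
            "key.serializer" to "org.apache.kafka.common.serialization.StringSerializer",
            "value.serializer" to "org.apache.kafka.common.serialization.StringSerializer",
            "acks" to "1"
        )

        val producer = KafkaProducer.create<String, String>(vertx, config)
        // Write a single record into the queue.
        val record = KafkaProducerRecord.create<String, String>("my-queue", "my test data")
        producer.write(record)
    }
}
```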

Again we define a configuration and create a producer. With this we can write records into the queue.
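To complete the picture, here is a rough sketch of how the two REST endpoints of the example could be wired to these snippets. The actual wiring is in the repository; this sketch assumes vertx-web on the classpath and reuses the topic name my-queue:

```kotlin
import io.vertx.core.AbstractVerticle
import io.vertx.ext.web.Router
import io.vertx.kafka.client.consumer.KafkaConsumer
import io.vertx.kafka.client.producer.KafkaProducer
import io.vertx.kafka.client.producer.KafkaProducerRecord

class ServerVerticle : AbstractVerticle() {

    override fun start() {
        val consumer = KafkaConsumer.create<String, String>(vertx, mapOf(
            "bootstrap.servers" to "kafka:9092",
            "key.deserializer" to "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" to "org.apache.kafka.common.serialization.StringDeserializer",
            "group.id" to "my-group"
        ))
        val producer = KafkaProducer.create<String, String>(vertx, mapOf(
            "bootstrap.servers" to "kafka:9092",
            "key.serializer" to "org.apache.kafka.common.serialization.StringSerializer",
            "value.serializer" to "org.apache.kafka.common.serialization.StringSerializer"
        ))

        val router = Router.router(vertx)

        // GET /kafka keeps the response open and streams every record from the queue.
        router.get("/kafka").handler { ctx ->
            val response = ctx.response().setChunked(true)
            consumer.handler { record -> response.write(record.value() + "\n") }
            consumer.subscribe("my-queue")
        }

        // POST /event writes the request body into the queue.
        router.post("/event").handler { ctx ->
            ctx.request().bodyHandler { body ->
                producer.write(KafkaProducerRecord.create<String, String>("my-queue", body.toString()))
                ctx.response().end()
            }
        }

        vertx.createHttpServer().requestHandler(router).listen(8080)
    }
}
```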

That’s it for writing to and reading from queues with Vert.x and Kafka.

Start the environment

Now we can simply start the environment with a docker-compose up. This builds the containers and brings them up. After that we should be able to execute the following command:

curl localhost:8080/kafka

This is a blocking request, so we don’t see anything yet. Let’s open a second terminal and send content with curl -d 'my test data' -H "Content-Type: application/json" -X POST localhost:8080/event. The data we sent should then appear in the first window.

The data now goes through a queue in Kafka and is sent to the registered clients.

Summary and evaluation

As we have seen, it is quite easy to write a service that registers to a Kafka queue and sends or reads the data it contains. The data will not be lost and can be retrieved at a later time; it is also piped through in real time from the sender to the receiver. This allows us to build a robust messaging system in a short time, one that is tolerant of service failures and can handle large amounts of data. It becomes exciting when the services are developed in different languages and communicate with each other via Kafka queues. In this way, we create a system that makes independent development easier and more stable.

All in all, Kafka is a very useful and by now well-established system for processing large quantities of events. Especially in combination with microservices, stable event handling becomes meaningful and important. Not only there, but also for systems that generate large amounts of event data, it can serve as an upstream system. For example, Kafka can record the event data of machines, filter it and then make only the relevant data available in queues for slower downstream processing.

If you have any questions, feel free to ask them. If you liked the article, leave me some applause.

Originally published at http://github.com.
