Clusters

Overview

The Clusters feature provides the following capabilities:

  • Manage multiple Kafka clusters.
  • Produce and consume records from one or multiple topics.
  • Manage topics, consumer groups, ACLs, delegation tokens, quotas and more.

If you're regularly interacting with Kafka clusters, you should really check out this talk by the genius Nikoleta Verbeck from Confluent.

If you want some tips about managing Kafka clusters in production, check out this talk by the amazing Jun Rao from Confluent.

Create Plaintext Cluster

  • Creating a local Kafka cluster is a simple process: you'll need to provide client configuration such as bootstrap.servers.
  • You can link your cluster to a Schema Registry to use it when producing and consuming records.
  • If you want to opt in to JMX metrics, you'll need to provide the full JMX URL.
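As a sketch, the optional Schema Registry and JMX settings might look like the following. The host names, ports and the `jmx.url` key name are illustrative assumptions; `schema.registry.url` is the standard client property, and the JMX URL follows the standard JMX service URL format:

```yaml
# Illustrative values -- adjust host names and ports to your setup
schema.registry.url: http://schema-registry:8081
# Full JMX service URL (standard format: service:jmx:rmi:///jndi/rmi://<host>:<port>/jmxrmi)
jmx.url: service:jmx:rmi:///jndi/rmi://broker1:9999/jmxrmi
```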

Beautified Configuration

Create Plaintext Cluster Beautified Configuration Image

Raw Configuration

Create Plaintext Cluster Raw Configuration Image

Schema Registry

Create Plaintext Cluster Schema Registry Configuration Image

JMX Configuration

Create Plaintext Cluster JMX Configuration Image

YAML Configuration

bootstrap.servers: broker1:29092

Create SASL Plain Cluster

The same goes for a SASL PLAIN cluster, but in addition to bootstrap.servers you'll need to configure JAAS authentication through the admin client configuration.

Main Configuration

Create SASL Plain Cluster Main Configuration Image

JAAS Configuration

Create SASL Plain Cluster JAAS Configuration Image

YAML Configuration

bootstrap.servers: broker1:29093
security.protocol: SASL_PLAINTEXT
sasl.mechanism: PLAIN
sasl.jaas.config: |
    org.apache.kafka.common.security.plain.PlainLoginModule
        required
        username="superadmin"
        password="superadmin-secret";

Create SASL SCRAM Cluster

The same goes for a SASL SCRAM cluster, but in addition to bootstrap.servers you'll need to configure JAAS authentication through the admin client configuration.

Main Configuration

Create SASL SCRAM Cluster Main Configuration Image

JAAS Configuration

Create SASL SCRAM Cluster JAAS Configuration Image

YAML Configuration

bootstrap.servers: broker1:29094
security.protocol: SASL_PLAINTEXT
sasl.mechanism: SCRAM-SHA-256
sasl.jaas.config: |
    org.apache.kafka.common.security.scram.ScramLoginModule
        required
        username="superadmin"
        password="superadmin-secret"
        tokenauth="true";

Create Upstash Cluster

Upstash is a serverless platform providing various services, one of which is an Apache Kafka® offering. What I love most about Upstash is how simple it is to get started with a cluster, which makes it convenient for new developers and great for startups.

Creating a new cluster in Upstash is literally two clicks away.

After that it's super easy to grab the configuration and use it to create a new Kafka Cluster connection in Blazing KRaft.

Make sure not to copy the trailing \ characters: Blazing KRaft interprets configuration values as plain strings rather than Java properties, so the \ is treated as an escape character instead of a line continuation.
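For example, the snippet Upstash gives you is in Java properties format with trailing backslashes (the *** values here are placeholders); in Blazing KRaft the same value should be pasted with the backslashes removed, for instance as a YAML literal block:

```yaml
# As copied from Upstash (Java properties with trailing `\` line continuations) -- do NOT paste as-is:
#   sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
#     username="***" \
#     password="***";

# What Blazing KRaft expects (backslashes removed):
sasl.jaas.config: |
    org.apache.kafka.common.security.scram.ScramLoginModule required username="***" password="***";
```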

Upstash Clusters

Upstash Clusters Image

Upstash Create Cluster

Upstash Create Cluster Image

Upstash Client Configuration

Upstash Client Configuration Image

Main Configuration

Create Upstash Cluster Main Configuration Image

SASL Configuration

Create Upstash Cluster SASL Configuration Image

YAML Configuration

bootstrap.servers: ***
security.protocol: SASL_SSL
sasl.mechanism: SCRAM-SHA-256
sasl.jaas.config: |
    org.apache.kafka.common.security.scram.ScramLoginModule
        required
        username="***"
        password="***";

Create Confluent Cluster

Confluent Platform is a full-scale data streaming platform that enables you to easily provision and manage any component of the Apache Kafka® ecosystem. What I love the most about Confluent Cloud is that it was founded by the creators of Apache Kafka®, so you can be sure that you're getting the best from the best.

Creating a new cluster in Confluent Cloud is super easy: just select a region and launch a cluster, and don't forget to claim the free $400 credit; it's a great way to get started with Kafka, Kafka Connect, Schema Registry and ksqlDB.

After that it's straightforward to create an API key, grab the configuration and use it to create a new Kafka Cluster connection in Blazing KRaft.

Confluent Create Cluster

Confluent Create Cluster Image

Confluent Region

Confluent Cluster Region Image

Confluent Launch Cluster

Confluent Cluster Launch Image

Confluent Client Properties

Confluent Cluster Properties Image

Main Configuration

Create Confluent Cluster Main Configuration Image

SASL Configuration

Create Confluent Cluster SASL Configuration Image

YAML Configuration

bootstrap.servers: ***
security.protocol: SASL_SSL
sasl.mechanism: PLAIN
sasl.jaas.config: |
    org.apache.kafka.common.security.plain.PlainLoginModule
        required
        username="***"
        password="***";

Create Aiven Cluster

Aiven is a database-as-a-service platform; they provide practically every database you can think of, including Apache Kafka®. What I love the most about Aiven is the simplicity with which you can get started, and their flat pricing model: you get your own brokers, and with them the ability to manage your quotas, ACLs and delegation tokens, giving you full control over your cluster.

Aiven enforces strict security out of the box: when provisioning a new Kafka service, you either opt for the default two-way SSL (mutual TLS) authentication or enable SASL.

Both the SSL and SASL options require the truststore configuration, which is a best practice since it gives you the assurance that you're connecting to the right brokers.

Aiven gives you a quick connect guide that helps generate the keystore and truststore files.

After that it's super easy to grab the configuration and use it to create a new Kafka Cluster connection in Blazing KRaft.

Aiven Create Service

Aiven Create Service Image

Aiven Create Kafka Service

Aiven Create Kafka Service Image

Aiven Finalize Cluster

Aiven Finalize Cluster Image

Aiven Enable SASL

Aiven Enable SASL Image

Aiven Client Configuration

Aiven Client Configuration Image

Keystore & Truststore

Aiven Keystore and Truststore Image

SASL Main Configuration

Aiven SASL Main Configuration Image

SASL Configuration

Aiven SASL Configuration Image

SSL Main Configuration

Aiven SSL Main Configuration Image

SSL Configuration

Aiven SSL Configuration Image

YAML Configuration

# SASL Authentication
bootstrap.servers: ***
security.protocol: SASL_SSL
sasl.mechanism: SCRAM-SHA-256
sasl.jaas.config: |
    org.apache.kafka.common.security.scram.ScramLoginModule
        required
        username="***"
        password="***";
ssl.truststore.location: ***
ssl.truststore.password: ***
 
# SSL Authentication
bootstrap.servers: ***
security.protocol: SSL
ssl.truststore.location: ***
ssl.truststore.password: ***
ssl.keystore.location: ***
ssl.keystore.password: ***
ssl.keystore.type: PKCS12

Clusters Listing

After registering a Kafka Cluster, you'll be able to view the list of the registered Clusters.

Clusters Listing Image

Edit Cluster

You can edit a Kafka Cluster configuration.

Edit Cluster Image

Delete Cluster

You can easily unregister a Kafka Cluster directly from the list or from the details page.

Delete Cluster Image

Miscellaneous

FYI, you'll notice that a lot of Kafka PaaS providers use the term serverless, meaning that they provide a pay-as-you-go pricing model. This is only possible by having you share brokers with other clients in a multi-tenant architecture, and Apache Kafka® is perfect for this use case as it lets providers assign client quotas to restrict rogue clients and ACLs to limit clients' access to their own resources (topics, consumer groups ...).

For this reason you'll discover that some providers limit the admin client or disable it completely, so you'll have to use their UI/API to manage your topics, which is understandable considering that they need to assign ACLs to your client as you create resources.

If you find these concepts interesting and would like to know how providers implement them, it'll really be beneficial for you to check out this talk by the amazing Ali Hamidi from Heroku.

For the admin, producer and consumer client configuration, I did override some default values:

  • default.api.timeout.ms and request.timeout.ms, to let you fail fast instead of waiting out the default 60 seconds.
  • reconnect.backoff.ms and reconnect.backoff.max.ms, to avoid a tight loop when trying to reconnect to the brokers.

You can of course revert them to their original values if you ever need to.
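For reference, the stock Apache Kafka® client defaults for these properties (per the Kafka configuration documentation) are:

```yaml
default.api.timeout.ms: 60000
request.timeout.ms: 30000
reconnect.backoff.ms: 50
reconnect.backoff.max.ms: 1000
```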

What makes Blazing KRaft blazingly fast is mainly the fact that clients are managed in memory. Therefore, if you're deploying multiple instances of Blazing KRaft, you need to make sure to restart the instances after creating, editing or deleting a Kafka cluster, producer or consumer; this is why we highly recommend registering all your clients before horizontally scaling. If you're only deploying one instance, you don't need to worry about this, as it is already handled by the server.