Consumer

Consume Records

Blazing KRaft's Consumer feature is the first to let you consume records from multiple topics at once, and it brings many more capabilities through a great UI.

The Key/Value Deserializers are either configured globally (which makes them read-only) or configured per request.

The Settings section lets you specify the timezone in which the metadata timestamps are displayed and with which the filter start and end timestamps are formatted, as well as how many records you want displayed and what data to show in the list.
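
Rendering a record's metadata timestamp in the timezone selected in Settings is, conceptually, a plain time-zone conversion. The sketch below is purely illustrative (the timestamp value and zone are arbitrary choices, not Blazing KRaft code):

    import java.time.Instant;
    import java.time.ZoneId;

    public class TimestampDisplaySketch {
        public static void main(String[] args) {
            // A record's metadata timestamp is epoch milliseconds; the value here is arbitrary.
            long recordTimestampMs = 1_700_000_000_000L;
            // The timezone selected in the Settings section (illustrative choice).
            ZoneId displayZone = ZoneId.of("Europe/Paris");
            // This is what the UI would render for that record's metadata timestamp.
            System.out.println(Instant.ofEpochMilli(recordTimestampMs).atZone(displayZone));
        }
    }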

The Filters section is powerful enough to cover all the use cases: you can filter by Time, Text, Partitions, Offsets, Group Id and Javascript Code at the same time.
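
Conceptually, each filter acts as a predicate a record must satisfy, and stacking filters narrows the result set further. The rough Java sketch below is illustrative only; the helper names and the assumption that active filters are ANDed together are mine, not Blazing KRaft's API:

    import java.util.Set;
    import java.util.function.Predicate;
    import org.apache.kafka.clients.consumer.ConsumerRecord;

    public final class FilterSketch {
        // Hypothetical helpers: each filter is a predicate over a consumed record.
        static Predicate<ConsumerRecord<String, String>> timeFilter(long startMs, long endMs) {
            return r -> r.timestamp() >= startMs && r.timestamp() <= endMs;
        }

        static Predicate<ConsumerRecord<String, String>> textFilter(String needle) {
            return r -> r.value() != null && r.value().contains(needle);
        }

        static Predicate<ConsumerRecord<String, String>> partitionsFilter(Set<Integer> partitions) {
            return r -> partitions.contains(r.partition());
        }

        // Using several filters "at the same time" presumably means a record must match all of them.
        static Predicate<ConsumerRecord<String, String>> combined(long startMs, long endMs,
                                                                   String needle,
                                                                   Set<Integer> partitions) {
            return timeFilter(startMs, endMs)
                    .and(textFilter(needle))
                    .and(partitionsFilter(partitions));
        }
    }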

When you encounter an error while consuming records, there are two possible reasons: either the Kafka cluster is unavailable or a problem occurred while deserializing the record. Fortunately, Blazing KRaft gives a preview of the error message.

Successful Records

Cluster Consumer Successful Records Image

Failed Records

Cluster Consumer Failed Records Image

Settings

Cluster Consumer Settings Image

Time Filter

Cluster Consumer Time Filter Image

Text Filter

Cluster Consumer Text Filter Image

Partitions Filter

Cluster Consumer Partitions Filter Image

Offsets Filter

Cluster Consumer Offsets Filter Image

Group ID Filter

Cluster Consumer Group ID Filter Image

Javascript Filter

Cluster Consumer Javascript Filter Image

Edit Configuration

The consumer configuration is a combination of the common admin configuration and the custom consumer configuration. The common admin configuration is read-only.

You can enforce consuming records as a specific data type by specifying key/value deserializers, or you can allow consumption of any data type using the Per Request deserializer.

Available deserializers are: Per Request, Long, Double, String, Json, Json Schema, Avro, Avro Schema, Protobuf and Protobuf Schema.

If your cluster is linked to a schema registry, three more deserializers are available: Json Schema Registry, Avro Schema Registry and Protobuf Schema Registry, and you can customize their serdes configuration.
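
As a point of reference, here is a minimal sketch of what the equivalent plain Java consumer configuration could look like when a String key deserializer and an Avro Schema Registry value deserializer are selected. The property keys are the standard Kafka and Confluent ones; the bootstrap servers, group id and schema registry URL are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");    // placeholder cluster address
            props.put("group.id", "blazing-kraft-docs");          // illustrative group id
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.StringDeserializer");
            // Only meaningful when the cluster is linked to a schema registry:
            props.put("value.deserializer",
                    "io.confluent.kafka.serializers.KafkaAvroDeserializer");
            props.put("schema.registry.url", "http://localhost:8081"); // part of the serdes configuration
            try (KafkaConsumer<String, Object> consumer = new KafkaConsumer<>(props)) {
                // subscribe and poll as usual
            }
        }
    }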

Per Request Deserializer

Cluster Consumer Configuration Per Request Deserializer Image

Beautified Configuration

Cluster Consumer Beautified Configuration Image

Raw Configuration

Cluster Consumer Raw Configuration Image

Key Schema Registry Configuration

Cluster Consumer Key Schema Registry Configuration Image

Value Schema Registry Configuration

Cluster Consumer Value Schema Registry Configuration Image

Configuration Details

The consumer configuration details page lets you view the consumer, admin and deserializer configurations.

Cluster Consumer Configuration Details Image

Miscellaneous

Records consumption in the Java client library is implemented using long polling: the consumer keeps polling the broker for new records until it receives a response or a timeout occurs. The timeout is set to 1200ms by default and can be overridden in the edit consumer client configuration page.

Beware that the default 1200ms timeout is not enough for some Apache Kafka® cloud providers. For example, when I tested with one provider, consumption returned no records because the 1200ms elapsed before the brokers could deliver any records to the consumer (the minimum that worked for me was 1500ms). Therefore, I recommend tuning the value as needed.

For local clusters, 400ms should be more than enough.
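
To make the timeout concrete, here is a small sketch of the long-polling call in the Kafka Java client; the topic name and timeout values are illustrative, and this is not Blazing KRaft's actual consumption loop:

    import java.time.Duration;
    import java.util.List;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PollTimeoutSketch {
        // If poll() returns before any broker delivers records (e.g. the default 1200ms
        // elapses on a slow cloud cluster), the batch comes back empty even though data exists.
        static void consumeOnce(KafkaConsumer<String, String> consumer, long pollTimeoutMs) {
            consumer.subscribe(List.of("my-topic"));                  // illustrative topic
            ConsumerRecords<String, String> records =
                    consumer.poll(Duration.ofMillis(pollTimeoutMs));  // e.g. 1500 for cloud, 400 locally
            records.forEach(r ->
                    System.out.printf("partition=%d offset=%d%n", r.partition(), r.offset()));
        }
    }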

The consumer is implemented using SockJS, which gives you the ability to provide fallback options for browsers that don't support WebSockets. The following transports are explicitly supported by Blazing KRaft, ordered by priority: websocket, eventsource, xhr-streaming and xdr-streaming.
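
For context, exposing a SockJS endpoint on a Spring backend typically looks like the sketch below; the endpoint path and STOMP-style setup are assumptions for illustration, not Blazing KRaft's actual wiring. The SockJS client then negotiates the best available transport in the priority order listed above:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
    import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfigSketch implements WebSocketMessageBrokerConfigurer {

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            // withSockJS() enables the HTTP-based fallback transports for browsers
            // that cannot open a plain WebSocket connection.
            registry.addEndpoint("/ws").withSockJS();   // "/ws" is a hypothetical endpoint path
        }
    }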