Consume Records
Blazing KRaft's Consumer feature lets you consume records from multiple topics at once and offers many more capabilities through a great UI.
The key/value deserializers are either configured globally (making them read-only) or configured per request.
The Settings section lets you specify the timezone used to display the metadata timestamps and to format the filter start and end timestamps, as well as how many records to display and which data to show in the list.
The Filters section is powerful enough to cover all the use cases: you can filter by Time, Text, Partitions, Offsets, Group ID and JavaScript code, all at the same time.
When you encounter an error while consuming records, there are two likely causes: either the Kafka cluster is unavailable, or a problem occurred when deserializing the record. Fortunately, Blazing KRaft gives a preview of the error message.
Successful Records
Failed Records
Settings
Time Filter
Text Filter
Partitions Filter
Offsets Filter
Group ID Filter
JavaScript Filter
Edit Configuration
The consumer configuration is a combination of the common admin configuration and a custom consumer configuration. The common admin configuration is read-only.
You can enforce a specific data type for consumed records by specifying key/value deserializers, or you can allow consumption of any data type by using the Per Request deserializer.
Available deserializers are:
Per Request
Long
Double
String
Json
Json Schema
Avro
Avro Schema
Protobuf
Protobuf Schema
If your cluster is linked to a schema registry, three more deserializers are available:
Json Schema Registry
Avro Schema Registry
Protobuf Schema Registry
You can also customize their serdes configuration.
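As a rough sketch of what such a serdes configuration looks like when using the standard Confluent Avro deserializer directly (the property names below are the Confluent client's own, and the registry URL is a placeholder — Blazing KRaft exposes the equivalent settings through its UI):

```properties
# Use the schema-registry-backed Avro deserializer for both key and value.
key.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
value.deserializer=io.confluent.kafka.serializers.KafkaAvroDeserializer
# Placeholder: point this at your schema registry endpoint.
schema.registry.url=http://localhost:8081
# Deserialize into GenericRecord rather than generated classes.
specific.avro.reader=false
```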
Per Request Deserializer
Beautified Configuration
Raw Configuration
Key Schema Registry Configuration
Value Schema Registry Configuration
Configuration Details
The consumer configuration details page allows you to view the consumer, admin and deserializer configurations.
Miscellaneous
Records consumption in the Java client library is implemented using long polling: the consumer keeps polling the broker for new records until it receives a response or the timeout elapses. The timeout is set to 1200ms by default and can be overridden in the edit consumer client configuration page.
Beware that the default 1200ms timeout is not enough for some Apache Kafka® cloud providers. For example, with one provider I tested, consumption returned no records because the 1200ms elapsed before the brokers could deliver any records to the consumer (the minimum that worked for me was 1500ms). Therefore, I recommend tuning the value as needed.
For local clusters, 400ms should be more than enough.
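The effect of the poll timeout can be illustrated with a plain-Java sketch. This is an analogy, not the actual Kafka client API: a BlockingQueue stands in for the broker, and the broker latency and timeout values mirror the 1400ms-vs-1200ms scenario described above.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollTimeoutDemo {

    // Simulate a "broker" that delivers one record after brokerDelayMs,
    // and a long poll that blocks for at most pollTimeoutMs.
    static String pollOnce(long brokerDelayMs, long pollTimeoutMs)
            throws InterruptedException {
        BlockingQueue<String> broker = new LinkedBlockingQueue<>();
        Thread delivery = new Thread(() -> {
            try {
                Thread.sleep(brokerDelayMs);
                broker.offer("record-0");
            } catch (InterruptedException ignored) { }
        });
        delivery.setDaemon(true);
        delivery.start();
        // Long poll: returns the record, or null if the timeout elapses first.
        return broker.poll(pollTimeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Broker latency 1400ms, poll timeout 1200ms: the poll comes back empty.
        System.out.println(pollOnce(1400, 1200)); // null
        // Raising the timeout to 1500ms gives the broker time to respond.
        System.out.println(pollOnce(1400, 1500)); // record-0
    }
}
```

The same trade-off applies to the real consumer: a timeout shorter than the broker's round-trip latency yields empty results even though records exist.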
The consumer is implemented using SockJS, which provides fallback options for browsers that don't support WebSockets. The following transports are explicitly supported by Blazing KRaft, ordered by priority:
websocket
eventsource
xhr-streaming
xdr-streaming