Produce Records
The Producer feature gives you the ability to produce records while preserving a history of already produced records.
You can choose a topic and a partition; if you want to use the configured partitioner, select -1 as the partition value.
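In the underlying Kafka Java client, this likely corresponds to leaving the partition unset on the record so the configured partitioner picks one. A minimal sketch, assuming a plain kafka-clients producer with a hypothetical broker address and topic name:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducePartitionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Explicit partition: this record is written to partition 0 of "orders".
            producer.send(new ProducerRecord<>("orders", 0, "key-1", "value-1"));

            // No partition given (the UI's -1): the configured partitioner decides.
            producer.send(new ProducerRecord<>("orders", "key-2", "value-2"));
        }
    }
}
```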
The Key/Value Serializers are either configured globally (which makes them read-only) or configured per request.
The Settings section allows you to specify the timezone in which the metadata timestamps are displayed and to select which fields are shown in the records list.
When producing a record fails, there are two possible reasons: either the Kafka cluster is unavailable, or a problem occurred while serializing the record. Fortunately, Blazing KRaft gives a preview of the error message.
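As an illustration of the serialization failure case, the plain Kafka Java client throws a SerializationException when a record does not match the configured serializer. A minimal sketch, assuming a reachable cluster and a producer whose value serializer is fixed to LongSerializer:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.SerializationException;

public class SerializationFailureSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.LongSerializer");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            // The value is a String but the configured serializer expects a Long,
            // so send() throws a SerializationException.
            producer.send(new ProducerRecord<>("orders", "key-1", "not-a-long"));
        } catch (SerializationException e) {
            System.err.println("Serialization failed: " + e.getMessage());
        }
    }
}
```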
Settings
Successful Records
Successful Preview
Failed Records
Failure Preview
Edit Configuration
The producer configuration is a combination of the common admin configuration and a custom producer configuration. The common admin configuration is read-only.
You can enforce producing records of a specific data type by specifying key/value serializers, or you can allow production of any data type by using the Per Request serializer.
Available serializers are:
Per Request
Long
Double
String
Json
Json Schema
Avro
Avro Schema
Protobuf
Protobuf Schema
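To illustrate what a globally fixed serializer pair means in Kafka terms, here is a minimal sketch with the plain Kafka Java client; the broker address and topic name are assumptions, and the Long and String entries in the list above are taken to roughly correspond to the client's built-in LongSerializer and StringSerializer, whereas the Per Request option defers the choice to each produce request:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class GlobalSerializerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker

        // Fixing the serializers here plays the role of the global (read-only)
        // key/value serializer configuration: every record must carry a Long key
        // and a String value.
        props.put("key.serializer", LongSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<Long, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", 42L, "{\"status\":\"ok\"}"));
        }
    }
}
```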
When using the Avro or Protobuf serializers, the Blazing KRaft server will determine the schema definition based on the content.
If your cluster is linked to a schema registry, three more serializers are available, and you can customize the serdes configuration:
Json Schema Registry
Avro Schema Registry
Protobuf Schema Registry
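As an illustration of what a schema-registry-backed serdes configuration typically contains, here is a minimal sketch using Confluent's Avro serializer; the broker address, registry URL, topic name and the choice of this particular library are assumptions for the example, not necessarily how Blazing KRaft implements it internally:

```java
import java.util.Properties;
import io.confluent.kafka.serializers.KafkaAvroSerializer;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SchemaRegistrySerializerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");          // hypothetical broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", KafkaAvroSerializer.class.getName());
        // The serdes configuration: where schemas are looked up and registered.
        props.put("schema.registry.url", "http://localhost:8081"); // hypothetical registry

        // A small Avro schema and a matching record value.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Order\","
              + "\"fields\":[{\"name\":\"status\",\"type\":\"string\"}]}");
        GenericRecord value = new GenericData.Record(schema);
        value.put("status", "ok");

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", value));
        }
    }
}
```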
Per Request Serializer
Beautified Configuration
Raw Configuration
Key Schema Registry Configuration
Value Schema Registry Configuration
Configuration Details
The producer configuration details page allows you to view the producer, admin and serializer configurations.