KafkaCommit
Incubation
Short Description
KafkaCommit commits processed offsets for Kafka topics.
It must be used in combination with a KafkaReader component to commit offsets explicitly, after processing of all consumed events is complete.
Component | Data source | Input ports | Output ports | Each to all outputs | Different to different outputs | Transformation | Transf. req. | Java | CTL | Auto-propagated metadata |
---|---|---|---|---|---|---|---|---|---|---|
KafkaCommit | | 1 | 0 | ⨯ | ⨯ | ✓ | ⨯ | ⨯ | ✓ | ⨯ |
Ports
Port type | Number | Required | Description | Metadata |
---|---|---|---|---|
Input | 0 | ⨯ | Offsets to be committed | KafkaCommitInput |
Metadata
Table 65.1. KafkaCommitInput
Field number | Field name | Data type | Description |
---|---|---|---|
1 | topic | string | Event topic |
2 | partition | string | Event partition |
3 | offset | long | Event offset |
The metadata is used in the component's input mapping; its fields are required to identify the event offset to be committed.
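For illustration, a minimal input mapping could look like the CTL2 sketch below. It assumes the incoming edge carries fields named sourceTopic, sourcePartition and sourceOffset; these names are only an example, so map whatever fields your upstream components actually produce.

```
// A minimal input mapping sketch in CTL2.
// The input field names (sourceTopic, sourcePartition, sourceOffset) are
// assumptions for this example; use the fields provided by your graph.
function integer transform() {
    $out.0.topic     = $in.0.sourceTopic;     // Event topic
    $out.0.partition = $in.0.sourcePartition; // Event partition
    $out.0.offset    = $in.0.sourceOffset;    // Event offset
    return ALL;
}
```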
KafkaCommit Attributes
Attribute | Req | Description | Possible values |
---|---|---|---|
Basic | | | |
Kafka reader component | | A reference to the reader component that consumes the events to be committed. With only a single KafkaReader in the graph, it is auto-detected; with multiple KafkaReader components, this attribute is required. | e.g. KafkaReader (id:KAFKA_READER) |
Commit interval | | The interval at which incoming offsets are committed. | e.g. 500ms, 3s, 1m |
Input mapping | | Defines the mapping of input metadata fields to KafkaCommit fields. | |
Details
In Kafka, reading normally starts from the last committed offset. Committing an offset means that the event with this offset is marked as already consumed/processed.
The default behavior of KafkaReader is to auto-commit consumed events periodically.
To gain better control over when the offsets are actually committed, you can use the KafkaCommit component.
KafkaCommit has to be paired with a reader component, as it is the reader that consumes the events to be committed. The reader is identified by the Kafka reader component property; if there is only one reader component in the graph, it is auto-detected.
If the input port is not connected, consumed offsets in the paired reader can be committed in a one-time fashion. In this case, the KafkaCommit component should be in a later graph phase than the reader itself.
When the input port is connected, the offsets (uniquely identified by topic, partition and offset) sent over the input edge are committed periodically.
Notes and Limitations
It is not possible to pair KafkaCommit with a reader component from another graph (job).
Similarly to KafkaReader, atomic committing of each incoming offset is not possible, as committing is performed periodically. Lowering the commit interval can be beneficial, but keeping it too low is not recommended, as it significantly increases the read/write load on the Kafka cluster.
Compatibility
Version | Compatibility Notice |
---|---|
5.9.0 | KafkaCommit is available since 5.9.0 in incubation mode. It uses Kafka Consumer API version 2.6.0. |