Kafka doesn’t support exactly-once delivery of messages out of the box, but there are ways developers can achieve it, which we’ll discuss next. One approach is to have the consumer write its results to a system that supports unique keys. Any key-value store can serve this purpose (e.g., a relational database or Elasticsearch). Records can arrive with their own unique keys; if they don’t, we can create one from the combination of topic + partition + offset, which uniquely identifies every message. Then, even if a record is delivered twice, we simply overwrite the same value for the same key. This pattern is known as an idempotent write.
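A minimal sketch of the idempotent-write pattern, with a plain dictionary standing in for the key-value store (the helper names `record_key` and `idempotent_write` are illustrative, not part of any Kafka API):

```python
def record_key(topic: str, partition: int, offset: int) -> str:
    """Build a unique key for a record that has no key of its own,
    from topic + partition + offset."""
    return f"{topic}-{partition}-{offset}"

def idempotent_write(store: dict, topic: str, partition: int,
                     offset: int, value: dict) -> None:
    """Writing the same message twice overwrites the same key,
    so a redelivered duplicate cannot create a second record."""
    store[record_key(topic, partition, offset)] = value

store = {}
idempotent_write(store, "payments", 0, 42, {"amount": 10})
# Simulate a redelivery of the same message: the store still
# holds exactly one record.
idempotent_write(store, "payments", 0, 42, {"amount": 10})
print(len(store))  # 1
```

In a real deployment the dictionary would be a database table or Elasticsearch index whose primary key (or document ID) is the generated key, so the overwrite happens in the store itself.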
The other approach is to rely on an external system that supports transactions: we store the processed message and its offset in a single transaction, so the two are committed atomically. After a crash, or when the consumer starts up for the first time, it queries the external store for the offset of the last record it processed and resumes consuming from that offset onward.
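The transactional approach can be sketched with SQLite playing the role of the external store (the table layout and function names here are assumptions for illustration): the result row and the upserted offset either both commit or neither does, and `resume_offset` is what the consumer would call on startup before seeking.

```python
import sqlite3

def init(conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS records "
                 "(id INTEGER PRIMARY KEY, payload TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS offsets "
                 "(topic TEXT, part INTEGER, next_offset INTEGER, "
                 "PRIMARY KEY (topic, part))")

def process(conn: sqlite3.Connection, topic: str, part: int,
            offset: int, payload: str) -> None:
    # One transaction covers both writes: the processed record and
    # the consumer's position. A crash can never commit one without
    # the other.
    with conn:
        conn.execute("INSERT INTO records (payload) VALUES (?)",
                     (payload,))
        conn.execute(
            "INSERT INTO offsets (topic, part, next_offset) "
            "VALUES (?, ?, ?) "
            "ON CONFLICT(topic, part) DO UPDATE SET "
            "next_offset = excluded.next_offset",
            (topic, part, offset + 1))

def resume_offset(conn: sqlite3.Connection, topic: str, part: int) -> int:
    # On startup or after a crash, read the last committed position
    # and resume consuming from there (0 if nothing was stored yet).
    row = conn.execute(
        "SELECT next_offset FROM offsets WHERE topic = ? AND part = ?",
        (topic, part)).fetchone()
    return row[0] if row else 0

conn = sqlite3.connect(":memory:")
init(conn)
process(conn, "payments", 0, 42, "order-1")
print(resume_offset(conn, "payments", 0))  # 43
```

With a real Kafka consumer, the value returned by `resume_offset` would be passed to the consumer's seek call so it starts reading at exactly the next unprocessed record, and auto-commit of offsets to Kafka would be disabled.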