@@ -12,8 +12,12 @@ Kafka Sink Connector Guide
1212 :depth: 2
1313 :class: singlecol
1414
15- Apache Kafka uses a sink connector to consume records from a topic and
16- save the data to a datastore.
15+
16+ Overview
17+ --------
18+
19+ The MongoDB Kafka Sink Connector consumes records from a Kafka topic and
20+ saves the data to a MongoDB database.
1721
1822This section of the guide covers the configuration settings necessary to
1923set up a Kafka Sink connector.
@@ -27,6 +31,33 @@ set up a Kafka Sink connector.
2731 :manual:`use an index to support these queries
2832 <tutorial/create-indexes-to-support-queries/>`.
2933
34+
35+ Message Delivery Guarantee
36+ --------------------------
37+
38+ The sink connector provides an "at-least-once" message delivery guarantee
39+ by default: if an error occurs while processing data from a topic, the
40+ connector retries the write.
41+
42+ You can achieve an "exactly-once" message delivery guarantee by using
43+ idempotent write operations such as upserts. Configure the connector to
44+ ensure messages include a value for the ``_id`` field.
45+
46+ .. note::
47+
48+ You can configure the :ref:`DocumentIdAdder post processor <config-document-id-adder>`
49+ to customize how the connector assigns a value to the ``_id`` field. By
50+ default, the sink connector uses the ``BsonOidStrategy``, which generates
51+ a new :manual:`BSON ObjectId </reference/bson-types/#objectid>` for the
52+ ``_id`` field if one does not exist.
53+
54+ For the guarantee to hold, each message's ``_id`` value must be
55+ deterministic, so that a retried write updates the same document rather
56+ than creating a duplicate.
58+
59+ The sink connector does not support the "at-most-once" guarantee.
60+
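As a concrete illustration, the settings above might be combined into a sink connector configuration like the following sketch. The connector name, topic, connection URI, database, and collection are placeholder values; ``document.id.strategy`` is the connector setting that controls how ``_id`` values are assigned, and this example assumes the ``ProvidedInValueStrategy``, which reads ``_id`` from the message value so that retried writes target the same document:

```properties
# Illustrative sink connector configuration; connection details,
# topic, database, and collection names are placeholders.
name=mongodb-sink
connector.class=com.mongodb.kafka.connect.MongoSinkConnector
topics=orders
connection.uri=mongodb://localhost:27017
database=test
collection=orders
# Take _id from the message value so a retried write is idempotent,
# rather than letting BsonOidStrategy generate a fresh ObjectId each time.
document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.ProvidedInValueStrategy
```

With this configuration, a message whose value already contains an ``_id`` field is written to the same document on every retry, which is what makes the "exactly-once" behavior possible.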
3061.. toctree::
3162 :titlesonly:
3263 :hidden: