This section is a reference for Replicator configuration options.
Connector-specific configuration properties are described below.
Source Topics
Important
The __consumer_timestamps topic will not be copied from source to destination, as this topic is relevant only to Replicator offset translation. For more information regarding offset translation, see Understanding Consumer Offset Translation.
topic.regex
Regular expression that matches the topics to replicate to the destination cluster.
- Type: string
- Default: null
- Importance: high
Important
Replicator will not copy the internal __consumer_offsets or __transaction_state topics from the source cluster, even if they match topic.regex. To copy these topics, list them in topic.whitelist.
topic.whitelist
Whitelist of topics to be replicated.
- Type: list
- Default: “”
- Importance: high
Important
At startup, Replicator lists the topics to replicate. Replicator will fail if any topic in topic.whitelist cannot be listed, either because the topic does not exist or because there are insufficient ACLs for it. The Replicator principal must have permissions to describe ACLs on every topic in the list. For more information, see how to configure Security on Replicator.
Important
From Replicator 5.5.0 onwards, topics listed in topic.whitelist must exist in the source cluster. If you want topics to be created after Replicator starts, match them with topic.regex instead.
topic.blacklist
Topics to exclude from replication.
- Type: list
- Default: “”
- Importance: high
topic.poll.interval.ms
How often to poll the source cluster for new topics matching topic.whitelist or topic.regex.
- Type: int
- Default: 120000
- Valid Values: [0,…]
- Importance: low
Important
The topic.regex, topic.whitelist, and topic.blacklist configurations apply in the following order:
- Any topics specified in topic.blacklist will not be replicated.
- Any topics specified in topic.whitelist or matched by topic.regex will be replicated (whitelisted topics do not have to match the regex).
If all three properties are left at their default values, no topics are replicated.
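For example, a configuration along the following lines (the topic names are illustrative, not from this reference) replicates every topic whose name begins with app-, plus one explicitly whitelisted topic, while excluding a topic that matches the regex:
# Replicate all topics whose names begin with "app-"
topic.regex=app-.*
# Also replicate this topic, even though it does not match the regex
topic.whitelist=orders
# Never replicate this topic, even though it matches the regex
topic.blacklist=app-debug-logs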
Source Kafka
The following configuration options are common properties that are used across Kafka clients. Replicator uses these options to connect to the source cluster (consumer, adminclient). Valid client properties that use the src.kafka prefix will be forwarded to clients that connect to the source cluster.
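As a sketch of this prefixing (the host names and values are illustrative), any valid client property can be targeted at the source cluster by prepending src.kafka:
# Source cluster client settings; example values only
src.kafka.bootstrap.servers=src-broker1:9092,src-broker2:9092
src.kafka.client.id=replicator-source
src.kafka.request.timeout.ms=305000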
src.kafka.bootstrap.servers
A list of host and port pairs to use for establishing the initial connection to the source Kafka cluster, in the form host1:port1,host2:port2,.... The client will use all servers in the cluster, irrespective of which servers are designated here for bootstrapping; this list only controls the initial hosts used to discover the full set of servers. You don't need to specify the full set of servers, but you may want to specify more than one in case of failover.
- Type: list
- Importance: high
src.kafka.client.id
An ID string to pass to the server when making requests. This string is used to track the source of requests. It allows a logical application name to be included in server-side request logging.
- Type: string
- Default: “”
- Importance: low
src.kafka.request.timeout.ms
Specifies the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend, or fail the request if retries are exhausted.
- Type: int
- Default: 305000
- Valid Values: [0,…]
- Importance: medium
src.kafka.retry.backoff.ms
Specifies the amount of time to wait before attempting to retry a failed request to a topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
- Type: long
- Default: 100
- Valid Values: [0,…]
- Importance: low
src.kafka.connections.max.idle.ms
Specifies the amount of time (milliseconds) before idle connections are closed.
- Type: long
- Default: 540000
- Importance: medium
src.kafka.reconnect.backoff.ms
Specifies the amount of time to wait before attempting to reconnect to a host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.
- Type: long
- Default: 50
- Valid Values: [0,…]
- Importance: low
src.kafka.metric.reporters
A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
- Type: list
- Default: “”
- Importance: low
src.kafka.metrics.num.samples
The number of samples maintained to compute metrics.
- Type: int
- Default: 2
- Valid Values: [1,…]
- Importance: low
src.kafka.metrics.sample.window.ms
The window of time a metrics sample is computed over.
- Type: long
- Default: 30000
- Valid Values: [0,…]
- Importance: low
src.kafka.send.buffer.bytes
The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the operating system default will be used.
- Type: int
- Default: 131072
- Valid Values: [-1,…]
- Importance: medium
src.kafka.receive.buffer.bytes
The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the operating system default will be used.
- Type: int
- Default: 65536
- Valid Values: [-1,…]
- Importance: medium
src.kafka.timestamps.topic.replication.factor
Replication factor for the consumer timestamps topic.
- Type: int
- Default: 3
- Importance: high
src.kafka.timestamps.topic.num.partitions
Number of partitions for the consumer timestamps topic.
- Type: int
- Default: 50
- Importance: high
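For a small development cluster, you might size the consumer timestamps topic to match the available brokers. The following is a sketch for a single-broker test setup, not recommended production values:
# Consumer timestamps topic sizing for a single-broker test cluster (example values)
src.kafka.timestamps.topic.replication.factor=1
src.kafka.timestamps.topic.num.partitions=1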
Source Kafka: Consumer
The following configuration options are properties that are specific to the Kafka consumer. These options will be combined with the src.kafka properties and forwarded to consumers that connect to the source cluster.
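For instance, a consumer-specific override takes the src.consumer prefix (the value shown is illustrative):
# Consumer-specific override for the source cluster; example value only
src.consumer.max.poll.records=250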
src.consumer.allow.auto.create.topics
Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows it via the auto.create.topics.enable broker configuration. This configuration must be set to false when using brokers older than 0.11.0.
- Type: boolean
- Default: false
- Importance: medium
Note
Normally, this config defaults to true in consumers. However, for Replicator this config internally defaults to false so that consumers cannot create topics. If you have topic.auto.create set to true and need to delete topics while Replicator is running, you can do so in one of two ways:
- Stop Replicator and delete all the topics.
OR
- Set auto.create.topics.enable to false on the Kafka clusters.
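For the second option, the corresponding broker-side setting would look like this sketch in the broker's server.properties:
# Kafka broker server.properties: disable automatic topic creation
auto.create.topics.enable=false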
src.consumer.interceptor.classes
A list of classes to use as interceptors. Implementing the ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.
- Type: list
- Default: null
- Importance: low
src.consumer.fetch.max.wait.ms
The maximum amount of time the server will block before answering the fetch request if there isn’t sufficient data to immediately satisfy the requirement given by fetch.min.bytes.
- Type: int
- Default: 500
- Valid Values: [0,…]
- Importance: low
src.consumer.fetch.min.bytes
The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.
- Type: int
- Default: 1
- Valid Values: [0,…]
- Importance: high
src.consumer.fetch.max.bytes
The maximum amount of data the server should return for a fetch request. This is not an absolute maximum: if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.
- Type: int
- Default: 52428800
- Valid Values: [0,…]
- Importance: medium
src.consumer.max.partition.fetch.bytes
The maximum amount of data per partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size.
- Type: int
- Default: 1048576
- Valid Values: [0,…]
- Importance: high
src.consumer.max.poll.records
The maximum number of records returned in a single call to poll().
- Type: int
- Default: 500
- Valid Values: [1,…]
- Importance: medium
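To illustrate how these fetch settings interact, the following sketch raises the limits for a source cluster that carries large messages (the values are examples to adapt, not recommendations):
# Allow larger fetches when source topics carry large messages (example values)
src.consumer.max.partition.fetch.bytes=4194304
src.consumer.fetch.max.bytes=67108864
# Trade a little latency for throughput by letting fetches accumulate more data
src.consumer.fetch.min.bytes=65536
src.consumer.fetch.max.wait.ms=500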
src.consumer.check.crcs
Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.
- Type: boolean
- Default: true
- Importance: low
Cluster ID and Group ID
The parameter cluster.id is required when running Replicator as an executable. This defines a unique identifier for the cluster that is formed when several instances of Replicator Executable start with the same --cluster.id, and it is a property on the executable only. The cluster.id does not have a default value; it must be specified at runtime. Replicator executables that have the same cluster.id will automatically discover each other and form a cluster.
For non-executable deployments (using Connect workers), the parameter group.id is a unique string, specified on the Connect worker, that identifies the cluster group the worker belongs to. The default is connect-cluster. Distributed workers that have the same group.id will automatically discover each other and form a cluster.
If you want more than one cluster, you must specify a different ID for each, using the property appropriate to the deployment type.
Tip
group.id is a property on the Connect worker, not on Replicator. It is mentioned here because it serves the same purpose as cluster.id for non-executable Replicator deployments.
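As a sketch (the IDs are illustrative), an executable deployment sets cluster.id in the Replicator configuration, while a Connect deployment sets group.id in the worker configuration:
# Replicator Executable configuration: instances sharing this ID form one cluster
cluster.id=replicator-dc1-to-dc2

# Connect worker configuration (not a Replicator property): workers sharing this ID form one cluster
group.id=replicator-connect-cluster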
Confluent Platform license
Important
From 5.5.0 onwards, Replicator fails immediately when the license key expires, even if Replicator is already running.
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than 3 brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
Confluent license properties
Note
By default, license-related properties (confluent.topic.*) are inherited from the dest.kafka.* connector properties.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments.
If you are a subscriber, please contact Confluent Support for more information.
- Type: string
- Default: “”
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for clients and can be used for two-way authentication.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for clients.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: “PLAINTEXT”
- Importance: medium
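For a license topic hosted on a TLS-secured cluster, the settings might look like the following sketch (the host, path, and password are placeholders):
# License topic connection over TLS; values are placeholders
confluent.topic.bootstrap.servers=broker1:9093
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/security/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>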
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic. This topic is created by default and contains the license that corresponds to the license key supplied through the confluent.license property.
Note
No public keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing. You can change the name of the _confluent-command topic using the confluent.topic property (for instance, if your environment has strict naming conventions). The example below shows this change and the configured Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that you can use for development and testing. For a production environment, you add the normal producer, consumer, and topic configuration properties to the connector properties, prefixed with confluent.topic..
License topic ACLs
The _confluent-command topic contains the license that corresponds to the license key supplied through the confluent.license property. It is created by default. Connectors that access this topic require the following ACLs configured:
- CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
- DESCRIBE, READ, and WRITE on the _confluent-command topic.
You can provide access either individually for each principal that will use the license, or use a wildcard entry to allow all clients. The following examples show commands that you can use to configure ACLs for the resource cluster and the _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
--add --allow-principal User:<principal> \
--operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
--add --allow-principal User:<principal> \
--operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Overriding Default Configuration Properties
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the confluent.topic.producer. prefix and consumer-specific properties by using the confluent.topic.consumer. prefix.
You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify configuration settings for brokers that require SSL or SASL for client connections using this prefix.
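As a sketch (the values are illustrative), producer- and consumer-specific overrides for the license topic take the corresponding prefixes:
# Producer and consumer overrides for the license topic; example values only
confluent.topic.producer.compression.type=lz4
confluent.topic.consumer.session.timeout.ms=30000
# Overrides the default <connector-name>-licensing client ID
confluent.topic.client.id=my-replicator-licensing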
You cannot override the cleanup policy of a topic because the topic always has a
single partition and is compacted. Also, do not specify serializers and
deserializers using this prefix; they are ignored if added.