Connector-specific configuration properties are described below.
Connector Parameters
format.class
The format class to use when writing data to the store.
- Type: class
- Importance: high
flush.size
The number of records written to the store before invoking file commits.
- Type: int
- Importance: high
Important
Rotation strategy logic: The logic that flushes files to storage is triggered when a new record arrives, after the defined rotation interval or scheduled rotation interval has elapsed. Flushing files is also triggered periodically by the offset.flush.interval.ms setting defined in the Connect worker configuration, which defaults to 60000 ms (60 seconds). If you enable rotate.interval.ms or rotate.schedule.interval.ms and the ingestion rate is low, set offset.flush.interval.ms to a smaller value so that records flush at (or close to) the rotation interval. Leaving offset.flush.interval.ms at the default 60 seconds may cause records to stay in an open file longer than expected if no new records arrive to trigger rotation. For detailed information about rotation strategies, see Azure Blob Storage Object Uploads.
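For example, here is a minimal sketch (the values are illustrative, not recommendations) that pairs a one-minute rotation interval in the connector configuration with a shorter offset flush interval in the Connect worker configuration, so that records on low-traffic topics still flush close to the rotation boundary:
# Connector configuration
flush.size=1000
rotate.interval.ms=60000
# Connect worker configuration
offset.flush.interval.ms=10000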
rotate.interval.ms
The time interval in milliseconds to invoke file commits. This configuration is useful when the data ingestion rate is low and the connector does not write enough messages to commit files. The default value -1 means that this feature is disabled.
- Type: long
- Default: -1
- Importance: high
rotate.schedule.interval.ms
The time interval in milliseconds to periodically invoke file commits. The time of commit is adjusted to 00:00 of the selected timezone, and the commit is performed at the scheduled time, subject to the other factors described in the Important note above. This configuration is useful when you have to commit your data based on current server time, for example at the beginning of every hour. You must configure the partitioner parameter timezone (which defaults to an empty string) when using this configuration property; otherwise, the connector fails with an exception. The default value -1 means that this feature is disabled.
- Type: long
- Default: -1
- Importance: medium
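As an illustration, the following sketch (values are hypothetical) commits files at the top of every hour, with the required partitioner timezone parameter set:
rotate.schedule.interval.ms=3600000
timezone=UTC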
The following Avro converter properties can be used in the connector
configuration:
schema.cache.config
The size of the schema cache used in the Avro converter.
- Type: int
- Default: 1000
- Importance: low
enhanced.avro.schema.support
Enable enhanced Avro schema support in the Avro converter. When set to true, this property preserves Avro schema package information and enums when converting from an Avro schema to a Connect schema. This information is added back when converting from a Connect schema to an Avro schema.
- Type: boolean
- Default: false
- Importance: low
connect.meta.data
Allow the Connect converter to add its metadata to the output schema.
- Type: boolean
- Default: true
- Importance: low
The connect.meta.data property preserves the following Connect schema metadata when converting from a Connect schema to an Avro schema, and adds it back when converting from an Avro schema to a Connect schema:
- doc
- version
- parameters
- default value
- name
- type
For detailed information and configuration examples for Avro converters listed
above, see Using Kafka Connect with Schema Registry.
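For reference, here is a sketch of how these properties might appear when the Avro converter is overridden at the connector level, where converter properties take the value.converter. prefix (the Schema Registry URL is a placeholder):
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
value.converter.enhanced.avro.schema.support=true
value.converter.connect.meta.data=true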
retry.backoff.ms
The retry backoff in milliseconds. This configuration is used to notify Kafka Connect to retry delivering a message batch or performing recovery in case of transient exceptions.
- Type: long
- Default: 5000
- Importance: low
filename.offset.zero.pad.width
Width to zero-pad offsets in the store's filenames if the offsets are too short, in order to provide fixed-width filenames that can be ordered by simple lexicographic sorting.
- Type: int
- Default: 10
- Valid Values: [0,…]
- Importance: low
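For example, with the default width of 10, a file whose first record has offset 42 includes the zero-padded component 0000000042 in its name, which sorts lexicographically before a file starting at offset 100 (0000000100).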
avro.codec
The Avro compression codec to be used for output files. Available values: null, deflate, snappy, and bzip2 (the codec source is org.apache.avro.file.CodecFactory).
- Type: string
- Default: null
- Valid Values: [null, deflate, snappy, bzip2]
- Importance: low
parquet.codec
The Parquet compression codec to be used for output files.
- Type: string
- Default: snappy
- Valid Values: [none, snappy, gzip, brotli, lz4, lzo, zstd]
- Importance: low
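For example, to write Avro output files compressed with deflate (a sketch; the format class shown is illustrative and should match the Avro format class documented for your connector):
format.class=io.confluent.connect.azure.blob.format.avro.AvroFormat
avro.codec=deflate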
Confluent Platform license
confluent.topic.bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster used for licensing. All servers in the cluster will be discovered from the initial connection. This list should be in the form host1:port1,host2:port2,…. Because these servers are only used for the initial connection to discover the full cluster membership (which may change dynamically), the list need not contain the full set of servers (you may want more than one, though, in case a server is down).
- Type: list
- Importance: high
confluent.topic
Name of the Kafka topic used for Confluent Platform configuration, including licensing information.
- Type: string
- Default: _confluent-command
- Importance: low
confluent.topic.replication.factor
The replication factor for the Kafka topic used for Confluent Platform configuration, including licensing information. This is used only if the topic does not already exist, and the default of 3 is appropriate for production use. If you are using a development environment with fewer than three brokers, you must set this to the number of brokers (often 1).
- Type: int
- Default: 3
- Importance: low
Confluent license properties
Tip
While it is possible to include license-related properties in the connector configuration, starting with Confluent Platform version 6.0 you can put license-related properties in the Connect worker configuration instead of in each connector configuration.
Note
This connector is proprietary and requires a license. The license information is stored in the _confluent-command topic. If the broker requires SSL for connections, you must include the security-related confluent.topic.* properties as described below.
confluent.license
Confluent issues enterprise license keys to each subscriber. The license key is text that you can copy and paste as the value for confluent.license. A trial license allows using the connector for a 30-day trial period. A developer license allows using the connector indefinitely for single-broker development environments. If you are a subscriber, please contact Confluent Support for more information.
- Type: string
- Default: ""
- Valid Values: Confluent Platform license
- Importance: high
confluent.topic.ssl.truststore.location
The location of the trust store file.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.truststore.password
The password for the trust store file. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way client authentication.
- Type: string
- Default: null
- Importance: high
confluent.topic.ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
- Type: password
- Default: null
- Importance: high
confluent.topic.ssl.key.password
The password of the private key in the key store file. This is optional for the client.
- Type: password
- Default: null
- Importance: high
confluent.topic.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: medium
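If the brokers used for licensing require SSL, the security-related properties might look like the following sketch (the truststore path and password are placeholders):
confluent.topic.security.protocol=SSL
confluent.topic.ssl.truststore.location=/etc/security/kafka.client.truststore.jks
confluent.topic.ssl.truststore.password=<truststore-password>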
License topic configuration
A Confluent enterprise license is stored in the _confluent-command topic. This topic is created by default and contains the license that corresponds to the license key supplied through the confluent.license property.
Note
No public keys are stored in Kafka topics.
The following describes how the default _confluent-command topic is generated under different scenarios:
- A 30-day trial license is automatically generated for the _confluent-command topic if you do not add the confluent.license property or leave this property empty (for example, confluent.license=).
- Adding a valid license key (for example, confluent.license=<valid-license-key>) adds a valid license in the _confluent-command topic.
Here is an example of the minimal properties for development and testing.
You can change the name of the _confluent-command topic using the confluent.topic property (for instance, if your environment has strict naming conventions). The example below shows this change and the configured Kafka bootstrap server.
confluent.topic=foo_confluent-command
confluent.topic.bootstrap.servers=localhost:9092
The example above shows the minimally required bootstrap server property that you can use for development and testing. For a production environment, you add the normal producer, consumer, and topic configuration properties to the connector properties, using the confluent.topic. prefix.
License topic ACLs
The _confluent-command topic contains the license that corresponds to the license key supplied through the confluent.license property. It is created by default. Connectors that access this topic require the following ACLs configured:
- CREATE and DESCRIBE on the resource cluster, if the connector needs to create the topic.
- DESCRIBE, READ, and WRITE on the _confluent-command topic.
You can provide access either individually for each principal that will use the license, or use a wildcard entry to allow all clients. The following examples show commands that you can use to configure ACLs for the resource cluster and the _confluent-command topic.
Set a CREATE and DESCRIBE ACL on the resource cluster:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
--add --allow-principal User:<principal> \
--operation CREATE --operation DESCRIBE --cluster
Set a DESCRIBE, READ, and WRITE ACL on the _confluent-command topic:
kafka-acls --bootstrap-server localhost:9092 --command-config adminclient-configs.conf \
--add --allow-principal User:<principal> \
--operation DESCRIBE --operation READ --operation WRITE --topic _confluent-command
Overriding Default Configuration Properties
You can override the replication factor using confluent.topic.replication.factor. For example, when using a Kafka cluster as a destination with fewer than three brokers (for development and testing), you should set the confluent.topic.replication.factor property to 1.
You can override producer-specific properties by using the confluent.topic.producer. prefix and consumer-specific properties by using the confluent.topic.consumer. prefix.
You can use the defaults or customize the other properties as well. For example, the confluent.topic.client.id property defaults to the name of the connector with a -licensing suffix. You can specify the configuration settings for brokers that require SSL or SASL for client connections using this prefix.
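For example, here is a hypothetical sketch of overrides applied through these prefixes (all values are illustrative only):
confluent.topic.producer.compression.type=lz4
confluent.topic.consumer.session.timeout.ms=30000
confluent.topic.client.id=my-connector-licensing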
You cannot override the cleanup policy of a topic because the topic always has a
single partition and is compacted. Also, do not specify serializers and
deserializers using this prefix; they are ignored if added.