SPOOL DIR SOURCE CONNECTORS
This connector is used to stream JSON files from a directory while also converting the data based on the schema supplied in the configuration.
To use this connector, use a connector configuration that specifies the name of this connector class in the connector.class configuration property:
connector.class
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnector
The other connector-specific configuration properties are described below.
This example follows the same steps as the Quick Start. Review the Quick Start for help running the Confluent Platform and installing the Spool Dir connectors.
Generate a JSON dataset using the command below:
curl "https://api.mockaroo.com/api/17c84440?count=500&key=25fd9c80" > "json-spooldir-source.json"
Create a spooldir.properties file with the following contents:
spooldir.properties
name=JsonSpoolDir
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirJsonSourceConnector
input.path=/path/to/data
input.file.pattern=json-spooldir-source.json
error.path=/path/to/error
finished.path=/path/to/finished
halt.on.error=false
topic=spooldir-json-topic
Load the SpoolDir JSON Source Connector using the Confluent CLI confluent local services connect connector load command.
Caution
You must include a double dash (--) between the topic name and your flag. For more information, see this post.
confluent local services connect connector load spooldir --config spooldir.properties
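To verify that the connector started, you can check its status. The command below is a sketch that assumes the connector status subcommand of the same Confluent CLI local services commands used above:
confluent local services connect connector status spooldir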
Important
Don’t use the confluent local commands in production environments.
To stream the same files without a supplied schema, create a properties file for the Schemaless JSON Source Connector with the following contents:
name=SchemaLessJsonSpoolDir
tasks.max=1
connector.class=com.github.jcustenborder.kafka.connect.spooldir.SpoolDirSchemaLessJsonSourceConnector
input.path=/path/to/data
input.file.pattern=json-spooldir-source.json
error.path=/path/to/error
finished.path=/path/to/finished
halt.on.error=false
topic=spooldir-schemaless-json-topic
value.converter=org.apache.kafka.connect.storage.StringConverter
Load the SpoolDir Schemaless JSON Source Connector.
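For example, if the configuration above is saved as spooldir-schemaless.properties (an assumed file name for illustration), the load command follows the same pattern as before:
confluent local services connect connector load spooldir-schemaless --config spooldir-schemaless.properties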
Don’t use the Confluent CLI in production environments.
topic
The Kafka topic to write the data to.
batch.size
The number of records that should be returned with each batch.
empty.poll.wait.ms
The amount of time to wait if a poll returns an empty list of records.
metadata.field
The name of the field in the value where the metadata will be stored.
metadata.location
Location where metadata about the input file will be stored. FIELD - metadata about the file is stored in a field in the value of the record. HEADERS - metadata about the input file is stored as headers on the record. NONE - no metadata about the input file is stored. Valid values: NONE, HEADERS, FIELD.
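As an illustration, the general properties above could be combined in a connector configuration like this; the values shown are arbitrary examples, not documented defaults:
# illustrative values only
topic=spooldir-json-topic
batch.size=1000
empty.poll.wait.ms=500
metadata.location=FIELD
metadata.field=metadata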
For more information about Auto topic creation, see Configuring Auto Topic Creation for Source Connectors.
Note
Configuration properties accept regular expressions (regex) that are defined as Java regex.
topic.creation.groups
A list of group aliases that are used to define per-group topic configurations for matching topics. A group named default always exists and matches all topics.
topic.creation.$alias.replication.factor
The replication factor for new topics created by the connector. This value must not be larger than the number of brokers in the Kafka cluster. If this value is larger than the number of Kafka brokers, an error occurs when the connector attempts to create a topic. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
Valid values: >= 1 for a specific replication factor, or -1 to use the broker's default value.
topic.creation.$alias.partitions
The number of topic partitions created by this connector. This is a required property for the default group. This property is optional for any other group defined in topic.creation.groups. Other groups use the Kafka broker default value.
topic.creation.$alias.include
A list of strings that represent regular expressions that match topic names. This list is used to include topics with matching values, and apply this group’s specific configuration to the matching topics. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group.
topic.creation.$alias.exclude
A list of strings representing regular expressions that match topic names. This list is used to exclude topics with matching values from getting the group's specific configuration. $alias applies to any group defined in topic.creation.groups. This property does not apply to the default group. Note that exclusion rules override any inclusion rules for topics.
topic.creation.$alias.${kafkaTopicSpecificConfigName}
Any of the topic-level broker configurations (see Changing Broker Configurations Dynamically) for the version of the Kafka broker where the records will be written. The broker's topic-level configuration value is used if the configuration is not specified for the rule. $alias applies to the default group as well as any group defined in topic.creation.groups.
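For instance, a connector configuration might define the required default group plus one extra group; the group alias and values below are illustrative assumptions, not defaults:
# illustrative topic creation settings
topic.creation.groups=longretention
topic.creation.default.replication.factor=3
topic.creation.default.partitions=5
topic.creation.longretention.include=spooldir.*
topic.creation.longretention.retention.ms=604800000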
error.path
The directory to place files that have errors. This directory must exist and be writable by the user running Kafka Connect.
input.file.pattern
Regular expression to check input file names against. This expression must match the entire filename. The equivalent of Matcher.matches().
input.path
The directory where Kafka Connect reads files that are processed. This directory must exist and be writable by the user running Connect.
finished.path
The directory where Connect puts files that are successfully processed. This directory must exist and be writable by the user running Connect.
halt.on.error
Sets whether the task halts when it encounters an error or continues to the next file.
cleanup.policy
Determines how the connector should clean up files that have been successfully processed. NONE leaves the files in place, which could cause them to be reprocessed if the connector is restarted. DELETE removes the file from the filesystem. MOVE moves the file to a finished directory. MOVEBYDATE moves the file to a finished directory with subdirectories by date. Valid values: NONE, DELETE, MOVE, MOVEBYDATE.
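Putting the file-handling properties together, a configuration fragment might look like the following; the paths and pattern are placeholders:
# illustrative file-handling settings; the pattern must match the entire file name
input.path=/var/spooldir/data
input.file.pattern=orders-[0-9]+.json
error.path=/var/spooldir/error
finished.path=/var/spooldir/finished
halt.on.error=false
cleanup.policy=MOVE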
task.partitioner
The task partitioner implementation to use when the connector is configured with more than one task. Each task uses it to determine which files it will process, ensuring that each file is assigned to only one task. Default: ByName.
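For example, to spread input files across several tasks (values are illustrative):
# illustrative task settings
tasks.max=4
task.partitioner=ByName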
file.buffer.size.bytes
The size of the buffer for the BufferedInputStream that is used to interact with the file system.
file.minimum.age.ms
The amount of time in milliseconds after the file was last written to before the file can be processed.
files.sort.attributes
The attributes each file will use to determine the sort order. Name is the name of the file. Length is the length of the file, preferring larger files first. LastModified is the last-modified attribute of the file, preferring older files first. Valid values: NameAsc, NameDesc, LengthAsc, LengthDesc, LastModifiedAsc, LastModifiedDesc.
processing.file.extension
Before a file is processed, it is renamed to indicate that it is currently being processed. This setting is appended to the end of the file name.
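A sketch of how these file-processing tuning properties might be combined; the values are illustrative, not defaults:
# illustrative tuning settings
file.buffer.size.bytes=131072
file.minimum.age.ms=5000
files.sort.attributes=LastModifiedAsc
processing.file.extension=.PROCESSING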
key.schema
The schema for the key written to Kafka.
value.schema
The schema for the value written to Kafka.
schema.generation.enabled
Flag to determine if schemas should be dynamically generated. If set to true, key.schema and value.schema can be omitted, but schema.generation.key.name and schema.generation.value.name must be set.
schema.generation.key.name
The name of the generated key schema.
schema.generation.value.name
The name of the generated value schema.
schema.generation.key.fields
The field(s) to use to build a key schema. This is only used during schema generation.
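For example, to let the connector generate schemas instead of supplying key.schema and value.schema explicitly; the schema names and key field below are assumptions for illustration:
# illustrative schema generation settings
schema.generation.enabled=true
schema.generation.key.name=com.example.UserKey
schema.generation.value.name=com.example.UserValue
schema.generation.key.fields=id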
timestamp.field
The field in the value schema that contains the parsed timestamp for the record. This field cannot be marked as optional and must be a Timestamp.
timestamp.mode
Determines how the connector sets the timestamp for the ConnectRecord. If set to Field, the timestamp is read from a field in the value. This field cannot be optional and must be a Timestamp. Specify the field in timestamp.field. If set to FILE_TIME, the last time the file was modified is used. If set to PROCESS_TIME (the default), the time the record is read is used. Valid values: Field, FILE_TIME, PROCESS_TIME.
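As a brief sketch, to stamp each record with the input file's last-modified time instead of the processing time:
# illustrative timestamp setting
timestamp.mode=FILE_TIME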
parser.timestamp.date.formats
The date formats that are expected in the file. This is a list of strings that are used to parse the date fields in order. The most accurate date format should be first in the list. See the Java documentation for more information.
parser.timestamp.timezone
The time zone used for all parsed dates.
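When timestamps are parsed from string fields, the parser properties above might be set like this; the formats and time zone are illustrative assumptions:
# illustrative timestamp parsing settings
parser.timestamp.date.formats=yyyy-MM-dd'T'HH:mm:ss'Z',yyyy-MM-dd
parser.timestamp.timezone=UTC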