Note
If you are installing the connector locally for Confluent Platform, see Debezium MySQL Source Connector for Confluent Platform.
The Kafka Connect MySQL Change Data Capture (CDC) Source (Debezium) connector for Confluent Cloud can obtain a snapshot of the existing data in a MySQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services. Note that deleted records are not captured.
Important
After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.
The MySQL CDC Source (Debezium) connector provides the following features:
Topics created automatically: The connector automatically creates Kafka topics using the naming convention <database.server.name>.<schemaName>.<tableName>. The topics are created with the properties topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3.
After-state only output: Defaults to true, so each Kafka record contains only the record state after the change event is applied.
Automatic server ID: The database.server.id property is set to a random number between 5400 and 6400.
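For example, with "database.server.name": "cdc" (the server name used in the ACL example below) and the table employees.departments from the configuration example later on this page, change events are written to the topic cdc.employees.departments.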
You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.
For more information, see the Confluent Cloud connector limitations.
Caution
Preview connectors are not currently supported and are not recommended for production use.
Use this quick start to get up and running with the MySQL CDC Source (Debezium) connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a MySQL database and then monitoring and recording all subsequent row-level changes.
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
The Confluent Cloud CLI installed and configured for the cluster. See Install the Confluent Cloud CLI.
Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
Public access may be required for your database. See Internet Access to Resources for details. For example, when setting up a MySQL database in the AWS Management Console, you can set Public access to enabled.
Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for details. For example, in the AWS Management Console you can add a security group rule for the VPC that allows open inbound traffic from 0.0.0.0/0.
See your specific cloud platform documentation for how to configure security rules for your VPC.
Kafka cluster credentials. You can use either an API key and secret or a service account API key and secret.
ACLs that allow the connector to create and write to topics with the configured prefix are required. The prefix is the database server name (for example, the server name is cdc in the configuration property "database.server.name": "cdc"). See ccloud kafka acl create for the CLI command reference.
ccloud kafka acl create --allow --service-account "<service-account-id>" --operation "CREATE" --prefix --topic "<database.server.name>"
ccloud kafka acl create --allow --service-account "<service-account-id>" --operation "WRITE" --prefix --topic "<database.server.name>"
Update the following settings for the MySQL database.
Turn on backup for the database.
Create a new parameter group and set the following parameters:
binlog_format=ROW
binlog_row_image=full
Apply the new parameter group to the database.
Reboot the database.
The example screens for these settings are from Amazon RDS.
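To confirm the new parameter group is active after the reboot, one option is to check the binlog settings with the mysql command-line client. This is a minimal sketch; the host and admin user are taken from the configuration example later on this page:

mysql -h database-2.<host-ID>.us-west-2.rds.amazonaws.com -u admin -p \
  -e "SHOW VARIABLES LIKE 'binlog_format'; SHOW VARIABLES LIKE 'binlog_row_image';"

The two queries should return ROW and FULL, respectively.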
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Click Connectors. If you already have connectors in your cluster, click Add connector.
Click the MySQL CDC Source connector icon.
Complete the following and click Continue.
Enter a connector name.
Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
Add the connection details for the database.
Do not include a jdbc:xxxx:// prefix in the Connection host field. Enter only the host address, for example, database-2.<host-ID>.us-west-2.rds.amazonaws.com.
Add the Database details for your database. Review the following notes for more information about field selections.
Database name: Enter the name (<database-name>) of the database to connect to.
Tables included or Tables excluded: Enter fully-qualified table names in the form schemaName.tableName.
Snapshot mode: Specifies the criteria for running a snapshot. Valid entries are initial (the default), never, or when_needed.
Select values for the following properties:
Output message format: (data coming from the connector): AVRO, JSON (schemaless), JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
After-state only: (Optional) Defaults to true, which results in the Kafka record containing only the record state after the change event is applied. Select false to also keep the prior record state when change events are applied (see the illustration after this list).
JSON output decimal format: (Optional) Defaults to BASE64.
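For illustration only, consider a hypothetical row with id and name columns. With After-state only set to true, the value of a Kafka record for an update contains just the new row state:

{"id": 1001, "name": "J. Doe"}

With After-state only set to false, the value keeps the Debezium change envelope, which includes the prior row state in the before field:

{"before": {"id": 1001, "name": "J."}, "after": {"id": 1001, "name": "J. Doe"}, "op": "u"}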
Enter the number of tasks for the connector to use. Refer to Confluent Cloud connector limitations for additional information.
Verify the connection details and click Launch.
The status for the connector should go from Provisioning to Running. It may take a few minutes.
After the connector is running, verify that messages are populating your Kafka topic.
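One way to verify is with the Confluent Cloud CLI. The following is a minimal sketch, assuming the server name cdc, the table employees.departments, and the AVRO output format used in the examples on this page:

ccloud kafka topic list
ccloud kafka topic consume --from-beginning --value-format avro cdc.employees.departments

The first command lists the topics created by the connector, and the second prints the change event records from the beginning of the topic.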
A topic named dbhistory.<database.server.name>.<connect-id> is created automatically, based on the database.history.kafka.topic property (which may be configured). This topic has one partition.
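For example, assuming the server name cdc and that <connect-id> corresponds to the connector ID lcc-ix4dl shown in the CLI example output below, the history topic would be named dbhistory.cdc.lcc-ix4dl.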
For additional information about this connector, see Debezium MySQL Source Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
Complete the following steps to set up and run the connector using the Confluent Cloud CLI.
Make sure you have all your prerequisites completed.
Enter the following command to list available connectors:
ccloud connector-catalog list
Enter the following command to show the required connector properties:
ccloud connector-catalog describe <connector-catalog-name>
For example:
ccloud connector-catalog describe MySqlCdcSource
Example output:
Following are the required configs:
connector.class: MySqlCdcSource
name
kafka.api.key
kafka.api.secret
database.hostname
database.user
database.server.name
output.data.format
tasks.max
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{ "connector.class": "MySqlCdcSource", "name": "MySqlCdcSourceConnector_0", "kafka.api.key": "****************", "kafka.api.secret": "****************************************************************", "database.hostname": "database-2.<host-ID>.us-west-2.rds.amazonaws.com", "database.port": "3306", "database.user": "admin", "database.password": "**********", "database.server.name": "mysql", "database.whitelist": "employee", "table.whitelist":"employees.departments, "snapshot.mode": "initial", "output.data.format": "AVRO", "tasks.max": "1" }
Note the following property definitions:
"connector.class": Identifies the connector plugin name.
"name": Sets a name for your new connector.
"table.whitelist": Enter the tables to include as fully-qualified table names in the form schemaName.tableName.
"snapshot.mode": Specifies the criteria for running a snapshot. Valid entries are initial (the default), never, or when_needed.
"output.data.format": Sets the output message format. Valid entries are AVRO, JSON, JSON_SR, or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format.
"after.state.only": (Optional) Defaults to true, which results in the Kafka record containing only the record state after the change event is applied. Set to false to also keep the prior record state.
"json.output.decimal.format": (Optional) Defaults to BASE64.
"tasks.max": The number of tasks for the connector to use. Refer to Confluent Cloud connector limitations for additional information.
Enter the following command to load the configuration and start the connector:
ccloud connector create --config <file-name>.json
For example:
ccloud connector create --config mysql-cdc-source.json
Example output:
Created connector MySqlCdcSourceConnector_0 lcc-ix4dl
Enter the following command to check the connector status:
ccloud connector list
Example output:
     ID     |           Name            | Status  |  Type
+-----------+---------------------------+---------+--------+
  lcc-ix4dl | MySqlCdcSourceConnector_0 | RUNNING | source
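To view the status and details for a single connector, you can also describe it by ID (the ID here is taken from the example output above):

ccloud connector describe lcc-ix4dl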