Note
If you are installing the connector locally for Confluent Platform, see Debezium SQL Server Source Connector for Confluent Platform.
The Kafka Connect Microsoft SQL Server Change Data Capture (CDC) Source (Debezium) connector for Confluent Cloud can obtain a snapshot of the existing data in a Microsoft SQL Server database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services. Note that deleted records are not captured.
Important
After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.
The Microsoft SQL Server CDC Source (Debezium) connector provides the following features:
Topics created automatically: The connector automatically creates Kafka topics using the naming convention <database.server.name>.<schemaName>.<tableName>. Topics are created with the properties topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3.
After-state only output: The connector can emit only the after-state of change events; this option defaults to true.
Managed through the API: You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.
For more information, see the Confluent Cloud connector limitations.
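As an example of the topic naming convention above: with "database.server.name": "cdc", changes to a hypothetical table dbo.customers are written to the topic cdc.dbo.customers.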
Caution
Preview connectors are not currently supported and are not recommended for production use.
Use this quick start to get up and running with the Confluent Cloud Microsoft SQL Server CDC Source (Debezium) connector. The quick start provides the basics of selecting the connector and configuring it to obtain a snapshot of the existing data in a Microsoft SQL Server database and then monitor and record all subsequent row-level changes.
Prerequisites
Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
The Confluent Cloud CLI installed and configured for the cluster. See Install the Confluent Cloud CLI.
Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
SQL Server configured for change data capture (CDC). See Setting up SQL Server.
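Enabling CDC is a two-step process: once for the database and once for each table to capture. The following T-SQL is a minimal sketch using the SQL Server system procedures for CDC; the schema and table names (dbo.customers) are illustrative, and Setting up SQL Server remains the authoritative reference.

-- Run in the target database to enable CDC at the database level.
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for one table (dbo.customers is a hypothetical example).
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'customers',
    @role_name = NULL;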
Public access may be required for your database. See Internet Access to Resources for details. For example, public access can be enabled in the AWS Management Console when setting up a Microsoft SQL Server database.
Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for details. For example, inbound traffic can be opened in the AWS Management Console when setting up security group rules for the VPC.
See your specific cloud platform documentation for how to configure security rules for your VPC.
Kafka cluster credentials. You can use an existing API key and secret, or create a new key and secret using the Confluent Cloud CLI or the Confluent Cloud Console.
ACLs that allow the connector to create and write to topics with the required prefix. The prefix is the database server name set in the configuration property "database.server.name" (for example, the prefix is cdc if the configuration contains "database.server.name": "cdc"). See ccloud kafka acl create for the CLI command reference.
ccloud kafka acl create --allow --service-account "<service-account-id>" --operation "CREATE" --prefix --topic "<database.server.name>"
ccloud kafka acl create --allow --service-account "<service-account-id>" --operation "WRITE" --prefix --topic "<database.server.name>"
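For example, substituting a hypothetical service account ID 123456 and the server name cdc:

ccloud kafka acl create --allow --service-account "123456" --operation "CREATE" --prefix --topic "cdc"
ccloud kafka acl create --allow --service-account "123456" --operation "WRITE" --prefix --topic "cdc"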
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Click Connectors. If you already have connectors in your cluster, click Add connector.
Click the Microsoft SQL Server CDC Source connector icon.
Complete the following and click Continue.
Enter a connector name.
Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
Add the connection details for the database.
Important
Do not include jdbc:xxxx:// in the Connection host field. Enter only the host address (for example, connect-sqlserver-cdc.<host-id>.us-west-2.rds.amazonaws.com).
Add the Database details for your database. Review the following notes for more information about field selections.
Tables included: (Optional) Enter a comma-separated list of fully-qualified table names in the form schemaName.tableName. By default, the connector monitors all non-system tables.
Snapshot mode: Specifies the criteria for performing a database snapshot when the connector starts. Select initial to capture both the structure and the existing data of the monitored tables, or schema_only to capture only the structure.
Select values for the following properties:
Output message format (data coming from the connector): AVRO, JSON (schemaless), JSON_SR (JSON Schema), or PROTOBUF. A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
After-state only: (Optional) Defaults to true, which results in the Kafka record containing only the record state after the change event was applied. Select false to also maintain the prior record state.
JSON output decimal format: (Optional) Defaults to BASE64, which serializes Connect DECIMAL logical type values as base64-encoded binary data. Select NUMERIC to serialize them as JSON numbers.
Enter the number of tasks in use by the connector. Refer to Confluent Cloud connector limitations for additional information.
Verify the connection details and click Launch.
The status for the connector should go from Provisioning to Running. It may take a few minutes.
After the connector is running, verify that messages are populating your Kafka topic.
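To spot-check from the Confluent Cloud CLI, you can consume from one of the connector's topics from the beginning. The topic name below is a hypothetical example following the <database.server.name>.<schemaName>.<tableName> convention:

ccloud kafka topic consume -b "cdc.dbo.customers"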
A topic named dbhistory.<database.server.name>.<connect-id> is also created automatically, based on the database.history.kafka.topic property (which may be configured). This topic has one partition.
For additional information about this connector, see Debezium SQL Server Source Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
Complete the following steps to set up and run the connector using the Confluent Cloud CLI.
Make sure you have all your prerequisites completed.
Enter the following command to list available connectors:
ccloud connector-catalog list
Enter the following command to show the required connector properties:
ccloud connector-catalog describe <connector-catalog-name>
For example:
ccloud connector-catalog describe MicrosoftSqlServerSource
Example output:
Following are the required configs:
connector.class: SqlServerCdcSource
name
kafka.api.key
kafka.api.secret
database.hostname
database.port
database.user
database.password
database.dbname
database.server.name
output.data.format
tasks.max
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{ "connector.class": "SqlServerCdcSource", "name": "SqlServerCdcSourceConnector_0", "kafka.api.key": "****************", "kafka.api.secret": "****************************************************************", "database.hostname": "connect-sqlserver-cdc.<host-id>.us-west-2.rds.amazonaws.com", "database.port": "1433", "database.user": "admin", "database.password": "************", "database.dbname": "database-name", "database.server.name": "sql", "table.whitelist":"public.passengers", "snapshot.mode": "initial", "output.data.format": "JSON", "tasks.max": "1" }
Note the following property definitions:
"connector.class": Identifies the connector plugin name.
"connector.class"
"name": Sets a name for your new connector.
"name"
"table.whitelist": (Optional) Enter a comma-separated list of fully-qualified table identifiers for the connector to monitor. By default, the connector monitors all non-system tables. A fully-qualified table name is in the form schemaName.tableName.
"table.whitelist"
"snapshot.mode": Specifies the criteria for performing a database snapshot when the connector starts.
"snapshot.mode"
schema.only
"output.data.format": Sets the output message format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
"output.data.format"
"after.state.only": (Optional) Defaults to true, which results in the Kafka record having only the record state from change events applied. Enter false to maintain the prior record states after applying the change events.
"after.state.only"
"json.output.decimal.format": (Optional) Defaults to BASE64. Specify the JSON/JSON_SR serialization format for Connect DECIMAL logical type values with two allowed literals:
"json.output.decimal.format"
"tasks.max": Enter the number of tasks in use by the connector. Refer to Confluent Cloud connector limitations for additional information.
"tasks.max"
Configuration properties that are not listed use the default values. For default values and property definitions, see SQL Server Source Connector (Debezium) Configuration Properties.
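As a sketch of what the "after.state.only" setting controls: a Debezium change event for an update carries the row state before and after the change, along with metadata (a source block, omitted here). The field names below are illustrative, not taken from a real table. With "after.state.only": "true" (the default), the Kafka record value contains only the contents of the after block; with "false", the fuller envelope shown below is preserved:

{
  "before": { "id": 1, "seat": "12A" },
  "after": { "id": 1, "seat": "14C" },
  "op": "u",
  "ts_ms": 1595277521000
}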
Enter the following command to load the configuration and start the connector:
ccloud connector create --config <file-name>.json
For example:
ccloud connector create --config microsoft-sql-cdc-source.json
Example output:
Created connector SqlServerCdcSourceConnector_0 lcc-ix4dl
Enter the following command to check the connector status:
ccloud connector list
Example output:
     ID     |             Name              | Status  |  Type
+-----------+-------------------------------+---------+--------+
  lcc-ix4dl | SqlServerCdcSourceConnector_0 | RUNNING | source
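You can also check an individual connector using its ID from the list output (lcc-ix4dl in this example):

ccloud connector describe lcc-ix4dl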