Limitations

Refer to the following for specific Confluent Cloud connector limitations.

Supported Connectors

Amazon Kinesis Source Connector

There are no current limitations for the Amazon Kinesis Source Connector for Confluent Cloud.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Amazon Redshift Sink Connector

The following are limitations for the Amazon Redshift Sink Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • The Confluent Cloud cluster and the target Redshift cluster must be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Amazon S3 Sink Connector

The following are limitations for the Amazon S3 Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target S3 bucket must be in the same AWS region.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm; the five records that arrived at 3:00pm are flushed later, as part of the next hourly partition.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Files are written based on whichever condition is met first. (A minimal sketch of this logic appears after this list.)

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
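
To make the interplay between flush.size and rotate.schedule.interval.ms described in the note above concrete, the following Python sketch simulates the "whichever condition is met first" check. It is an illustration only, not the connector's implementation; the property names come from this page, and the thresholds mirror the example above.

    # Illustrative only: simulates the "whichever condition is met first" check.
    # flush.size and rotate.schedule.interval.ms are the documented property names;
    # the logic below is a simplified assumption, not connector source code.
    FLUSH_SIZE = 1000                       # flush.size
    ROTATE_SCHEDULE_INTERVAL_MS = 600_000   # rotate.schedule.interval.ms (10 minutes)

    def should_write_file(buffered_records: int, ms_since_file_opened: int) -> bool:
        """Return True when either the size or the scheduled-rotation condition is met."""
        return (buffered_records >= FLUSH_SIZE
                or ms_since_file_opened >= ROTATE_SCHEDULE_INTERVAL_MS)

    print(should_write_file(500, 600_000))   # True: the 10-minute rotation trips first
    print(should_write_file(1200, 120_000))  # True: flush.size trips first
    print(should_write_file(500, 120_000))   # False: neither condition met yet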

AWS Lambda Sink Connector

The following are limitations for the AWS Lambda Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and your AWS Lambda project should be in the same AWS region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Azure Blob Storage Sink Connector

The following are limitations for the Azure Blob Storage Sink Connector for Confluent Cloud.

  • The Azure Blob Storage Container should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Blob storage in different regions.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm; the five records that arrived at 3:00pm are flushed later, as part of the next hourly partition.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Files are written based on whichever condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Azure Data Lake Storage Gen2 Sink Connector

The following are limitations for the Azure Data Lake Storage Gen2 Sink Connector for Confluent Cloud.

  • Azure Data Lake storage should be in the same region as your Confluent Cloud cluster. If you use a different region, be aware that you may incur additional data transfer charges. Contact Confluent Support if you need to use Confluent Cloud and Azure Data Lake storage in different regions.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the preview version of this connector.

  • Using JSON as the input format with AVRO as the output format does not work for the preview connector.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm; the five records that arrived at 3:00pm are flushed later, as part of the next hourly partition.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Files are written based on whichever condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Azure Event Hubs Source Connector

The following are limitations for the Azure Event Hubs Source Connector for Confluent Cloud.

  • The max.events property allows a maximum of 499 events and defaults to 50.
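
As a rough illustration, a connector configuration that caps the batch size sets max.events explicitly. In the sketch below, only max.events (documented maximum 499, default 50) comes from this page; every other key and value is a placeholder assumption, so refer to the connector quick start for the actual required properties.

    # Hypothetical configuration snippet for the Azure Event Hubs source connector.
    # Only max.events is taken from this page; all other keys and values are placeholders.
    import json

    config = {
        "name": "AzureEventHubsSourceExample",  # placeholder connector name
        "kafka.topic": "from_event_hub",        # placeholder destination topic
        "max.events": "499",                    # documented maximum; default is 50
    }

    with open("azure-event-hubs-source.json", "w") as f:
        json.dump(config, f, indent=2)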

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Azure Functions Sink Connector

There is one limitation for the Azure Functions Sink Connector for Confluent Cloud.

The target Azure Function should be in the same region as your Confluent Cloud cluster.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Datagen Source Connector

There are no current limitations for the Datagen Source Connector for Confluent Cloud.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Elasticsearch Service Sink Connector

The following are limitations for the Elasticsearch Service Sink Connector for Confluent Cloud.

  • The connector only works with the Elasticsearch Service from Elastic Cloud.
  • The Confluent Cloud cluster and the target Elasticsearch deployment must be in the same region.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Google BigQuery Sink Connector

The following are limitations for the Google BigQuery Sink Connector for Confluent Cloud.

  • One task can handle up to 100 partitions.
  • Source topic names must comply with BigQuery naming conventions even if sanitizeTopics is set to true in the connector configuration.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Configuration properties that are not shown in the Confluent Cloud UI use default values. See Google BigQuery Sink Connector Configuration Properties for all connector properties.
  • Topic names are mapped to BigQuery table names. For example, if you have topics named pageviews and visitors and a dataset named website, the result is two tables under the website dataset in BigQuery: one named pageviews and one named visitors.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Google Cloud Functions Sink Connector

There is one limitation for the Google Cloud Functions Sink Connector for Confluent Cloud.

The target Google Cloud Function should be in the same region as your Confluent Cloud cluster.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Google Pub/Sub Source Connector

There are no current limitations for the Google Pub/Sub Source Connector for Confluent Cloud.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Google Cloud Spanner Sink Connector

The following are limitations for the Google Cloud Spanner Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Spanner cluster must be in the same GCP region.
  • A valid schema must be available in Confluent Cloud Schema Registry to use Avro, JSON Schema, or Protobuf.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Google Cloud Storage Sink Connector

The following are limitations for the Google Cloud Storage Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Google Cloud Storage (GCS) bucket must be in the same Google Cloud Platform region.

  • One task can handle up to 100 partitions.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm; the five records that arrived at 3:00pm are flushed later, as part of the next hourly partition.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Files are written based on whichever condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Microsoft SQL Server Source Connector

The following are limitations for the Microsoft SQL Server Source Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • A topic or topics to which the connector can write records must exist before creating the connector.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable and should use the datetime2 data type (a hypothetical example is sketched after this list).
  • Bulk and Incrementing are not supported.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.
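
The timestamp-column requirement above can be illustrated with a table definition. The sketch below is hypothetical (the table, columns, and connection string are made up); it only shows the shape of a non-nullable datetime2 column suitable for timestamp-based capture.

    # Hypothetical example: a SQL Server table whose timestamp column is
    # non-nullable and uses datetime2, as required above. All names and the
    # connection string are placeholders.
    import pyodbc  # assumes a SQL Server ODBC driver is installed

    DDL = """
    CREATE TABLE dbo.orders (
        id         INT IDENTITY PRIMARY KEY,
        amount     DECIMAL(10, 2) NOT NULL,
        updated_at DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()  -- timestamp column
    );
    """

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=host;DATABASE=db;UID=user;PWD=pass"
    )
    conn.execute(DDL)
    conn.commit()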

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

MySQL Sink Connector

The following are limitations for the MySQL Sink Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the VPC where the database is located, unless the environment is configured for VPC peering.
  • SSL is not supported and should be turned off.
  • The database and Kafka cluster should be in the same region.
  • The connector cannot handle tombstone records.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

MySQL Source Connector

The following are limitations for the MySQL Source Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • A topic or topics to which the connector can write records must exist before creating the connector.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • Bulk and Incrementing are not supported.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Oracle Database Source Connector

The following are limitations for the Oracle Database Source Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • A topic or topics to which the connector can write records must exist before creating the connector.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • Bulk and Incrementing are not supported.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

PostgreSQL Source Connector

The following are limitations for the PostgreSQL Source Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Make sure your Azure Virtual Network is correctly configured and that "Allow access to Azure Services" is enabled.
  • A topic or topics to which the connector can write records must exist before creating the connector.
  • A timestamp column must not be nullable.
  • Bulk and Incrementing are not supported.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See JDBC Source Connector Configuration Properties for property definitions and default values.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Snowflake Sink Connector

The following are limitations for the Snowflake Sink Connector for Confluent Cloud.

  • The Snowflake database and Kafka cluster should be in the same region.
  • The Snowflake Sink connector does not remove Snowflake pipes when a connector is deleted. For instructions to manually clean up Snowflake pipes, see Dropping Pipes.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Preview Connectors

Caution

Preview connectors are not currently supported and are not recommended for production use.

Google Cloud Dataproc Sink Connector

The following are limitations for the Google Cloud Dataproc Sink Connector for Confluent Cloud.

  • The Confluent Cloud cluster and the target Dataproc cluster must be in a VPC peering configuration.

    Note

    For a non-VPC peered environment, public inbound traffic access (0.0.0.0/0) must be allowed to the VPC where the Dataproc cluster is located. You must also make configuration changes to allow public access to the Dataproc cluster while retaining the private IP addresses for the Dataproc master and worker nodes (HDFS NameNode and DataNodes). For configuration details, see Configuring a non-VPC peering environment.

  • The Dataproc image version must be 1.4 (or later). See Cloud Dataproc Image version list.

  • One task can handle up to 100 partitions.

  • Using JSON as the input format with AVRO as the output format does not work for the preview connector.

  • Partitioning (hourly or daily) is based on Kafka record time.

  • flush.size defaults to 1000. You can increase the default value if needed, or lower it if you are running a Dedicated Confluent Cloud cluster.

    The following scenarios describe a couple of ways records may be flushed to storage:

    • You use the default setting of 1000 and your topic has six partitions. Files start to be created in storage after more than 1000 records exist in each partition.

    • You use the default setting of 1000 and the partitioner is set to Hourly. 500 records arrive at one partition from 2:00pm to 3:00pm. At 3:00pm, an additional 5 records arrive at the partition. You will see 500 records in storage at 3:00pm; the five records that arrived at 3:00pm are flushed later, as part of the next hourly partition.

      Note

      The properties rotate.schedule.interval.ms and rotate.interval.ms can be used with flush.size to determine when files are created in storage. Files are written based on whichever condition is met first.

      For example: You have one topic partition. You set flush.size=1000 and rotate.schedule.interval.ms=600000 (10 minutes). 500 records arrive at the topic partition from 12:01 to 12:10. 500 additional records arrive from 12:11 to 12:20. You will see two files in the storage bucket with 500 records in each file. This is because the 10 minute rotate.schedule.interval.ms condition tripped before the flush.size=1000 condition was met.

  • schema.compatibility is set to NONE.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector. Use of this connector is free for a limited time.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

Microsoft SQL Server Sink Connector

The following are limitations for the Microsoft SQL Server Sink Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the preview version of this connector.
  • SSL should be turned off.
  • The database and Kafka cluster should be in the same region. If you use a different region, be aware that you may incur additional data transfer charges.
  • The connector cannot handle tombstone records.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

Microsoft SQL Server CDC Source Connector (Debezium)

The following are limitations for the Microsoft SQL Server CDC Source (Debezium) Connector for Confluent Cloud.

  • Change data capture (CDC) is only available in the SQL Server Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • SSL is not supported and should be turned off.
  • Bulk and Incrementing are not supported.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

MongoDB Atlas Sink Connector

The following are limitations for the MongoDB Atlas Sink Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.

  • Document post processing configuration properties are not supported. These include:

    • post.processor.chain
    • key.projection.type
    • value.projection.type
    • field.renamer.mapping
  • Public inbound traffic access (0.0.0.0/0) must be allowed for the preview version of this connector.

  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.

  • The MongoDB database and Kafka cluster should be in the same region.

  • Customers with a VPC-peered Kafka cluster in Confluent Cloud on AWS should consider configuring a PrivateLink Connection between MongoDB Atlas and the AWS VPC.

  • You cannot use a dot in a field name (for example, Client.Email), nor should you use $ in a field name. If a field name includes a dot, the error shown below is displayed. For additional information, see Field Names. A hypothetical workaround is sketched after this list.

    Your record has an invalid BSON field name. Please check Mongo documentation for details.
    
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.
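
Because field names containing a dot or a leading $ are rejected, one possible workaround is to rename offending fields before records are produced. The Python sketch below is a hypothetical pre-processing step, not connector functionality.

    # Hypothetical pre-processing step: rewrite keys that MongoDB would reject
    # (a dot anywhere, or a leading "$") before producing the record to Kafka.
    def sanitize_field_names(document):
        """Recursively replace '.' with '_' and strip a leading '$' from keys."""
        if isinstance(document, dict):
            return {
                key.replace(".", "_").lstrip("$"): sanitize_field_names(value)
                for key, value in document.items()
            }
        if isinstance(document, list):
            return [sanitize_field_names(item) for item in document]
        return document

    print(sanitize_field_names({"Client.Email": "a@example.com", "$meta": {"x.y": 1}}))
    # {'Client_Email': 'a@example.com', 'meta': {'x_y': 1}}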

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

MongoDB Atlas Source Connector

The following are limitations for the MongoDB Atlas Source Connector for Confluent Cloud.

  • This connector supports MongoDB Atlas only. This connector will not work with a self-managed MongoDB database.
  • Configuration properties for aggregation pipeline are not supported.
  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • Customers with a VPC-peered Kafka cluster in Confluent Cloud on AWS should consider configuring a PrivateLink Connection between MongoDB Atlas and the AWS VPC.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

MySQL CDC Source Connector (Debezium)

The following are limitations for the MySQL CDC Source (Debezium) Connector for Confluent Cloud.

  • Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • SSL is not supported and should be turned off.
  • Bulk and Incrementing are not supported.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

PostgreSQL Sink Connector

The following are limitations for the PostgreSQL Sink Connector for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) must be allowed for the preview version of this connector.
  • SSL should be turned off.
  • The database and Kafka cluster should be in the same region. If you use a different region, be aware that you may incur additional data transfer charges.
  • The connector cannot handle tombstone records.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

PostgreSQL CDC Source Connector (Debezium)

The following are limitations for the PostgreSQL CDC Source Connector (Debezium) for Confluent Cloud.

  • Public inbound traffic access (0.0.0.0/0) may be required for the VPC where the database is located, unless the environment is configured for VPC peering. See Internet Access to Resources for additional details.
  • Public access may be required for your database. See Internet Access to Resources for details.
  • For Azure, you must use a general purpose or memory-optimized PostgreSQL database. You cannot use a basic database.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • Clients from Azure Virtual Networks are not allowed to access the server by default. Make sure your Azure Virtual Network is correctly configured and that "Allow access to Azure Services" is enabled.
  • SSL is not supported and should be turned off.
  • A timestamp column must not be nullable.
  • Bulk and Incrementing are not supported.
  • The partition and replication factor properties are set to topic.creation.default.partitions=1 and topic.creation.default.replication.factor=3 for the preview connector.
  • Configuration properties that are not shown in the Confluent Cloud UI use the default values. See PostgreSQL Source Connector (Debezium) Configuration Properties for property definitions and default values. (Note that these docs are for the self-managed connector. Some properties may not be applicable to the cloud connector.)
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.

Salesforce CDC Source Connector

The following are limitations for the Salesforce CDC Source Connector for Confluent Cloud.

  • Change data capture (CDC) is only available in the Enterprise, Developer, Enterprise Evaluation, and Standard editions.
  • A valid schema must be available in Confluent Cloud Schema Registry to use a schema-based message format, like Avro.
  • For Confluent Cloud and Confluent Cloud Enterprise, organizations are limited to one task and one connector.

Important

After this connector becomes generally available, Confluent Cloud Enterprise customers will need to contact their Confluent Account Executive for more information about using this connector.