The Confluent Cloud Datagen Source connector generates mock data for development and testing. The connector supports Avro, JSON Schema, Protobuf, and JSON (schemaless) output formats. The mock source data comes from the datagen resources on GitHub. This connector is not suitable for production use.
Important
If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.
Refer to Cloud connector limitations for additional information.
Use this quick start to get up and running with the Confluent Cloud Datagen Source connector. The quick start covers the basics of selecting the connector and configuring it for testing and development.
See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.
Click Connectors. If you already have connectors in your cluster, click Add connector.
Click the Datagen Source connector icon.
Complete the connector configuration fields and click Continue.
Verify the connection details and click Launch.
The status for the connector should go from Provisioning to Running. It may take a few minutes.
After the connector is running, verify that messages are populating your Kafka topic.
You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.
Complete the following steps to set up and run the connector using the Confluent Cloud CLI.
Make sure you have all your prerequisites completed.
Enter the following command to list available connectors:
ccloud connector-catalog list
Enter the following command to show the required connector properties:
ccloud connector-catalog describe <connector-catalog-name>
For example:
ccloud connector-catalog describe DatagenSource
Example output:
Following are the required configs:

connector.class: DatagenSource
name
kafka.api.key
kafka.api.secret
kafka.topic
output.data.format
quickstart
tasks.max
Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.
{
  "name": "<datagen-connector-name>",
  "connector.class": "DatagenSource",
  "kafka.api.key": "<my-kafka-api-key>",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "kafka.topic": "topic1, topic2",
  "output.data.format": "JSON",
  "quickstart": "PAGEVIEWS",
  "tasks.max": "1"
}
Note the following property definitions:
"name": Sets a name for your new connector.
"connector.class": Identifies the connector plugin name.
"kafka.topic": Enter one topic or multiple comma-separated topics.
"output.data.format": Sets the output message format (data coming from the connector). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON (schemaless). You must have Confluent Cloud Schema Registry configured if using a schema-based output message format (for example, Avro).
"quickstart": Enter one of the Quick Start schemas (for example, PAGEVIEWS).
To view the sample data and schema specifications, see datagen resources.
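Putting the properties above together, the configuration can be saved to a file for the next step. The sketch below fills the placeholders with sample values (the connector name, API key, secret, and topic name are illustrative, not real credentials) and checks that the file parses as valid JSON before it is handed to the CLI:

```shell
# Write a sample Datagen Source configuration to a file.
# The API key, secret, and topic name below are placeholders, not real values.
cat > datagen-source-config.json <<'EOF'
{
  "name": "my-datagen-source",
  "connector.class": "DatagenSource",
  "kafka.api.key": "ABCDEF123456",
  "kafka.api.secret": "example-secret",
  "kafka.topic": "pageviews",
  "output.data.format": "JSON",
  "quickstart": "PAGEVIEWS",
  "tasks.max": "1"
}
EOF

# Confirm the file is valid JSON before passing it to the CLI.
python -m json.tool datagen-source-config.json > /dev/null && echo "config OK"
```

Validating the file locally catches quoting mistakes (for example, a missing closing quote in a placeholder) before the connector create call fails remotely.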
Enter the following command to load the configuration and start the connector:
ccloud connector create --config <file-name>.json
For example:

ccloud connector create --config datagen-source-config.json

Example output:

Created connector confluent-datagen-source lcc-ix4dl
Enter the following command to check the connector status:
ccloud connector list
     ID     |           Name           | Status  |  Type
+-----------+--------------------------+---------+--------+
  lcc-ix4dl | confluent-datagen-source | RUNNING | source
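The ID in the first column (lcc-ix4dl in the example output) is what later connector commands reference. As a sketch, the ID can be pulled out of saved `ccloud connector list` output with standard shell tools; the file written below simply mirrors the example output above:

```shell
# Save the example `ccloud connector list` output to a file;
# in practice you would capture the live command output instead.
cat > connector-list.txt <<'EOF'
     ID     |           Name           | Status  |  Type
+-----------+--------------------------+---------+--------+
  lcc-ix4dl | confluent-datagen-source | RUNNING | source
EOF

# Extract the ID column of the RUNNING connector, stripping padding spaces.
CONNECTOR_ID=$(awk -F'|' '/RUNNING/ {gsub(/ /, "", $1); print $1}' connector-list.txt)
echo "$CONNECTOR_ID"   # prints lcc-ix4dl
```

The extracted ID can then be passed to other `ccloud connector` subcommands that take a connector ID.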
Follow the steps in the Quick Start for Apache Kafka using Confluent Cloud to stream sample data to Kafka using the Datagen Source connector for Confluent Cloud.
See also
For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use Confluent Cloud CLI to manage your resources in Confluent Cloud.
Blog post: Creating a Serverless Environment for Testing Your Apache Kafka Applications