Google Cloud Spanner Sink Connector for Confluent Cloud

The Kafka Connect Google Cloud Spanner Sink connector for Confluent Cloud moves data from Apache Kafka® to a Google Cloud Spanner database. It writes data from a topic in Kafka to a table in the specified Spanner database. Table auto-creation and limited auto-evolution are supported.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The Google Cloud Spanner sink connector provides the following features:

  • The connector inserts and upserts Kafka records into a Google Cloud Spanner database.
  • The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) input data formats. Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • Table auto-creation (auto.create) and limited auto-evolution (auto.evolve) are supported. If tables or columns are missing, they can be created automatically.
  • Supported PK modes are kafka, none, and record_value. They are used in conjunction with the PK Fields property.

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Refer to Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Google Cloud Spanner sink connector. The quick start provides the basics of selecting the connector and configuring it to stream events to a Spanner database.

Prerequisites
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, go to Kafka API keys in your cluster, or autogenerate the API key and secret directly in the UI when setting up the connector.
    • Create a Confluent Cloud service account for the connector.

Using the Confluent Cloud GUI

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

Click Connectors. If you already have connectors in your cluster, click Add connector.

Step 3: Select your connector.

Click the Google Cloud Spanner Sink connector icon.

Google Cloud Spanner Sink Connector Icon

Step 4: Set up the connection.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

Complete the following and click Continue.

  1. Select one or more topics.
  2. Enter a Connector Name.
  3. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
  4. Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, or JSON (schemaless). A valid schema must be available in Schema Registry to use a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  5. Upload your GCP credentials JSON file.
  6. Enter the Spanner instance ID.
  7. Enter the database ID where topic tables are located or will be created.
  8. Select one of the following insert modes:
    • INSERT: Use the standard INSERT row function. An error occurs if the row already exists in the table.
    • UPDATE: Use the standard UPDATE row function. An error occurs if the row does not exist in the table.
    • UPSERT: This mode is similar to INSERT. However, if the row already exists, the UPSERT function overwrites column values with the new values provided.
  9. Enter the maximum size for batched records. A typical entry here is 1000.
  10. Select whether to automatically create a table or column if it is missing relative to the input record schema.
  11. Select a PK mode. Supported modes are listed below:
    • kafka: Kafka coordinates are used as the primary key. Must be used with the PK Fields property.
    • none: No primary keys used.
    • record_value: Fields from the Kafka record value are used. This must be a struct type.
  12. Enter the PK Fields values. This is a comma-separated list of primary key field names. The runtime interpretation of this property depends on the PK mode selected (see the configuration sketch following this list). Options are listed below:
    • kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic,__connect_partition,__connect_offset.
    • none: PK Fields not used.
    • record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
  13. Enter the maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.
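
For reference, the auto-create and PK selections above map directly to connector configuration properties (the full property reference appears in the CLI section below). The following is a minimal sketch of the PK-related properties, assuming the Kafka record value contains a hypothetical field named userid:

"auto.create": "true",
"pk.mode": "record_value",
"pk.fields": "userid"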

Step 5: Launch the connector.

Verify the connection details and click Launch.

Launch the connector

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check the results in Spanner.

  1. From the Google Cloud Console, go to your Spanner project.
  2. Verify that new records are being added to the Spanner database.

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.


Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe SpannerSink

Example output:

Following are the required configs:
connector.class: SpannerSink
name
kafka.api.key
kafka.api.secret
topics
input.data.format
gcp.spanner.credentials.json
gcp.spanner.instance.id
gcp.spanner.database.id
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties:

{
  "connector.class": "SpannerSink",
  "name": "spanner-sink-connector",
  "kafka.api.key": "<my-kafka-api-key?",
  "kafka.api.secret": "<my-kafka-api-secret>",
  "topics": "pageviews",
  "input.data.format": "AVRO",
  "gcp.spanner.credentials.json": "<my-gcp-credentials>",
  "gcp.spanner.instance.id": "<my-spanner-instance-id>",
  "gcp.spanner.database.id": "<my-spanner-dabase-id>",
  "auto.create": "true",
  "auto.evolve": "true",
  "tasks.max": "1"
 }

Note the following property definitions:

  • "name": Sets a name for your new connector.
  • "connector.class": Identifies the connector plugin name.
  • "topics": Identifies the topic name or a comma-separated list of topic names.
  • "input.data.format": Sets the input message format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR, PROTOBUF, or JSON. You must have Confluent Cloud Schema Registry configured if using a schema-based message format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).
  • "gcp.spanner.credentials.json": This contains the contents of the downloaded JSON file. See Formatting GCP credentials for details about how to format and use the contents of the downloaded credentials file.
  • "tasks.max": Maximum number of tasks the connector can run. See Confluent Cloud connector limitations for additional task information.

Optional

  • "auto.create" (tables) and "auto-evolve" (columns): Sets whether to automatically create tables or columns if they are missing relative to the input record schema. If not entered in the configuration, both default to false.
  • "pk.mode": (Optional) Supported modes are listed below:
    • kafka: Kafka coordinates are used as the primary key. Must be used with the PK Fields property.
    • none: No primary keys used.
    • record_value: Fields from the Kafka record value are used. This must be a struct type.
  • "pk.fields": A list of comma-separated primary key field names. The runtime interpretation of this property depends on the pk.mode selected. Options are listed below:
    • kafka: Must be three values representing the Kafka coordinates. If left empty, the coordinates default to __connect_topic,__connect_partition,__connect_offset.
    • none: PK Fields not used.
    • record_value: Used to extract fields from the record value. If left empty, all fields from the value struct are used.
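
For example, to key rows on the Kafka coordinates, you could add the following optional properties to the configuration file from Step 3. This is a sketch; the pk.fields values shown are the documented defaults for kafka mode and can be omitted:

"pk.mode": "kafka",
"pk.fields": "__connect_topic,__connect_partition,__connect_offset"
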
Formatting GCP credentials

The contents of the downloaded credentials file must be converted to a single string before they can be used in the connector configuration.

  1. Convert the JSON file contents into string format. You can use an online converter tool to do this (for example, JSON to String Online Converter) or the command-line sketch shown after these steps.

  2. Add the \ escape character before all \n entries in the Private Key section so that each line of the key begins with \\n (see the example below). The example below has been formatted so that the \\n entries are easier to see. Most of the credentials key has been omitted.

    Tip

    A script is available that converts the credentials to a string and also adds additional escape \ characters where needed. See Stringify GCP Credentials.

       {
         "connector.class": "SpannerSink",
         "name": "spanner-sink-connector",
         "kafka.api.key": "<my-kafka-api-key?",
         "kafka.api.secret": "<my-kafka-api-secret>",
         "topics": "pageviews",
         "input.data.format": "AVRO",
         "gcp.spanner.credentials.json": "{\"type\":\"service_account\",\"project_id\":\"connect-
         1234567\",\"private_key_id\":\"omitted\",
         \"private_key\":\"-----BEGIN PRIVATE KEY-----
         \\nMIIEvAIBADANBgkqhkiG9w0BA
         \\n6MhBA9TIXB4dPiYYNOYwbfy0Lki8zGn7T6wovGS5pzsIh
         \\nOAQ8oRolFp\rdwc2cC5wyZ2+E+bhwn
         \\nPdCTW+oZoodY\\nOGB18cCKn5mJRzpiYsb5eGv2fN\/J
         \\n...rest of key omitted...
         \\n-----END PRIVATE KEY-----\\n\",
         \"client_email\":\"pub-sub@connect-123456789.iam.gserviceaccount.com\",
         \"client_id\":\"123456789\",\"auth_uri\":\"https:\/\/accounts.google.com\/o\/oauth2\/
         auth\",\"token_uri\":\"https:\/\/oauth2.googleapis.com\/
         token\",\"auth_provider_x509_cert_url\":\"https:\/\/
         www.googleapis.com\/oauth2\/v1\/
         certs\",\"client_x509_cert_url\":\"https:\/\/www.googleapis.com\/
         robot\/v1\/metadata\/x509\/pub-sub%40connect-
         123456789.iam.gserviceaccount.com\"}",
         "gcp.spanner.instance.id": "<my-spanner-instance-id>",
         "gcp.spanner.database.id": "<my-spanner-dabase-id>",
         "auto.create": "true",
         "auto.evolve": "true",
         "tasks.max": "1"
       }
    
  3. Add all the converted string content to the credentials section of your configuration file as shown in the example above.
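
As an alternative to an online converter, a one-line command like the following produces an equivalently escaped string, including the extra \ characters in the private key. This is a sketch that assumes Python 3 is installed and that the downloaded key file is named spanner-credentials.json:

python3 -c "import json; print(json.dumps(open('spanner-credentials.json').read()))"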

Step 4: Load the configuration file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config spanner-sink-config.json

Example output:

Created connector spanner-sink-connector lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |       Name              | Status  | Type
+-----------+-------------------------+---------+------+
lcc-ix4dl   | spanner-sink-connector  | RUNNING | sink

Step 6: Check the results in Spanner.

  1. From the Google Cloud Console, go to your Spanner project.
  2. Verify that new records are being added to the Spanner database. You can also query the table from the command line, as shown below.
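
As a command-line alternative to the Cloud Console, you can spot-check the table contents with the gcloud CLI. This sketch assumes the gcloud CLI is installed and authenticated, and that the topic (and therefore the table) is named pageviews:

gcloud spanner databases execute-sql <my-spanner-database-id> \
  --instance=<my-spanner-instance-id> \
  --sql='SELECT * FROM pageviews LIMIT 10'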

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
