AWS Lambda Sink Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see AWS Lambda Sink Connector for Confluent Platform.

The AWS Lambda function can be invoked by this connector either synchronously or asynchronously.

  • In synchronous mode, records within a topic and partition are processed sequentially. Records within different topic partitions can be processed in parallel. If configured, the response from AWS Lambda can be written to a Kafka topic. If an error occurs during Lambda execution, the connector can be configured to either ignore the error and proceed, log the error, or stop the connector completely. For additional details about Lambda invocation, see Synchronous invocation.
  • In asynchronous mode, the connector operates in a fire-and-forget fashion. Records are processed on a best-effort, sequential basis, and the connector does not attempt any retries. AWS Lambda automatically retries up to two times, after which it can move the request to a dead letter queue. For additional details about Lambda invocation, see Asynchronous invocation.
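
In both modes, the connector invokes the function with a batch of records (the maximum batch size is configurable; see the Quick Start). The exact payload schema is defined by the connector, so treat the following sketch as illustrative only; the field names are assumptions, not a documented contract. A two-record batch from a users topic might reach the function as:

[
  {
    "key": "user-1",
    "value": {"name": "Kimberley Human"},
    "topic": "users",
    "partition": 0,
    "offset": 42,
    "timestamp": 1562844607000
  },
  {
    "key": "user-2",
    "value": {"name": "Alex Example"},
    "topic": "users",
    "partition": 1,
    "offset": 17,
    "timestamp": 1562844608000
  }
]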

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The AWS Lambda Sink connector provides the following features:

  • Synchronous and Asynchronous Lambda function invocation: The AWS Lambda function can be invoked by this connector either synchronously or asynchronously.

  • Results topics: In synchronous mode, AWS Lambda results are stored in the following topics:

    • success-<connector-id>
    • error-<connector-id>

  • Input Data Format with or without a Schema: The connector supports input data from Kafka topics in Avro, JSON Schema (JSON_SR), Protobuf, JSON (schemaless), or Bytes format. Schema Registry must be enabled to use a Schema Registry-based format.

    Note

    If no schema is defined, values are encoded as plain strings. For example, "name": "Kimberley Human" is encoded as name=Kimberley Human.

You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Refer to Confluent Cloud connector limitations for additional information.

Quick Start

Use this quick start to get up and running with the Confluent Cloud AWS Lambda Sink connector. The quick start provides the basics of selecting the connector and configuring it to send records to AWS Lambda.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on AWS.

  • The Confluent Cloud CLI installed and configured for the cluster. See Install the Confluent Cloud CLI.

  • Schema Registry must be enabled to use a Schema Registry-based format (for example, Avro, JSON_SR (JSON Schema), or Protobuf).

    Note

    If no schema is defined, values are encoded as plain strings. For example, "name": "Kimberley Human" is encoded as name=Kimberley Human.

  • Your AWS Lambda function must be in the same region as the Confluent Cloud cluster where the connector is running.

  • An AWS account configured with Access Keys.

  • An IAM policy configured for the account that allows the lambda:InvokeFunction and lambda:GetFunction actions. The following shows a JSON example for setting this policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "lambda:InvokeFunction",
                    "lambda:GetFunction"
                ],
                "Resource": "*"
            }
        ]
    }
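
    This example grants access to every Lambda function in the account. A least-privilege alternative is to scope Resource to the specific function the connector invokes. The ARN below is a placeholder; substitute your own region, account ID, and function name:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "lambda:InvokeFunction",
                    "lambda:GetFunction"
                ],
                "Resource": "arn:aws:lambda:us-west-2:123456789012:function:myLambdaFunction"
            }
        ]
    }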
    
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, go to Kafka API keys in your cluster, or autogenerate the API key and secret directly in the UI when setting up the connector. A CLI example follows this list.
    • Create a Confluent Cloud service account for the connector.
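
    For example, you can create an API key scoped to a specific cluster with the Confluent Cloud CLI; the cluster ID below is a placeholder:

    ccloud api-key create --resource lkc-abc123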

Using the Confluent Cloud GUI

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

Click Connectors. If you already have connectors in your cluster, click Add connector.

Step 3: Select your connector.

Click the AWS Lambda Sink connector icon.

AWS Lambda Sink Connector Icon

Step 4: Enter the connector details.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.

Complete the following and click Continue.

  1. Select one or more topics.

  2. Enter a Connector Name.

  3. Select an Input message format (data coming from the Kafka topic): AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (schemaless), or BYTES. A valid schema must be available in Schema Registry to use a schema-based message format.

    Note

    If no schema is defined, values are encoded as plain strings. For example, "name": "Kimberley Human" is encoded as name=Kimberley Human.

  4. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.

  5. Enter your AWS credentials. For information about how to set these up, see Access Keys.

  6. Enter the Lambda function to invoke. For additional information, see What is AWS Lambda in the AWS documentation.

  7. Select the Lambda invocation type:

    • sync: Records within a topic and partition are processed sequentially. Records within different topic partitions can be processed in parallel. If configured, the response from AWS Lambda can be written to a Kafka topic. If an error occurs during Lambda execution, the connector can be configured to either ignore the error and proceed, log the error, or stop the connector completely. For additional details about Lambda invocation, see Synchronous invocation.
    • async: The connector operates in a fire-and-forget fashion. Records are processed on a best-effort, sequential basis, and the connector does not attempt any retries. AWS Lambda automatically retries up to two times, after which it can move the request to a dead letter queue. For additional details about Lambda invocation, see Asynchronous invocation.

  8. Enter the maximum number of records to send as a batch for a single Lambda function invocation. The default is 20 records.

  9. Enter the number of tasks in use by the connector.

Step 5: Launch the connector.

Validate the connector properties and click Launch.

Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running.

Step 7: Check the Lambda function metrics.

Go to the AWS Lambda console, open the Lambda function, and verify that records are being processed. You can check processing on the Monitoring tab of the Lambda function page. In synchronous mode, AWS Lambda results are stored in the following topics:

  • success-<connector-id>
  • error-<connector-id>
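
For example, you can spot-check results with the Confluent Cloud CLI; the connector ID below is a placeholder:

ccloud kafka topic consume -b success-lcc-ix4dl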

You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

For additional information about the AWS Lambda Sink connector, see AWS Lambda Sink Connector for Confluent Platform. Note that not all Confluent Platform Lambda Sink connector features are available in the Confluent Cloud Lambda Sink connector.

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.


Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Note

Make sure you have all your prerequisites completed.

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe LambdaSink

Example output:

Following are the required configs:
connector.class: LambdaSink
name
topics
input.data.format
kafka.api.key
kafka.api.secret
aws.access.key.id
aws.secret.access.key
aws.lambda.function.name
aws.lambda.invocation.type
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows the required connector properties.

{
  "connector.class": "LambdaSink",
  "name": "LambdaSinkConnector_0",
  "topics": "users",
  "input.data.format": "JSON",
  "connector.class": "LambdaSink",
  "kafka.api.key": "****************",
  "kafka.api.secret": "*************************************************",
  "aws.access.key.id": "****************",
  "aws.secret.access.key": "********************************************",
  "aws.lambda.function.name": "myLambdaFunction",
  "aws.lambda.invocation.type": "sync",
  "tasks.max": "1"
}

Note the following required property definitions:

  • "connector.class": Identifies the connector plugin name.

  • "name": Sets a name for your new connector.

  • "topics": Identifies the topic name or a comma-separated list of topic names.

  • "input.data.format": Sets the input message format (data coming from the Kafka topic). Valid entries are AVRO, JSON_SR (JSON Schema), PROTOBUF, JSON (Schemaless), or BYTES. You must have Confluent Cloud Schema Registry configured if using a schema-based message format.

    Note

    If no schema is defined, values are encoded as plain strings. For example, "name": "Kimberley Human" is encoded as name=Kimberley Human.

  • "aws.access.key.id" and "aws.secret.access.key": Enter the AWS Access Key ID and Secret. For information about how to set these up, see Access Keys.

  • "aws.lambda.function.name": Enter the Lambda function to invoke. For additional information, see the What is AWS Lambda.

  • "aws.lambda.invocation.type":

    • "sync": Records within a topic and partition are processed sequentially. Records within different topic partitions can be processed in parallel. If configured, the response from AWS Lambda can be written to a Kafka topic. If an error occurs during Lambda execution, the connector can be configured to either ignore the error and proceed, log the error, or stop the connector completely. For additional details about Lambda invocation, see Synchronous invocation.
    • "async": The connector operates in a fire-and-forget mode. Records are processed on a best-effort, sequential basis. The connector does not attempt any retries. AWS Lambda automatically retries up to two times, after which AWS Lambda can move the request to a dead letter queue. For additional details about Lambda invocation, see Ansynchronous invocation.

  • "tasks.max": Enter the number of tasks in use by the connector. Refer to Confluent Cloud connector limitations for additional information.

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config lambda-sink-config.json

Example output:

Created connector LambdaSinkConnector_0 lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |       Name            | Status  | Type
+-----------+-----------------------+---------+------+
lcc-ix4dl   | LambdaSinkConnector_0 | RUNNING | sink
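
You can also inspect a single connector by ID. For example:

ccloud connector describe lcc-ix4dl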

Step 6: Check the Lambda function metrics.

Go to the AWS Lambda console, open the Lambda function, and verify that records are being processed. You can check processing on the Monitoring tab of the Lambda function page. In synchronous mode, AWS Lambda results are stored in the following topics:

  • success-<connector-id>
  • error-<connector-id>

You can manage your fully-managed connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Tip

When you launch a connector, a Dead Letter Queue topic is automatically created. See Dead Letter Queue for details.

For additional information about the AWS Lambda Sink connector, see AWS Lambda Sink Connector for Confluent Platform. Note that not all Confluent Platform features are available in the Confluent Cloud connector.


See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
