Azure Event Hubs Source Connector for Confluent Cloud

Note

If you are installing the connector locally for Confluent Platform, see Azure Event Hubs Source Connector for Confluent Platform.

The Kafka Connect Azure Event Hubs Source connector for Confluent Cloud is used to poll data from Azure Event Hubs and persist the data to an Apache Kafka® topic. For additional information about Azure Event Hubs, see the Azure Event Hubs documentation.

Important

If you are still on Confluent Cloud Enterprise, please contact your Confluent Account Executive for more information about using this connector.

Features

The Azure Event Hubs source connector provides the following features:

  • Fetches records from Azure Event Hubs through a subscription.
  • Select configuration properties:
    • azure.eventhubs.partition.starting.position
    • azure.eventhubs.consumer.group
    • azure.eventhubs.transport.type
    • azure.eventhubs.offset.type
    • max.events (defaults to 50 with a maximum of 499 events)

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

Quick Start

Use this quick start to get up and running with the Confluent Cloud Azure Event Hubs Source connector.

Prerequisites
  • Authorized access to a Confluent Cloud cluster on Amazon Web Services (AWS), Microsoft Azure (Azure), or Google Cloud Platform (GCP).
  • At least one topic must exist before creating the connector.
  • The Confluent Cloud CLI installed and configured for the cluster. See Install and Configure the Confluent Cloud CLI.
  • An Azure account with an existing Event Hubs Namespace, Event Hub, and Consumer Group.
  • An Azure Event Hubs Shared Access Policy with its policy name and key. (If you still need to create these Azure resources, see the CLI sketch after this list.)
  • Kafka cluster credentials. You can use one of the following ways to get credentials:
    • Create a Confluent Cloud API key and secret. To create a key and secret, go to Kafka API keys in your cluster, or autogenerate the API key and secret directly in the UI when setting up the connector. (A CLI alternative is sketched after this list.)
    • Create a Confluent Cloud service account for the connector.
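If you still need to create the Azure prerequisites, the following is a minimal sketch using the Azure CLI. The resource group, region, and resource names are placeholders, and exact flags may vary by Azure CLI version. The last command lists the keys for the default RootManageSharedAccessKey policy; substitute your own Shared Access Policy name if you use a different one.

az eventhubs namespace create --resource-group <my-resource-group> --name <my-eventhubs-namespace> --location <my-region>

az eventhubs eventhub create --resource-group <my-resource-group> --namespace-name <my-eventhubs-namespace> --name <my-eventhub-name>

az eventhubs eventhub consumer-group create --resource-group <my-resource-group> --namespace-name <my-eventhubs-namespace> --eventhub-name <my-eventhub-name> --name <my-eventhub-consumer-group>

az eventhubs namespace authorization-rule keys list --resource-group <my-resource-group> --namespace-name <my-eventhubs-namespace> --name RootManageSharedAccessKey

To create a Kafka API key and secret from the command line, you can use the Confluent Cloud CLI, where the resource is your Kafka cluster ID (for example, lkc-xxxxx):

ccloud api-key create --resource <kafka-cluster-id>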

Using the Confluent Cloud GUI

Step 1: Launch your Confluent Cloud cluster.

See the Quick Start for Apache Kafka using Confluent Cloud for installation instructions.

Step 2: Add a connector.

Click Connectors. If you already have connectors in your cluster, click Add connector.

Step 3: Select your connector.

Click the Azure Event Hubs Source connector icon.


Step 4: Set up the connection.

Complete the following and click Continue.

Note

  • Make sure you have all your prerequisites completed.
  • An asterisk ( * ) designates a required entry.
  1. Enter a connector name.
  2. Enter your Kafka Cluster credentials. The credentials are either the API key and secret or the service account API key and secret.
  3. Enter the Kafka topic name where you want data sent.
  4. Enter your Azure Event Hubs details.
  5. Enter your Connection details.
    • Select the starting position in the Event Hub if no offsets are stored and a reset occurs.
    • Select the transport type for communicating with Event Hubs. Event Hubs supports the following two types:
      • AMQP: AMQP over TCP (uses port 5671)
      • AMQP_WEB_SOCKETS: AMQP over web sockets (uses port 443)
    • Select the offset type used to keep track of events. Event Hubs supports the following two types:
      • OFFSET: The Azure Event Hubs offset for the event.
      • SEQ_NUM: The sequence number of the event.
    • Enter the maximum number of events to read when polling an Event Hub partition. The default is 50 events; 499 is the maximum.
  6. Enter the maximum number of tasks for the connector. Refer to Confluent Cloud connector limitations for additional information.

Configuration properties that are not shown in the Confluent Cloud UI use the default values. For default values and property definitions, see Azure Event Hubs Source Connector Configuration Properties.

Step 5: Launch the connector.

Verify the connection details and click Launch.


Step 6: Check the connector status.

The status for the connector should go from Provisioning to Running. It may take a few minutes.


Step 7: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

For additional information about this connector, see Azure Event Hubs Source Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.


Using the Confluent Cloud CLI

Complete the following steps to set up and run the connector using the Confluent Cloud CLI.

Important

  • Make sure you have all your prerequisites completed.

  • You must create the Kafka topic before creating and launching this connector. Use the command below to create a topic with the Confluent Cloud CLI.

    ccloud kafka topic create <topic-name>
    

Step 1: List the available connectors.

Enter the following command to list available connectors:

ccloud connector-catalog list

Step 2: Show the required connector configuration properties.

Enter the following command to show the required connector properties:

ccloud connector-catalog describe <connector-catalog-name>

For example:

ccloud connector-catalog describe AzureEventHubsSource

Example output:

Following are the required configs:
connector.class: AzureEventHubsSource
name
kafka.api.key
kafka.api.secret
azure.eventhubs.sas.keyname
azure.eventhubs.sas.key
azure.eventhubs.namespace
azure.eventhubs.hub.name
kafka.topic
tasks.max

Step 3: Create the connector configuration file.

Create a JSON file that contains the connector configuration properties. The following example shows required and optional connector properties.

{
    "connector.class": "AzureEventHubsSource",
    "name": "azure-eventhubs-source",
    "kafka.api.key": "<my-kafka-api-key>",
    "kafka.api.secret": "<my-kafka-api-secret>",
    "azure.eventhubs.sas.keyname": "<-my-shared-access-policy name->",
    "azure.eventhubs.sas.key": "<my-shared-access-key>",
    "azure.eventhubs.namespace": "<my-eventhubs-namespace>",
    "azure.eventhubs.hub.name": "<my-eventhub-name>",
    "azure.eventhubs.consumer.group": "<my-eventhub-consumer-group>",
    "kafka.topic": "<my-topic-name>",
    "azure.eventhubs.partition.starting.position": "START_OF_STREAM",
    "azure.eventhubs.transport.type": "AMQP",
    "azure.eventhubs.offset.type": "OFFSET",
    "max.events": "50",
    "tasks.max": "1"
}

Note the following property definitions:

  • "name": Sets a name for your new connector.

  • "connector.class": Identifies the connector plugin name.

  • "azure.eventhubs.partition.starting.position": (Optional) Sets the starting position in the Event Hub if no offsets are stored and a reset occurs. The value can be START_OF_STREAM or END_OF_STREAM. If no property is entered, the configuration defaults to START_OF_STREAM.

  • "azure.eventhubs.transport.type": (Optional) Sets the transport type for communicating with Azure Event Hubs. The value can be AMQP or AMQP_WEB_SOCKETS. AMQP (over TCP) uses port 5671. AMQP over web sockets uses port 443. If no proerty is entered, the configuration defaults to AMQP.

  • "azure.eventhubs.offset.type": (Optional) Sets the offset type used to keep track of events. The value can be OFFSET (the Azure Event Hubs offset for the event) or SEQ_NUM (the sequence number of the event). If no property is entered, the configuration defaults to OFFSET.

  • "max.events": (Optional) The maximum number of events to read from an Event Hub partition when polling. If no property is entered, the configuration defaults to 50. 499 is the maximum number events.

    Note

    Configuration properties that are not listed use the default values. For default values and property definitions, see Azure Event Hubs Source Connector Configuration Properties.
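Before loading the file, it can help to confirm that it is valid JSON. For example, assuming the jq utility is installed:

jq . az-event-hubs.json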

Step 4: Load the properties file and create the connector.

Enter the following command to load the configuration and start the connector:

ccloud connector create --config <file-name>.json

For example:

ccloud connector create --config az-event-hubs.json

Example output:

Created connector azure-eventhubs-source lcc-ix4dl

Step 5: Check the connector status.

Enter the following command to check the connector status:

ccloud connector list

Example output:

ID          |           Name           | Status  |  Type
+-----------+--------------------------+---------+--------+
lcc-ix4dl   | azure-eventhubs-source   | RUNNING | source
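To view more detail about a single connector, including task status and any error trace, you can also describe it by ID (a sketch using the connector ID from the example output above; the output format varies by CLI version):

ccloud connector describe lcc-ix4dl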

Step 6: Check the Kafka topic.

After the connector is running, verify that messages are populating your Kafka topic.
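As a quick command-line check, you can consume records from the topic with the Confluent Cloud CLI (a minimal sketch; the -b flag starts from the beginning of the topic):

ccloud kafka topic consume -b <my-topic-name>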

You can manage your full-service connector using the Confluent Cloud API. For details, see the Confluent Cloud API documentation.

For additional information about this connector, see Azure Event Hubs Source Connector for Confluent Platform. Note that not all Confluent Platform connector features are provided in the Confluent Cloud connector.

Next Steps

See also

For an example that shows fully-managed Confluent Cloud connectors in action with Confluent Cloud ksqlDB, see the Cloud ETL Demo. This example also shows how to use the Confluent Cloud CLI to manage your resources in Confluent Cloud.
