Confluent CLI: Command Example for Apache Kafka®

In this tutorial, you will use the Confluent CLI to produce messages to and consume messages from an Apache Kafka® cluster.

After you run the tutorial, use the provided source code as a reference to develop your own Kafka client application.

Note

The Confluent CLI is meant for development purposes only and isn’t suitable for a production environment.

Prerequisites

Client

  • Download Confluent Platform 6.1.0, which includes the Confluent CLI.

Kafka Cluster

  • You can use this tutorial with a Kafka cluster in any environment: Confluent Cloud, a local installation, or any other remote Kafka cluster.
  • If you are running on Confluent Cloud, you must have access to a Confluent Cloud cluster with an API key and secret.

Setup

  1. Clone the confluentinc/examples GitHub repository and check out the 6.1.0-post branch.

    git clone https://github.com/confluentinc/examples
    cd examples
    git checkout 6.1.0-post
    
  2. Change directory to the example for Confluent CLI.

    cd clients/cloud/confluent-cli/
    
  3. Create a local file (for example, at $HOME/.confluent/java.config) with configuration parameters to connect to your Kafka cluster. Starting with one of the templates below, customize the file with connection information to your cluster. Substitute your values for {{ BROKER_ENDPOINT }}, {{ CLUSTER_API_KEY }}, and {{ CLUSTER_API_SECRET }} (see Configure Confluent Cloud Clients for instructions on how to manually find these values, or use the ccloud-stack Utility for Confluent Cloud to automatically create them).

    • Template configuration file for Confluent Cloud

      # Required connection configs for Kafka producer, consumer, and admin
      bootstrap.servers={{ BROKER_ENDPOINT }}
      security.protocol=SASL_SSL
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
      sasl.mechanism=PLAIN
      # Required for correctness in Apache Kafka clients prior to 2.6
      client.dns.lookup=use_all_dns_ips
      
      # Best practice for Kafka producer to prevent data loss 
      acks=all
      
    • Template configuration file for local host

      # Kafka
      bootstrap.servers=localhost:9092
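
    Once the file is in place, one quick way to sanity-check the connection settings is to list the cluster's topics using the same file. The kafka-topics tool ships with Confluent Platform; <BROKER ENDPOINT> is a placeholder for your bootstrap server:

      kafka-topics --bootstrap-server <BROKER ENDPOINT> --command-config $HOME/.confluent/java.config --list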
      

Basic Producer and Consumer

In this example, the producer application writes Kafka data to a topic in your Kafka cluster. If the topic does not already exist in your Kafka cluster, the producer application will use the Kafka Admin Client API to create the topic. Each record written to Kafka has a key representing a username (for example, alice) and a value of a count, formatted as JSON (for example, {"count": 0}). The consumer application reads the same Kafka topic and keeps a rolling sum of the count as it processes each record.
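
To make the rolling sum concrete, here is a sketch that reduces captured consumer output with awk. The file records.txt is a hypothetical capture of the key/value lines the consumer prints; the actual consume command appears in the Consume Records section below:

    # records.txt is assumed to hold lines like: alice   {"count":2}
    awk -F'"count":' 'NF > 1 {
      value = $2
      gsub(/[^0-9]/, "", value)   # keep only the digits of the count
      sum += value
      print "rolling sum:", sum
    }' records.txt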

Produce Records

  1. Create the Kafka topic.

    kafka-topics --bootstrap-server `grep "^\s*bootstrap.server" $HOME/.confluent/java.config | tail -1 | cut -d'=' -f2` --command-config $HOME/.confluent/java.config --topic test1 --create --replication-factor 3 --partitions 6
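
    You can optionally confirm that the topic was created by describing it with the same connection settings:

    kafka-topics --bootstrap-server `grep "^\s*bootstrap.server" $HOME/.confluent/java.config | tail -1 | cut -d'=' -f2` --command-config $HOME/.confluent/java.config --topic test1 --describe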
    
  2. Run the Confluent CLI producer, writing messages to topic test1, passing in arguments for:

    • --cloud: write messages to a Confluent Cloud cluster
    • --config: file with Confluent Cloud connection info
    • --property parse.key=true --property key.separator=,: pass key and value, separated by a comma

    confluent local services kafka produce test1 --cloud --config $HOME/.confluent/java.config --property parse.key=true --property key.separator=,
    
  3. At the > prompt, type a few messages, using a , as the separator between the message key and value:

    alice,{"count":0}
    alice,{"count":1}
    alice,{"count":2}
    
  4. When you are done, press Ctrl-D.

  5. View the producer code.
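
As a tip, instead of typing records interactively you can pipe them from a file on standard input. This is a sketch that assumes a hypothetical file records.txt containing the same key,value lines as above; the CLI wraps kafka-console-producer, which reads from stdin:

    confluent local services kafka produce test1 --cloud --config $HOME/.confluent/java.config --property parse.key=true --property key.separator=, < records.txt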

Consume Records

  1. Run the Confluent CLI consumer, reading messages from topic test1, passing in additional arguments:

    • --cloud: read messages from a Confluent Cloud cluster
    • --config: file with Confluent Cloud connection info
    • --property print.key=true: print the key as well as the value (by default, only the value is printed)
    • --from-beginning: print all messages from the beginning of the topic

    confluent local services kafka consume test1 --cloud --config $HOME/.confluent/java.config --property print.key=true --from-beginning
    
  2. Verify the consumer received all the messages. You should see:

    alice   {"count":0}
    alice   {"count":1}
    alice   {"count":2}
    
  3. When you are done, press Ctrl-C.

  4. View the consumer code.

Avro and Confluent Cloud Schema Registry

This example is similar to the previous one, except that the message value is formatted as Avro and the clients integrate with Confluent Cloud Schema Registry.

Before using Confluent Cloud Schema Registry, check its availability and limits.

  1. In the Confluent Cloud GUI, enable Confluent Cloud Schema Registry and create an API key and secret to connect to it, as described in the Quick Start for Schema Management on Confluent Cloud.

  2. Verify that your VPC can connect to the Confluent Cloud Schema Registry public internet endpoint.

  3. Update your local configuration file (for example, at $HOME/.confluent/java.config) with parameters to connect to Schema Registry.

    • Template configuration file for Confluent Cloud

      # Required connection configs for Kafka producer, consumer, and admin
      bootstrap.servers={{ BROKER_ENDPOINT }}
      security.protocol=SASL_SSL
      sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username='{{ CLUSTER_API_KEY }}' password='{{ CLUSTER_API_SECRET }}';
      sasl.mechanism=PLAIN
      # Required for correctness in Apache Kafka clients prior to 2.6
      client.dns.lookup=use_all_dns_ips
      
      # Best practice for Kafka producer to prevent data loss 
      acks=all
      
      # Required connection configs for Confluent Cloud Schema Registry
      schema.registry.url=https://{{ SR_ENDPOINT }}
      basic.auth.credentials.source=USER_INFO
      basic.auth.user.info={{ SR_API_KEY }}:{{ SR_API_SECRET }}
      
    • Template configuration file for local host

      # Kafka
      bootstrap.servers=localhost:9092
      
      # Confluent Schema Registry
      schema.registry.url=http://localhost:8081
      

Produce Avro Records

  1. Verify that your Confluent Cloud Schema Registry credentials work from your host. In the following commands, substitute your values for <SR API KEY>, <SR API SECRET>, and <SR ENDPOINT>.

    # View the list of registered subjects
    curl -u <SR API KEY>:<SR API SECRET> https://<SR ENDPOINT>/subjects
    
    # Same as above, as a single bash command to parse the values out of $HOME/.confluent/java.config
    curl -u $(grep "^basic.auth.user.info" $HOME/.confluent/java.config | cut -d'=' -f2) $(grep "^schema.registry.url" $HOME/.confluent/java.config | cut -d'=' -f2)/subjects
    
  2. View your local Confluent Cloud configuration file ($HOME/.confluent/java.config):

    cat $HOME/.confluent/java.config
    
  3. In the configuration file, verify that you have substituted your values for <SR API KEY>, <SR API SECRET>, and <SR ENDPOINT>, as shown in the following example:

    ...
    basic.auth.credentials.source=USER_INFO
    basic.auth.user.info=<SR API KEY>:<SR API SECRET>
    schema.registry.url=https://<SR ENDPOINT>
    ...
    
  4. Create the Kafka topic.

    kafka-topics --bootstrap-server `grep "^\s*bootstrap.server" $HOME/.confluent/java.config | tail -1 | cut -d'=' -f2` --command-config $HOME/.confluent/java.config --topic test2 --create --replication-factor 3 --partitions 6
    
  5. Run the Confluent CLI producer, writing messages to topic test2, passing in arguments for:

    • --value-format avro: use Avro data format for the value part of the message
    • --property value.schema: define the schema
    • --property schema.registry.url: connect to the Confluent Cloud Schema Registry endpoint https://<SR ENDPOINT>
    • --property basic.auth.credentials.source: specify USER_INFO
    • --property schema.registry.basic.auth.user.info: <SR API KEY>:<SR API SECRET>

    Important

    You must pass in the additional Schema Registry parameters as properties instead of a properties file due to https://github.com/confluentinc/schema-registry/issues/1052.

    confluent local services kafka produce test2 --cloud --config $HOME/.confluent/java.config --value-format avro --property value.schema='{"type":"record","name":"myrecord","fields":[{"name":"count","type":"int"}]}' --property schema.registry.url=https://<SR ENDPOINT> --property basic.auth.credentials.source=USER_INFO --property schema.registry.basic.auth.user.info='<SR API KEY>:<SR API SECRET>'
    
  6. At the > prompt, type the following messages:

    {"count":0}
    {"count":1}
    {"count":2}
    
  7. When you are done, press Ctrl-D.

  8. View the producer Avro code.
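
After producing, the value schema should be registered under the subject test2-value (Schema Registry's default TopicNameStrategy subject naming). You can verify this with a Schema Registry REST call, substituting your values as before:

    curl -u <SR API KEY>:<SR API SECRET> https://<SR ENDPOINT>/subjects/test2-value/versions/latest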

Consume Avro Records

  1. Run the Confluent CLI consumer, reading messages from topic test2, passing in arguments for:

    • --value-format avro: use Avro data format for the value part of the message
    • --property schema.registry.url: connect to the Confluent Cloud Schema Registry endpoint https://<SR ENDPOINT>
    • --property basic.auth.credentials.source: specify USER_INFO
    • --property schema.registry.basic.auth.user.info: <SR API KEY>:<SR API SECRET>

    Important

    You must pass in the additional Schema Registry parameters as properties instead of a properties file due to https://github.com/confluentinc/schema-registry/issues/1052.

    confluent local services kafka consume test2 --cloud --config $HOME/.confluent/java.config --value-format avro --property schema.registry.url=https://<SR ENDPOINT> --property basic.auth.credentials.source=USER_INFO --property schema.registry.basic.auth.user.info='<SR API KEY>:<SR API SECRET>' --from-beginning
    
  2. Verify the consumer received all the messages. You should see:

    {"count":0}
    {"count":1}
    {"count":2}
    
  3. When you are done, press Ctrl-C.

  4. View the consumer Avro code.
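
When you are done with the tutorial, you can optionally delete the example topics; this reuses the same connection file and the standard --delete flag of kafka-topics:

    kafka-topics --bootstrap-server `grep "^\s*bootstrap.server" $HOME/.confluent/java.config | tail -1 | cut -d'=' -f2` --command-config $HOME/.confluent/java.config --topic test1 --delete
    kafka-topics --bootstrap-server `grep "^\s*bootstrap.server" $HOME/.confluent/java.config | tail -1 | cut -d'=' -f2` --command-config $HOME/.confluent/java.config --topic test2 --delete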