Quick Start for Apache Kafka Using Confluent Platform Community Components (Docker)
Use this quick start to get up and running with Confluent Platform and Confluent Community components in a development environment using Docker containers.
In this quick start, you create Apache Kafka® topics, use Kafka Connect to generate mock data to those topics, and create ksqlDB streaming queries on those topics.
This quick start leverages the Confluent Platform CLI, the Apache Kafka® CLI, and the ksqlDB CLI. For a rich UI-based experience, try out the Confluent Platform quick start with commercial components.
Step 1: Download and Start Confluent Platform Using Docker

Clone the confluentinc/cp-all-in-one GitHub repository:
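For example, over HTTPS:

git clone https://github.com/confluentinc/cp-all-in-one.git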
Check out the 6.1.0-post branch:
cd cp-all-in-one
git checkout 6.1.0-post
Navigate to the cp-all-in-one-community directory:
cd cp-all-in-one-community
Start Confluent Platform specifying the -d option to run in detached mode:
docker-compose up -d
The above command starts Confluent Platform with separate containers for all Confluent Platform components. Your output should resemble the following:
Creating network "cp-all-in-one-community_default" with the default driver
Creating zookeeper ... done
Creating broker ... done
Creating schema-registry ... done
Creating rest-proxy ... done
Creating connect ... done
Creating ksql-datagen ... done
Creating ksqldb-server ... done
Creating ksqldb-cli ... done
Verify that the services are up and running:
docker-compose ps
You should see the following:
     Name                   Command                  State   Ports
------------------------------------------------------------------------------------------
broker            /etc/confluent/docker/run        Up      0.0.0.0:29092->29092/tcp, 0.0.0.0:9092->9092/tcp
connect           /etc/confluent/docker/run        Up      0.0.0.0:8083->8083/tcp, 9092/tcp
ksqldb-cli        ksql http://localhost:8088       Up
ksql-datagen      bash -c echo Waiting for K ...   Up
ksqldb-server     /etc/confluent/docker/run        Up      0.0.0.0:8088->8088/tcp
rest-proxy        /etc/confluent/docker/run        Up      0.0.0.0:8082->8082/tcp
schema-registry   /etc/confluent/docker/run        Up      0.0.0.0:8081->8081/tcp
zookeeper         /etc/confluent/docker/run        Up      0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
If the state is not Up, rerun the docker-compose up -d command.
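If a container repeatedly fails to stay up, its log output usually shows why. For example, to inspect the broker's log (a diagnostic step, not part of the quick start):

docker-compose logs broker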
Step 2: Create Kafka Topics

In this step, you create Kafka topics using the Kafka CLI.
Create a topic named users:
docker-compose exec broker kafka-topics \
  --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic users
Create a topic named pageviews:
docker-compose exec broker kafka-topics \
  --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic pageviews
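As an optional check, you can list the topics to verify that both were created:

docker-compose exec broker kafka-topics \
  --list \
  --bootstrap-server localhost:9092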
Step 3: Install a Kafka Connector and Generate Sample Data

In this step, you use Kafka Connect to run a demo source connector called kafka-connect-datagen that creates sample data for the Kafka topics pageviews and users.
Run the first instance of the Kafka Connect Datagen connector to produce Kafka data to the pageviews topic in AVRO format.
curl -L -O -H 'Accept: application/vnd.github.v3.raw' \
  https://api.github.com/repos/confluentinc/kafka-connect-datagen/contents/config/connector_pageviews_cos.config
curl -X POST -H 'Content-Type: application/json' \
  --data @connector_pageviews_cos.config \
  http://localhost:8083/connectors
Run the second instance of the Kafka Connect Datagen connector to produce Kafka data to the users topic in AVRO format.
curl -L -O -H 'Accept: application/vnd.github.v3.raw' \
  https://api.github.com/repos/confluentinc/kafka-connect-datagen/contents/config/connector_users_cos.config
curl -X POST -H 'Content-Type: application/json' \
  --data @connector_users_cos.config \
  http://localhost:8083/connectors
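To confirm that both connectors were registered and are running, you can query the Kafka Connect REST API; the connector names are defined in the downloaded config files:

curl -s http://localhost:8083/connectors

curl -s http://localhost:8083/connectors/<connector-name>/status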
Tip
The Kafka Connect Datagen connector was installed automatically when you started Docker Compose in Step 1: Download and Start Confluent Platform Using Docker. If you encounter issues with the Datagen Connector, refer to the Issue: Cannot locate the Datagen Connector in the Troubleshooting section.
Step 4: Create and Write to a Stream and Table using ksqlDB

In this step, you create streams, tables, and queries using ksqlDB SQL. For more information about ksqlDB SQL syntax, see the ksqlDB Syntax Reference.
Start the ksqlDB CLI in your terminal with this command:
docker-compose exec ksqldb-cli ksql http://ksqldb-server:8088
Important
By default ksqlDB attempts to store its logs in a directory called logs that is relative to the location of the ksql executable. For example, if ksql is installed at /usr/local/bin/ksql, then it would attempt to store its logs in /usr/local/logs. If you are running ksql from the default Confluent Platform location, $CONFLUENT_HOME/bin, you must override this default behavior by using the LOG_DIR variable.
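For example, to direct the logs to a directory of your choosing (here /tmp/ksql_logs, an arbitrary writable location, assuming a local Confluent Platform install):

LOG_DIR=/tmp/ksql_logs $CONFLUENT_HOME/bin/ksql http://localhost:8088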
Create a stream pageviews from the Kafka topic pageviews, specifying the value_format of AVRO:
CREATE STREAM pageviews (viewtime bigint, userid varchar, pageid varchar) WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='AVRO');
Create a table users with several columns from the Kafka topic users, with the value_format of AVRO:
CREATE TABLE users (userid VARCHAR PRIMARY KEY, registertime BIGINT, gender VARCHAR, regionid VARCHAR) WITH (KAFKA_TOPIC='users', VALUE_FORMAT='AVRO');
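Before querying, you can optionally inspect a few raw records on the underlying topics with the ksqlDB PRINT statement:

PRINT 'users' FROM BEGINNING LIMIT 3;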
In this step, you run ksqlDB SQL queries.
Set the auto.offset.reset query property to earliest. This instructs ksqlDB queries to read all available topic data from the beginning. This configuration is used for each subsequent query. For more information, see the ksqlDB Configuration Parameter Reference.
SET 'auto.offset.reset'='earliest';
Create a non-persistent query that returns data from a stream with the results limited to a maximum of three rows:
SELECT pageid FROM pageviews EMIT CHANGES LIMIT 3;
Your output should resemble:
Page_45
Page_38
Page_11
LIMIT reached for the partition.
Query terminated
Create a persistent query (as a stream) that filters the pageviews stream for female users. The results from this query are written to the Kafka PAGEVIEWS_FEMALE topic:

CREATE STREAM pageviews_female \
  AS SELECT users.userid AS userid, pageid, regionid, gender \
  FROM pageviews LEFT JOIN users ON pageviews.userid = users.userid \
  WHERE gender = 'FEMALE';
Create a persistent query where the regionid ends with 8 or 9. Results from this query are written to a Kafka topic named pageviews_enriched_r8_r9:

CREATE STREAM pageviews_female_like_89 \
  WITH (kafka_topic='pageviews_enriched_r8_r9', value_format='AVRO') \
  AS SELECT * FROM pageviews_female \
  WHERE regionid LIKE '%_8' OR regionid LIKE '%_9';
Create a persistent query that counts the pageviews for each region and gender combination in a tumbling window of 30 seconds when the count is greater than 1. Because the procedure is grouping and counting, the result is now a table, rather than a stream. Results from this query are written to a Kafka topic called PAGEVIEWS_REGIONS:

CREATE TABLE pageviews_regions \
  AS SELECT gender, regionid, COUNT(*) AS numusers \
  FROM pageviews LEFT JOIN users ON pageviews.userid = users.userid \
  WINDOW TUMBLING (size 30 second) \
  GROUP BY gender, regionid \
  HAVING COUNT(*) > 1;
List the streams:
SHOW STREAMS;
List the tables:
SHOW TABLES;
View the details of a stream or a table:
DESCRIBE EXTENDED <stream-or-table-name>;
For example, to view the details of the users table:
DESCRIBE EXTENDED users;
List the running queries:
SHOW QUERIES;
Review the query execution plan:
Get a Query ID from the output of SHOW QUERIES and run EXPLAIN to view the query execution plan for the Query ID:
EXPLAIN <Query ID>;
Now you can monitor the running queries created as streams or tables.
The following query returns the page view information of female users:
SELECT * FROM pageviews_female EMIT CHANGES;
The following query returns the page view information of female users in the regions whose regionid ends with 8 or 9:
SELECT * FROM pageviews_female_like_89 EMIT CHANGES;
The following query returns the page view counts for each region and gender combination in a tumbling window of 30 seconds:
SELECT * FROM pageviews_regions EMIT CHANGES;
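These push queries run until you cancel them, for example with Ctrl+C in the CLI. To stop a persistent query permanently, you can terminate it by its Query ID (shown here with the same placeholder convention as above):

TERMINATE <Query ID>;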
Step 5: Stop Docker

When you are done working with Docker, you can stop and remove Docker containers and images.
View a list of all Docker container IDs.
docker container ls -a -q
Run the following command to stop the Docker containers for Confluent:
docker container stop $(docker container ls -a -q -f "label=io.confluent.docker")
After stopping the Docker containers, run the following commands to prune the Docker system. Running these commands deletes containers, networks, volumes, and images, freeing up disk space:
docker system prune -a -f --volumes
Remove the Confluent filter label (-f "label=io.confluent.docker") from the stop command to stop all Docker containers on your system, not just the Confluent Platform ones.
For more information, refer to the official Docker documentation.
Troubleshooting

If you encountered any issues while going through the quickstart workflow, review the following resolutions before trying the steps again.

Issue: Cannot locate the Datagen Connector
Resolution: Run the build command just for connect if the connect container was not built successfully.
docker-compose build --no-cache connect
Building connect
... Completed
Removing intermediate container cdb0af3550c8
 ---> 36d00047d29b
Successfully built 36d00047d29b
Successfully tagged confluentinc/kafka-connect-datagen:latest
If the connect container was already built successfully, you will see an output similar to this:
connect uses an image, skipping
Resolution: Check the Connect log for Datagen.
docker-compose logs connect | grep -i Datagen
connect | [2019-04-17 20:03:26,137] INFO Loading plugin from: /usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
connect | [2019-04-17 20:03:26,206] INFO Registered loader: PluginClassLoader{pluginLocation=file:/usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen/} (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
connect | [2019-04-17 20:03:26,206] INFO Added plugin 'io.confluent.kafka.connect.datagen.DatagenConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
connect | [2019-04-17 20:03:28,102] INFO Added aliases 'DatagenConnector' and 'Datagen' to plugin 'io.confluent.kafka.connect.datagen.DatagenConnector' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
Resolution: Check the Connect log for a warning that reminds you to run the docker-compose up -d command properly.
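A minimal way to scan the log for warnings, following the same grep pattern as above:

docker-compose logs connect | grep -i warn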
Resolution: Verify the .jar file for kafka-connect-datagen has been added and is present in the lib subfolder.
docker-compose exec connect ls /usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen/lib/
...
kafka-connect-datagen-0.1.0.jar
...
Resolution: Verify the plugin exists in the connector path.
docker-compose exec connect bash -c 'echo $CONNECT_PLUGIN_PATH'
/usr/share/java,/usr/share/confluent-hub-components
Confirm its contents are present:
docker-compose exec connect ls /usr/share/confluent-hub-components/confluentinc-kafka-connect-datagen
assets doc etc lib manifest.json
An error states Stream-Stream joins must have a WITHIN clause specified. This error can occur if you created both pageviews and users as streams by mistake.
Resolution: Ensure that you created a stream for pageviews, and a table for users in Step 4: Create and Write to a Stream and Table using ksqlDB.
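If you did create users as a stream by mistake, one way to recover (a sketch; terminate any queries that read from the stream first) is to drop it and recreate the table:

DROP STREAM users;

CREATE TABLE users (userid VARCHAR PRIMARY KEY, registertime BIGINT, gender VARCHAR, regionid VARCHAR) WITH (KAFKA_TOPIC='users', VALUE_FORMAT='AVRO');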
Java errors or other severe errors were encountered.
Resolution: Ensure you are on an Operating System currently supported by Confluent Platform.
Resolution: Ensure that the Docker memory was increased to at least 8 GB. Go to Docker > Preferences > Advanced. If Docker memory is insufficient, other unpredictable issues could occur.
ksqlDB errors were encountered.
Resolution: Review the help in the ksqlDB CLI for tips on running commands successfully and links to more documentation.
ksql> help
You must allocate a minimum of 8 GB of Docker memory resource. The default memory allocation on Docker Desktop for Mac is 2 GB and must be changed. Confluent Platform demos and examples running on Docker may fail to work properly if Docker memory allocation does not meet this minimum requirement.
(Image: Memory settings on Docker preferences for resources)
Learn more about the components shown in this quick start: