Deploy multiple Confluent Platform clusters
To deploy multiple clusters, deploy each additional Confluent Platform cluster to a
different namespace, and give it a name that differs from every other cluster in
the same Kubernetes cluster. Note that when running multiple clusters in a single
Kubernetes cluster, you do not install additional Confluent Operator instances.
For example, if you have one Operator instance and you want to deploy two Apache Kafka®
clusters, you could name the first Kafka cluster kafka1 and the second Kafka
cluster kafka2, and then deploy each one in a different namespace.
Additionally, since you are not installing a second Operator instance, you need to
make sure the Docker registry secret is installed in the new namespace. To do
this with the Helm install command, you add global.injectPullSecret=true
when you enable the component.
Note
The global.injectPullSecret=true parameter is only required if the Docker
secret does not exist in the new namespace and a Docker secret is actually
required to pull images. If you attempt to run an install with
global.injectPullSecret=true in a namespace where the secret already
exists, Helm returns an error saying that the resources already exist.
Using kafka2 in namespace operator2 as an example, the command would resemble the following:
helm upgrade --install \
  kafka2 \
  --values $VALUES_FILE \
  --namespace operator2 \
  --set kafka.enabled=true,global.injectPullSecret=true \
  ./confluent-operator/
Patch the default service account once the Docker secret is created in the new
namespace (operator2 in this example).
kubectl -n operator2 patch serviceaccount default -p '{"imagePullSecrets": [{"name": "confluent-docker-registry" }]}'
If you are using a private or local registry with basic authentication, use the
following command:
kubectl -n operator patch serviceaccount default -p '{"imagePullSecrets": [{"name": "<your registry name here>" }]}'
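After patching, you can confirm that the default service account now references the registry secret. A quick check, using the operator2 namespace from the example above:

```shell
# Show the imagePullSecrets entry on the default service account;
# fall back to a message if it is not present (or no cluster is reachable).
kubectl -n operator2 get serviceaccount default -o yaml | grep -A1 imagePullSecrets \
  || echo "imagePullSecrets not set yet"
```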
Use multiple availability zones
To use multiple availability zones (AZs), you first need to configure the zones values block in your configuration file ($VALUES_FILE). The example below shows three zones (us-central1-a, us-central1-b, and us-central1-c):
provider:
  name: gcp
  region: us-central1
  kubernetes:
    deployment:
      zones:
        - us-central1-a
        - us-central1-b
        - us-central1-c
Important
If your Kubernetes cluster spans zones a, b, and c and you configure
only zones a and b in the yaml block shown above, Confluent Operator still
schedules pods across zones a, b, and c, not just a and b. However, the storage
disks for those pods are created in zones a and b only, even though the pods
themselves are spread over all three zones.
Note
Kubernetes nodes in public clouds are tagged with their AZs. Kubernetes
automatically attempts to spread pods across these zones. For more
information, see Running in multiple zones.
Use Replicator with multiple Kafka clusters
The following steps guide you through deploying Replicator on multiple Kafka
clusters. This example is useful for testing and development purposes only.
Prerequisites:
- Make sure external DNS is configured and running on your development platform.
- Make sure you are allowed to create DNS names.
The following steps:
- Are based on a GCP environment.
- Use the example DNS name mydevplatform.gcp.cloud.
- Use two-way TLS security.
Deploy clusters
Complete the following steps to deploy the clusters. Use the example deployment instructions.
- Deploy Operator in namespace operator.
- Deploy the destination Kafka and ZooKeeper clusters in namespace kafka-dest, using the following names:
  - ZooKeeper cluster: zookeeper-dest
  - Kafka cluster: kafka-dest
- Deploy the source Kafka and ZooKeeper clusters in namespace kafka-src, using the following names:
  - ZooKeeper cluster: zookeeper-src
  - Kafka cluster: kafka-src
- Deploy Replicator in namespace kafka-dest using the default name replicator. Set replicator.dependencies.kafka.bootstrapEndpoint to kafka-dest:9071, and configure the endpoint for TLS one-way security.
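The bootstrapEndpoint setting from the last step corresponds to a values-file fragment along these lines. This is a sketch only; the surrounding keys and the TLS settings depend on your chart version:

```yaml
replicator:
  dependencies:
    kafka:
      # Destination cluster bootstrap endpoint, reachable inside the
      # Kubernetes cluster on the internal listener port.
      bootstrapEndpoint: kafka-dest:9071
```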
Test Replicator
Complete the following steps to test Replicator.
On your local machine, use kubectl exec to start a bash session on one of the pods in the cluster. The example uses the default pod name kafka-src-0 on a Kafka cluster named kafka-src.
kubectl -n kafka-src exec -it kafka-src-0 bash
On the pod, create and populate a file named kafka.properties. There is no text editor installed in the containers, so use the cat command as shown below to create this file, and press CTRL+D to save it.
cat <<EOF > kafka.properties
bootstrap.servers=kafka-src.mydevplatform.gcp.cloud:9092
security.protocol=SSL
ssl.keystore.location=/tmp/keystore.jks
ssl.keystore.password=mystorepassword
ssl.key.password=mystorepassword
ssl.truststore.location=/tmp/truststore.jks
ssl.truststore.password=mystorepassword
EOF
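Before pointing a client at the file, you can sanity-check it locally. A minimal sketch that reports any TLS client setting missing from the kafka.properties file created above:

```shell
# Report any required TLS client setting that is missing from kafka.properties.
for key in bootstrap.servers security.protocol ssl.keystore.location \
           ssl.keystore.password ssl.truststore.location; do
  grep -q "^${key}=" kafka.properties || echo "missing: ${key}"
done
```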
Validate the kafka.properties file (make sure your DNS is configured correctly).
kafka-broker-api-versions \
--command-config kafka.properties \
--bootstrap-server kafka-src.mydevplatform.gcp.cloud:9092
Create a topic on the source Kafka cluster. Enter the following command on the Kafka pod.
kafka-topics --create --topic test-topic \
--replication-factor 1 --partitions 4 \
--bootstrap-server kafka-src.mydevplatform.gcp.cloud:9092
Produce to the source Kafka cluster kafka-src.
seq 10000 | kafka-console-producer --topic test-topic \
--broker-list kafka-src.mydevplatform.gcp.cloud:9092 \
--producer.config kafka.properties
From a new terminal, start a bash session on kafka-dest-0.
kubectl -n kafka-dest exec -it kafka-dest-0 bash
On the pod, create the following kafka.properties file.
cat <<EOF > kafka.properties
sasl.mechanism=PLAIN
bootstrap.servers=kafka-dest:9071
security.protocol=PLAINTEXT
EOF
Validate the kafka.properties file (make sure your DNS is configured correctly).
kafka-broker-api-versions --command-config kafka.properties \
--bootstrap-server kafka-dest:9071
Validate that the replicated topic test-topic.replica is created in kafka-dest.
kafka-topics --describe --topic test-topic.replica \
--bootstrap-server kafka-dest:9071
Confirm delivery in the destination Kafka cluster kafka-dest.
kafka-console-consumer --from-beginning --topic test-topic.replica \
--bootstrap-server kafka-dest:9071 \
--consumer.config kafka.properties
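Since the producer above wrote seq 10000, a complete replica holds exactly 10,000 records. One way to confirm completeness is to count the consumer's output. A sketch, assuming Replicator's default .replica topic rename shown earlier; --timeout-ms makes the console consumer exit once the stream goes idle:

```shell
# Count the records replicated to test-topic.replica; this prints 10000
# once every source record has been replicated.
kafka-console-consumer --from-beginning --topic test-topic.replica \
  --bootstrap-server kafka-dest:9071 \
  --consumer.config kafka.properties \
  --timeout-ms 10000 | wc -l
```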