Operator supports two encryption methods for Confluent Platform: plaintext (no encryption) and TLS encryption. Plaintext is the default.
Network encryption with TLS
Operator supports Transport Layer Security (TLS), an industry-standard encryption
protocol, to protect network communications of Confluent Platform.
Certificates and keys
TLS relies on keys and certificates to establish trusted connections. This
section describes how to manage keys and certificates in preparation to
configure TLS encryption for Confluent Platform.
Generate self-signed certificates for testing
To validate your security configuration, follow the steps below to create a
self-signed certificate for testing. For a production deployment, you must
generate your own certificates signed by a trusted Certificate Authority.
Create a root key:
openssl genrsa -out rootCA.key 2048
Create a root certificate:
openssl req -x509 -new -nodes \
-key rootCA.key \
-days 3650 \
-out rootCA.pem \
-subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=TestCA"
Create a server private key:
openssl genrsa -out server.key 2048
Generate a CSR for a server certificate:
openssl req -new -key server.key \
-out server.csr \
-subj "/C=US/ST=CA/L=MVT/O=TestOrg/OU=Cloud/CN=*.operator.svc.cluster.local"
Sign the certificate:
openssl x509 -req \
-in server.csr \
-extensions server_ext \
-CA rootCA.pem \
-CAkey rootCA.key \
-CAcreateserial \
-out server.crt \
-days 365 \
-extfile \
<(echo "[server_ext]"; echo "extendedKeyUsage=serverAuth,clientAuth"; echo "subjectAltName=DNS:*.operator.svc.cluster.local")
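To confirm the signed certificate chains back to the test CA and carries the expected extensions, you can inspect it with openssl (a sketch; assumes the files generated above are in the current directory and OpenSSL 1.1.1 or later):

```shell
# Check that server.crt validates against the test root CA
openssl verify -CAfile rootCA.pem server.crt

# Print the SAN and extended key usage entries to confirm they match
# what was requested in the signing step
openssl x509 -in server.crt -noout -ext subjectAltName,extendedKeyUsage
```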
The above steps create the following files, which you map to the corresponding
keys (cacerts, fullchain, and privkey) in the tls section of the config file
($VALUES_FILE):
- rootCA.pem contains cacerts.
- server.crt contains fullchain (the full chain).
- server.key contains privkey (the private key).
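For example, the mapping above corresponds to a tls section shaped like the following (a sketch using the kafka component for illustration; the block scalar bodies are the contents of the generated files):

```yaml
kafka:
  tls:
    enabled: true
    cacerts: |-
      (contents of rootCA.pem)
    fullchain: |-
      (contents of server.crt)
    privkey: |-
      (contents of server.key)
```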
SAN attributes for certificates
When creating certificates for use by the Kafka brokers, configure the Subject
Alternative Name (SAN) attribute. SAN allows you to specify multiple hostnames
for a single certificate.
There are two ways to configure SAN attributes for accessing the Kafka clusters
from outside of Kubernetes.
If you are permitted by your organization to use wildcard domains in your certificate SANs, use the following SAN attributes when generating certificates:
SAN attribute for the external bootstrap and broker addresses:
*.<kafka-domain>
SAN attribute for the internal addresses:
*.<component-name>.<namespace>.svc.cluster.local
The <component-name> is the value set in name: under the component section in
your config file ($VALUES_FILE).
If wildcards are not permitted, you must provide multiple SAN attributes. Use the following SAN attributes, which are based on the Kafka broker prefix, the bootstrap prefix, and the number of brokers:
SAN attributes for the external bootstrap and broker addresses:
<bootstrap-prefix>.<kafka-domain>
<broker-prefix>0.<kafka-domain>
<broker-prefix>1.<kafka-domain>
...
<broker-prefix><N-1>.<kafka-domain>
For example, if you have the following configuration:
- Your Kafka domain is confluent-platform.example.com.
- Your broker prefix is b (this is the default if you don't explicitly set the broker prefix when configuring Kafka).
- Your bootstrap prefix is kafka (this is the default).
- You are deploying three Kafka brokers in the operator namespace.
Use the following SANs when generating your external address certificates:
b0.confluent-platform.example.com
b1.confluent-platform.example.com
b2.confluent-platform.example.com
kafka.confluent-platform.example.com
SAN attributes for the internal Kafka bootstrap address and the specific internal addresses for each broker:
<kafka-component-name>.<kafka-namespace>.svc.cluster.local
<kafka-component-name>-0.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
<kafka-component-name>-1.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
...
<kafka-component-name>-<N-1>.<kafka-component-name>.<kafka-namespace>.svc.cluster.local
The <kafka-component-name> is the value set in name: under the kafka component
section in your config file ($VALUES_FILE).
For example, if you have the following configuration:
- Your Kafka component name is kafka (this is the default).
- You are deploying three Kafka brokers in the operator namespace.
Use the following SANs when generating your internal address certificates:
kafka.operator.svc.cluster.local
kafka-0.kafka.operator.svc.cluster.local
kafka-1.kafka.operator.svc.cluster.local
kafka-2.kafka.operator.svc.cluster.local
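As a sketch, a key and a CSR covering both the example external and internal addresses above (three brokers, a kafka component in the operator namespace) can be generated in one step with -addext (requires OpenSSL 1.1.1 or later; note that when signing with openssl x509 -req, CSR extensions are carried over only if you also supply them via -extfile, or via -copy_extensions on OpenSSL 3.0+). The subject and file names are illustrative:

```shell
# Generate a key and a CSR whose SANs cover the example bootstrap,
# per-broker external, and per-broker internal addresses
openssl genrsa -out kafka.key 2048
openssl req -new -key kafka.key -out kafka.csr \
  -subj "/C=US/ST=CA/O=TestOrg/CN=kafka.operator.svc.cluster.local" \
  -addext "subjectAltName=\
DNS:kafka.confluent-platform.example.com,\
DNS:b0.confluent-platform.example.com,\
DNS:b1.confluent-platform.example.com,\
DNS:b2.confluent-platform.example.com,\
DNS:kafka.operator.svc.cluster.local,\
DNS:kafka-0.kafka.operator.svc.cluster.local,\
DNS:kafka-1.kafka.operator.svc.cluster.local,\
DNS:kafka-2.kafka.operator.svc.cluster.local"
```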
SAN attributes for the internal addresses of other components:
<component-name>.<namespace>.svc.cluster.local
The <component-name> is the value set in name: under the component section in
your config file ($VALUES_FILE).
Note that when you scale up your Kafka cluster, your certificate needs DNS SANs
for the additional brokers. To avoid regenerating certificates before adding
brokers, consider including more DNS SANs in your certificate than your initial
broker count. For instance, if you plan to start with 3 brokers but may need to
scale up to 6, include DNS SANs for 6 (or more) brokers in your initial
certificate so that you can scale up without creating a new certificate.
If you enable TLS encryption for the JMX and Jolokia endpoints, see
JMX and Jolokia configuration for TLS encryption for the additional SAN requirement for JMX.
Manage credentials outside of the configuration file
For increased security, you can keep credentials, such as private keys, outside
of your config file ($VALUES_FILE). Certificate data can also be large and
cumbersome; if you already have this data in separate files, you may want to
avoid copying and pasting it into the $VALUES_FILE and dealing with YAML syntax.
You can pass the contents of these files when issuing Helm commands rather than
putting them directly in the config file ($VALUES_FILE). For example:
helm upgrade --install kafka ./confluent-operator \
--values $VALUES_FILE \
--namespace operator \
--set kafka.enabled=true \
--set kafka.tls.enabled=true \
--set kafka.metricReporter.tls.enabled=true \
--set-file kafka.tls.cacerts=/tmp/ca-bundle.pem \
--set-file kafka.tls.fullchain=/tmp/server-cert.pem \
--set-file kafka.tls.privkey=/tmp/server-key.pem
Kafka configuration for TLS encryption
Enable and configure TLS encryption in the kafka section in your config file
($VALUES_FILE) as shown in the snippet below:
kafka:
  tls:
    enabled: true ----- [1]
    interbrokerTLS: true ----- [2]
    internalTLS: true ----- [3]
    fullchain: |- ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |- ----- [5]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
  configOverrides:
    server:
      - listener.name.internal.ssl.principal.mapping.rules= --- [6]
      - listener.name.replication.ssl.principal.mapping.rules= --- [7]
The following should be set:
[1] Set enabled: true to enable TLS encryption.
[2] Set interbrokerTLS: true to enable TLS encryption for inter-broker
communication and communication between Kafka and Replicator. [1] must be set
to true for this setting to take effect.
If [1] and [2] are set to true, the internal address must be included in the
Kafka certificate.
For interbrokerTLS: true, add the inter-broker listener principal mapping rules
to the Kafka server configuration using configOverrides in [7].
[3] Set internalTLS: true to enable TLS encryption for inter-component
communication. [1] must be set to true for this setting to take effect.
For internalTLS: true, add the internal listener principal mapping rules to the
Kafka server configuration using configOverrides in [6].
[4] Provide a full chain certificate. A fullchain consists of the broker
certificate first, followed by any intermediate CAs, and finally the root CA.
The external, internal, and inter-broker listeners use the same certificate.
The certificate you provide must have DNS SANs for the external bootstrap and
broker addresses, the internal Kafka bootstrap address, and the specific
internal addresses for each broker. For information on configuring SANs in a
certificate, see SAN attributes for certificates.
[5] Provide the private key of the certificate. The privkey contains the
private key associated with the broker certificate.
[6] Add the following internal listener principal mapping rules when
internalTLS: true:
kafka:
  configOverrides:
    server:
      - "listener.name.internal.ssl.principal.mapping.rules=RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L, DEFAULT"
[7] Add the following inter-broker listener principal mapping rules when
interbrokerTLS: true:
kafka:
  configOverrides:
    server:
      - "listener.name.replication.ssl.principal.mapping.rules=RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L, DEFAULT"
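To see what these rules do: RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L captures the leading CN value from the certificate's distinguished name and lowercases it (the /L flag) to form the principal. A rough shell equivalent, for illustration only (Kafka applies these rules internally; the DN below is a made-up example):

```shell
# Extract the CN value from a DN and lowercase it, mimicking
# RULE:^CN=([a-zA-Z0-9.]*).*$/$1/L, DEFAULT
echo "CN=TestCA,OU=Cloud,O=TestOrg" \
  | sed -E 's/^CN=([a-zA-Z0-9.]*).*$/\1/' \
  | tr '[:upper:]' '[:lower:]'
# prints: testca
```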
Component configuration for TLS encryption
This section describes how to set up the rest of the Confluent Platform components so that
they (a) successfully communicate with Kafka configured with TLS encryption, and
(b) serve TLS-encrypted traffic themselves.
These are general guidelines and do not provide all security dependencies.
Review the component values.yaml files in the
<operator-home>/helm/confluent-operator/charts folder for additional
information about component dependencies.
Configure the components as below in the configuration file ($VALUES_FILE):
<component>:
  tls:
    enabled: true ----- [1]
    internalTLS: true ----- [2]
    cacerts: |- ----- [3]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    fullchain: |- ----- [4]
      -----BEGIN CERTIFICATE-----
      ... omitted
      -----END CERTIFICATE-----
    privkey: |- ----- [5]
      -----BEGIN RSA PRIVATE KEY-----
      ... omitted
      -----END RSA PRIVATE KEY-----
[1] External communication encryption mode. To serve TLS-encrypted traffic on
its external listener, set enabled: true.
[2] Internal communication encryption mode. To serve TLS-encrypted traffic on
its internal listener, set internalTLS: true. [1] must be set to true for this
setting to take effect.
[3] One or more concatenated Certificate Authorities (CAs) for the component to trust the certificates presented by the Kafka brokers.
[4] Provide the full certificate chain that the component will use to serve
TLS-encrypted traffic. The fullchain consists of the component certificate
first, followed by any intermediate CAs, and finally the root CA.
Some components, such as Replicator, Connect, and Schema Registry, can be
configured with TLS for internal traffic. This means traffic between replicas,
e.g. Replicator to Replicator in the same cluster, can be encrypted with the
same certificates as those used for their external listener.
[5] Provide the private key associated with the component certificate.
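If your CA issued the pieces separately, the fullchain value can be assembled by concatenating the PEM files. Order matters: a common convention (used, for example, by Let's Encrypt's fullchain.pem) is the server certificate first, then intermediates, then the root CA last. A sketch with illustrative file names:

```shell
# Server certificate first, then intermediates, then the root CA last
cat server.crt intermediate.pem rootCA.pem > fullchain.pem
```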
JMX and Jolokia configuration for TLS encryption
Confluent Operator supports TLS encryption for the JMX and Jolokia endpoints of every
Confluent Platform component.
TLS is set up at port 7203 for JMX and at port 7777 for Jolokia.
To enable TLS encryption for JMX and Jolokia endpoints, set the following for
each desired component in the configuration file ($VALUES_FILE):
<component>:
  tls:
    jmxTLS: true ----- [1]
    fullchain: ----- [2]
    privkey: ----- [3]
[1] Set jmxTLS: true to enable TLS encryption for JMX and Jolokia endpoints.
Setting jmxTLS: true is only supported in greenfield installations and is not
supported in upgrades.
With jmxTLS: true, include the following in the SAN attribute of the Kafka
certificate:
*.<kafka-component-name>.<kafka-namespace>
If a wildcard (*) is not allowed in your SAN, include the following:
<kafka-component-name>.<kafka-namespace>
<kafka-component-name>-0.<kafka-component-name>.<kafka-namespace>
<kafka-component-name>-1.<kafka-component-name>.<kafka-namespace>
...
<kafka-component-name>-<N-1>.<kafka-component-name>.<kafka-namespace>
[2] Provide the full TLS certificate chain.
[3] Provide the private key for the certificate.
The TLS certificates used to secure the JMX and Jolokia endpoints are the same
as those used in general by the component. There is no separate fullchain and
privkey setting just for the JMX and Jolokia endpoints.