CONFLUENT OPERATOR
This guide describes the tasks to prepare your Kubernetes cluster for Confluent Platform deployment with Confluent Operator. The user performing these tasks will need certain Kubernetes cluster-level permissions.
The examples in this guide use the following assumptions:
$VALUES_FILE refers to the configuration file you set up in Create the global configuration file.
To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:
helm upgrade --install kafka \
  --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds\,dc=test\,dc=com" \
  --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
  --set kafka.enabled=true
operator is the namespace that Confluent Platform is deployed in.
All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
Create a Kubernetes namespace to deploy Confluent Platform into:
kubectl create namespace <namespace-name>
To allow a user who does not have cluster-level access to deploy Operator and Confluent Platform in a namespace, perform the following tasks as a Kubernetes cluster admin before deploying Operator and Confluent Platform. The snippets in this section use the operator namespace.
Pre-install the Confluent Operator CRDs with the following command:
kubectl apply -f <Operator home directory>/resources/crds -n operator
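To confirm the CRDs registered successfully, you can list them; the name filter below assumes the Confluent CRD names contain "confluent", which may vary by release:

```shell
# List installed CRDs and filter for the Confluent ones (filter pattern is an assumption)
kubectl get crds | grep confluent
```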
Create the rolebinding.yaml file with the permissions required for a namespaced deployment.
The content contains the minimum permissions required. Add any other resource permissions you might additionally require.
The role and role binding should be in the same namespace as Operator.
The subject in the role binding must be a user or service account that exists in the given namespace.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    component: operator
  name: cc-operator
  namespace: operator
rules:
- apiGroups:
  - cluster.confluent.com
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - operator.confluent.cloud
  resources:
  - '*'
  verbs:
  - '*'
- apiGroups:
  - policy
  resources:
  - poddisruptionbudgets
  verbs:
  - '*'
- apiGroups:
  - apps
  resources:
  - deployments
  - deployments/scale
  - deployments/status
  - replicasets
  - replicasets/scale
  - replicasets/status
  - statefulsets
  - statefulsets/scale
  - statefulsets/status
  verbs:
  - '*'
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - events
  - persistentvolumeclaims
  - pods
  - pods/exec
  - secrets
  - services
  - serviceaccounts
  verbs:
  - '*'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    component: operator
  name: operator-cc-operator
  namespace: operator
subjects:
- kind: ServiceAccount
  name: cc-operator
  namespace: operator
roleRef:
  kind: Role
  name: cc-operator
  apiGroup: rbac.authorization.k8s.io
Create the role and the role binding with the following command to grant the required permissions to the service account named cc-operator. The cc-operator service account will install Operator and Confluent Platform.
kubectl apply -f rolebinding.yaml -n operator
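After applying the role binding, you can verify the grants by impersonating the service account with kubectl auth can-i. The resource and verb below are a sample check, not an exhaustive audit:

```shell
# Should print "yes": the binding grants StatefulSet access in the operator namespace
kubectl auth can-i create statefulsets \
  --namespace operator \
  --as system:serviceaccount:operator:cc-operator

# Should print "no": the Role does not extend outside its namespace
kubectl auth can-i create statefulsets \
  --namespace default \
  --as system:serviceaccount:operator:cc-operator
```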
Delete the following files from the <Operator home directory>/helm/confluent-operator/charts/operator/templates directory:
clusterrole.yaml
clusterrolebinding.yaml
Starting in Confluent Operator 5.5, you can instruct Operator to use a specific StorageClass for all PersistentVolumes it creates.
This is optional in Confluent Operator. Use it if your Kubernetes cluster is not capable of dynamically provisioning storage and you need to statically provision disks up front.
Configure a StorageClass in Kubernetes for local provisioning. For example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-storage-class
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
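A sketch of applying and inspecting the class; the filename storage-class.yaml is a hypothetical name for a file containing the manifest above:

```shell
# Apply the StorageClass manifest and confirm it was created
kubectl apply -f storage-class.yaml
kubectl get storageclass my-storage-class
```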
For every Kafka broker instance you expect to create, decide which worker node should host it and which directory path on that node should store its data.
Create a PersistentVolume with the desired host path and a node affinity rule matching the hostname label of the desired worker node. For example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1                                # [1]
spec:
  capacity:
    storage: 1Gi                            # [2]
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain     # [3]
  storageClassName: my-storage-class        # [4]
  local:
    path: /mnt/data/broker-1-data           # [5]
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - ip-172-20-114-199.ec2.internal  # [6]
[1] Choose a name for the PersistentVolume.
[2] Choose a storage size that is greater than or equal to the storage you’re requesting for each Kafka broker instance.
[3] Choose Retain if you want the data to be retained after you delete the PersistentVolumeClaim that the Operator will eventually create and which Kubernetes will eventually bind to this PersistentVolume.
Choose Delete if you want this data to be garbage-collected when the PersistentVolumeClaim is deleted.
[4] The storageClassName must match the global.storage.storageClassName you put in your configuration file ($VALUES_FILE).
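Assuming the StorageClass named my-storage-class from the example above, the matching fragment of your configuration file ($VALUES_FILE) would look like:

```yaml
global:
  storage:
    storageClassName: my-storage-class
```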
[5] This is the directory path you want to use on the worker node for the broker as its persistent data volume.
[6] This is the value of the kubernetes.io/hostname label of the worker node you want to host this broker instance. To find this hostname:
kubectl get nodes -o 'custom-columns=NAME:metadata.name,HOSTNAME:metadata.labels.kubernetes\.io/hostname'
NAME     HOSTNAME
node-1   ip-172-20-114-199.ec2.internal

Note that the dots in the label key must be escaped with backslashes; otherwise kubectl interprets them as nested field separators.
Repeat steps 2 and 3 for every broker you intend to create. The number of brokers is set by the kafka.replicas field in your configuration file ($VALUES_FILE).
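Once the brokers are deployed, you can confirm that each PersistentVolumeClaim bound to one of the pre-created volumes:

```shell
# Each PV should show STATUS "Bound", with CLAIM pointing at a broker PVC
kubectl get pv
kubectl get pvc -n operator
```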