Plan for Confluent Operator Installation

This topic contains the prerequisites and recommendations you need to consider when you plan to install and deploy Confluent Platform using Confluent Operator.

This topic uses Google Kubernetes Engine (GKE) as the example provider environment. Use it as a guide for deploying Operator and Confluent Platform in other supported provider environments.

The examples in this guide use the following assumptions:

  • $VALUES_FILE refers to the configuration file you set up in Create the global configuration file.

  • To present simple and clear examples in the Operator documentation, all the configuration parameters are specified in the config file ($VALUES_FILE). However, in your production deployments, use the --set or --set-file option when applying sensitive data with Helm. For example:

    helm upgrade --install kafka \
     --set kafka.services.mds.ldap.authentication.simple.principal="cn=mds\,dc=test\,dc=com" \
     --set kafka.services.mds.ldap.authentication.simple.credentials="Developer!" \
     --set kafka.enabled=true
    
  • operator is the namespace that Confluent Platform is deployed in.

  • All commands are executed in the helm directory under the directory Confluent Operator was downloaded to.
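
For reference, the $VALUES_FILE variable mentioned above can be set as shown in the following sketch. The path is hypothetical; substitute the configuration file you created in Create the global configuration file:

    # Hypothetical path; substitute the configuration file you created in
    # "Create the global configuration file".
    export VALUES_FILE="./providers/gcp.yaml"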

General prerequisites

Review and address the following prerequisites before you start the installation process:

  • A Kubernetes cluster conforming to one of the supported environments.
  • Cluster size based on the Sizing Recommendations.
  • kubectl is installed and initialized, with the context set to your cluster. You must also have the kubeconfig file configured for your cluster (see the verification commands following this list).
  • Helm 3 is installed.
  • Access to the Confluent Operator bundle.
  • Storage: A StorageClass-based storage provisioner is supported and is used by default. You can also use other user-provided storage classes. SSD or SSD-like disks are required for persistent storage.
  • Security: TLS certificates are required for each component (if using TLS). The example steps use the default SASL/PLAIN security. See Configure Security with Confluent Operator for information about how to configure additional security.
  • Networking
    • DNS: DNS support on your platform environment is required for external access to Confluent Platform components after deployment. After deployment, you create DNS entries to enable external access to individual Confluent Platform components. If your organization does not allow external access for development testing, see No-DNS development access.
    • For external access using load balancer:
      • Layer 4 load balancing with passthrough support (terminating at the application) is required for Kafka brokers.
      • Layer 7 load balancing can be used for Operator and all other Confluent Platform components.
    • For static external access with host-based or port-based routing: an Ingress controller that supports TLS passthrough and can route TCP traffic is required.
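
To confirm the kubectl and Helm prerequisites above, you can run the following standard commands before starting the installation:

    # Verify that kubectl is pointed at the intended cluster context
    kubectl config current-context
    kubectl get nodes

    # Verify that Helm 3 is installed
    helm version --short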

Sizing recommendations

Review the following sizing guidelines and recommendations before creating your Kubernetes cluster.

Kubernetes worker node number and compute resource recommendations

The number of nodes required in your cluster depends on whether you are deploying a development testing cluster or a production-ready cluster.

  • Test Cluster: Each node should typically have a minimum of 2 to 4 CPUs and 7 to 16 GB RAM. If you are testing a deployment of Operator and all Confluent Platform components, you can create a 10-node cluster with six nodes for Apache ZooKeeper™ and Apache Kafka® pods (three replicas each) and four nodes for the pods of all other components.

    Confluent recommends running ZooKeeper and Kafka as individual pods on individual Kubernetes nodes. You can bin pack the other components. Bin packing places component pods on the cluster nodes that have the least remaining CPU and memory capacity, which maximizes node utilization and can reduce the number of nodes required.

    Bin packing is disabled by default at the namespace level. You can enable bin packing by setting oneReplicaPerNode: false in the component section of the configuration file ($VALUES_FILE), as shown in the example following this list.

    The oneReplicaPerNode parameter replaces the deprecated namespace-level disableHostPort parameter.

    Bin packing components is not recommended for production deployments. Also note that when oneReplicaPerNode is set to false, the default port used is 28130.

  • Production Cluster: Review the default capacity values in the global provider file, helm/providers/<provider>.yaml. Determine how these values affect your production application and build out your nodes accordingly. You can also use the on-premises System Requirements to determine what is required for your public or private cloud production environment. Note that the on-premises storage information provided is not applicable for cloud environments.

    Note

    If you need help determining the number of nodes required for your application, contact Confluent Support.
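
As an illustration of the bin-packing setting described under Test Cluster above, the following is a hypothetical snippet from the configuration file ($VALUES_FILE); the connect component is used here only as an example:

    # Example only: allow more than one Connect replica per Kubernetes node
    # (enables bin packing for this component; not recommended for production).
    connect:
      oneReplicaPerNode: false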

Workflow to deploy Operator and Confluent Platform

At a high level, the workflow to configure and deploy Operator and Confluent Platform is as follows:

  1. Review the General prerequisites and Sizing recommendations and prepare your environment.
  2. Download the Operator bundle.
  3. Prepare your Kubernetes cluster as a cluster admin.
  4. Configure the deployment, including storage, external access, and security, as needed.
  5. Install Operator and the Confluent Platform components (see the sketch following this list).
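
As a sketch of what the install step typically looks like, the following Helm commands are run from the helm directory of the downloaded bundle, following the --set <component>.enabled=true pattern shown earlier in this topic. The release names and chart path are illustrative; the full sequence and options are covered in the installation steps that follow this topic:

    # Illustrative only: install Operator first, then each Confluent Platform
    # component as its own Helm release from the bundled chart.
    helm upgrade --install operator ./confluent-operator \
      --values $VALUES_FILE --namespace operator --set operator.enabled=true

    helm upgrade --install zookeeper ./confluent-operator \
      --values $VALUES_FILE --namespace operator --set zookeeper.enabled=true

    helm upgrade --install kafka ./confluent-operator \
      --values $VALUES_FILE --namespace operator --set kafka.enabled=true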