Admin REST APIs Authentication
You can use HTTP Basic Authentication or mutual TLS (mTLS) authentication for communication between a
client and the Admin REST APIs. You can use SASL or mTLS for communication
between the Admin REST APIs and the brokers the APIs are running on.
Important
Without principal propagation, authentication terminates at the REST Proxy. This means
that all requests to Kafka are made as the REST Proxy user. For more information, see
Principal Propagation.
- License Client Authentication
If you are using principal propagation, you must configure license client authentication for SASL
OAUTHBEARER (RBAC), SASL PLAIN, SASL SCRAM, and mTLS.
- License Client Authorization
If you are using principal propagation, you must configure authorization for RBAC and ACLs.
RBAC authorization
Run this command to add the ResourceOwner role for the component user on the Confluent license
topic resource (the default name is _confluent-license):
confluent iam rolebinding create \
--role ResourceOwner \
--principal User:<service-account-id> \
--resource Topic:_confluent-license \
--kafka-cluster-id <kafka-cluster-id>
ACL authorization
Run this command to configure Kafka authorization, specifying the bootstrap server, client
configuration file, and service account ID. This grants Create, Read, and Write on the
_confluent-license topic:
kafka-acls --bootstrap-server <broker-listener> --command-config <client conf> \
--add --allow-principal User:<service-account-id> --operation Create --operation Read --operation Write \
--topic _confluent-license
HTTP Basic Authentication
With HTTP Basic Authentication, you can authenticate with the Admin REST APIs using a username and
password pair. The credentials are presented to the REST Proxy server in the Authorization
HTTP header.
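As an illustration, the header value is the Base64 encoding of username:password. The following sketch uses the example credentials that appear later on this page (they are placeholders, not real accounts):

```shell
# Build the HTTP Basic Authorization header value by Base64-encoding "username:password".
# The credentials below are the illustrative ones used later on this page.
cred=$(printf '%s' 'thisismyusername:thisismypass' | base64)
echo "Authorization: Basic ${cred}"
```

A client such as curl performs this encoding for you when invoked with -u thisismyusername:thisismypass.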
To enable HTTP Basic Authentication:
Add the following configuration to your Apache Kafka® properties file (etc/kafka/server.properties):
kafka.rest.authentication.method=BASIC
kafka.rest.authentication.realm=KafkaRest
kafka.rest.authentication.roles=thisismyrole
Create a JAAS configuration file, for example
<path-to-confluent>/etc/kafka/server-jaas.properties:
KafkaRest {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
debug="true"
file="<path-to-confluent>/etc/kafka/password.properties";
};
Tip
The KafkaRest section name must match the realm specified by kafka.rest.authentication.realm
in your Kafka properties file.
Create a password properties file (<path-to-confluent>/etc/kafka/password.properties).
For example:
thisismyusername: thisismypass,thisismyrole
Start Confluent Server with HTTP Basic auth:
KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-confluent>/etc/kafka/server-jaas.properties" \
bin/kafka-server-start etc/kafka/server.properties
Log in to your Confluent Server with the username thisismyusername and the password
thisismypass. The password in your password.properties file can also be hashed. For more
information, see the Jetty documentation on secure passwords.
Configuration Options
kafka.rest.authentication.method
Indicates the method the Admin REST APIs use to authenticate requests, either NONE or BASIC.
To activate HTTP Basic Authentication, you must set it to BASIC.
- Type: string
- Default: “NONE”
- Importance: high
kafka.rest.authentication.realm
If kafka.rest.authentication.method = BASIC, this configuration specifies which section of the
system JAAS config file is used to authenticate HTTP Basic Authentication credentials.
- Type: string
- Default: “”
- Importance: high
kafka.rest.authentication.roles
If kafka.rest.authentication.method = BASIC, this configuration specifies which user roles are
allowed to authenticate with the Admin REST APIs through HTTP Basic Authentication. If set to
*, any role is allowed to authenticate.
- Type: string
- Default: “*”
- Importance: medium
Mutual TLS authentication
With mutual TLS (mTLS) authentication, you can
authenticate with an HTTPS-enabled Admin REST APIs endpoint using a client-side X.509 certificate.
To enable mTLS, you must first enable HTTPS on the Admin REST APIs. For
the configuration options you must set, see
Confluent REST API Configuration Options for HTTPS.
After HTTPS is configured, you must configure the Admin REST APIs truststore to verify
incoming client X.509 certificates. For example, you can point the Admin REST APIs
truststore at a keystore loaded with the root CA certificate that was used to sign the client
certificates.
Finally, turn mTLS on by setting confluent.http.server.ssl.client.auth to true.
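Putting the pieces together, the relevant settings might look like the following sketch (the path and password are placeholders):

```
confluent.http.server.ssl.client.auth=true
confluent.http.server.ssl.truststore.location=/var/private/ssl/rest-truststore.jks
confluent.http.server.ssl.truststore.password=changeit
```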
Configuration Options
confluent.http.server.ssl.client.auth
Used for HTTPS. Whether or not to require the HTTPS client to authenticate via the server’s
trust store. Must be set to true to enable mTLS.
- Type: boolean
- Default: false
- Importance: medium
confluent.http.server.ssl.truststore.location
Location of the trust store.
- Type: string
- Default: “”
- Importance: high
confluent.http.server.ssl.truststore.password
The password for the trust store file.
- Type: password
- Default: “”
- Importance: high
confluent.http.server.ssl.truststore.type
The type of trust store file.
- Type: string
- Default: JKS
- Importance: medium
Authentication between the Admin REST APIs and Kafka Brokers
The Admin REST APIs running in Confluent Server communicate with the Kafka broker internally using normal Kafka Java
clients (by default, using the inter-broker listener on the same broker). This means that if the listener
the client communicates on is secured, you must configure the security parameters
that allow the Admin REST APIs Java clients to communicate with Kafka through that listener.
SASL Authentication
Kafka SASL configurations are described here.
Note that all of the SASL configurations (for the Admin REST APIs to broker communication) are prefixed
with client. or, alternatively, admin.
To enable SASL authentication with the Kafka broker, set kafka.rest.client.security.protocol to
either SASL_PLAINTEXT or SASL_SSL.
Then set kafka.rest.client.sasl.jaas.config with the credentials the Admin REST APIs use
to authenticate with Kafka. For example:
kafka.rest.client.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafkarest" password="kafkarest";
Alternatively, you can create a JAAS config file, for example
<path-to-confluent>/etc/kafka/server-jaas.properties:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="kafkarest"
password="kafkarest";
};
The name of the section in the JAAS file must be KafkaClient. Then pass it as a JVM argument:
export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-confluent>/etc/kafka/server-jaas.properties"
For details about configuring Kerberos see
JDK’s Kerberos Requirements.
Configuration Options
kafka.rest.client.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL,
SASL_PLAINTEXT, SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: high
kafka.rest.client.sasl.jaas.config
JAAS login context parameters for SASL connections in the format used by JAAS
configuration files. JAAS configuration file format is described in Oracle’s
documentation.
The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*;
- Type: string
- Default: null
- Importance: high
kafka.rest.client.sasl.kerberos.service.name
The Kerberos principal name that Kafka runs as. This can be defined either in
Kafka’s JAAS config or in Kafka’s configuration.
- Type: string
- Default: null
- Importance: medium
kafka.rest.client.sasl.mechanism
SASL mechanism used for client connections. This may be any mechanism for which
a security provider is available. GSSAPI is the default mechanism.
- Type: string
- Default: GSSAPI
- Importance: medium
kafka.rest.client.sasl.kerberos.kinit.cmd
Kerberos kinit command path.
- Type: string
- Default: /usr/bin/kinit
- Importance: low
kafka.rest.client.sasl.kerberos.min.time.before.relogin
Login thread sleep time between refresh attempts.
- Type: long
- Default: 60000
- Importance: low
kafka.rest.client.sasl.kerberos.ticket.renew.jitter
Percentage of random jitter added to the renewal time.
- Type: double
- Default: 0.05
- Importance: low
kafka.rest.client.sasl.kerberos.ticket.renew.window.factor
Login thread will sleep until the specified window factor of time from last
refresh to ticket’s expiry has been reached, at which time it will try to renew
the ticket.
- Type: double
- Default: 0.8
- Importance: low
Mutual TLS authentication
Kafka SSL configurations are described here.
Admin REST APIs to Kafka SSL configurations are described
here.
To enable mTLS with the Kafka broker, you must set
kafka.rest.client.security.protocol to SSL or SASL_SSL.
If the Kafka broker is configured with ssl.client.auth=required, and you configure client
certificates for the Admin REST APIs with the kafka.rest.client.ssl.keystore.* options, the
Admin REST APIs authenticate to the Kafka broker using SSL.
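As an illustrative sketch, the client-side SSL settings might look like this (keystore and truststore paths and passwords are placeholders, not defaults):

```
kafka.rest.client.security.protocol=SSL
kafka.rest.client.ssl.truststore.location=/var/private/ssl/client-truststore.jks
kafka.rest.client.ssl.truststore.password=changeit
kafka.rest.client.ssl.keystore.location=/var/private/ssl/client-keystore.jks
kafka.rest.client.ssl.keystore.password=changeit
kafka.rest.client.ssl.key.password=changeit
```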
Principal Propagation
This is a commercial component of Confluent Platform.
Principal propagation takes the principal from the authentication mechanism configured between the client and the
Admin REST APIs and propagates that same principal when making requests to the Kafka broker.
Important
Without principal propagation, authentication terminates at the REST Proxy. This means
that all requests to Kafka are made as the REST Proxy user.
HTTP Basic Authentication to SASL Authentication
To enable propagation of HTTP Basic Authentication credentials to SASL Authentication, you must set
kafka.rest.authentication.method to BASIC,
kafka.rest.confluent.rest.auth.propagate.method to JETTY_AUTH, and
kafka.rest.client.security.protocol to either SASL_PLAINTEXT or SASL_SSL.
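For example, the three settings together in the properties file (a sketch; the protocol choice depends on how your broker listener is secured):

```
kafka.rest.authentication.method=BASIC
kafka.rest.confluent.rest.auth.propagate.method=JETTY_AUTH
kafka.rest.client.security.protocol=SASL_SSL
```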
The security plugin supports all sasl.mechanism values supported by Kafka clients. Just like a regular
Kafka client, the plugin expects a JAAS config file to be configured through
-Djava.security.auth.login.config, and all principals must be explicitly specified in that file
under the KafkaClient section. The plugin supports specifying principals using the following
mechanisms: GSSAPI, PLAIN, SCRAM-SHA-256, and SCRAM-SHA-512. The plugin ignores any configured
sasl.mechanism and picks the mechanism automatically based on the LoginModule specified for the
principal. For example:
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/restproxy-localhost.keytab"
principal="CN=restproxy/localhost@EXAMPLE.COM";
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client_2.keytab"
principal="kafka-client-2@EXAMPLE.COM";
org.apache.kafka.common.security.plain.PlainLoginModule required
username="alice-plain"
password="alice-secret";
org.apache.kafka.common.security.scram.ScramLoginModule required
username="alice-scram"
password="alice-secret";
org.apache.kafka.common.security.scram.ScramLoginModule required
username="alice-scram-256"
password="alice-secret"
mechanism="SCRAM-SHA-256";
};
Here is the mapping of sasl.mechanism for the configured login modules:

Principal’s Login Module | SASL Mechanism
com.sun.security.auth.module.Krb5LoginModule | GSSAPI
org.apache.kafka.common.security.plain.PlainLoginModule | PLAIN
org.apache.kafka.common.security.scram.ScramLoginModule | SCRAM-SHA-512 (default); for SCRAM-SHA-256, set mechanism=SCRAM-SHA-256 as an option in ScramLoginModule

All mechanisms except SCRAM-SHA-256 are detected automatically by the plugin; SCRAM-SHA-256 must be
specified explicitly as an option in the ScramLoginModule.
Configuration Options
kafka.rest.confluent.rest.auth.propagate.method
The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled,
the principal from this authentication mechanism is propagated to Kafka broker requests. Either
JETTY_AUTH or SSL.
- Type: string
- Default: “SSL”
- Importance: low
mTLS to SASL Authentication
To enable mTLS to SASL Authentication propagation, you must set
confluent.http.server.ssl.client.auth to true,
kafka.rest.confluent.rest.auth.propagate.method to SSL, and
kafka.rest.client.security.protocol to either SASL_PLAINTEXT or SASL_SSL.
The incoming X.500 principal from the client is used as the principal when interacting with the Kafka
broker. You can use kafka.rest.confluent.rest.auth.ssl.principal.mapping.rules to map the DN
from the client certificate to a name that can be used for principal propagation. For example, a
rule like RULE:^CN=(.*?)$/$1/ strips off the CN= portion of the DN.
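To see what such a rule does, the substitution can be approximated with sed. Note that sed uses greedy matching rather than the lazy .*? in the rule; for a simple single-attribute DN like the one below, the result is the same. The DN is a made-up example:

```shell
# Approximate RULE:^CN=(.*?)$/$1/ - strip the leading "CN=" from a DN (illustration only)
dn='CN=kafka-client-2'
principal=$(printf '%s' "$dn" | sed -E 's/^CN=(.*)$/\1/')
echo "$principal"
```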
A JAAS config file with a KafkaClient section containing all principals, along with their
login modules and options, is required; configure it via -Djava.security.auth.login.config.
Configuration Options
kafka.rest.confluent.rest.auth.propagate.method
The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled,
the principal from this authentication mechanism is propagated to Kafka broker requests.
- Type: string
- Default: “SSL”
- Importance: low
kafka.rest.confluent.rest.auth.ssl.principal.mapping.rules
A list of rules for mapping the distinguished name (DN) from the client certificate to a short name.
The rules are evaluated in order, and the first rule that matches a principal name is used to map it
to a short name; any later rules in the list are ignored. By default, the DN of the X.500 certificate
is the principal. Each rule starts with “RULE:” and contains an expression. If the DN matches the
pattern, the replacement command is run over the name. Lowercase/uppercase options are also
supported, to force the translated result to be all lowercase or all uppercase; this is done by
adding a “/L” or “/U” to the end of the rule.
- Type: list
- Default: DEFAULT
- Importance: low
SSL Authentication to SSL Authentication
To enable mTLS to mTLS propagation, you must set
confluent.http.server.ssl.client.auth to true, and
kafka.rest.confluent.rest.auth.propagate.method to SSL.
For SSL propagation to work, all of the certificates corresponding to the required principals must
be loaded into a single client keystore file. Once this is done, the plugin picks the appropriate
certificate alias based on the logged-on principal when making requests to Kafka. Currently, the
logged-on principal must exactly match the X.509 principal of the certificate.
For example, if there were two clients integrated with the Admin REST APIs, the setup could be as
simple as the following:
- Client A authenticates to the Admin REST APIs using its keystore, which contains Certificate-A
- Client B authenticates to the Admin REST APIs using its keystore, which contains Certificate-B
- The Admin REST APIs keystore (kafka.rest.client.ssl.keystore.location) is loaded with
Certificate-A and Certificate-B. The plugin then chooses the certificate based on which client
is making the request.
Configuration Options
kafka.rest.confluent.rest.auth.propagate.method
The mechanism used to authenticate the Admin REST APIs requests. When broker security is enabled,
the principal from this authentication mechanism is propagated to Kafka broker requests.
- Type: string
- Default: “SSL”
- Importance: low
Role-Based Access Control (RBAC)
This is a commercial component of Confluent Platform.
Prerequisites:
To enable token authentication (in the kafka.properties file), set
kafka.rest.rest.servlet.initializor.classes to
io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler and
kafka.rest.kafka.rest.resource.extension.class to
io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension:
kafka.rest.rest.servlet.initializor.classes=io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler
kafka.rest.kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension
When token authentication is enabled, the generated token is used to impersonate the API requests.
The Admin REST APIs Kafka clients use the SASL_PLAINTEXT or SASL_SSL authentication mechanism to
authenticate with Kafka brokers.
Configuration Options
kafka.rest.rest.servlet.initializor.classes
List of custom initialization classes for the Admin REST APIs. To use RBAC, set it to
io.confluent.common.security.jetty.initializer.InstallBearerOrBasicSecurityHandler.
- Type: string
- Default: “”
- Importance: high
kafka.rest.kafka.rest.resource.extension.class
List of custom extension classes for the Admin REST APIs. To use RBAC, set it to
io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension.
- Type: string
- Default: “”
- Importance: high
kafka.rest.client.security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL,
SASL_PLAINTEXT, SASL_SSL. To use RBAC, set it to either SASL_PLAINTEXT or SASL_SSL.
- Type: string
- Default: PLAINTEXT
- Importance: high
kafka.rest.public.key.path
Location of the PEM encoded public key file to be used for verifying tokens.
- Type: string
- Default: “”
- Importance: high
kafka.rest.confluent.metadata.bootstrap.server.urls
Comma-separated list of bootstrap metadata server URLs to which this REST Proxy
connects. For example: http://localhost:8080,http://localhost:8081
- Type: string
- Default: “”
- Importance: high
kafka.rest.confluent.metadata.basic.auth.user.info
Service user credentials information, in the format user:password.
- Type: string
- Default: “”
- Importance: high