
Self assessment guide - Control Plane Security Configuration

This section covers security recommendations for the direct configuration of Kubernetes control plane processes, including:

  • API Server configuration and security settings
  • Controller Manager security parameters
  • Scheduler security configurations
  • General control plane security practices

Assessment focus for vCluster: Since vCluster virtualizes the control plane components, verification involves checking the extraArgs configurations in your values.yaml file and ensuring proper security parameters are set for the virtualized API server, controller manager, and scheduler.
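Many of the automated controls below are verified by capturing the kube-apiserver command line with `ps` and inspecting individual flags by eye. A small POSIX-shell helper can make that check mechanical. The sketch below operates on an already-captured command line; `flag_value` is a hypothetical helper name, not part of vCluster or kubectl:

```shell
# Sketch: extract the value of a --flag=value argument from a captured
# command line (e.g. the output of `ps -ef | grep kube-apiserver`).
# `flag_value` is a hypothetical helper, not part of vCluster or kubectl.
flag_value() {
  # $1: flag name (e.g. --anonymous-auth); $2: full command line
  printf '%s' "$2" | tr ' ' '\n' | sed -n "s/^$1=//p"
}

# Trimmed example command line, not full kube-apiserver output:
cmdline='--secure-port=6443 --anonymous-auth=false --authorization-mode=Node,RBAC'
flag_value --anonymous-auth "$cmdline"   # prints: false
```

In practice you would capture the command line from the audit command of the relevant control and query each flag that control cares about; an empty result means the flag is not set.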

note

For auditing each control, create the vCluster using default values as shown below, unless specified otherwise.

vcluster create my-vcluster --connect=false

1.1 Master Node Configuration Files​

1.1.1 Ensure that the API server pod specification file permissions are set to 644 or more restrictive (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file permissions check defined in this control is not applicable in the context of vCluster.

1.1.2 Ensure that the API server pod specification file ownership is set to root:root (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-apiserver.yaml) as in kubeadm-based clusters. Instead, the vCluster API server runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file ownership check defined in this control is not applicable in the context of vCluster.

1.1.3 Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file permissions check defined in this control is not applicable in the context of vCluster.

1.1.4 Ensure that the controller manager pod specification file ownership is set to root:root (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-controller-manager.yaml) as in kubeadm-based clusters. Instead, the vCluster controller manager runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file ownership check defined in this control is not applicable in the context of vCluster.

1.1.5 Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file permissions check defined in this control is not applicable in the context of vCluster.

1.1.6 Ensure that the scheduler pod specification file ownership is set to root:root (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/kube-scheduler.yaml) as in kubeadm-based clusters. Instead, the vCluster scheduler runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file ownership check defined in this control is not applicable in the context of vCluster.

1.1.7 Ensure that the etcd pod specification file permissions are set to 644 or more restrictive (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file permissions check defined in this control is not applicable in the context of vCluster.

1.1.8 Ensure that the etcd pod specification file ownership is set to root:root (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not rely on static pod manifests stored on the host (e.g., /etc/kubernetes/manifests/etcd.yaml) as in kubeadm-based clusters. When embedded etcd is used, etcd runs as a separate binary inside the syncer container of the vCluster pod. This architecture does not use the static pod mechanism, so the file ownership check defined in this control is not applicable in the context of vCluster.

1.1.9 Ensure that the Container Network Interface file permissions are set to 644 or more restrictive (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments.

1.1.10 Ensure that the Container Network Interface file ownership is set to root:root (Automated)​

Result: NOT APPLICABLE

Remediation: vCluster does not configure or manage Container Network Interface (CNI) settings. Networking is handled entirely by the host (parent) cluster’s CNI plugin. As a result, there are no CNI configuration files (e.g., /etc/cni/net.d/*.conf) present within the vCluster container. This control should be evaluated on the underlying host cluster and is not applicable in vCluster environments.

1.1.11 Ensure that the etcd data directory permissions are set to 700 or more restrictive (Automated)​

Result: PASS

Remediation: Get the etcd data directory, passed as an argument to --data-dir, from the command

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep etcd

Run the audit command to verify the permissions on the data directory. If they do not match the expected result, run the command below to set the appropriate permissions.

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- chmod 700 /data/etcd

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/etcd

Verify that the etcd data directory permissions are set to 700 or more restrictive.

Expected Result:

permissions has value 700, expected 700 or more restrictive

Returned Value:

permissions=700
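"700 or more restrictive" means the observed mode may drop bits from the benchmark value but must never add any. That comparison can be expressed with shell arithmetic; `perm_at_most` below is a hypothetical helper name used for illustration:

```shell
# Sketch: check that an observed octal mode grants no permission bits
# beyond an allowed mask. `perm_at_most` is a hypothetical helper name.
perm_at_most() {
  # $1: observed mode (e.g. 700); $2: allowed mask (e.g. 700)
  # A leading 0 makes the shell treat the values as octal.
  [ $(( 0$1 & ~0$2 & 0777 )) -eq 0 ]
}

perm_at_most 700 700 && echo "compliant"      # exactly the mask
perm_at_most 755 700 || echo "non-compliant"  # adds group/other bits
```

The same comparison applies to the 600-or-more-restrictive checks on the kubeconfig and PKI files later in this section.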

1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated)​

Result: NOT APPLICABLE

Remediation: This control recommends that the etcd data directory be owned by the etcd user and group (etcd:etcd) to follow least privilege principles. However, in vCluster, etcd is embedded and runs as root within the syncer container. There is no separate etcd user present in the container. Thus the directory ownership check defined in this control is not applicable in the context of vCluster.

1.1.13 Ensure that the admin.conf file permissions are set to 600 or more restrictive (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/admin.conf

Verify that the admin.conf file permissions are 600 or more restrictive.

Expected Result:

permissions has value 600, expected 600 or more restrictive

Returned Value:

permissions=600

1.1.14 Ensure that the admin.conf file ownership is set to root:root (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/admin.conf

Verify that the admin.conf file ownership is set to root:root.

Expected Result:

root:root

Returned Value:

root:root

1.1.15 Ensure that the scheduler.conf file permissions are set to 600 or more restrictive (Automated)​

Result: PASS

Remediation: Get the scheduler kubeconfig file, passed as an argument to --kubeconfig, from the command

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/scheduler.conf

Verify that the scheduler kubeconfig file permissions are set to 600 or more restrictive.

Expected Result:

permissions has value 600, expected 600 or more restrictive

Returned Value:

permissions=600

1.1.16 Ensure that the scheduler.conf file ownership is set to root:root (Automated)​

Result: PASS

Remediation: Get the scheduler kubeconfig file, passed as an argument to --kubeconfig, from the command

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/scheduler.conf

Verify that the scheduler kubeconfig file ownership is set to root:root.

Expected Result:

root:root

Returned Value:

root:root

1.1.17 Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive (Automated)​

Result: PASS

Remediation: Get the controller-manager kubeconfig file, passed as an argument to --kubeconfig, from the command

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c permissions=%a /data/pki/controller-manager.conf

Verify that the controller-manager kubeconfig file permissions are set to 600 or more restrictive.

Expected Result:

permissions has value 600, expected 600 or more restrictive

Returned Value:

permissions=600

1.1.18 Ensure that the controller-manager.conf file ownership is set to root:root (Automated)​

Result: PASS

Remediation: Get the controller-manager kubeconfig file, passed as an argument to --kubeconfig, from the command

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- stat -c %U:%G /data/pki/controller-manager.conf

Verify that the controller-manager kubeconfig file ownership is set to root:root.

Expected Result:

root:root

Returned Value:

root:root

1.1.19 Ensure that the Kubernetes PKI directory and file ownership is set to root:root (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- find /data/pki -not -user root -o -not -group root | wc -l | grep -q '^0$' && echo "All files owned by root" || echo "Some files not owned by root"

Verify that the ownership of all files and directories in this hierarchy is set to root:root.

Expected Result:

All files owned by root

Returned Value:

All files owned by root

1.1.20 Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive (Automated)​

Result: PASS

Remediation: Run the audit command to verify the permissions on the certificate files. If they do not match the expected result, run the command below to set the appropriate permissions.

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec chmod 600 {} \;"                                                            

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.crt' -exec stat -c permissions=%a {} \;"

Verify that the permissions on all the certificate files are 600 or more restrictive.

Expected Result:

permissions on all the certificate files are 600 or more restrictive

Returned Value:

permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
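The audit prints one permissions=... line per certificate, and the check passes only if every line is 600 or more restrictive. Aggregating that over the whole output can be sketched as follows; `all_at_most` is a hypothetical helper name:

```shell
# Sketch: read "permissions=NNN" lines on stdin and fail if any mode
# grants bits beyond the allowed mask. `all_at_most` is a hypothetical name.
all_at_most() {
  # $1: allowed mask (e.g. 600)
  while IFS= read -r line; do
    mode=${line#permissions=}
    [ $(( 0$mode & ~0$1 & 0777 )) -eq 0 ] || return 1
  done
}

printf 'permissions=600\npermissions=400\n' | all_at_most 600 && echo "all compliant"
```

In practice you would pipe the output of the audit command above into the helper instead of the sample `printf`.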

1.1.21 Ensure that the Kubernetes PKI key file permissions are set to 600 (Automated)​

Result: PASS

Remediation: Run the audit command to verify the permissions on the key files. If they do not match the expected result, run the command below to set the appropriate permissions.

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec chmod 600 {} \;"                                                            

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- sh -c "find /data/pki -iname '*.key' -exec stat -c permissions=%a {} \;"

Verify that the permissions on all the key files are 600 or more restrictive.

Expected Result:

permissions on all the key files are 600 or more restrictive

Returned Value:

permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600
permissions=600

1.2 API Server​

1.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --anonymous-auth=false

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --anonymous-auth argument is set to false.

Expected Result:

'--anonymous-auth' is equal to 'false'

Returned Value:

41 root      0:07 /binaries/kube-apiserver --service-cluster-ip-range=10.96.0.0/16 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --profiling=false --advertise-address=127.0.0.1 --endpoint-reconciler-type=none --anonymous-auth=false

1.2.2 Ensure that the --token-auth-file parameter is not set (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --token-auth-file parameter is not set.

Expected Result:

'--token-auth-file' is not present

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100

1.2.3 Ensure that the --DenyServiceExternalIPs is set (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --enable-admission-plugins=DenyServiceExternalIPs

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the 'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.

Expected Result:

'DenyServiceExternalIPs' argument exists as a string value in the --enable-admission-plugins list.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100

1.2.4 Ensure that the --kubelet-client-certificate and --kubelet-client-key arguments are set as appropriate (Automated)​

Result: NOT APPLICABLE

Remediation: This control recommends setting up certificate-based kubelet authentication so that the API server authenticates itself to kubelets when submitting requests. However, since vCluster does not interact directly with the kubelets running on the host cluster, these client certificates are not required. Thus the check defined in this control is not applicable in the context of vCluster.

1.2.5 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated)​

Result: NOT APPLICABLE

Remediation: This control recommends ensuring that the kubelet certificate authority is set appropriately. However, since vCluster does not interact directly with the kubelets running on the host cluster, verifying kubelet serving certificates against a certificate authority is not required. Thus the check defined in this control is not applicable in the context of vCluster.

1.2.6 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --authorization-mode argument exists and is not set to AlwaysAllow.

Expected Result:

'AlwaysAllow' argument does not exist as a string value in the --authorization-mode list.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages,DenyServiceExternalIPs,NodeRestriction --request-timeout=300s --encryption-provider-config=/etc/encryption/encryption-config.yaml --audit-policy-file=/etc/kubernetes/audit-policy.yaml --audit-log-path=/var/log/audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
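Several of the remaining checks are list-membership tests on comma-separated flag values such as --authorization-mode and --enable-admission-plugins. A membership test over a captured command line can be sketched as follows; `has_in_list` is a hypothetical helper name:

```shell
# Sketch: test whether an item appears in a comma-separated --flag=a,b,c
# value within a captured command line. `has_in_list` is a hypothetical name.
has_in_list() {
  # $1: flag name; $2: item to look for; $3: full command line
  printf '%s' "$3" | tr ' ' '\n' | sed -n "s/^$1=//p" | tr ',' '\n' | grep -qx "$2"
}

# Trimmed example command line, not full kube-apiserver output:
cmdline='--authorization-mode=Node,RBAC --enable-admission-plugins=NodeRestriction'
has_in_list --authorization-mode RBAC "$cmdline" && echo "RBAC enabled"
has_in_list --authorization-mode AlwaysAllow "$cmdline" || echo "AlwaysAllow absent"
```

One caveat: if a flag is passed more than once, kube-apiserver honors the last occurrence, while this sketch matches against every occurrence, so refine it before relying on it for duplicated flags.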

1.2.7 Ensure that the --authorization-mode argument includes Node (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --authorization-mode=Node

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --authorization-mode argument exists and is set to a value that includes Node.

Expected Result:

'Node' argument exists as a string value in the --authorization-mode list.

Returned Value:

47 root      0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node

1.2.8 Ensure that the --authorization-mode argument includes RBAC (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --authorization-mode argument exists and is set to a value that includes RBAC.

Expected Result:

'RBAC' argument exists as a string value in the --authorization-mode list.

Returned Value:

47 root      0:10 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --authorization-mode=Node

1.2.9 Ensure that the admission control plugin EventRateLimit is set (Manual)​

Result: PASS

Remediation: Follow the Kubernetes documentation and set the desired limits in a configuration file. Create a config map in the vCluster namespace that contains the configuration file.

admission-control.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: admission-control
  namespace: vcluster-my-vcluster
data:
  admission-control.yaml: |
    apiVersion: apiserver.config.k8s.io/v1
    kind: AdmissionConfiguration
    plugins:
      - name: EventRateLimit
        configuration:
          apiVersion: eventratelimit.admission.k8s.io/v1alpha1
          kind: Configuration
          limits:
            - type: Server
              qps: 50
              burst: 100

Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --enable-admission-plugins=EventRateLimit
          - --admission-control-config-file=/etc/kubernetes/admission-control.yaml
  statefulSet:
    persistence:
      addVolumes:
        - name: admission-control
          configMap:
            name: admission-control
      addVolumeMounts:
        - name: admission-control
          mountPath: /etc/kubernetes

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --enable-admission-plugins argument is set to a value that includes EventRateLimit.

Expected Result:

'EventRateLimit' argument exists as a string value in the --enable-admission-plugins list.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml

1.2.10 Ensure that the admission control plugin AlwaysAdmit is not set (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that if the --enable-admission-plugins argument is set, its value does not include AlwaysAdmit.

Expected Result:

'AlwaysAdmit' argument does not exist as a string value in the --enable-admission-plugins list.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=EventRateLimit --admission-control-config-file=/etc/kubernetes/admission-control.yaml

1.2.11 Ensure that the admission control plugin AlwaysPullImages is set (Manual)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --enable-admission-plugins=AlwaysPullImages

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --enable-admission-plugins argument is set to a value that includes AlwaysPullImages.

Expected Result:

'AlwaysPullImages' argument exists as a string value in the --enable-admission-plugins list.

Returned Value:

45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=AlwaysPullImages

1.2.12 Ensure that the admission control plugin ServiceAccount is set (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --disable-admission-plugins argument is set to a value that does not include ServiceAccount.

Expected Result:

--disable-admission-plugins is not set.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.13 Ensure that the admission control plugin NamespaceLifecycle is set (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --disable-admission-plugins argument is set to a value that does not include NamespaceLifecycle.

Expected Result:

--disable-admission-plugins is not set.

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.14 Ensure that the admission control plugin NodeRestriction is set (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --enable-admission-plugins=NodeRestriction

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --enable-admission-plugins argument is set to a value that includes NodeRestriction.

Expected Result:

'NodeRestriction' exists as a string value in the --enable-admission-plugins list.

Returned Value:

44 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --enable-admission-plugins=NodeRestriction

1.2.15 Ensure that the --profiling argument is set to false (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --profiling argument is set to false.

Expected Result:

'--profiling' is equal to 'false'

Returned Value:

45 root      0:04 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false
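For single-valued flags such as --profiling, the audit reduces to extracting one value from the process line. A sketch, assuming a POSIX shell; `flag_value` is a hypothetical helper, not part of vCluster:

```shell
# Hypothetical helper: print the value of a --flag=value pair from a
# captured command line (empty output if the flag is absent).
flag_value() {
  printf '%s\n' "$1" | tr ' ' '\n' | sed -n "s/^$2=//p"
}

# Abbreviated sample of the ps output shown above.
line='/binaries/kube-apiserver --profiling=false --request-timeout=300s'

flag_value "$line" --profiling        # prints: false
flag_value "$line" --request-timeout  # prints: 300s
```

The same helper applies to later controls such as 1.2.20 and 1.2.21.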

1.2.16 Ensure that the --audit-log-path argument is set (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --audit-log-path argument is set as appropriate.

Expected Result:

'--audit-log-path' is present

Returned Value:

45 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log

1.2.17 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxage=30
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --audit-log-maxage argument is set to 30 or as appropriate.

Expected Result:

'--audit-log-maxage' is greater or equal to 30

Returned Value:

45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxage=30
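This control compares the flag numerically rather than checking mere presence. A sketch of that comparison, assuming a POSIX shell; the `line` sample is abbreviated from the output above:

```shell
# Extract --audit-log-maxage from a captured command line and verify it
# is present and greater than or equal to 30.
line='/binaries/kube-apiserver --audit-log-path=/var/log/audit.log --audit-log-maxage=30'

maxage="$(printf '%s\n' "$line" | tr ' ' '\n' | sed -n 's/^--audit-log-maxage=//p')"

if [ -n "$maxage" ] && [ "$maxage" -ge 30 ]; then
  echo "PASS (maxage=$maxage)"
else
  echo "FAIL"
fi
# prints: PASS (maxage=30)
```

The same shape works for 1.2.18 (maxbackup >= 10) and 1.2.19 (maxsize >= 100).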

1.2.18 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxbackup=10
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --audit-log-maxbackup argument is set to 10 or as appropriate.

Expected Result:

'--audit-log-maxbackup' is greater or equal to 10

Returned Value:

44 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxbackup=10

1.2.19 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --audit-log-path=/var/log/audit.log
          - --audit-log-maxsize=100
  statefulSet:
    persistence:
      addVolumes:
        - name: audit-log
          emptyDir: {}
      addVolumeMounts:
        - name: audit-log
          mountPath: /var/log

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --audit-log-maxsize argument is set to 100 or as appropriate.

Expected Result:

'--audit-log-maxsize' is greater or equal to 100

Returned Value:

43 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --audit-log-path=/var/log/audit.log --audit-log-maxsize=100

1.2.20 Ensure that the --request-timeout argument is set as appropriate (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --request-timeout=300s

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --request-timeout argument is either not set or set to an appropriate value.

Expected Result:

'--request-timeout' is set to 300s

Returned Value:

43 root      0:03 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --request-timeout=300s

1.2.21 Ensure that the --service-account-lookup argument is set to true (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --service-account-lookup=true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that if the --service-account-lookup argument exists it is set to true.

Expected Result:

'--service-account-lookup' is not present OR '--service-account-lookup' is equal to 'true'

Returned Value:

43 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --service-account-lookup=true

1.2.22 Ensure that the --service-account-key-file argument is set as appropriate (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --service-account-key-file argument exists and is set as appropriate.

Expected Result:

'--service-account-key-file' argument exists and is set appropriately

Returned Value:

45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.23 Ensure that the --etcd-certfile and --etcd-keyfile arguments are set as appropriate (Automated)​

Result: PASS

Remediation: Deploy the vCluster with embedded etcd as the backing store by passing the following configuration when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --etcd-certfile and --etcd-keyfile arguments exist and they are set as appropriate.

Expected Result:

'--etcd-certfile' is present AND '--etcd-keyfile' is present

Returned Value:

47 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.24 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --tls-cert-file and --tls-private-key-file arguments exist and they are set as appropriate.

Expected Result:

'--tls-cert-file' is present AND '--tls-private-key-file' is present

Returned Value:

45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.25 Ensure that the --client-ca-file argument is set as appropriate (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --client-ca-file argument exists and it is set as appropriate.

Expected Result:

'--client-ca-file' is present

Returned Value:

45 root      0:01 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.26 Ensure that the --etcd-cafile argument is set as appropriate (Automated)​

Result: PASS

Remediation: Deploy the vCluster with embedded etcd as the backing store by passing the following configuration when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
  backingStore:
    etcd:
      embedded:
        enabled: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --etcd-cafile argument exists and it is set as appropriate.

Expected Result:

'--etcd-cafile' is present

Returned Value:

47 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/data/pki/etcd/ca.crt --etcd-certfile=/data/pki/apiserver-etcd-client.crt --etcd-keyfile=/data/pki/apiserver-etcd-client.key --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false

1.2.27 Ensure that the --encryption-provider-config argument is set as appropriate (Manual)​

Result: PASS

Remediation: Follow the Kubernetes documentation and configure an EncryptionConfiguration file. Generate a 32-byte key using the command below:

head -c 32 /dev/urandom | base64

Create an encryption configuration file containing the base64-encoded key created previously.

encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}

Create a secret in the vCluster namespace from the configuration file.

kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster

Finally, create the vCluster referencing the secret:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
  statefulSet:
    persistence:
      addVolumes:
        - name: encryption-config
          secret:
            secretName: encryption-config
      addVolumeMounts:
        - name: encryption-config
          mountPath: /etc/encryption
          readOnly: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --encryption-provider-config argument is set to an EncryptionConfiguration file. Additionally, ensure that the EncryptionConfiguration file covers all the desired resources, especially Secrets.

Expected Result:

'--encryption-provider-config' is present

Returned Value:

45 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --encryption-provider-config=/etc/encryption/encryption-config.yaml

1.2.28 Ensure that encryption providers are appropriately configured (Automated)​

Result: PASS

Remediation: Follow the Kubernetes documentation and configure an EncryptionConfiguration file. Generate a 32-byte key using the command below:

head -c 32 /dev/urandom | base64

Create an encryption configuration file containing the base64-encoded key created previously.

encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}

Create a secret in the vCluster namespace from the configuration file.

kubectl create secret generic encryption-config --from-file=encryption-config.yaml -n vcluster-my-vcluster

Finally, create the vCluster referencing the secret:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --encryption-provider-config=/etc/encryption/encryption-config.yaml
  statefulSet:
    persistence:
      addVolumes:
        - name: encryption-config
          secret:
            secretName: encryption-config
      addVolumeMounts:
        - name: encryption-config
          mountPath: /etc/encryption
          readOnly: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- cat /etc/encryption/encryption-config.yaml

Verify that aescbc, kms, or secretbox is set as the encryption provider for all the desired resources.

Expected Result:

aescbc is set as the encryption provider for the configured resources

Returned Value:

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
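The aescbc provider only accepts keys that decode to 16, 24, or 32 bytes, so it is worth validating the generated key before wiring it into the secret. A quick sanity check, assuming a POSIX shell with `base64` available:

```shell
# Generate a 32-byte key as in the remediation, then confirm the
# base64-decoded length before using it in encryption-config.yaml.
key="$(head -c 32 /dev/urandom | base64 | tr -d '\n')"
decoded_len="$(printf '%s' "$key" | base64 -d | wc -c | tr -d ' ')"

echo "$decoded_len"   # prints: 32
```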

1.2.29 Ensure that the API Server only makes use of Strong Cryptographic Ciphers (Automated)​

Result: PASS

Remediation: Pass the following configuration to the API server when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      apiServer:
        extraArgs:
          - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver

Verify that the --tls-cipher-suites argument is set to a value containing only cipher suites from the list below.

TLS_AES_128_GCM_SHA256
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
TLS_RSA_WITH_3DES_EDE_CBC_SHA
TLS_RSA_WITH_AES_128_CBC_SHA
TLS_RSA_WITH_AES_128_GCM_SHA256
TLS_RSA_WITH_AES_256_CBC_SHA
TLS_RSA_WITH_AES_256_GCM_SHA384

Expected Result:

'--tls-cipher-suites' contains valid elements from 'TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384'

Returned Value:

43 root      0:02 /binaries/kube-apiserver --advertise-address=127.0.0.1 --service-cluster-ip-range=10.96.0.0/12 --bind-address=127.0.0.1 --allow-privileged=true --authorization-mode=RBAC --client-ca-file=/data/pki/client-ca.crt --enable-bootstrap-token-auth=true --etcd-servers=unix:///data/kine.sock --proxy-client-cert-file=/data/pki/front-proxy-client.crt --proxy-client-key-file=/data/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/pki/sa.pub --service-account-signing-key-file=/data/pki/sa.key --tls-cert-file=/data/pki/apiserver.crt --tls-private-key-file=/data/pki/apiserver.key --endpoint-reconciler-type=none --profiling=false --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
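The membership check above can be scripted. The sketch below is illustrative and not part of vCluster: `PS_LINE` is a placeholder for the captured `kubectl exec ... ps` output, and the script verifies that every configured suite appears in the approved list.

```shell
# Approved cipher suites from the list above (backslashes continue the string).
ALLOWED="TLS_AES_128_GCM_SHA256 TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 \
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \
TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 \
TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 \
TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA \
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA \
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 \
TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 TLS_RSA_WITH_3DES_EDE_CBC_SHA \
TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_128_GCM_SHA256 \
TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_256_GCM_SHA384"

# Placeholder: in a live audit this would come from
#   kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-apiserver
PS_LINE='/binaries/kube-apiserver --profiling=false --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256'

# Extract the flag value and check each comma-separated suite against ALLOWED.
suites=$(printf '%s\n' "$PS_LINE" | grep -o -- '--tls-cipher-suites=[^ ]*' | cut -d= -f2)
status=PASS
for s in $(printf '%s\n' "$suites" | tr ',' ' '); do
  case " $ALLOWED " in
    *" $s "*) ;;                                   # suite is approved
    *) status=FAIL; echo "disallowed cipher suite: $s" ;;
  esac
done
echo "$status"
```

Any suite outside the approved list is reported individually, so a FAIL result names the offending value.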

1.3 Controller Manager​

1.3.1 Ensure that the --terminated-pod-gc-threshold argument is set as appropriate (Automated)​

Result: PASS

Remediation: Pass the following configuration when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      controllerManager:
        extraArgs:
          - --terminated-pod-gc-threshold=12500

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --terminated-pod-gc-threshold argument is set as appropriate.

Expected Result:

'--terminated-pod-gc-threshold' is present

Returned Value:

98 root      0:01 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --terminated-pod-gc-threshold=12500
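The same ps-based audit pattern repeats for the remaining controller-manager checks. A small helper like the sketch below (illustrative, not part of vCluster; `check_flag` and `LINE` are hypothetical names) can assert that a flag is present or equals an expected value:

```shell
# check_flag PS_LINE FLAG [EXPECTED]: succeeds when FLAG is set on the
# process line and, if EXPECTED is given, when its value equals EXPECTED.
check_flag() {
  ps_line=$1 flag=$2 expected=${3-}
  value=$(printf '%s\n' "$ps_line" | grep -o -- "$flag=[^ ]*" | cut -d= -f2)
  [ -n "$value" ] || { echo "FAIL: $flag not set"; return 1; }
  if [ -n "$expected" ] && [ "$value" != "$expected" ]; then
    echo "FAIL: $flag is $value, expected $expected"; return 1
  fi
  echo "PASS: $flag=$value"
}

# Placeholder for the captured kube-controller-manager process line:
LINE='/binaries/kube-controller-manager --profiling=false --terminated-pod-gc-threshold=12500 --use-service-account-credentials=true'

check_flag "$LINE" --terminated-pod-gc-threshold   # prints PASS: --terminated-pod-gc-threshold=12500
check_flag "$LINE" --profiling false               # prints PASS: --profiling=false
```

Note the helper only matches `FLAG=value` forms; flags passed as separate arguments would need a slightly different pattern.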

1.3.2 Ensure that the --profiling argument is set to false (Automated)​

Result: PASS

Remediation: Pass the following configuration when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      controllerManager:
        extraArgs:
          - --profiling=false

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --profiling argument is set to false.

Expected Result:

'--profiling' is equal to 'false'

Returned Value:

98 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl --profiling=false

1.3.3 Ensure that the --use-service-account-credentials argument is set to true (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --use-service-account-credentials argument is set to true.

Expected Result:

'--use-service-account-credentials' is not equal to 'false'

Returned Value:

102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

1.3.4 Ensure that the --service-account-private-key-file argument is set as appropriate (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --service-account-private-key-file argument is set as appropriate.

Expected Result:

'--service-account-private-key-file' is present

Returned Value:

102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

1.3.5 Ensure that the --root-ca-file argument is set as appropriate (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --root-ca-file argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate.

Expected Result:

'--root-ca-file' is present

Returned Value:

102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

1.3.6 Ensure that the RotateKubeletServerCertificate argument is set to true (Automated)​

Result: NOT APPLICABLE

Remediation: This control recommends enabling the RotateKubeletServerCertificate feature gate, which ensures that kubelet server certificates are automatically rotated by the Kubernetes control plane. However, vCluster does not run real kubelets; it operates entirely within the host cluster and abstracts away node-level operations. Since vCluster has no control over kubelet configuration or certificate rotation, this setting must be enforced at the host cluster level, where the actual kubelets run. Thus the check defined in this control is not applicable in the context of vCluster.

1.3.7 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)​

Result: PASS

Audit: Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-controller-manager

Verify that the --bind-address argument is set to 127.0.0.1.

Expected Result:

'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present

Returned Value:

102 root      0:00 /binaries/kube-controller-manager --service-cluster-ip-range=10.96.0.0/12 --authentication-kubeconfig=/data/pki/controller-manager.conf --authorization-kubeconfig=/data/pki/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/data/pki/client-ca.crt --cluster-name=kubernetes --cluster-signing-cert-file=/data/pki/server-ca.crt --cluster-signing-key-file=/data/pki/server-ca.key --horizontal-pod-autoscaler-sync-period=60s --kubeconfig=/data/pki/controller-manager.conf --node-monitor-grace-period=180s --node-monitor-period=30s --pvclaimbinder-sync-period=60s --requestheader-client-ca-file=/data/pki/front-proxy-ca.crt --root-ca-file=/data/pki/server-ca.crt --service-account-private-key-file=/data/pki/sa.key --use-service-account-credentials=true --leader-elect=false --controllers=*,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle,-ttl

1.4 Scheduler​

1.4.1 Ensure that the --profiling argument is set to false (Automated)​

Result: PASS

Remediation: Pass the following configuration, which enables the virtual scheduler and sets its arguments, when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
      scheduler:
        extraArgs:
          - --profiling=false
  advanced:
    virtualScheduler:
      enabled: true
sync:
  fromHost:
    nodes:
      enabled: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

Verify that the --profiling argument is set to false.

Expected Result:

'--profiling' is equal to 'false'

Returned Value:

 98 root      0:01 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false --profiling=false

1.4.2 Ensure that the --bind-address argument is set to 127.0.0.1 (Automated)​

Result: PASS

Remediation: Pass the following configuration when creating the vCluster:

vcluster.yaml
controlPlane:
  distro:
    k8s:
      enabled: true
  advanced:
    virtualScheduler:
      enabled: true
sync:
  fromHost:
    nodes:
      enabled: true

Audit: Create the vCluster using the above values file.

vcluster create my-vcluster -f vcluster.yaml --connect=false

Run the following command against the vCluster pod:

kubectl exec -n vcluster-my-vcluster my-vcluster-0 -c syncer -- ps -ef | grep kube-scheduler

Verify that the --bind-address argument is set to 127.0.0.1.

Expected Result:

'--bind-address' is equal to '127.0.0.1' OR '--bind-address' is not present

Returned Value:

88 root      0:00 /binaries/kube-scheduler --authentication-kubeconfig=/data/pki/scheduler.conf --authorization-kubeconfig=/data/pki/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/data/pki/scheduler.conf --leader-elect=false
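The "equal to 127.0.0.1 OR not present" condition used by this control and 1.3.7 can be checked with a short sketch (again illustrative; `LINE` stands in for the captured process line):

```shell
# Placeholder for the kube-scheduler (or kube-controller-manager) process line.
LINE='/binaries/kube-scheduler --bind-address=127.0.0.1 --leader-elect=false'

# The control passes when --bind-address is 127.0.0.1 or absent entirely.
bind=$(printf '%s\n' "$LINE" | grep -o -- '--bind-address=[^ ]*' | cut -d= -f2)
if [ -z "$bind" ] || [ "$bind" = "127.0.0.1" ]; then
  echo PASS
else
  echo "FAIL: --bind-address is $bind"
fi
```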