This hardening guide is intended to be used with specific versions of the CIS Kubernetes Benchmark, Kubernetes, and Rancher.

This document provides prescriptive guidance for hardening a production installation of Rancher v2.4 with Kubernetes v1.15. It outlines the configurations required to address Kubernetes benchmark controls from the Center for Internet Security (CIS).

For more detail about evaluating a hardened cluster against the official CIS benchmark, refer to the CIS Benchmark Rancher Self-Assessment Guide - Rancher v2.4.

Known Issues

  • Rancher exec shell and view logs for pods are not functional in a CIS 1.5 hardened setup when only a public IP is provided during the registration of custom nodes. This functionality requires a private IP to be provided when registering the custom nodes.
  • When setting default_pod_security_policy_template_id: to restricted, Rancher creates RoleBindings and ClusterRoleBindings on the default service accounts. The CIS 1.5 check 5.1.5 requires that the default service accounts have no roles or cluster roles bound to them apart from the defaults. In addition, the default service accounts should be configured so that they do not provide a service account token and do not have any explicit rights assignments.

Configure Kernel Runtime Parameters

The following sysctl configuration is recommended for all node types in the cluster. Set the following parameters in /etc/sysctl.d/90-kubelet.conf:
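The parameter listing is missing from this copy of the guide; a minimal sketch, assuming the same kernel settings that the kubelet's protect-kernel-defaults flag and the cloud-config later in this guide rely on, would be:

```
vm.overcommit_memory=1
kernel.panic=10
kernel.panic_on_oops=1
```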

Run sysctl -p /etc/sysctl.d/90-kubelet.conf to enable the settings.

Create etcd User and Group

To create the etcd user and group, run the following console commands.

The uid and gid of 52034 used in the commands below are for example purposes only. Any valid unused uid or gid could be used in lieu of 52034.

```bash
groupadd --gid 52034 etcd
useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
```

Update the RKE config.yml with the uid and gid of the etcd user:

```yaml
services:
  etcd:
    gid: 52034
    uid: 52034
```

Set automountServiceAccountToken to false for default service accounts

Kubernetes provides a default service account which is used by cluster workloads where no specific service account is assigned to the pod. Where access to the Kubernetes API from a pod is required, a specific service account should be created for that pod, and rights granted to that service account. The default service account should be configured such that it does not provide a service account token and does not have any explicit rights assignments.

For each namespace, including default and kube-system, on a standard RKE install the default service account must include this value:

Save the following yaml to a file called account_update.yaml

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
automountServiceAccountToken: false
```

Create a bash script file called account_update.sh. Be sure to chmod +x account_update.sh so the script has execute permissions.

```bash
#!/bin/bash -e

for namespace in $(kubectl get namespaces -o custom-columns=NAME:.metadata.name --no-headers); do
  kubectl patch serviceaccount default -n ${namespace} -p "$(cat account_update.yaml)"
done
```

Ensure that all Namespaces have Network Policies defined

Network Policies are namespace scoped. When a network policy is introduced to a given namespace, all traffic not allowed by the policy is denied. However, if there are no network policies in a namespace, all traffic will be allowed into and out of the pods in that namespace. To enforce network policies, a CNI (container network interface) plugin must be enabled. This guide uses canal to provide the policy enforcement. Additional information about CNI providers is available in the Rancher documentation.

Once a CNI provider is enabled on a cluster, a default network policy can be applied. For reference purposes, a permissive example is provided below. If you want to allow all traffic to all pods in a namespace (even if policies are added that cause some pods to be treated as "isolated"), you can create a policy that explicitly allows all traffic in that namespace. Save the following yaml as default-allow-all.yaml. Additional documentation about network policies can be found on the Kubernetes site.
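The yaml itself is missing from this copy of the guide; an allow-all policy along the lines of the example in the upstream Kubernetes NetworkPolicy documentation would look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-allow-all
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  # Empty rules allow all ingress and egress traffic.
  ingress:
  - {}
  egress:
  - {}
  policyTypes:
  - Ingress
  - Egress
```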

This NetworkPolicy is not recommended for production use.
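For production, a default-deny posture is more common. A namespace-level deny-all policy, per the upstream Kubernetes documentation, which can then be relaxed with more specific allow policies, would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  # Matches every pod in the namespace; with no allow rules listed,
  # all ingress and egress traffic is denied.
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```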

Create a bash script file called apply_networkPolicy_to_all_ns.sh. Be sure to chmod +x apply_networkPolicy_to_all_ns.sh so the script has execute permissions.

```bash
#!/bin/bash -e

# Namespaces are cluster-scoped, so no namespace flag is needed here.
for namespace in $(kubectl get namespaces -o json | jq -r '.items[].metadata.name'); do
  kubectl apply -f default-allow-all.yaml -n ${namespace}
done
```

Execute this script to apply the permissive NetworkPolicy in default-allow-all.yaml to all namespaces.

Reference Hardened RKE cluster.yml Configuration

The reference cluster.yml below provides the configuration needed by the RKE CLI to achieve a hardened install of Rancher Kubernetes Engine (RKE). The RKE installation documentation provides additional details about the configuration items. This reference cluster.yml does not include the required nodes directive, which will vary depending on your environment. Documentation for node configuration can be found here: https://rancher.com/docs/rke/latest/en/config-options/nodes

```yaml
# If you intend to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
kubernetes_version: "v1.15.9-rancher1-1"
enable_network_policy: true
default_pod_security_policy_template_id: "restricted"
# the nodes directive is required and will vary depending on your environment
# documentation for node configuration can be found here:
# https://rancher.com/docs/rke/latest/en/config-options/nodes
nodes:
services:
  etcd:
    uid: 52034
    gid: 52034
  kube-api:
    pod_security_policy: true
    secrets_encryption_config:
      enabled: true
    audit_log:
      enabled: true
    admission_configuration:
    event_rate_limit:
      enabled: true
  kube-controller:
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    generate_serving_certificate: true
    extra_args:
      feature-gates: "RotateKubeletServerCertificate=true"
      protect-kernel-defaults: "true"
      tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256"
    extra_binds: []
    extra_env: []
    cluster_domain: ""
    infra_container_image: ""
    cluster_dns_server: ""
    fail_swap_on: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: ""
  options: {}
  mtu: 0
  node_selector: {}
authentication:
  strategy: ""
  sans: []
  webhook: null
addons: |
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: ingress-nginx
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: ingress-nginx
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: ingress-nginx
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: Namespace
  metadata:
    name: cattle-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: default-psp-role
    namespace: cattle-system
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - default-psp
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: default-psp-rolebinding
    namespace: cattle-system
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: default-psp-role
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: policy/v1beta1
  kind: PodSecurityPolicy
  metadata:
    name: restricted
  spec:
    requiredDropCapabilities:
    - NET_RAW
    privileged: false
    allowPrivilegeEscalation: false
    defaultAllowPrivilegeEscalation: false
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: MustRunAsNonRoot
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
    - emptyDir
    - secret
    - persistentVolumeClaim
    - downwardAPI
    - configMap
    - projected
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: psp:restricted
  rules:
  - apiGroups:
    - extensions
    resourceNames:
    - restricted
    resources:
    - podsecuritypolicies
    verbs:
    - use
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: psp:restricted
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: psp:restricted
  subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:serviceaccounts
  - apiGroup: rbac.authorization.k8s.io
    kind: Group
    name: system:authenticated
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: tiller
    namespace: kube-system
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: tiller
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
addons_include: []
system_images:
  etcd: ""
  alpine: ""
  nginx_proxy: ""
  cert_downloader: ""
  kubernetes_services_sidecar: ""
  kubedns: ""
  dnsmasq: ""
  kubedns_sidecar: ""
  kubedns_autoscaler: ""
  coredns: ""
  coredns_autoscaler: ""
  kubernetes: ""
  flannel: ""
  flannel_cni: ""
  calico_node: ""
  calico_cni: ""
  calico_controllers: ""
  calico_ctl: ""
  calico_flexvol: ""
  canal_node: ""
  canal_cni: ""
  canal_flannel: ""
  canal_flexvol: ""
  weave_node: ""
  weave_cni: ""
  pod_infra_container: ""
  ingress: ""
  ingress_backend: ""
  metrics_server: ""
  windows_pod_infra_container: ""
ssh_key_path: ""
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: ""
  options: {}
ignore_docker_version: false
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
cluster_name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
restore:
  restore: false
  snapshot_name: ""
dns: null
```

Reference Hardened RKE Template configuration

The reference RKE Template provides the configuration needed to achieve a hardened install of Kubernetes. RKE Templates are used to provision Kubernetes and define Rancher settings. Follow the Rancher documentation for additional installation and RKE Template details.

```yaml
#cloud-config
packages:
- curl
- jq
runcmd:
- sysctl -w vm.overcommit_memory=1
- sysctl -w kernel.panic=10
- sysctl -w kernel.panic_on_oops=1
- curl https://releases.rancher.com/install-docker/18.09.sh | sh
- usermod -aG docker ubuntu
- return=1; while [ $return != 0 ]; do sleep 2; docker ps; return=$?; done
- addgroup --gid 52034 etcd
- useradd --comment "etcd service account" --uid 52034 --gid 52034 etcd
write_files:
- path: /etc/sysctl.d/kubelet.conf
  owner: root:root
  permissions: "0644"
  content: |
    vm.overcommit_memory=1
    kernel.panic_on_oops=1
```