A Certified Kubernetes Security Specialist (CKS) is an accomplished Kubernetes practitioner (must be CKA certified) who has demonstrated competence on a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment, and runtime.
- Duration of Exam: 120 minutes
- Number of questions: 15-20 hands-on performance-based tasks
- Passing score: 67%
- Certification validity: 2 years
- Prerequisite: valid CKA certification
- Cost: $395 USD
- Exam Eligibility: 12 months, with one free retake within that period.
- Software Version: Kubernetes v1.27
- The official website with certification
- CNCF Exam Curriculum repository
- Tips & Important Instructions: CKS
- Candidate Handbook
- Verify Certification
- Use network security policies to restrict cluster-level access. This will help to prevent unauthorized access to your cluster resources.
- Use the CIS benchmark to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi). The CIS benchmark is a set of security recommendations that can help you to harden your Kubernetes cluster.
- Properly set up Ingress objects with security control. Ingress objects allow you to expose your Kubernetes services to the outside world. It is important to configure Ingress objects with appropriate security controls to prevent unauthorized access.
- Protect node metadata and endpoints. Node metadata and endpoints contain sensitive information about your Kubernetes nodes. It is important to protect this information from unauthorized access.
- Minimize use of, and access to, GUI elements. The Kubernetes GUI can be a convenient way to manage your cluster, but it is also a potential security risk. It is important to minimize use of the GUI and to restrict access to it to authorized users.
- Verify platform binaries before deploying. Before deploying Kubernetes platform binaries, it is important to verify their authenticity and integrity. This can be done by using a checksum or by signing the binaries.
Examples:
- Example_1: Create a default-deny network policy named `deny-all` in the `monitoring` namespace:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: monitoring
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
- Example_2: Create a network policy named `api-allow` that restricts access to the `my-app` application deployed in the `default` namespace, allowing access only from `app2` pods:

```yaml
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      run: my-app
  ingress:
    - from:
        - podSelector:
            matchLabels:
              run: app2
```
- Example_3: Define an allow-all policy which overrides the deny-all policy in the `default` namespace:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}
  egress:
    - {}
```
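To verify a policy's effect, you can run a temporary pod and try to reach a service in the restricted namespace (the service name below is hypothetical; with the deny-all policy in place the request should time out):

```shell
# hypothetical target service; with deny-all applied in monitoring this should time out
k run tmp --image=busybox:1.32.0 --rm -it --restart=Never -- \
  wget -qO- --timeout=2 http://some-svc.monitoring
```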
Other examples can be found in hands-on with Kubernetes network policy
Useful official documentation
Useful non-official documentation
- Networking policy editor
- Kubernetes network policy recipes
- An Introduction to Kubernetes Network Policies for Security People
- Testing Kubernetes network policies behavior
- Network policy from banzaicloud
2. Use CIS benchmark to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi)
Examples:
- Example_1: Fix the issues reported in the provided CIS results file (an excerpt of such a file):

```
[INFO] 1 Master Node Security Configuration
[INFO] 1.2 API Server
[FAIL] 1.2.20 Ensure that the --profiling argument is set to false (Automated)

== Remediations master ==
1.2.20 Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--profiling=false

== Summary master ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO
```
- Example_2: Fix the issues from part 1.3.2 with kube-bench:

```
kube-bench run --targets master --check 1.3.2

[INFO] 1 Master Node Security Configuration
[INFO] 1.3 Controller Manager
[FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated)

== Remediations master ==
1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml on the master node and set the below parameter.
--profiling=false

== Summary master ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO
```
Then, fix it by editing `/etc/kubernetes/manifests/kube-controller-manager.yaml` (note that check 1.3.2 targets the Controller Manager, not the API server):

```yaml
...
  containers:
    - command:
        - kube-controller-manager
        - --profiling=false
      ...
      image: registry.k8s.io/kube-controller-manager:v1.22.2
...
```
Useful official documentation
- None
Useful non-official documentation
Examples:
-
Install ingress
Deploy the stack:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
```shell
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```
Let's create a simple web server and the associated service:
```shell
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
```
Then create an ingress resource. The following example uses a host that maps to localhost:
```shell
kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"
```
Now, forward a local port to the ingress controller:
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
At this point, you can access your deployment using curl:
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
You should see an HTML response containing text like "It works!".
- Example_1: Create an ingress named `ingress-app1` in the `app1` namespace for the `app1-svc` service:

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-app1
  namespace: app1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /health
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
```
- Example_2: Create an ingress named `ingress-app1` in the `app1` namespace (with TLS):

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-app1
  namespace: app1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "local.domail.name"
      secretName: local-domain-tls
  rules:
    - http:
        paths:
          - path: /health
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
```
NOTE: You should create the needed local-domain-tls secret for the Ingress with certificates:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout cert.key -out cert.crt -subj "/CN=local.domail.name/O=local.domail.name"
kubectl -n app1 create secret tls local-domain-tls --key cert.key --cert cert.crt
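Before creating the secret, you can sanity-check the generated certificate with openssl (same file names as in the command above):

```shell
# the subject should contain the expected CN
openssl x509 -in cert.crt -noout -subject
# the key must match the certificate: both modulus hashes should be identical
openssl x509 -noout -modulus -in cert.crt | openssl md5
openssl rsa -noout -modulus -in cert.key | openssl md5
```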
Useful official documentation
Useful non-official documentation
This is part of network policy, where you can restrict access to metadata/endpoints.
Examples:
- Example_1: Create a network policy named `deny-all-allow-metadata-access` in the `monitoring` namespace that allows all egress except to the 1.1.1.1 IP (e.g., a metadata endpoint):

```yaml
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-allow-metadata-access
  namespace: monitoring
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 1.1.1.1/32
```
Useful official documentation
Useful non-official documentation
Restricting the Kubernetes GUI can be accomplished through proper Role-Based Access Control (RBAC) configuration. In Kubernetes, RBAC is created via the RoleBinding resource. Always ensure people are given least-privilege access by default, then provide requests as the user needs them.
A second way to secure the GUI is token authentication, which the Kubernetes Dashboard prioritizes. The token is sent in the request header in the format `Authorization: Bearer <token>`. Bearer tokens are created through Service Account tokens. These are just a few of the K8s Dashboard concepts that will wind up on the CKS. Make sure you have a thorough understanding of service accounts and how they relate to the Kubernetes Dashboard prior to taking the exam.
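For example, a short-lived bearer token for the dashboard's service account can be generated as follows (`kubectl create token` is available in recent kubectl releases; the service account name assumes the default dashboard install):

```shell
kubectl -n kubernetes-dashboard create token kubernetes-dashboard
# paste the printed token into the dashboard login form, or send it as a header:
# Authorization: Bearer <token>
```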
To install web-ui dashboard, use:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: non-functional in v1.27+; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created
Let's get dashboard's resources:
k -n kubernetes-dashboard get pod,deploy,svc
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-5bc754cb48-8gbcc 1/1 Running 0 65s
pod/kubernetes-dashboard-6db6d44699-49kk4 1/1 Running 0 65s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 65s
deployment.apps/kubernetes-dashboard 1/1 1 1 65s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.111.34.81 <none> 8000/TCP 65s
service/kubernetes-dashboard ClusterIP 10.98.70.117 <none> 443/TCP 65s
As you may have noticed, the default Kubernetes Dashboard service is exposed as a ClusterIP, so administrators cannot access this IP address without getting a shell inside a Pod. In most cases, administrators use `kubectl proxy` to proxy an endpoint on the working machine to the actual Kubernetes Dashboard service. In some testing environments with fewer security concerns, we can expose the Kubernetes Dashboard deployment and service via NodePort, so administrators can use a node's IP address (public or private) and the assigned port to access the service. Edit the actual running deployment:
kubectl edit deployment kubernetes-dashboard -n kubernetes-dashboard
Then, add `--insecure-port=9999` to the container args and tune the spec like this:
.....
spec:
containers:
- args:
- --namespace=kubernetes-dashboard
- --insecure-port=9999
image: kubernetesui/dashboard:v2.1.0
imagePullPolicy: Always
livenessProbe:
failureThreshold: 3
httpGet:
path: /
port: 9999
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 30
.....
NOTE:
- Delete the `auto-generate-certificates` arg from the config.
- Change the `port` of the `livenessProbe` to `9999`.
- Change the `scheme` of the `livenessProbe` to `HTTP`.
After that, we make changes on Kubernetes Dashboard services:
kubectl edit service kubernetes-dashboard -n kubernetes-dashboard
And:
- Change `port` to `9999`.
- Change `targetPort` to `9999`.
- Change `type` to `NodePort`.
The config should look like:
.....
ports:
- nodePort: 30142
port: 9999
protocol: TCP
targetPort: 9999
selector:
k8s-app: kubernetes-dashboard
sessionAffinity: None
type: NodePort
.....
Then, run the next command to forward a local port to the dashboard:

```shell
kubectl port-forward deployments/kubernetes-dashboard 9999:9999 -n kubernetes-dashboard
```

Open your browser at http://127.0.0.1:9999/.
Since Kubernetes Dashboard is leveraging service account “default” in namespace “kubernetes-dashboard” for accessing each resource, binding the right permission to this service account would allow the dashboard to show more information in the corresponding namespaces.
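As a sketch, a binding that grants that service account read-only access in a single namespace could look like this (the namespace and binding name are illustrative; the built-in `view` ClusterRole is used here):

```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-view
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: default
    namespace: kubernetes-dashboard
```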
Useful official documentation
Useful non-official documentation
Examples:
-
Compare the kubelet binary on the current host with the kubelet of the same release downloaded from the official source:

```shell
sha512sum $(which kubelet) | cut -c -10
wget -O kubelet https://dl.k8s.io/$(/usr/bin/kubelet --version | cut -d " " -f2)/bin/linux/$(uname -m)/kubelet
sha512sum ./kubelet | cut -c -10
```
Useful official documentation
- None
Useful non-official documentation
When it comes to Kubernetes production implementations, restricting API access is very important. Restricting access to the API server comes down to three things:
- Authentication
- Authorization
- Admission Control

The primary topics under this section are bootstrap tokens, RBAC, ABAC, service accounts, and admission webhooks.
- Cluster API access methods
- Kubernetes API Access Security
- Authentication
- Authorization
- Admission Controllers
- Admission Webhooks
Examples:
-
Example_1: Blocking anonymous access to the API:

First, check:

```shell
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -Ei "anonymous-auth"
```

If the output is empty, find the kubelet config:

```shell
ps -ef | grep kubelet | grep -Ei "kubeconfig"
```

If anonymous auth is enabled, fix it by opening the `/var/lib/kubelet/config.yaml` file:

```yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
...
```
NOTE: As a workaround, you can use the `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file and add `--anonymous-auth=false` to `KUBELET_SYSTEM_PODS_ARGS`.

Restart the kubelet service:

```shell
systemctl daemon-reload
systemctl restart kubelet.service
```
-
Example_2: Changing the authorization mode to Webhook:

Get the `kubeconfig` path:

```shell
ps -ef | grep kubelet | grep -Ei "kubeconfig"
```

Open the `/var/lib/kubelet/config.yaml` file:

```yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
...
authorization:
  mode: Webhook
...
```

Restart the kubelet service:

```shell
systemctl daemon-reload
systemctl restart kubelet.service
```
-
Example_3: Blocking the insecure port:

First, check:

```shell
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -Ei "insecure-port"
```

Open the `/etc/kubernetes/manifests/kube-apiserver.yaml` file:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.30.1.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - command:
        - kube-apiserver
        ...
        - --insecure-port=0
        - --secure-port=443
...
```
-
Example_4: Enable protect-kernel-defaults for the kubelet:

First, check:

```shell
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -Ei "protect-kernel-defaults"
```

Open the `/var/lib/kubelet/config.yaml` file:

```yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
protectKernelDefaults: true
...
```
NOTE: As a workaround, you can use the `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` file and add `--protect-kernel-defaults=true` to `KUBELET_SYSTEM_PODS_ARGS`.

Restart the kubelet service:

```shell
systemctl daemon-reload
systemctl restart kubelet.service
```
-
Example_5: Enabling NodeRestriction:

Check whether the NodeRestriction admission plugin is enabled (if so, the output should contain NodeRestriction):

```shell
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -Ei "enable-admission-plugins"
```

Open the `/etc/kubernetes/manifests/kube-apiserver.yaml` file with an editor and enable NodeRestriction on the controlplane node:

```yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --advertise-address=172.30.1.2
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC
        - --client-ca-file=/etc/kubernetes/pki/ca.crt
        - --enable-admission-plugins=NodeRestriction
        - --enable-bootstrap-token-auth=true
```
Let's check the configurations:
```shell
ssh node01
export KUBECONFIG=/etc/kubernetes/kubelet.conf
k label node controlplane killercoda/two=123                # restricted
k label node node01 node-restriction.kubernetes.io/two=123  # restricted
k label node node01 test/two=123                            # works
```
-
Example_6: Kubernetes API troubleshooting:
First of all, check:

```shell
cat /var/log/syslog | grep kube-apiserver
# or
cat /var/log/syslog | grep -Ei "apiserver" | grep -Ei "line"
```
Secondly, checking:
journalctl -xe | grep apiserver
Lastly, checking:
```shell
crictl ps -a | grep api
crictl logs fbb80dac7429e
```
-
Example_7: Certificate signing requests sign manually:
First of all, we need a key. Generate it with openssl:

```shell
openssl genrsa -out 60099.key 2048
```

Next, run the following command to generate a CSR:
openssl req -new -key 60099.key -out 60099.csr
Note: set Common Name = 60099@internal.users
Manually sign the CSR with the K8s CA file and key to generate the CRT:
openssl x509 -req -in 60099.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out 60099.crt -days 500
Set credentials & context:
```shell
k config set-credentials 60099@internal.users --client-key=60099.key --client-certificate=60099.crt
k config set-context 60099@internal.users --cluster=kubernetes --user=60099@internal.users
k config get-contexts
k config use-context 60099@internal.users
```
Checks:
```shell
k get ns
k get po
```
-
Example_8: Certificate signing requests signed via the K8s API:

First of all, we need a key. Generate it with openssl:

```shell
openssl genrsa -out 60099.key 2048
```

Next, run the following command to generate a CSR:
openssl req -new -key 60099.key -out 60099.csr
Note: set Common Name = 60099@internal.users
Convert the CSR file into base64:
cat 60099.csr | base64 -w 0
Copy it into the YAML:
```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: 60099@internal.users   # ADD
spec:
  groups:
    - system:authenticated
  request: CERTIFICATE_BASE64_HERE
  signerName: kubernetes.io/kube-apiserver-client
  usages:
    - client auth
```
Create and approve:
```shell
k -f csr.yaml create
k get csr   # pending
k certificate approve 60099@internal.users
k get csr   # approved
k get csr 60099@internal.users -ojsonpath="{.status.certificate}" | base64 -d > 60099.crt
```
Set credentials & context:
```shell
k config set-credentials 60099@internal.users --client-key=60099.key --client-certificate=60099.crt
k config set-context 60099@internal.users --cluster=kubernetes --user=60099@internal.users
k config get-contexts
k config use-context 60099@internal.users
```
Checks:
```shell
k get ns
k get po
```
-
Example_9: Add minimal TLS 1.2 for ETCD and kube-apiserver; Add cipher=ECDHE-RSA-DES-CBC3-SHA as well:
-
On the ETCD side, open the `/etc/kubernetes/manifests/etcd.yaml` file and put the following:

```yaml
....
spec:
  containers:
    - command:
        - etcd
        - --advertise-client-urls=https://172.30.1.2:2379
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt
        - --client-cert-auth=true
        - --data-dir=/var/lib/etcd
        - --experimental-initial-corrupt-check=true
        - --experimental-watch-progress-notify-interval=5s
        - --initial-advertise-peer-urls=https://172.30.1.2:2380
        - --initial-cluster=controlplane=https://172.30.1.2:2380
        - --key-file=/etc/kubernetes/pki/etcd/server.key
        - --listen-client-urls=https://127.0.0.1:2379,https://172.30.1.2:2379
        - --listen-metrics-urls=http://127.0.0.1:2381
        - --listen-peer-urls=https://172.30.1.2:2380
        - --name=controlplane
        - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
        - --peer-client-cert-auth=true
        - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
        - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
        - --snapshot-count=10000
        - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
        - --cipher-suites=ECDHE-RSA-DES-CBC3-SHA
      image: registry.k8s.io/etcd:3.5.7-0
      imagePullPolicy: IfNotPresent
....
```
Checking ETCD:
crictl ps -a | grep etcd
NOTE: To get logs, you can use:
cat /var/log/syslog | grep etcd
-
On the kube-apiserver side, open the `/etc/kubernetes/manifests/kube-apiserver.yaml` file and put the following:

```yaml
    - --cipher-suites=ECDHE-RSA-DES-CBC3-SHA
    - --tls-min-version=VersionTLS12
```
Checking kube-apiserver:
crictl ps -a | grep apiserver
NOTE: To get logs, you can use:
cat /var/log/syslog | grep apiserver
-
Useful official documentation
- Controlling access
- Controlling access (api server ports and ips)
- Block anonymous requests
- Certificates
- Certificate signing requests
- Using Node Authorization
- Accessing the Kubernetes API from a Pod
- Access to Kubernetes cluster API
- Authorization Modes
- Admission controllers
- Extensible admission controllers
- Kubelet authn/authz
- Kubelet config
Useful non-official documentation
Allowing unnecessary cluster-wide access to everyone is a common mistake made during Kubernetes implementations. With Kubernetes RBAC, you can define fine-grained control over who can access the Kubernetes API, enforcing the principle of least privilege. The concepts include:
- Role = a set of permissions within a namespace
- ClusterRole = a set of permissions that applies across the whole cluster
- RoleBinding = the binding of a user/service account to a role within a namespace
- ClusterRoleBinding = the binding of a user/service account to a cluster role
Examples:
-
Example_1: Working with RBAC (roles and role bindings):
Create role & rolebinding:
```shell
k create role role_name --verb=get,list,watch --resource=pods
k create rolebinding role_name_binding --role=role_name --user=captain --group=group1
```
Verify:
```shell
k auth can-i get pods --as captain -n kube-public
k auth can-i list pods --as captain -n default
```
-
Example_2: Working with RBAC (cluster roles and cluster role bindings):
Create clusterrole & clusterrolebinding:
```shell
k create clusterrole cluster_role --verb=get,list,watch --resource=pods
k create clusterrolebinding cluster_role_binding --clusterrole=cluster_role --user=cap
```
Verify:
```shell
k auth can-i list pods --as cap -n kube-public
k auth can-i list pods --as cap -n default
```
-
Example_3: Working with Service Account and RBAC:
Create Service Account and RBAC:
```shell
k -n name_space_1 create sa ser_acc
k create clusterrolebinding ser_acc-view --clusterrole view --serviceaccount name_space_1:ser_acc
```
Verify:
```shell
k auth can-i update deployments --as system:serviceaccount:name_space_1:ser_acc -n default
k auth can-i update deployments --as system:serviceaccount:name_space_1:ser_acc -n name_space_1
```
You must know how:
- To create roles & role bindings.
- To create cluster roles & cluster role bindings.
- To create a service account and grant it some permissions.
- To find needed resources and change/add permissions.
Useful official documentation
Useful non-official documentation
- Advocacy site for Kubernetes RBAC
- Simplify Kubernetes resource access rbac impersonation
- Manage Role Based Access Control (RBAC)
3. Exercise caution in using service accounts e.g. disable defaults, minimize permissions on newly created ones
Examples:
-
Example_1: Opt out of automounting API credentials for a service account (Opt out at service account scope):
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
```
-
Example_2: Opt out of automounting API credentials for a service account (Opt out at pod scope):
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: cks-pod
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
```
-
Example_3: Disable automountServiceAccountToken on namespace side:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: "2023-10-04T20:43:49Z"
  labels:
    kubernetes.io/metadata.name: default
  name: default
  resourceVersion: "36"
  uid: 7d0191eb-7187-4de9-90af-59121a4a9834
automountServiceAccountToken: false
spec:
  finalizers:
    - kubernetes
status:
  phase: Active
```
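To confirm the token is no longer mounted, you can inspect the pod (pod name taken from Example_2; the path below is the default token mount point):

```shell
k get pod cks-pod -o jsonpath='{.spec.containers[0].volumeMounts}'
# with automounting off, listing the default token path inside the container should fail
k exec cks-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
```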
Useful official documentation
- Authorization Modes
- Use the default service account to access the API server
- Managing Service Accounts
- Configure Service Accounts for Pods
- Default roles and role bindings
Useful non-official documentation
You must know how:
- To create a service account and grant it some permissions.
- To find needed resources and change/add permissions.
There may be an upgrade question on the exam, and the documentation about upgrading with kubeadm has become significantly better in recent releases. You should also have mechanisms to validate cluster components, security configurations, and application status post-upgrade.
Examples:
-
Example_1: K8S upgrades (Controlplane):
First of all, drain the node:

```shell
k drain master --ignore-daemonsets
```
Update OS:
apt update -y
Install packages:
```shell
apt-cache show kubeadm | grep 1.22
apt install kubeadm=1.22.5-00 kubelet=1.22.5-00 kubectl=1.22.5-00
```
Applying updates:
```shell
kubeadm upgrade plan
kubeadm upgrade apply v1.22.5
```
Adding master workloads back:
k uncordon master
-
Example_2: K8S upgrades (Nodes):
First of all, drain the node:

```shell
k drain node --ignore-daemonsets
```
Update OS:
apt update -y
Install packages:
```shell
apt-cache show kubeadm | grep 1.22
apt install kubeadm=1.22.5-00 kubelet=1.22.5-00 kubectl=1.22.5-00
```
Upgrade node with kubeadm:
kubeadm upgrade node
Restart service:
service kubelet restart
Then, add the node back:
k uncordon node
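After either upgrade, it is worth confirming the versions and component health (a quick sanity check; the exact version shown depends on what you installed):

```shell
k get nodes                # the VERSION column should show the new release
k -n kube-system get pods  # control-plane pods should be Running
kubelet --version
```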
Useful official documentation
Useful non-official documentation
- None
You must know how:
- To upgrade K8S clusters.
Examples:
-
Example_1: Use Seccomp:
By default, the seccomp profiles folder is located at `/var/lib/kubelet/seccomp`.

Check whether seccomp is enabled on the host:

```shell
grep -i seccomp /boot/config-$(uname -r)
CONFIG_SECCOMP=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
```
Open the `/var/lib/kubelet/seccomp/custom.json` file and put the following:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64", "SCMP_ARCH_X86", "SCMP_ARCH_X32"],
  "syscalls": [
    { "name": "accept", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "uname", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "chroot", "action": "SCMP_ACT_ALLOW", "args": [] }
  ]
}
```
Then start using seccomp in a pod, for example:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: app1
  namespace: app1
spec:
  containers:
    - image: nginx
      name: app1
      securityContext:
        seccompProfile:
          type: Localhost
          localhostProfile: custom.json
```
-
Example_2: Use AppArmor:
Get AppArmor profiles:
apparmor_status
Or, run this:
aa-status
Load AppArmor profile:
apparmor_parser -q apparmor_config
-
Example_3: PSA enforces:
Pod Security Admission (PSA) support has been added for clusters with Kubernetes v1.23 and above. PSA defines security restrictions for a broad set of workloads and replaces Pod Security Policies in Kubernetes v1.25 and above. The Pod Security Admission controller is enabled by default in Kubernetes clusters v1.23 and above. To configure its default behavior, you must provide an admission configuration file to the kube-apiserver when provisioning the cluster.
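The namespace-level behavior is controlled with labels; a minimal sketch (the namespace name and chosen levels are illustrative):

```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: example-ns
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

The same can be applied to an existing namespace with `kubectl label ns example-ns pod-security.kubernetes.io/enforce=baseline`.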
-
Example_4: Apply host updates:
```shell
sudo apt update && sudo apt install unattended-upgrades -y
systemctl status unattended-upgrades.service
```
-
Example_5: Install a minimal required OS footprint:
It is best practice to install only the packages you will use because each piece of software on your computer could possibly contain a vulnerability. Take the opportunity to select exactly what packages you want to install during the installation. If you find you need another package, you can always add it to the system later.
-
Example_6: Identify and address open ports:
Use the lsof command to check whether port 8080 is open:
lsof -i :8080
Using the netstat command, check whether port 66 is open; then kill the owning process and delete its binary:

```shell
apt install net-tools
netstat -natpl | grep 66
ls -l /proc/22797/exe
rm -f /usr/bin/app1
kill -9 22797
```
-
Example_7: Remove unnecessary packages. For example, find and delete httpd package on the host:
```shell
apt show httpd
apt remove httpd -y
```
-
Example_8: Find service that runs on the host and stop it. For example, find and stop httpd service on the host:
```shell
service httpd status
service httpd stop
service httpd status
```
-
Example_9: Working with users (Create, delete, add user to needed groups. Grant some permission):
To get all users on host:
cat /etc/passwd
If you want to display only the username you can use either awk or cut commands to print only the first field containing the username:
```shell
awk -F: '{ print $1}' /etc/passwd
cut -d: -f1 /etc/passwd
```
The /etc/group file contains information on all local user groups configured on a Linux machine. With the /etc/group file, you can view group names, passwords, group IDs, and members associated with each group:
cat /etc/group
If you want to get the groups of a specific user:
groups root
Creating group:
groupadd developers
Creating user:
useradd -u 1005 -g mygroup test_user
Add a User to Multiple Groups:
usermod -a -G admins,mygroup,developers test_user
Add a User with a Specific Home Directory, Default Shell, and Custom Comment:
useradd -m -d /var/www/user1 -s /bin/bash -c "Test user 1" -U user1
-
Example_10: Working with kernel modules on the host (get, load, unload, etc):
To get all modules, use:
lsmod
Or:
lsmod | grep ^pppol2tp && echo "The module is loaded" || echo "The module is not loaded"
Also, you can use:
cat /proc/modules
Loading a Module:
modprobe wacom
You can blacklist a module: open the `/etc/modprobe.d/blacklist.conf` file and put:

```
blacklist evbug
```
-
Example_11: Working with UFW on Linux:
To allow 22 port:
ufw allow 22
To close an opened port:
ufw deny 22
It is also possible to allow access from specific hosts or networks to a port. The following example allows SSH access from host 192.168.0.2 to any IP address on this host:
ufw allow proto tcp from 192.168.0.2 to any port 22
To see the firewall status, enter:
ufw status ufw status verbose ufw status numbered
Enable the UFW service on the Linux host:
ufw enable
Useful official documentation
Useful non-official documentation
- How to keep ubuntu-20-04 servers updated
- Enforce standards namespace labels
- Psa label enforcer policy
- Migrating from pod security policies a comprehensive guide part 1 transitioning to psa
- Using kyverno with pod security admission
- Add psa labels
- Pod security admission
- Pod security standards
- Implementing Pod Security Standards in Amazon EKS
- Seccomp profiles
IAM roles control access to cloud resources. It is important to minimize the permissions granted to IAM roles.
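To make this concrete, a least-privilege policy grants only the actions a workload actually needs; a hedged sketch in AWS IAM terms (the bucket name and actions are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ]
    }
  ]
}
```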
Useful official documentation
- None
Useful non-official documentation
The less exposure your system has to the outside world, the less vulnerable it is. Restrict network access to your system to only what is necessary.
Also, implement Network Policies - hands-on with Kubernetes network policy
Useful official documentation
Useful non-official documentation
- None
Examples:
-
Example_1: Working with Apparmor:
An example of configuration:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-with-apparmor
  namespace: apparmor
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-with-apparmor
  strategy: {}
  template:
    metadata:
      labels:
        app: pod-with-apparmor
      annotations:
        container.apparmor.security.beta.kubernetes.io/pod-with-apparmor: localhost/docker-default
    spec:
      containers:
        - image: httpd:latest
          name: pod-with-apparmor
```
Apply the prepared configuration file:
k apply -f pod-with-apparmor.yaml
Getting ID of container:
crictl ps -a | grep pod-with-apparmor
Then, run the command:
```shell
crictl inspect e428e2a3e9324 | grep apparmor
    "apparmor_profile": "localhost/docker-default"
    "apparmorProfile": "docker-default",
```
-
Example_2: Working with Seccomp:
The example is already described in the Minimize host OS footprint (reduce attack surface) section.
Useful official documentation
Useful non-official documentation
Run containers as non-root users: Specify a non-root user in your Dockerfile or create a new user with limited privileges to reduce the risk of container breakout attacks.
Avoid privileged containers: Don’t run privileged containers with unrestricted access to host resources. Instead, use Linux kernel capabilities to grant specific privileges when necessary.
Examples:
-
Example_1: Working with Privilege Escalation:
An example of configuration:
```yaml
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-ro-pod
  name: application
  namespace: sun
spec:
  containers:
    - command:
        - sh
        - -c
        - sleep 1d
      image: busybox:1.32.0
      name: my-ro-pod
      securityContext:
        allowPrivilegeEscalation: false
  dnsPolicy: ClusterFirst
  restartPolicy: Always
```
-
Example_2: Working with Privileged containers:
Run a Pod through the CLI:
k run privileged-pod --image=nginx:alpine --privileged
An example of configuration:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: privileged-pod
  name: privileged-pod
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx:alpine
    name: privileged-pod
    securityContext:
      privileged: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
-
Example_3: Working with non-root user in containers (runAsNonRoot):
k run non-root-pod --image=nginx:alpine --dry-run=client -o yaml > non-root-pod.yaml
Edit that `non-root-pod.yaml` file to:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: non-root-pod
  name: non-root-pod
spec:
  containers:
  - image: nginx:alpine
    name: non-root-pod
    securityContext:
      runAsNonRoot: true
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Apply:
k apply -f non-root-pod.yaml
-
Example_4: Run container as user:
k run run-as-user-pod --image=nginx:alpine --dry-run=client -o yaml > run-as-user-pod.yaml
Edit that `run-as-user-pod.yaml` file to:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: run-as-user-pod
  name: run-as-user-pod
spec:
  securityContext:
    runAsUser: 1001
    runAsGroup: 1001
  containers:
  - image: nginx:alpine
    name: run-as-user-pod
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Apply the YAML:
k apply -f run-as-user-pod.yaml
Useful official documentation
Useful non-official documentation
- None
OS-level security domains can be used to isolate microservices from each other and from the host OS. This can help to prevent microservices from interfering with each other and from being exploited by attackers.
Examples:
-
Example_1: Working with Open Policy Agent (OPA)/Gatekeeper:
To install:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
Deploy some example (k8srequiredlabels):
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/templates/k8srequiredlabels_template.yaml
You can install this Constraint with the following command:
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/demo/basic/constraints/all_ns_must_have_gatekeeper.yaml
To check constraints:
kubectl get constraints
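For orientation, a Constraint built from the k8srequiredlabels template above looks roughly like this (the exact names and parameters in the Gatekeeper demo may differ; this is a sketch of the shape):

```yaml
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels          # kind comes from the ConstraintTemplate
metadata:
  name: ns-must-have-gk
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]       # apply the constraint to Namespaces
  parameters:
    labels: ["gatekeeper"]       # every matched Namespace must carry this label
```

Violations show up under the Constraint's `status` field and in `kubectl get constraints` output.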
-
Example_2: Working with Security context:
This is already described in other topics with a lot of examples.
Useful official documentation
- None
Useful non-official documentation
- Opa gatekeeper policy and governance for kubernetes
- Openpolicyagent
- Openpolicyagent online editor
- Gatekeeper
- Security context for pods
- Kubernetes security psp network policy
Kubernetes secrets can be used to store sensitive information such as passwords and API keys. It is important to manage secrets securely by encrypting them and by restricting access to them.
Examples:
-
Example_1: Secret Access in Pods:
Create a secret named literal-secret through the CLI:
kubectl create secret generic literal-secret --from-literal secret=secret12345
Create a new secret named file-secret from a file-secret.yaml file:
---
apiVersion: v1
kind: Secret
metadata:
  name: file-secret
data:
  hosts: MTI3LjAuMC4xCWxvY2FsaG9zdAoxMjcuMC4xLjEJaG9zdDAxCgojIFRoZSBmb2xsb3dpbmcgbGluZXMgYXJlIGRlc2lyYWJsZSBmb3IgSVB2NiBjYXBhYmxlIGhvc3RzCjo6MSAgICAgbG9jYWxob3N0IGlwNi1sb2NhbGhvc3QgaXA2LWxvb3BiYWNrCmZmMDI6OjEgaXA2LWFsbG5vZGVzCmZmMDI6OjIgaXA2LWFsbHJvdXRlcnMKMTI3LjAuMC4xIGhvc3QwMQoxMjcuMC4wLjEgaG9zdDAxCjEyNy4wLjAuMSBob3N0MDEKMTI3LjAuMC4xIGNvbnRyb2xwbGFuZQoxNzIuMTcuMC4zNSBub2RlMDEKMTcyLjE3LjAuMjMgY29udHJvbHBsYW5lCg==
Apply it:
k apply -f file-secret.yaml
Then, create a new pod named pod-secrets. Make Secret literal-secret available as the environment variable literal-secret. Mount Secret file-secret as a volume; the file should be available under /etc/file-secret/hosts:
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  volumes:
  - name: file-secret
    secret:
      secretName: file-secret
  containers:
  - image: nginx
    name: pod-secrets
    volumeMounts:
    - name: file-secret
      mountPath: /etc/file-secret
    env:
    - name: literal-secret
      valueFrom:
        secretKeyRef:
          name: literal-secret
          key: secret
Verify:
kubectl exec pod-secrets -- env | grep "secret=secret12345"
kubectl exec pod-secrets -- cat /etc/file-secret/hosts
-
Example_2: Secret Read and Decode:
Get the secret created in the opaque namespace and store it into an opaque_secret.txt file:
kubectl -n opaque get secret test-sec-1 -ojsonpath="{.data.data}" | base64 -d > opaque_secret.txt
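Keep in mind that Secret data is only base64-encoded, not encrypted; anyone with read access to the Secret can decode it. A quick round-trip illustrating what kubectl does under the hood (no cluster needed; the value is illustrative):

```shell
# Encode a value the way Kubernetes stores it in a Secret's data field...
echo -n secret12345 | base64
# ...and decode it back, as `kubectl get secret ... | base64 -d` does
echo -n c2VjcmV0MTIzNDU= | base64 -d
```

This is why RBAC on Secrets and encryption at rest (next example) matter.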
-
Example_3: Secret etcd encryption:
Create a folder for this task:
mkdir -p /etc/kubernetes/enc
Base64-encode a secret phrase, for example:
echo -n Secret-ETCD-Encryption | base64
U2VjcmV0LUVUQ0QtRW5jcnlwdGlvbg==
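Note that aescbc/aesgcm provider keys must decode to exactly 16, 24, or 32 bytes (the 22-byte phrase above would be rejected by a real apiserver), so in practice generate a random 32-byte key:

```shell
# Generate a random 32-byte key and base64-encode it for the EncryptionConfiguration
head -c 32 /dev/urandom | base64
# Sanity check: the decoded key must be exactly 32 bytes long
head -c 32 /dev/urandom | base64 | base64 -d | wc -c
```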
Create the EncryptionConfiguration file /etc/kubernetes/enc/encryption.yaml:
---
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources:
  - secrets
  providers:
  - aesgcm:
      keys:
      - name: key1
        secret: U2VjcmV0LUVUQ0QtRW5jcnlwdGlvbg==
  - identity: {}
Open the /etc/kubernetes/manifests/kube-apiserver.yaml file and add the encryption-provider-config parameter. Also add a volume and volumeMount, for example:
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --encryption-provider-config=/etc/kubernetes/enc/encryption.yaml
    ...
    volumeMounts:
    - mountPath: /etc/kubernetes/enc
      name: enc
      readOnly: true
    ...
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/enc
      type: DirectoryOrCreate
    name: enc
  ...
Wait until the apiserver has restarted:
watch crictl ps
When the apiserver has been re-created, we can encrypt all existing secrets. For example, let's do it for all secrets in the one namespace:
kubectl -n one get secrets -o json | kubectl replace -f -
To check, you can run for example:
ETCDCTL_API=3 etcdctl --cert /etc/kubernetes/pki/apiserver-etcd-client.crt --key /etc/kubernetes/pki/apiserver-etcd-client.key --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/one/s1
Useful official documentation
Useful non-official documentation
- None
Before the Container Runtime Interface (CRI) existed, communication between container runtimes and Kubernetes relied on runtime-specific integrations such as dockershim (provided and maintained for Docker) and rkt support. As containers and Kubernetes grew more sophisticated, the maintenance cost of dockershim/rkt kept rising, so an interface open to the community and dealing solely with the container runtime became the answer to this challenge.
Kata Containers and gVisor help with workload isolation. They can be used via the Kubernetes RuntimeClass resource, where you specify the required runtime for the workload.
Examples:
-
Example_1: Create a gVisor RuntimeClass and run a Pod with it:
Create RuntimeClass class, something like:
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
Deploy a new pod with created RuntimeClass, an example:
---
apiVersion: v1
kind: Pod
metadata:
  name: sec
spec:
  runtimeClassName: gvisor
  containers:
  - image: nginx:1.21.5-alpine
    name: sec
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Checks:
k apply -f gvisor_file.yaml
k exec sec -- dmesg
Useful official documentation
Useful non-official documentation
mTLS (mutual TLS) means mutual authentication: the client authenticates the server and the server authenticates the client. Its core purpose is to secure pod-to-pod communications. The exam may ask you to create the certificates; it is worth bookmarking certificate signing requests and understanding how to implement kubeconfig access and mTLS authentication credentials.
What is mTLS? Mutual TLS takes TLS to the next level by authenticating both sides of the client-server connection before exchanging communications. This may seem like a common-sense approach, but there are many situations where the client's identity is irrelevant to the connection.
When only the server’s identity matters, standard unidirectional TLS is the most efficient approach. TLS uses public-key encryption, requiring a private and public key pair for encrypted communications. To verify the server’s identity, the client sends a message encrypted using the public key (obtained from the server’s TLS certificate) to the server. Only a server holding the appropriate private key can decrypt the message, so successful decryption authenticates the server.
To have bi-directional authentication would require that all clients also have TLS certificates, which come from a certificate authority. Because of the sheer number of potential clients (browsers accessing websites, for example), generating and managing so many certificates would be extremely difficult.
However, for some applications and services, it can be crucial to verify that only trusted clients connect to the server. Perhaps only certain users should have access to particular servers. Or maybe you have API calls that should only come from specific services. In these situations, the added burdens of mTLS are well worth it. And if your organization reinforces security with zero trust policies where every attempt to access the server must be verified, mTLS is necessary.
mTLS adds a separate authentication of the client following verification of the server. Only after verifying both parties to the connection can the two exchange data. With mTLS, the server knows that a trusted source is attempting to access it.
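To make the two-sided handshake concrete, here is a minimal openssl sketch (file names and CNs are illustrative) of the certificate material mTLS needs: one CA that signs a certificate for each side, so each party can verify the other against the same trust root:

```shell
# Create a CA that both sides trust (illustrative CN)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt -days 1 -subj "/CN=demo-ca"
# The client generates its own key and a certificate signing request...
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=demo-client"
# ...which the CA signs, producing a cert the client presents to the server
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 1
# The server can now verify the client cert against the CA (and vice versa for a server cert)
openssl verify -CAfile ca.crt client.crt
```

In a mesh such as Istio or Linkerd, this issuance and rotation is automated for every workload.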
Examples:
-
Example_1: Using mTLS:
TBD!
Useful official documentation
Useful non-official documentation
- Why we need mTLS
- Kubernetes mTLS
- mTLS
- Istio
- Istio auto mutual TLS
- What is mTLS and How to implement it with Istio
- Linkerd
Use distroless, UBI minimal, Alpine, or, where relevant to your app, minimal builds of images such as Node.js or Python. Do not include software the container does not need at runtime, e.g. build tools and utilities, or troubleshooting and debug binaries. The smaller the base image footprint, the less vulnerable your containers are. Use minimal base images and avoid adding unnecessary packages or services to them.
Examples:
-
Example_1: Create a Pod named nginx-sha-pod which uses the image digest nginx@sha256:ca045ecbcd423cec50367a194d61fd846a6f0964f4999e8d692e5fcf7ebc903f:
k run nginx-sha-pod --image=nginx@sha256:ca045ecbcd423cec50367a194d61fd846a6f0964f4999e8d692e5fcf7ebc903f
-
Example_2: Convert the existing Deployment nginx-sha-deployment to use the image digest of the current tag instead of the tag:
Get the labels of the deployment:
k get deploy nginx-sha-deployment --show-labels
Get the pods with that label:
k get pod -l app=nginx-sha-deployment -oyaml | grep imageID
Edit the deployment and put in the needed SHA:
k edit deploy nginx-sha-deployment
Checks:
k get pod -l app=nginx-sha-deployment -oyaml | grep image:
-
Example_3: Container Image Footprint:
In the current folder you have a Dockerfile; let's build it with the golden-image name:
docker build -t golden-image .
Run a container named container-1:
docker run --name container-1 -d golden-image
-
Example_4: Harden a given Docker Container:
There is a Dockerfile at /root/Dockerfile. It's a simple container which tries to make a curl call to an imaginary API with a secret token; the call will 404, but that's okay:
- Use specific version 20.04 for the base image
- Remove layer caching issues with apt-get
- Remove the hardcoded secret value 2e064aad-3a90-4cde-ad86-16fad1f8943e. The secret value should be passed into the container during runtime as the env variable TOKEN
- Make it impossible to docker exec, podman exec or kubectl exec into the container using bash
Dockerfile (before):
FROM ubuntu
RUN apt-get update
RUN apt-get -y install curl
ENV URL https://google.com/this-will-fail?secret-token=
CMD ["sh", "-c", "curl --head $URL=2e064aad-3a90-4cde-ad86-16fad1f8943e"]
Dockerfile (after):
FROM ubuntu:20.04
RUN apt-get update && apt-get -y install curl
ENV URL https://google.com/this-will-fail?secret-token=
RUN rm /usr/bin/bash
CMD ["sh", "-c", "curl --head $URL$TOKEN"]
Testing:
podman build -t app .
podman run -d -e TOKEN=6666666-5555555-444444-33333-22222-11111 app sleep 1d
podman ps | grep app
podman exec -it 4a848daec2e2 bash   # fails
podman exec -it 4a848daec2e2 sh     # works
Useful official documentation
- None
Useful non-official documentation
- 7 best practices for building containers
- Smaller Docker images
- Kubernetes best practices how and why to build small container images
- Best practices for building containers
- Multi stages for Docker
- Tips to reduce Docker image sizes
- Docker Image Security Best Practices
- 3 simple tricks for smaller Docker images
- Top 20 Dockerfile best practices
- Checkov
Securing the images that are allowed to run in your cluster is essential. It's important to verify that pulled base images come from valid sources. This can be done with the ImagePolicyWebhook admission controller.
Examples:
-
Example_1: Use ImagePolicyWebhook:
First of all, let's create the admission config /etc/kubernetes/policywebhook/admission_config.json:
{
  "apiVersion": "apiserver.config.k8s.io/v1",
  "kind": "AdmissionConfiguration",
  "plugins": [
    {
      "name": "ImagePolicyWebhook",
      "configuration": {
        "imagePolicy": {
          "kubeConfigFile": "/etc/kubernetes/policywebhook/kubeconf",
          "allowTTL": 150,
          "denyTTL": 50,
          "retryBackoff": 500,
          "defaultAllow": false
        }
      }
    }
  ]
}
Then, create /etc/kubernetes/policywebhook/kubeconf with the settings. For example:
apiVersion: v1
kind: Config

# clusters refers to the remote service.
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/policywebhook/external-cert.pem  # CA for verifying the remote service.
    server: https://localhost:1234  # URL of remote service to query. Must use 'https'.
  name: image-checker

contexts:
- context:
    cluster: image-checker
    user: api-server
  name: image-checker
current-context: image-checker
preferences: {}

# users refers to the API server's webhook configuration.
users:
- name: api-server
  user:
    client-certificate: /etc/kubernetes/policywebhook/apiserver-client-cert.pem  # cert for the webhook admission controller to use
    client-key: /etc/kubernetes/policywebhook/apiserver-client-key.pem           # key matching the cert
The /etc/kubernetes/manifests/kube-apiserver.yaml configuration of kube-apiserver, for example:
---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.30.1.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.30.1.2
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
    - --admission-control-config-file=/etc/kubernetes/policywebhook/admission_config.json
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.k8s.io/kube-apiserver:v1.27.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.30.1.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 172.30.1.2
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 50m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 172.30.1.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/kubernetes/policywebhook
      name: policywebhook
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priority: 2000001000
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/policywebhook
      type: DirectoryOrCreate
    name: policywebhook
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
Checks
crictl ps -a | grep api
crictl logs 91c61357ef147
k run pod --image=nginx
Error from server (Forbidden): pods "pod" is forbidden: Post "https://localhost:1234/?timeout=30s": dial tcp 127.0.0.1:1234: connect: connection refused
Useful official documentation
Useful non-official documentation
This is fairly straightforward: you will need to vet the configuration of Kubernetes YAML files and Dockerfiles and fix any security issues.
Examples:
-
Example_1: Static Manual Analysis Docker:
You must be able to read a Dockerfile and fix it according to best practices (without any tools).
-
Example_2: Static Manual analysis k8s:
You must be able to read the YAML files of deployments/pods/etc. and fix them according to best practices (without any tools).
Useful official documentation
- None
Useful non-official documentation
Use trivy to scan images in the applications and infra namespaces and determine whether the images have the CVE-2021-28831 and/or CVE-2016-9841 vulnerabilities. Scale the affected Deployments down to 0 if you find anything.
Getting images:
k -n applications get pod -oyaml | grep image: | sort | uniq
- image: nginx:1.20.2-alpine
- image: nginx:1.19.1-alpine-perl
image: docker.io/library/nginx:1.20.2-alpine
image: docker.io/library/nginx:1.19.1-alpine-perl
Let's scan first deployment:
trivy image nginx:1.19.1-alpine-perl | grep CVE-2021-28831
trivy image nginx:1.19.1-alpine-perl | grep CVE-2016-9841
Let's scan second deployment:
trivy image nginx:1.20.2-alpine | grep CVE-2021-28831
trivy image nginx:1.20.2-alpine | grep CVE-2016-9841
Hit on the first one, so we scale down:
k -n applications scale deploy web1 --replicas 0
Useful official documentation
- None
Useful non-official documentation
Perform behavioural analytics of syscall process and file activities at the host and container level to detect malicious activities.
Examples:
-
Example_1: Use seccomp:
TBD: Restrict a Container's Syscalls with seccomp.
To see which syscalls a process makes (useful when building a seccomp profile), you can list them with strace; this prints the names of the syscalls that chmod performs:
strace -c -f -S name chmod 2>&1 1>/dev/null | tail -n +3 | head -n -2 | awk '{print $(NF)}'
Useful official documentation
Useful non-official documentation
- How to detect Kubernetes vulnerability with falco
- Falco 101
- Helm-chart to deploy Falco
- Detect CVE-2020 and CVE-8557
Examples:
-
Example_1
Create a new rule to detect a shell inside a container, only for nginx Pods, with the format `Shell in container: TIMESTAMP,USER,COMMAND/SHELL`. Set the priority to `CRITICAL`. Enable file output into the `/var/log/falco.txt` file.
First of all, let's start with the file output: open the /etc/falco/falco.yaml file, find the relevant lines, and set something like:
file_output:
  enabled: true
  keep_alive: false
  filename: /var/log/falco.txt
Now, let's configure a custom output for the "Terminal shell in container" rule. Open the /etc/falco/falco_rules.local.yaml file and put:
- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and container.name = "nginx"
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    Shell in container: %evt.time,%user.name,%proc.cmdline
  priority: CRITICAL
  tags: [container, shell, mitre_execution]
Restart Falco service:
service falco restart && service falco status
Checks:
k run nginx --image=nginx:alpine
k exec -it nginx -- sh
cat /var/log/syslog | grep falco | grep -Ei "Shell in container"
-
Example_2
Create a new rule to detect a shell inside a container, only for nginx Pods, with the format `Shell in container: TIMESTAMP,USER,COMMAND/SHELL`. Set the priority to `CRITICAL`. Enable file output into the `/var/log/falco.txt` file.
First of all, let's start with the file output: open the /etc/falco/falco.yaml file, find the relevant lines, and set something like:
file_output:
  enabled: true
  keep_alive: false
  filename: /var/log/falco.txt
Now, let's configure a custom output for the "Terminal shell in container" rule. Open the /etc/falco/falco_rules.local.yaml file and put:
- macro: app_nginx
  condition: container and container.image contains "nginx"

- list: nginx_allowed_processes
  items: ["nginx", "app-entrypoint.", "basename", "dirname", "grep", "nami", "node", "tini"]

- rule: Terminal shell in container
  desc: A shell was used as the entrypoint/exec point into a container with an attached terminal.
  condition: >
    spawned_process and app_nginx
    and not proc.name in (nginx_allowed_processes)
    and shell_procs and proc.tty != 0
    and container_entrypoint
    and not user_expected_terminal_shell_in_container_conditions
  output: >
    Shell in container: %evt.time,%user.name,%proc.cmdline
  priority: CRITICAL
  tags: [container, shell, mitre_execution, app_nginx]
Restart Falco service:
service falco restart && service falco status
Checks:
k run nginx --image=nginx:alpine
k exec -it nginx -- sh
cat /var/log/syslog | grep falco | grep -Ei "Shell in container"
Useful official documentation
Useful non-official documentation
- Common Kubernetes config security threats
- Guidance on Kubernetes threat modeling
- Attack matrix Kubernetes
This part of the task can be done with OPA, for example, by allowing image pulls from private container image registries only.
Useful official documentation
Useful non-official documentation
- Attack matrix Kubernetes
- Mitre attck framework for container runtime security with sysdig falco
- Mitigating Kubernetes attacks
- Anatomy Kubernetes attack how untrusted docker images fail us
- Webinar: Mitigating Kubernetes attacks
Falco can likely help take care of this: you can add Falco rules to detect who enters a container and under which UID.
Useful official documentation
Useful non-official documentation
- Monitoring Kubernetes with Sysdig
- CNCF Webinar: Getting started with container runtime security using Falco
- Kubernetes security
Immutability of Volumes (Secrets, ConfigMaps, VolumeMounts) can be achieved with the readOnly: true field on the mount:
volumeMounts:
- name: instance-creds
  mountPath: /secrets/creds
  readOnly: true
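Since Kubernetes v1.21, Secrets and ConfigMaps can additionally be marked immutable, which prevents any further edits to their data and lets the kubelet stop watching them; a minimal sketch (names and data are illustrative):

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config      # illustrative name
data:
  mode: production
immutable: true         # data can no longer be changed; delete and recreate instead
```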
Useful official documentation
Useful non-official documentation
- Principles of container app design
- Why I think we should all use immutable docker images
- Immutable infrastructure your systems can rise dead
The kube-apiserver allows us to capture logs at various stages of a request sent to it, including events at the metadata stage as well as the request and response bodies. Kubernetes allows us to define which stages we intend to capture. The following are the allowed stages in the Kubernetes audit logging framework:
- RequestReceived: As the name suggests, this stage captures the generated events as soon as the audit handler receives the request.
- ResponseStarted: This stage collects the events once the response headers are sent, but just before the response body is sent.
- ResponseComplete: This stage collects the events after the response body is sent completely.
- Panic: Events collected whenever the apiserver panics.
The level field in the rules list defines which properties of an event are recorded. An important aspect of audit logging in Kubernetes is that whenever an event is processed, it is matched against the rules defined in the config file in order; the first matching rule sets the audit level of the event. Kubernetes provides the following audit levels for the audit configuration:
- Metadata: Logs request metadata (requesting user/userGroup, timestamp, resource/subresource, verb, status, etc.) but not request or response bodies.
- Request: This level records the event metadata and request body but does not log the response body.
- RequestResponse: It is more verbose among all the levels as this level logs the Metadata, request, and response bodies.
- None: This disables logging of any event that matches the rule.
Examples:
-
Example_1: Create policy file.
Let's create a policy that logs Pod creation events inside the prod namespace.
Create the /etc/kubernetes/auditing/policy.yaml policy file:
---
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  - level: Metadata
    namespaces: ["prod"]
    verbs: ["create"]
    resources:
      - group: "" # core
        resources: ["pods"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
      - group: "" # core API group
      - group: "extensions" # Version of group should NOT be included.
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Don't log any other requests
  - level: None
    namespaces: ["*"]
    verbs: ["*"]
    resources:
      - group: "" # core
        resources: ["*"]
        resourceNames: ["*"]
Edit kube-api configuration:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following flags to enable auditing:
---
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/auditing/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit-logs/audit.log
    - --audit-log-maxsize=3
    - --audit-log-maxbackup=4
Add the new Volumes:
volumes:
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/auditing/policy.yaml
    type: File
- name: audit-logs
  hostPath:
    path: /etc/kubernetes/audit-logs
    type: DirectoryOrCreate
Add the new VolumeMounts:
volumeMounts:
- mountPath: /etc/kubernetes/auditing/policy.yaml
  name: audit-policy
  readOnly: true
- mountPath: /etc/kubernetes/audit-logs
  name: audit-logs
  readOnly: false
Checks:
crictl ps -a | grep api
-
Example_2: Configure the Apiserver for Audit Logging. The log path should be /etc/kubernetes/audit-logs/audit.log on the host and inside the container. The existing Audit Policy to use is at /etc/kubernetes/auditing/policy.yaml. The path should be the same on the host and inside the container. Also, set argument --audit-log-maxsize=3 and set argument --audit-log-maxbackup=4:
Edit kube-api configuration:
vim /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following flags to enable auditing:
---
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/auditing/policy.yaml
    - --audit-log-path=/etc/kubernetes/audit-logs/audit.log
    - --audit-log-maxsize=3
    - --audit-log-maxbackup=4
Add the new Volumes:
volumes:
- name: audit-policy
  hostPath:
    path: /etc/kubernetes/auditing/policy.yaml
    type: File
- name: audit-logs
  hostPath:
    path: /etc/kubernetes/audit-logs
    type: DirectoryOrCreate
Add the new VolumeMounts:
volumeMounts:
- mountPath: /etc/kubernetes/auditing/policy.yaml
  name: audit-policy
  readOnly: true
- mountPath: /etc/kubernetes/audit-logs
  name: audit-logs
  readOnly: false
Checks:
crictl ps -a | grep api
Useful official documentation
Useful non-official documentation
Examples:
-
Example_1: Use ReadOnly Root FileSystem. Create a new Pod named my-ro-pod in Namespace application of image busybox:1.32.0. Make sure the container keeps running, like using sleep 1d. The container root filesystem should be read-only:
Generate the configuration:
k -n application run my-ro-pod --image=busybox:1.32.0 -oyaml --dry-run=client --command -- sh -c 'sleep 1d' > my-ro-pod.yaml
Edit it to:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-ro-pod
  name: my-ro-pod
  namespace: application
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.32.0
    name: my-ro-pod
    securityContext:
      readOnlyRootFilesystem: true
  dnsPolicy: ClusterFirst
  restartPolicy: Always
Useful official documentation
Useful non-official documentation
- None
- Container Security
- Kubernetes Security
- Learn Kubernetes security: Securely orchestrate, scale, and manage your microservices in Kubernetes deployments
- Downloaded books inside this project
- Kubernetes Security Best Practices - Ian Lewis, Google
- Learn Kubernetes Security
- Let's Learn Kubernetes Security
- Webinar | Certified Kubernetes Security Specialist (CKS), January 2022
- Killer.sh CKS practice exam
- Kim Wüstkamp's course on Udemy: Kubernetes CKS 2023 Complete Course - Theory - Practice
- Linux Foundation Kubernetes Security essentials LFS 260
- KodeKloud "Certified Kubernetes Security Specialist (CKS)" course
- Falco 101
- Killer Shell CKS - Interactive Scenarios for Kubernetes Security
- Linux Foundation Kubernetes Certifications Now Include Exam Simulator
- k8simulator
Created and maintained by Vitalii Natarov. Email: vitaliy.natarov@yahoo.com.
Apache 2 Licensed. See LICENSE for full details.