Exam4Training

The Linux Foundation CKS Certified Kubernetes Security Specialist (CKS) Online Training

Question #1

CORRECT TEXT

a. Retrieve the content of the existing secret named default-token-xxxxx in the testing namespace.

Store the value of the token in the file token.txt.

b. Create a new secret named test-db-secret in the DB namespace with the following content:

username: mysql

password: password@123

Create a Pod named test-db-pod using the image nginx in the namespace db that can access test-db-secret via a volume at the path /etc/mysql-credentials.


Correct Answer: To add a Kubernetes cluster to your project, group, or instance:

✑ Navigate to your:

✑ Click Add Kubernetes cluster.

✑ Click the Add existing cluster tab and fill in the details:

Get the API URL by running this command:

kubectl cluster-info | grep -E 'Kubernetes master|Kubernetes control plane' | awk '/http/ {print $NF}'


kubectl get secret <secret name> -o jsonpath="{['data']['ca.crt']}"
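For the task itself, a minimal sketch of the commands (default-token-xxxxx is the placeholder name from the question, and token.txt is written to the current working directory):

kubectl get secret default-token-xxxxx -n testing -o jsonpath='{.data.token}' | base64 -d > token.txt
kubectl create secret generic test-db-secret -n db --from-literal=username=mysql --from-literal=password=password@123
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: test-db-pod
  namespace: db
spec:
  containers:
  - name: test-db-pod
    image: nginx
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/mysql-credentials
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: test-db-secret
EOF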

Question #2

CORRECT TEXT

Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.

Create a Pod using the image alpine:3.13.2 in the namespace default to run on the gVisor runtime class.

Verify: Exec into the Pod and run dmesg; the output should show gVisor's kernel messages.


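A minimal sketch of one way to complete this task, assuming the runsc handler is already configured on the node (the Pod name untrusted-pod is illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-pod
  namespace: default
spec:
  runtimeClassName: untrusted
  containers:
  - name: alpine
    image: alpine:3.13.2
    command: ["sleep", "3600"]
EOF
kubectl exec -n default untrusted-pod -- dmesg    # output should report the gVisor kernel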
Question #5

Pods configured to be privileged in any way must be treated as potentially not stateless or not immutable.

Reveal Solution Hide Solution

Correct Answer:

k get pods -n prod

k get pod <pod-name> -n prod -o yaml | grep -E 'privileged|readOnlyRootFilesystem'

Delete the pods that have either privileged: true or readOnlyRootFilesystem: false.

[desk@cli]$ k get pods -n prod

NAME READY STATUS RESTARTS AGE

cms 1/1 Running 0 68m

db 1/1 Running 0 4m

nginx 1/1 Running 0 23m

[desk@cli]$ k get pod nginx -n prod -o yaml | grep -E 'privileged|RootFilesystem'

{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"run":"nginx"},"name":"nginx","namespace":"prod"},"spec":{"containers":[{"image":"nginx","name":"nginx","resources":{},"securityContext":{"privileged":true}}],"dnsPolicy":"ClusterFirst","restartPolicy":"Always"},"status":{}}

f:privileged: {}

privileged: true

[desk@cli]$ k delete pod nginx -n prod

[desk@cli]$ k get pod db -n prod -o yaml | grep -E 'privileged|RootFilesystem'

[desk@cli]$ k get pod cms -n prod -o yaml | grep -E 'privileged|RootFilesystem'
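A quicker way to inspect every Pod in the namespace at once is a jsonpath query; a minimal sketch (it assumes a single container per Pod, which matches the listing above):

kubectl get pods -n prod -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].securityContext}{"\n"}{end}'

Any Pod whose securityContext shows privileged: true, or has readOnlyRootFilesystem missing or false, is then deleted as in the session above.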



Question #6

CORRECT TEXT

Cluster: scanner

Master node: controlplane

Worker node: worker1

You can switch the cluster/configuration context using the following command:

[desk@cli] $ kubectl config use-context scanner

Given:

You may use Trivy’s documentation.

Task:

Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the namespace nato.

Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.

Trivy is pre-installed on the cluster's master node; run it from there.


Correct Answer:

[controlplane@cli] $ k get pods -n nato -o yaml | grep "image: "

[controlplane@cli] $ trivy image <image-name>

[controlplane@cli] $ k delete pod <vulnerable-pod> -n nato

[desk@cli] $ ssh controlnode

[controlplane@cli] $ k get pods -n nato

NAME READY STATUS RESTARTS AGE

alohmora 1/1 Running 0 3m7s

c3d3 1/1 Running 0 2m54s

neon-pod 1/1 Running 0 2m11s

thor 1/1 Running 0 58s

[controlplane@cli] $ k get pods -n nato -o yaml | grep "image: "

[controlplane@cli] $ trivy image <image-name>

Note: Two of the images have vulnerabilities of HIGH or CRITICAL severity (nginx:latest and alpine:3.7), so delete the Pods that use those images.

[controlplane@cli] $ k delete pod thor -n nato
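To check every image in the namespace in one pass, a sketch of a small loop (Trivy's --severity and --exit-code flags are standard options; the loop itself is illustrative):

for img in $(kubectl get pods -n nato -o jsonpath='{.items[*].spec.containers[*].image}' | tr ' ' '\n' | sort -u); do
  # a non-zero exit code means at least one HIGH/CRITICAL vulnerability was found
  if trivy image --severity HIGH,CRITICAL --exit-code 1 --quiet "$img" > /dev/null; then
    echo "clean:      $img"
  else
    echo "vulnerable: $img"
  fi
done

Any Pod whose image is reported as vulnerable is then deleted with k delete pod <pod-name> -n nato.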


Question #7

CORRECT TEXT

On the cluster worker node, enforce the prepared AppArmor profile:

✑ #include <tunables/global>

✑ profile nginx-deny flags=(attach_disconnected) {

✑ #include <abstractions/base>

✑ file,

✑ # Deny all file writes.

✑ deny /** w,

✑ }

Edit the prepared manifest file to include the AppArmor profile.

✑ apiVersion: v1

✑ kind: Pod

✑ metadata:

✑ name: apparmor-pod

✑ spec:

✑ containers:

✑ – name: apparmor-pod

✑ image: nginx

Finally, apply the manifest file to create the Pod specified in it.

Verify: Try to create a file inside a restricted directory; the write should be denied.


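A minimal sketch of one way to complete this task, assuming the profile above is saved on the worker node as /etc/apparmor.d/nginx-deny and the manifest as apparmor-pod.yaml (both file names are illustrative):

# On the worker node: load the profile into the kernel in enforce mode
sudo apparmor_parser -q /etc/apparmor.d/nginx-deny
sudo aa-status | grep nginx-deny

# In the manifest, reference the profile through the AppArmor annotation
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/apparmor-pod: localhost/nginx-deny
spec:
  containers:
  - name: apparmor-pod
    image: nginx

# Apply the manifest and verify that writes are denied inside the container
kubectl apply -f apparmor-pod.yaml
kubectl exec apparmor-pod -- touch /tmp/test    # should fail with "Permission denied"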
Question #10

Use the runtime detection tool sysdig.

Tools are pre-installed on the worker1 node only.

Analyse the container's behaviour for at least 40 seconds, using filters that detect newly spawning and executing processes.

Store an incident file at /home/cert_masters/report, in the following format:

[timestamp],[uid],[processName]

Note: Make sure to store the incident file on the cluster's worker node; do not move it to the master node.


Correct Answer:

$ vim /etc/falco/falco_rules.local.yaml

$ kill -1 <PID of falco>

Explanation:

[desk@cli] $ ssh node01

[node01@cli] $ vim /etc/falco/falco_rules.yaml

Search for the "Container Drift Detected" rule and paste it into falco_rules.local.yaml:

[node01@cli] $ vim /etc/falco/falco_rules.local.yaml

- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name    # change the output to this format (refer to the Falco documentation)
  priority: ERROR

[node01@cli] $ vim /etc/falco/falco.yaml
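Since the task names sysdig, a sketch of the equivalent sysdig invocation (the container name target-container is a placeholder; -M bounds the capture at 40 seconds and -p sets the output format to the requested fields):

[node01@cli] $ sudo sysdig -M 40 -p "%evt.time,%user.uid,%proc.name" \
    container.name=target-container and evt.type=execve > /home/cert_masters/report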

Question #11

CORRECT TEXT

Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.

Fix all of the following violations that were found against the API server:

✑ a. Ensure that the RotateKubeletServerCertificate argument is set to true.

✑ b. Ensure that the admission control plugin PodSecurityPolicy is set.

✑ c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.

Fix all of the following violations that were found against the Kubelet:

✑ a. Ensure that the --anonymous-auth argument is set to false.

✑ b. Ensure that the --authorization-mode argument is set to Webhook.

Fix all of the following violations that were found against etcd:

✑ a. Ensure that the --auto-tls argument is not set to true.

✑ b. Ensure that the --peer-auto-tls argument is not set to true.

Hint: Use the kube-bench tool.

Reveal Solution Hide Solution

Correct Answer: Fix all of the following violations that were found against the API server:

✑ a. Ensure that the RotateKubeletServerCertificate argument is set to true.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kubelet
    tier: control-plane
  name: kubelet
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
+   - --feature-gates=RotateKubeletServerCertificate=true
    image: gcr.io/google_containers/kubelet-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kubelet
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki

✑ b. Ensure that the admission control plugin PodSecurityPolicy is set.

audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--enable-admission-plugins"
    compare:
      op: has
      value: "PodSecurityPolicy"
    set: true
remediation: |
  Follow the documentation and create Pod Security Policy objects as per your environment.
  Then, edit the API server pod specification file $apiserverconf on the master node and
  set the --enable-admission-plugins parameter to a value that includes PodSecurityPolicy:
  --enable-admission-plugins=...,PodSecurityPolicy,...
  Then restart the API Server.
scored: true

✑ c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.

audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--kubelet-certificate-authority"
    set: true
remediation: |
  Follow the Kubernetes documentation and set up the TLS connection between the apiserver and kubelets.
  Then, edit the API server pod specification file $apiserverconf on the master node and set the
  --kubelet-certificate-authority parameter to the path of the certificate authority's cert file:
  --kubelet-certificate-authority=<ca-string>
scored: true

Fix all of the following violations that were found against etcd:

✑ a. Ensure that the --auto-tls argument is not set to true.

Edit the etcd pod specification file $etcdconf on the master node and either remove the --auto-tls parameter or set it to false: --auto-tls=false

✑ b. Ensure that the --peer-auto-tls argument is not set to true.

Edit the etcd pod specification file $etcdconf on the master node and either remove the --peer-auto-tls parameter or set it to false: --peer-auto-tls=false
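For the Kubelet findings (anonymous auth and the authorization mode), a sketch of the usual remediation, assuming the kubelet reads its configuration from /var/lib/kubelet/config.yaml:

# /var/lib/kubelet/config.yaml (relevant fields only)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false          # equivalent to --anonymous-auth=false
authorization:
  mode: Webhook             # equivalent to --authorization-mode=Webhook

# Restart the kubelet so the new settings take effect
systemctl daemon-reload
systemctl restart kubelet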


Question #15

Create the Pod using this manifest


Correct Answer:

[desk@cli] $ ssh worker1

[worker1@cli] $ apparmor_parser -q /etc/apparmor.d/nginx

[worker1@cli] $ aa-status | grep nginx

nginx-profile-1

[worker1@cli] $ logout

[desk@cli] $ vim nginx-deploy.yaml

Add this line under metadata.annotations:

container.apparmor.security.beta.kubernetes.io/<container-name>: localhost/nginx-profile-1

[desk@cli] $ kubectl apply -f nginx-deploy.yaml

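A sketch of where the annotation sits in nginx-deploy.yaml (the Pod name and container name nginx are assumptions based on the commands above):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    # profile loaded on worker1 with apparmor_parser
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/nginx-profile-1
spec:
  containers:
  - name: nginx
    image: nginx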


Question #16

CORRECT TEXT

Using the runtime detection tool Falco, analyse the container behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes. Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format:

[timestamp],[uid],[user-name],[processName]


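A minimal sketch of one approach, assuming Falco's rule files live under /etc/falco (the 35-second window is illustrative, and the output may need light post-processing depending on how Falco prefixes its events):

# 1. In /etc/falco/falco_rules.local.yaml, override the output of the relevant
#    spawned-process rule so it emits exactly the requested fields:
#      output: "%evt.time,%user.uid,%user.name,%proc.name"

# 2. Run Falco for at least 30 seconds and write the matching events to the incident file:
falco -M 35 > /opt/falco-incident.txt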
Question #17

CORRECT TEXT

Create a new ServiceAccount named backend-sa in the existing namespace default, which has the capability to list the pods inside the namespace default.

Create a new Pod named backend-pod in the namespace default, mount the newly created ServiceAccount backend-sa to the Pod, and verify that the Pod is able to list pods.

Ensure that the Pod is running.


Correct Answer: A service account provides an identity for processes that run in a Pod.

When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).

When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.

You can access the API from inside a pod using automatically mounted service account credentials, as described in Accessing the Cluster. The API permissions of the service account depend on the authorization plugin and policy in use.

In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false

In version 1.6+, you can also opt out of automounting API credentials for a particular pod:

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false

The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
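For the task itself, a minimal sketch (the Role name pod-reader, the RoleBinding name backend-sa-binding, and the nginx image are illustrative choices):

kubectl create serviceaccount backend-sa -n default
kubectl create role pod-reader -n default --verb=list --resource=pods
kubectl create rolebinding backend-sa-binding -n default --role=pod-reader --serviceaccount=default:backend-sa

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend-pod
    image: nginx
EOF

# Verify that the mounted ServiceAccount can list pods and that the Pod is running
kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa
kubectl get pod backend-pod -n default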
