Introduction#
- question 18 probes
- question 19 observability
- question 20 troubleshooting a container
- question 21 query labels
Question 18 Probes#
- create a new pod hello with the image speedracer5dave/nodejs-hello-world
- expose the port 3000 and name the port nodejs-port
- add a readiness probe that checks the URL path (root /) on the port named above after a 2 second delay
- add a liveness probe that verifies the app is running every 8 seconds by checking the URL path on the port
- the probe should start with a 5 second delay
- shell into the container and curl localhost:3000; note the message, then retrieve the logs from the container
create a new pod
kubectl run hello --image=speedracer5dave/nodejs-hello-world --port=3000 --dry-run=client --output=yaml > q18.yaml
edit the q18.yaml to add probes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: hello
  name: hello
spec:
  containers:
  - image: speedracer5dave/nodejs-hello-world
    name: hello
    ports:
    - containerPort: 3000
      name: nodejs-port
    resources: {}
    livenessProbe:
      httpGet:
        path: /
        port: nodejs-port
      initialDelaySeconds: 5
      periodSeconds: 8
    readinessProbe:
      httpGet:
        path: /
        port: nodejs-port
      initialDelaySeconds: 2
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
then apply
kubectl create -f q18.yaml
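optionally, confirm both probes were registered before shelling in; the Liveness and Readiness lines in the describe output should match the YAML above:
kubectl describe pod hello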
shell into the pod
kubectl exec hello -it -- /bin/sh
curl localhost:3000 and note the message
curl localhost:3000
exit and get logs
kubectl logs hello
delete pods
kubectl delete -f q18.yaml
Question 19 Observability#
- define a new pod named webserver with the image of nginx in a YAML file, expose port 80 and do not create it yet
- declare a startup probe of type httpGet, verify the root context endpoint can be called, use the default config for the probe
- declare a readiness probe of type httpGet, verify the root context endpoint can be called, wait 5 seconds before checking for the first time
- declare a liveness probe of type httpGet, verify the root context endpoint can be called, wait 10 seconds before the first check, the probe should run the check every 30 seconds
- create the pod and follow its lifecycle as it comes up
- retrieve the metrics of the pod (CPU/memory) from the metrics server
- create a pod named custom-cmd with the image busybox, the container should run the command top-analyzer with the command flag --all, inspect the status. How would you troubleshoot the pod to identify the root cause of the failure?
before beginning, I have to install the metrics server on my AWS EKS cluster
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
then verify that metrics-server is running
kubectl get deployment metrics-server -n kube-system
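optionally, block until the rollout finishes; metrics can take a short while to become available after the apply:
kubectl rollout status deployment metrics-server -n kube-system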
then check node and pod metrics
kubectl top nodes
kubectl top pods
create a pod named webserver
kubectl run webserver --image=nginx --port=80 --dry-run=client --output=yaml > q19.yaml
then update the yaml with probes (readiness, liveness, and startup)
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: webserver
  name: webserver
spec:
  containers:
  - image: nginx
    name: webserver
    ports:
    - containerPort: 80
      name: checkport
    resources: {}
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 30
    startupProbe:
      httpGet:
        path: /
        port: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
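create the pod and follow the lifecycle as it moves from Pending to Running (-w streams status changes; Ctrl+C to stop watching):
kubectl create -f q19.yaml
kubectl get pod webserver -w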
check the result; the probe endpoints show up in the describe output
kubectl describe pod webserver
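the question also asks for the pod's CPU and memory; with metrics-server installed earlier, kubectl top covers that (it can take a minute before numbers show up for a new pod):
kubectl top pod webserver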
create the custom-cmd pod, then work out why it errors
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: custom-cmd
  name: custom-cmd
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - top-analyzer --all
    image: busybox
    name: custom-cmd
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
describe the custom-cmd pod and get its logs
kubectl describe pod custom-cmd
kubectl logs custom-cmd
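the logs should show that top-analyzer cannot be found, since busybox ships no such binary; the container exits immediately and, with restartPolicy Always, the pod ends up in CrashLoopBackOff. To pull just the container state out of the status, jsonpath is a handy shortcut (assuming a single container in the pod):
kubectl get pod custom-cmd -o jsonpath='{.status.containerStatuses[0].state}'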
Question 20 Troubleshooting#
- create a new pod from the YAML file debugpod.yaml at https://github.com/speedracer55/ckad-files
- check the pod's status. Do you see any issues?
- follow the logs to find the issue
- fix the issue by shelling into the container. After the fix, the current date should be written to a file
create a new pod from the YAML file (saved locally as q20.yaml)
kubectl create -f q20.yaml
follow the logs to find the issue
kubectl logs failing-pod
then you see an issue: the tmp directory does not exist yet
nonexistent directory /root/tmp
shell into the pod and create the tmp directory
kubectl exec failing-pod -it -- /bin/sh
then create the tmp directory
mkdir /root/tmp
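while still in the shell, give the loop a few seconds and confirm dates are being appended (the script writes to ~/tmp/curr-date.txt, which resolves to /root/tmp as root):
cat /root/tmp/curr-date.txt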
then exit and double-check the logs again
kubectl logs failing-pod --since=60s
the output should be empty, with no new errors in the last 60 seconds of logs
for reference, the failing pod's manifest:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: failing-pod
  name: failing-pod
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - while true; do echo $(date) >> ~/tmp/curr-date.txt; sleep 5; done;
    image: busybox
    name: failing-pod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
Question 21 Query Labels#
- create 3 different pods with names of web, batch and database and use the image of nginx
- declare labels for these pods as follows
- web: env=prod, tier=web
- batch: env=prod, tier=batch
- database: env=prod, tier=database
- list the pods and their labels
- use label selectors to query for all production pods belonging to the web and batch tiers
- remove the label tier from the database pod
create 3 pods
kubectl run web --image=nginx --labels=env=prod,tier=web --dry-run=client --output=yaml > web.yaml
kubectl run batch --image=nginx --labels=env=prod,tier=batch --dry-run=client --output=yaml > batch.yaml
kubectl run database --image=nginx --labels=env=prod,tier=database --dry-run=client --output=yaml > database.yaml
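the dry-run only writes the manifests, so create the pods from the generated files:
kubectl create -f web.yaml -f batch.yaml -f database.yaml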
check
kubectl get pods --show-labels
list the pods by filtering labels
kubectl get pods -l 'env in (prod), tier in (web,batch)'
remove label
kubectl label pod database tier-
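double-check that the tier label is gone from the database pod:
kubectl get pods --show-labels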