3 - Pod design Flashcards
Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1
for i in $(seq 1 3); do kubectl run nginx$i --image=nginx -l app=v1; done
Show all labels of the pods
kubectl get po --show-labels
Change the labels of pod ‘nginx2’ to be app=v2
kubectl label po nginx2 app=v2 --overwrite
Get the label ‘app’ for the pods (show a column with APP labels)
kubectl get po -L app
# or
kubectl get po --label-columns=app
Get only the ‘app=v2’ pods
kubectl get po -l app=v2
# or
kubectl get po -l 'app in (v2)'
# or
kubectl get po --selector=app=v2
Add a new label tier=web to all pods having ‘app=v2’ or ‘app=v1’ labels
kubectl label po -l "app in (v1,v2)" tier=web
Add an annotation ‘owner: marketing’ to all pods having ‘app=v2’ label
kubectl annotate po -l "app=v2" owner=marketing
Remove the ‘app’ label from the pods we created before
kubectl label po nginx1 nginx2 nginx3 app-
# or
kubectl label po nginx{1..3} app-
# or
kubectl label po -l app app-
Annotate pods nginx1, nginx2, nginx3 with "description='my description'" value
kubectl annotate po nginx1 nginx2 nginx3 description='my description'
# or
kubectl annotate po nginx{1..3} description='my description'
Check the annotations for pod nginx1
kubectl annotate pod nginx1 --list
# or
kubectl describe po nginx1 | grep -i 'annotations'
# or
kubectl get po nginx1 -o custom-columns=Name:metadata.name,ANNOTATIONS:metadata.annotations.description
Remove the annotations for these three pods
kubectl annotate po nginx{1..3} description- owner-
Remove these pods to have a clean state in your cluster
kubectl delete po nginx{1..3}
Create a pod that will be deployed to a Node that has the label ‘accelerator=nvidia-tesla-p100’
kubectl label nodes <your-node-name> accelerator=nvidia-tesla-p100
kubectl get nodes --show-labels
apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
  nodeSelector: # add this
    accelerator: nvidia-tesla-p100 # the selection label
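# note: if no node carries the accelerator=nvidia-tesla-p100 label, the pod will stay Pending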
# Alternatively, use node affinity (run kubectl explain po.spec.affinity to explore the fields):
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: accelerator
                operator: In
                values:
                  - nvidia-tesla-p100
  containers:
    ...
Taint a node with key tier and value frontend with the effect NoSchedule. Then, create a pod that tolerates this taint.
kubectl taint node node1 tier=frontend:NoSchedule # key=value:Effect
kubectl describe node node1 # view the taints on a node
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: nginx
      image: nginx
  tolerations:
    - key: "tier"
      operator: "Equal"
      value: "frontend"
      effect: "NoSchedule"
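# note: a toleration only allows the pod onto the tainted node, it does not force scheduling there; combine it with nodeSelector or affinity to pin the pod, as in the next exercise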
Create a pod that will be placed on node controlplane. Use nodeSelector and tolerations.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
    - name: nginx
      image: nginx
  nodeSelector:
    kubernetes.io/hostname: controlplane
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
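# note: older clusters may taint the control plane with the key node-role.kubernetes.io/master instead; check with kubectl describe node controlplane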
Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don’t create a service for this deployment)
kubectl create deployment nginx --image=nginx:1.18.0 --dry-run=client -o yaml > deploy.yaml
vi deploy.yaml
# change the replicas field from 1 to 2
# add this section to the container spec and save the deploy.yaml file
# ports:
#   - containerPort: 80
kubectl apply -f deploy.yaml
# or, in one command
kubectl create deploy nginx --image=nginx:1.18.0 --replicas=2 --port=80
View the YAML of this deployment
kubectl get deploy nginx -o yaml
View the YAML of the replica set that was created by this deployment
kubectl describe deploy nginx # you'll see the name of the replica set in the Events section and in the 'NewReplicaSet' property
# OR you can find the rs directly:
kubectl get rs -l run=nginx # if you created the deployment with the 'run' command
kubectl get rs -l app=nginx # if you created the deployment with the 'create' command
# you could also just do kubectl get rs
kubectl get rs nginx-7bf7478b77 -o yaml
Get the YAML for one of the pods
kubectl get po # get all the pods
# OR you can find pods directly by:
kubectl get po -l run=nginx # if you created the deployment with the 'run' command
kubectl get po -l app=nginx # if you created the deployment with the 'create' command
kubectl get po nginx-7bf7478b77-gjzp8 -o yaml
Check how the deployment rollout is going
kubectl rollout status deploy nginx
Update the nginx image to nginx:1.19.8
kubectl set image deploy nginx nginx=nginx:1.19.8
# alternatively…
kubectl edit deploy nginx # change the .spec.template.spec.containers[0].image
Check the rollout history and confirm that the replicas are OK
kubectl rollout history deploy nginx
kubectl get deploy nginx
kubectl get rs # check that a new replica set has been created
kubectl get po
Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0)
kubectl rollout undo deploy nginx
# wait a bit
kubectl get po # select one 'Running' Pod
kubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.18.0
Deliberately update the deployment with a wrong image, nginx:1.91
kubectl set image deploy nginx nginx=nginx:1.91
# or
kubectl edit deploy nginx
# change the image to nginx:1.91
# vim tip: type (without quotes) '/image' and press Enter, to navigate quickly
Verify that something’s wrong with the rollout
kubectl rollout status deploy nginx
# or
kubectl get po # you'll see 'ErrImagePull' or 'ImagePullBackOff'
Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8
kubectl rollout undo deploy nginx --to-revision=2
kubectl describe deploy nginx | grep Image:
kubectl rollout status deploy nginx # Everything should be OK
Check the details of the fourth revision (number 4)
kubectl rollout history deploy nginx --revision=4 # you'll also see the wrong image displayed here
Scale the deployment to 5 replicas
kubectl scale deploy nginx --replicas=5
kubectl get po
kubectl describe deploy nginx
Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%
kubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80
# view the horizontalpodautoscalers.autoscaling for nginx
kubectl get hpa nginx
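# note: the HPA needs a metrics source such as metrics-server in the cluster; without one, the TARGETS column shows <unknown>/80%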
Pause the rollout of the deployment
kubectl rollout pause deploy nginx
Update the image to nginx:1.19.9 and check that there’s nothing going on, since we paused the rollout
kubectl set image deploy nginx nginx=nginx:1.19.9
# or
kubectl edit deploy nginx
# change the image to nginx:1.19.9
kubectl rollout history deploy nginx # no new revision
Resume the rollout and check that the nginx:1.19.9 image has been applied
kubectl rollout resume deploy nginx
kubectl rollout history deploy nginx
kubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision
Delete the deployment and the horizontal pod autoscaler you created
kubectl delete deploy nginx
kubectl delete hpa nginx
# or
kubectl delete deploy/nginx hpa/nginx
Implement canary deployment by running two instances of nginx marked as version=v1 and version=v2 so that the load is balanced at 75%-25% ratio
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-1 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}
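The commands that follow assume two more objects alongside my-app-v1: a Service my-app-svc selecting app=my-app (so it balances across both versions), and a Deployment my-app-v2 with a single replica, which yields the 3:1, i.e. 75%-25%, split across 4 pods. A minimal sketch of those manifests, mirroring the v1 Deployment:
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
    - port: 80
  selector:
    app: my-app # matches v1 and v2 pods alike
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1 # 1 of 4 pods total = 25% of the traffic
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-2 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}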
Run a wget against the Service my-app-svc
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox --command -- wget -qO- my-app-svc
version-1
# once happy with v2, promote it to 100%: scale v2 up and delete v1
kubectl scale --replicas=4 deploy my-app-v2
kubectl delete deploy my-app-v1
while sleep 0.1; do curl $(kubectl get svc my-app-svc -o jsonpath="{.spec.clusterIP}"); done
version-2
version-2
version-2
version-2
version-2
version-2
Create a job named pi with image perl:5.34 that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"
kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'
Wait till it’s done, get the output
kubectl get jobs -w # wait till 'COMPLETIONS' shows 1/1 (will take some time, the perl image is big)
kubectl get po # get the pod name
kubectl logs pi-** # get the pi numbers
kubectl delete job pi
Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'
kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'
Follow the logs for the pod (you’ll wait for 30 seconds)
kubectl get po # find the job pod
kubectl logs busybox-ptx58 -f # follow the logs
See the status of the job, describe it and see the logs
kubectl get jobs
kubectl describe jobs busybox
kubectl logs job/busybox
Delete the job
kubectl delete job busybox
Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10;done' > job.yaml
vi job.yaml
Add job.spec.activeDeadlineSeconds=30
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  activeDeadlineSeconds: 30 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
        - args:
            - /bin/sh
            - -c
            - while true; do echo hello; sleep 10;done
          image: busybox
          name: busybox
          resources: {}
      restartPolicy: OnFailure
status: {}
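kubectl apply -f job.yaml # after ~30 seconds the job is terminated with reason DeadlineExceeded and its pods are killed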
Create the same job, make it run 5 times, one after the other. Verify its status and delete it
kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
vi job.yaml
Add job.spec.completions=5
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  completions: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
        - args:
            - /bin/sh
            - -c
            - echo hello;sleep 30;echo world
          image: busybox
          name: busybox
          resources: {}
      restartPolicy: OnFailure
status: {}
kubectl create -f job.yaml
Verify that it has been completed:
kubectl get job busybox -w # will take two and a half minutes
kubectl delete jobs busybox
Create the same job, but make it run 5 parallel times
vi job.yaml
Add job.spec.parallelism=5
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  parallelism: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
        - args:
            - /bin/sh
            - -c
            - echo hello;sleep 30;echo world
          image: busybox
          name: busybox
          resources: {}
      restartPolicy: OnFailure
status: {}
kubectl create -f job.yaml
kubectl get jobs
It will take some time for the parallel jobs to finish (>= 30 seconds)
kubectl delete job busybox
Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output
kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster'
See its logs and delete it
kubectl get po # copy the ID of the pod whose container was just created
kubectl logs <busybox-***> # you will see the date and message
kubectl delete cj busybox # cj stands for cronjob
Create the same cron job again, and watch the status. Once it ran, check which job ran by the created cron job. Check the log, and delete the cron job
kubectl get cj
kubectl get jobs --watch
kubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job
kubectl logs busybox-1529745840-m867r
# Bear in mind that Kubernetes runs a new job (and pod) for each scheduled run of the cron job
kubectl delete cj busybox
Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time).
kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml
vi time-limited-job.yaml
Add cronjob.spec.startingDeadlineSeconds=17
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: time-limited-job
spec:
  startingDeadlineSeconds: 17 # add this line
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: time-limited-job
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
            - args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
              image: busybox
              name: time-limited-job
              resources: {}
          restartPolicy: Never
  schedule: '* * * * *'
status: {}
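kubectl apply -f time-limited-job.yaml # create the cron job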
Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution.
kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml
vi time-limited-job.yaml
Add cronjob.spec.jobTemplate.spec.activeDeadlineSeconds=12
apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: time-limited-job
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: time-limited-job
    spec:
      activeDeadlineSeconds: 12 # add this line
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
            - args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
              image: busybox
              name: time-limited-job
              resources: {}
          restartPolicy: Never
  schedule: '* * * * *'
status: {}
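kubectl apply -f time-limited-job.yaml # create the cron job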
Create a job from a cron job.
kubectl create job –from=cronjob/sample-cron-job sample-job