3 - Pod design Flashcards

1
Q

Create 3 pods with names nginx1,nginx2,nginx3. All of them should have the label app=v1

A

for i in $(seq 1 3); do kubectl run nginx$i --image=nginx -l app=v1; done
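
The `seq` in the loop needs command substitution to expand; a quick plain-shell sketch of the name-generating loop (no cluster required), with bash brace expansion shown as an equivalent alternative:

```shell
# Command substitution with seq works in any POSIX shell:
for i in $(seq 1 3); do echo "nginx$i"; done
# prints nginx1, nginx2, nginx3, one per line

# In bash, brace expansion is equivalent and avoids the subshell:
# for i in {1..3}; do echo "nginx$i"; done
```

Either form feeds the same three names to `kubectl run`.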

2
Q

Show all labels of the pods

A

kubectl get po --show-labels

3
Q

Change the labels of pod ‘nginx2’ to be app=v2

A

kubectl label po nginx2 app=v2 --overwrite

4
Q

Get the label ‘app’ for the pods (show a column with APP labels)

A

kubectl get po -L app
# or
kubectl get po --label-columns=app

5
Q

Get only the ‘app=v2’ pods

A

kubectl get po -l app=v2
# or
kubectl get po -l 'app in (v2)'
# or
kubectl get po --selector=app=v2

6
Q

Add a new label tier=web to all pods having ‘app=v2’ or ‘app=v1’ labels

A

kubectl label po -l "app in (v1,v2)" tier=web

7
Q

Add an annotation ‘owner: marketing’ to all pods having ‘app=v2’ label

A

kubectl annotate po -l "app=v2" owner=marketing

8
Q

Remove the ‘app’ label from the pods we created before

A

kubectl label po nginx1 nginx2 nginx3 app-
# or
kubectl label po nginx{1..3} app-
# or
kubectl label po -l app app-

9
Q

Annotate pods nginx1, nginx2, nginx3 with "description='my description'" value

A

kubectl annotate po nginx1 nginx2 nginx3 description='my description'
# or
kubectl annotate po nginx{1..3} description='my description'

10
Q

Check the annotations for pod nginx1

A

kubectl annotate pod nginx1 --list

or

kubectl describe po nginx1 | grep -i 'annotations'

or

kubectl get po nginx1 -o custom-columns=Name:metadata.name,ANNOTATIONS:metadata.annotations.description

11
Q

Remove the annotations for these three pods

A

kubectl annotate po nginx{1..3} description- owner-

12
Q

Remove these pods to have a clean state in your cluster

A

kubectl delete po nginx{1..3}

13
Q

Create a pod that will be deployed to a Node that has the label ‘accelerator=nvidia-tesla-p100’

A

kubectl label nodes <your-node-name> accelerator=nvidia-tesla-p100
kubectl get nodes --show-labels

apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
  - name: cuda-test
    image: "k8s.gcr.io/cuda-vector-add:v0.1"
  nodeSelector: # add this
    accelerator: nvidia-tesla-p100 # the selection label

kubectl explain po.spec

apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: accelerator
            operator: In
            values:
            - nvidia-tesla-p100
  containers: # same container as above
  - name: cuda-test
    image: "k8s.gcr.io/cuda-vector-add:v0.1"

14
Q

Taint a node with key tier and value frontend with the effect NoSchedule. Then, create a pod that tolerates this taint.

A

kubectl taint node node1 tier=frontend:NoSchedule # key=value:Effect
kubectl describe node node1 # view the taints on a node

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: nginx
    image: nginx
  tolerations:
  - key: "tier"
    operator: "Equal"
    value: "frontend"
    effect: "NoSchedule"

15
Q

Create a pod that will be placed on node controlplane. Use nodeSelector and tolerations.

A

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    kubernetes.io/hostname: controlplane
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"

16
Q

Create a deployment with image nginx:1.18.0, called nginx, having 2 replicas, defining port 80 as the port that this container exposes (don’t create a service for this deployment)

A

kubectl create deployment nginx --image=nginx:1.18.0 --dry-run=client -o yaml > deploy.yaml
vi deploy.yaml
# change the replicas field from 1 to 2
# add this section to the container spec and save the deploy.yaml file
# ports:
# - containerPort: 80
kubectl apply -f deploy.yaml

# or, in one command:
kubectl create deploy nginx --image=nginx:1.18.0 --replicas=2 --port=80

17
Q

View the YAML of this deployment

A

kubectl get deploy nginx -o yaml

18
Q

View the YAML of the replica set that was created by this deployment

A

kubectl describe deploy nginx # you'll see the name of the replica set in the Events section and in the 'NewReplicaSet' property
# OR you can find the rs directly by:
kubectl get rs -l run=nginx # if you created the deployment with the 'run' command
kubectl get rs -l app=nginx # if you created the deployment with the 'create' command
# you could also just do kubectl get rs
kubectl get rs nginx-7bf7478b77 -o yaml

19
Q

Get the YAML for one of the pods

A

kubectl get po # get all the pods
# OR you can find pods directly by:
kubectl get po -l run=nginx # if you created the deployment with the 'run' command
kubectl get po -l app=nginx # if you created the deployment with the 'create' command
kubectl get po nginx-7bf7478b77-gjzp8 -o yaml

20
Q

Check how the deployment rollout is going

A

kubectl rollout status deploy nginx

21
Q

Update the nginx image to nginx:1.19.8

A

kubectl set image deploy nginx nginx=nginx:1.19.8
# alternatively…
kubectl edit deploy nginx # change the .spec.template.spec.containers[0].image

22
Q

Check the rollout history and confirm that the replicas are OK

A

kubectl rollout history deploy nginx
kubectl get deploy nginx
kubectl get rs # check that a new replica set has been created
kubectl get po

23
Q

Undo the latest rollout and verify that new pods have the old image (nginx:1.18.0)

A

kubectl rollout undo deploy nginx
# wait a bit
kubectl get po # select one 'Running' Pod
kubectl describe po nginx-5ff4457d65-nslcl | grep -i image # should be nginx:1.18.0

24
Q

Do an on purpose update of the deployment with a wrong image nginx:1.91

A

kubectl set image deploy nginx nginx=nginx:1.91
# or
kubectl edit deploy nginx
# change the image to nginx:1.91
# vim tip: type (without quotes) '/image' and Enter, to navigate quickly

25
Q

Verify that something’s wrong with the rollout

A

kubectl rollout status deploy nginx
# or
kubectl get po # you'll see 'ErrImagePull' or 'ImagePullBackOff'

26
Q

Return the deployment to the second revision (number 2) and verify the image is nginx:1.19.8

A

kubectl rollout undo deploy nginx --to-revision=2
kubectl describe deploy nginx | grep Image:
kubectl rollout status deploy nginx # Everything should be OK

27
Q

Check the details of the fourth revision (number 4)

A

kubectl rollout history deploy nginx --revision=4 # You'll also see the wrong image displayed here

28
Q

Scale the deployment to 5 replicas

A

kubectl scale deploy nginx --replicas=5
kubectl get po
kubectl describe deploy nginx

29
Q

Autoscale the deployment, pods between 5 and 10, targeting CPU utilization at 80%

A

kubectl autoscale deploy nginx --min=5 --max=10 --cpu-percent=80
# view the horizontalpodautoscalers.autoscaling for nginx
kubectl get hpa nginx
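
The imperative command above generates an HPA object; a sketch of the equivalent manifest is below (assumes the `autoscaling/v2` API; the HPA only acts if metrics-server is running and the nginx pods declare CPU requests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:        # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 5
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale when average CPU exceeds 80% of requests
```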

30
Q

Pause the rollout of the deployment

A

kubectl rollout pause deploy nginx

31
Q

Update the image to nginx:1.19.9 and check that there’s nothing going on, since we paused the rollout

A

kubectl set image deploy nginx nginx=nginx:1.19.9
# or
kubectl edit deploy nginx
# change the image to nginx:1.19.9
kubectl rollout history deploy nginx # no new revision

32
Q

Resume the rollout and check that the nginx:1.19.9 image has been applied

A

kubectl rollout resume deploy nginx
kubectl rollout history deploy nginx
kubectl rollout history deploy nginx --revision=6 # insert the number of your latest revision

33
Q

Delete the deployment and the horizontal pod autoscaler you created

A

kubectl delete deploy nginx
kubectl delete hpa nginx

# or, in one command:
kubectl delete deploy/nginx hpa/nginx

34
Q

Implement canary deployment by running two instances of nginx marked as version=v1 and version=v2 so that the load is balanced at 75%-25% ratio

A

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      initContainers:
      - name: install
        image: busybox:1.28
        command:
        - /bin/sh
        - -c
        - "echo version-1 > /work-dir/index.html"
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
      volumes:
      - name: workdir
        emptyDir: {}

run a wget to the Service my-app-svc (the answer omits the my-app-v2 Deployment and the my-app-svc Service; both must exist at this point)
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox --command -- wget -qO- my-app-svc

version-1

kubectl scale --replicas=4 deploy my-app-v2
kubectl delete deploy my-app-v1
while sleep 0.1; do curl $(kubectl get svc my-app-svc -o jsonpath="{.spec.clusterIP}"); done
version-2
version-2
version-2
version-2
version-2
version-2
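
The card only shows the v1 Deployment; it relies on a v2 Deployment and a Service that spans both versions. A sketch of the missing pieces (names follow the card; 3 v1 pods to 1 v2 pod gives the 75%-25% split):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1               # 1 v2 pod vs 3 v1 pods = 25% of traffic
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      initContainers:
      - name: install
        image: busybox:1.28
        command:
        - /bin/sh
        - -c
        - "echo version-2 > /work-dir/index.html"
        volumeMounts:
        - name: workdir
          mountPath: "/work-dir"
      volumes:
      - name: workdir
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  selector:
    app: my-app             # matches both v1 and v2 pods
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```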

35
Q

Create a job named pi with image perl:5.34 that runs the command with arguments "perl -Mbignum=bpi -wle 'print bpi(2000)'"

A

kubectl create job pi --image=perl:5.34 -- perl -Mbignum=bpi -wle 'print bpi(2000)'

36
Q

Wait till it’s done, get the output

A

kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big)
kubectl get po # get the pod name
kubectl logs pi-** # get the pi numbers
kubectl delete job pi

37
Q

Create a job with the image busybox that executes the command 'echo hello;sleep 30;echo world'

A

kubectl create job busybox --image=busybox -- /bin/sh -c 'echo hello;sleep 30;echo world'

38
Q

Follow the logs for the pod (you’ll wait for 30 seconds)

A

kubectl get po # find the job pod
kubectl logs busybox-ptx58 -f # follow the logs

39
Q

See the status of the job, describe it and see the logs

A

kubectl get jobs
kubectl describe jobs busybox
kubectl logs job/busybox

40
Q

Delete the job

A

kubectl delete job busybox

41
Q

Create a job but ensure that it will be automatically terminated by kubernetes if it takes more than 30 seconds to execute

A

kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10;done' > job.yaml
vi job.yaml
Add job.spec.activeDeadlineSeconds=30

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  activeDeadlineSeconds: 30 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - while true; do echo hello; sleep 10;done
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}

42
Q

Create the same job, make it run 5 times, one after the other. Verify its status and delete it

A

kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 30;echo world' > job.yaml
vi job.yaml
Add job.spec.completions=5

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  completions: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}

kubectl create -f job.yaml
Verify that it has been completed:

kubectl get job busybox -w # will take two and a half minutes
kubectl delete jobs busybox

43
Q

Create the same job, but make it run 5 parallel times

A

vi job.yaml
Add job.spec.parallelism=5

apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
spec:
  parallelism: 5 # add this line
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: busybox
    spec:
      containers:
      - args:
        - /bin/sh
        - -c
        - echo hello;sleep 30;echo world
        image: busybox
        name: busybox
        resources: {}
      restartPolicy: OnFailure
status: {}

kubectl create -f job.yaml
kubectl get jobs
It will take some time for the parallel jobs to finish (>= 30 seconds)

kubectl delete job busybox

44
Q

Create a cron job with image busybox that runs on a schedule of "*/1 * * * *" and writes 'date; echo Hello from the Kubernetes cluster' to standard output

A

kubectl create cronjob busybox --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster'

45
Q

See its logs and delete it

A

kubectl get po # copy the ID of the pod whose container was just created
kubectl logs <busybox-***> # you will see the date and message
kubectl delete cj busybox # cj stands for cronjob

46
Q

Create the same cron job again, and watch the status. Once it ran, check which job ran by the created cron job. Check the log, and delete the cron job

A

kubectl get cj
kubectl get jobs --watch
kubectl get po --show-labels # observe that the pods have a label that mentions their 'parent' job
kubectl logs busybox-1529745840-m867r
# Bear in mind that Kubernetes will run a new job/pod for each new cron job
kubectl delete cj busybox

47
Q

Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time).

A

kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml
vi time-limited-job.yaml
Add cronjob.spec.startingDeadlineSeconds=17

apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: time-limited-job
spec:
  startingDeadlineSeconds: 17 # add this line
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: time-limited-job
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            image: busybox
            name: time-limited-job
            resources: {}
          restartPolicy: Never
  schedule: '* * * * *'
status: {}

48
Q

Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution.

A

kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml
vi time-limited-job.yaml
Add cronjob.spec.jobTemplate.spec.activeDeadlineSeconds=12

apiVersion: batch/v1
kind: CronJob
metadata:
  creationTimestamp: null
  name: time-limited-job
spec:
  jobTemplate:
    metadata:
      creationTimestamp: null
      name: time-limited-job
    spec:
      activeDeadlineSeconds: 12 # add this line
      template:
        metadata:
          creationTimestamp: null
        spec:
          containers:
          - args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
            image: busybox
            name: time-limited-job
            resources: {}
          restartPolicy: Never
  schedule: '* * * * *'
status: {}

49
Q

Create a job from a cron job.

A

kubectl create job --from=cronjob/sample-cron-job sample-job