k8s Basics Flashcards
Know the definitions of different concepts
Namespaces
A namespace is a kind of virtual cluster into which you can deploy your objects.
By default, kubectl interacts with the "default" namespace
To reference objects in other namespaces
Use the --namespace flag, e.g.: kubectl get pods --namespace=mystuff
To list objects in all namespaces
Ex: kubectl get pods --all-namespaces
How to change to another namespace ?
We need to use a context.
it gets recorded in the configuration file ~/.kube/config
$ kubectl config set-context my-context --namespace=mystuff
After this to use the newly created context
$ kubectl config use-context my-context
Note: contexts can also be used to manage different clusters and users with set-context (see the sketch below)
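For example, a context can tie a specific cluster, user, and namespace together (the dev-cluster and dev-user names below are hypothetical placeholders):
$ kubectl config set-context my-context --cluster=dev-cluster --user=dev-user --namespace=mystuff
$ kubectl config use-context my-context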
Get detailed information on a kubernetes object
$ kubectl describe
Ex.
$ kubectl describe pod nginx
Create update and destroy k8s objects
Create k8s object
$ kubectl apply -f obj.yaml
To print the object to the console
$ kubectl apply -f object.yaml --dry-run=client
To write it out to a YAML file
$ kubectl apply -f object.yaml --dry-run=client -o yaml > new-object.yaml
Interactive edits instead of editing file
$ kubectl edit
View last applied configuration
$ kubectl apply view-last-applied -f object.yaml
Delete object
$ kubectl delete -f object.yaml
extract a pod definition into YAML
pod : webserver
$ kubectl get po webserver -o yaml > pod.yaml
$ vi pod.yaml [edit what you want here, delete the webserver pod, then create a new pod using kubectl create -f pod.yaml]
edit a deployment
deployment name : webapp-deploy
you do not necessarily need to delete a deployment to edit it
$ kubectl edit deploy webapp-deploy
imperative commands use:
Deploy a redis pod using the redis:alpine image with the labels set to tier=db.
$ kubectl run redis --image=redis:alpine -l tier=db
Create a new pod called custom-nginx using the nginx image and expose it on container port 8080
$ kubectl run custom-nginx --image=nginx --port=8080
Create a new namespace called dev-ns
$ kubectl create namespace dev-ns
or
$ kubectl create ns dev-ns
Create a new deployment called redis-deploy in the dev-ns namespace with the redis image. It should have 2 replicas.
$ kubectl create deploy redis-deploy -n dev-ns --image=redis --replicas=2
Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP by the same name (httpd). The target port for the service should be 80.
$ kubectl run httpd --image=httpd:alpine
$ kubectl expose po httpd --port=80
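As a hedged alternative (assuming a kubectl version that supports the --expose flag on kubectl run), the pod and its ClusterIP service can be created in one command:
$ kubectl run httpd --image=httpd:alpine --port=80 --expose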
How to check how many Namespaces exist on the system?
$ kubectl get namespace --no-headers | wc -l
- check pods in a namespace ‘love’
- create pod in namespace ‘love’
image: redis
name: redis-pod
1.
$ kubectl get po -n love
2.
$ kubectl run redis-pod --image=redis -n love
how to find which namespace a pod exists in, given only the pod name?
$ kubectl get po --all-namespaces (and look for the pod name, or pipe to grep)
how to access a service, ex: db-service, if it exists in the same namespace, and if it exists in a different namespace, say namespace=dev
- If db-service exists in the same namespace, you can access it using the service name itself: db-service
- If db-service exists in another namespace, you need to use the FQDN (fully qualified domain name), which includes the namespace it exists in: db-service.dev.svc.cluster.local
In a Dockerfile, how do you specify commands and arguments?
CMD ["sleep", "5"]
the first argument should be an executable,
like sleep, kubectl, etc.
what is ENTRYPOINT instruction in Dockerfile
it specifies the executable that runs at the start of a container; arguments passed at run time are appended to it
example :
image name: ubuntu
Dockerfile:
…
ENTRYPOINT ["sleep"]
container image: ubuntu-sleeper
we can run: docker run ubuntu-sleeper 10
and the container command becomes sleep 10
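A minimal Dockerfile sketch of this setup (the CMD line with a default argument of 5 is an assumed addition, not part of the original example):
FROM ubuntu
# the executable that always runs at container start
ENTRYPOINT ["sleep"]
# default argument, replaced by anything passed to `docker run`
CMD ["5"]
With this image, docker run ubuntu-sleeper runs sleep 5, while docker run ubuntu-sleeper 10 runs sleep 10.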
how to override entrypoint command at docker startup
$ docker run --entrypoint=<new-command> <image> <args>
how do you give args in manifest files in k8s?
let’s say a POD def file
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-sleeper
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu-sleeper
    args: ["10"]
the above adds the argument to the command in the Dockerfile, say ENTRYPOINT ["sleep"],
so the full command becomes sleep 10
but if you want to override the ENTRYPOINT itself in the Dockerfile,
use the command field
...
spec:
  containers:
  - name: ubuntu-sleeper
    image: ubuntu-sleeper
    command: ["sleep2.0"]
    args: ["10"]
…
so the args field in YAML overrides CMD in the Dockerfile, and
the command field in YAML overrides ENTRYPOINT in the Dockerfile
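As a hedged aside (assuming a reasonably recent kubectl), the same split exists when creating the pod imperatively: anything after -- becomes args by default, or the command when --command is given:
$ kubectl run ubuntu-sleeper --image=ubuntu-sleeper -- 10                       # sets args: ["10"]
$ kubectl run ubuntu-sleeper --image=ubuntu-sleeper --command -- sleep2.0 10    # sets command: ["sleep2.0", "10"]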
can we edit a running pod and a running deployment?
for a pod, only a few fields can be changed in place (for example the container image);
for anything else, export the pod into a manifest if you don't have one,
delete the previous pod, and create a new one from the updated YAML manifest
for a deployment, we can edit freely: the pods are re-checked and recreated to match the desired state
$ kubectl edit deploy webapp-deploy
how to check what command is used in a POD
$ kubectl describe po pod_name
go to Command: field and check for it
how to set environment variables in manifest file of POD
pod.yaml
...
spec:
  containers:
  - name: webapp-container
    image: webapp
    ports:
    - containerPort: 8080
    env:
    - name: DB_HOST   # this name will be available for the app to use
      value: postgres
what are different types of environment variables we can set
1. plain key-value
   ...
   env:
   - name: DB_PASSWORD
     value: YPO4reh=
2. configMap
   ...
   env:
   - name: DB_PASSWORD
     valueFrom:
       configMapKeyRef:
         name: app-config     # the ConfigMap to read from
         key: DB_PASSWORD     # the key inside it
3. secrets
   ...
   env:
   - name: DB_PASSWORD
     valueFrom:
       secretKeyRef:
         name: app-secret     # the Secret to read from
         key: DB_PASSWORD     # the key inside it
create a configmap imperatively
we can add as many --from-literal flags as needed
$ kubectl create configmap app-config --from-literal=DB_HOST=postgres \
    --from-literal=ENV=e1
or we can use a .properties file and load it into configmap
app.properties
____________
DB_HOST=postgres
ENV=e1
$ kubectl create configmap app-config --from-file=app.properties
create configmap declaratively
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: postgres
  ENV: e1
note: ConfigMaps have a data section instead of spec
name your configmaps appropriately
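To consume the whole ConfigMap as environment variables inside a pod, a minimal sketch (the webapp container name and image are placeholders reused from earlier examples):
...
spec:
  containers:
  - name: webapp-container
    image: webapp
    envFrom:
    - configMapRef:
        name: app-config    # all keys of app-config become env variables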
create secrets imperatively
$ kubectl create secret generic app-secret --from-literal=DB_HOST=mysql --from-literal=password=<value>
or use a file
$ kubectl create secret generic app-secret --from-file=app_secrets.properties
create secrets declaratively
apiVersion: v1
kind: Secret
metadata:
name: app-secret
data:
DB_HOST: YWRtaW4=
ENV: YWRtaW4=
password: YWRtaW4=
note: the values above are base64-encoded (encoded, not hashed or encrypted)
to encode/decode on Linux =>
encode: echo -n "admin" | base64
decode: echo -n "YWRtaW4=" | base64 --decode
3 ways to inject the secret into a pod:
1. as environment variables (the whole secret)
...
spec:
  containers:
  - name: webapp
    image: redis
    ports:
    - containerPort: 6379
    envFrom:
    - secretRef:
        name: app-secret
2. as a single env variable
…
env:
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: app-secret
key: DB_password
3. as volumes (mounted into the pod)
…
volumes:
- name: app-secret-volume
secret:
secretName: app-secret
…
note: each key inside app-secret is mounted as a separate file inside the container; the file name is the key and the file content is the value (the container-side mount is sketched below)
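The container side of the mount is not shown above; a minimal sketch, assuming /opt/app-secret as the mount path (the path and webapp image are placeholders):
...
spec:
  containers:
  - name: webapp
    image: webapp
    volumeMounts:
    - name: app-secret-volume
      mountPath: /opt/app-secret   # DB_HOST, ENV, password appear as files here
      readOnly: true
  volumes:
  - name: app-secret-volume
    secret:
      secretName: app-secret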
what can the root user do on a Linux system?
root user has the highest privileges on a system
~ modifying/restricting permissions on files/folders
~ setting user IDs
~ network related operations ( port binding etc..)
~ system operations like rebooting the host etc..
~ manipulating system clocks
etc.
set security context on a POD
for whole POD
…
spec:
securityContext:
runAsUser: 1000
for only one container:
spec:
  containers:
  - name: web-pod
    image: ubuntu
    command: ["sleep", "3600"]
    securityContext:
      runAsUser: 1000
      capabilities:
        add: ["MAC_ADMIN"]
check the user in a container/pod
$ kubectl exec ubuntu-sleeper -- whoami
security context:
set the SYS_TIME capability for the root user on a container
containers:
- name: web-pod
  image: ubuntu
  command: ["sleep", "3600"]
  securityContext:
    capabilities:
      add: ["SYS_TIME"]
difference between user account and service account ?
user account: used by a human, e.g. an admin or developer, to access the cluster or deploy pods
service account: used by an application/system;
for example, Prometheus polls the k8s API for metrics,
or Jenkins uses a service account in its automated builds to deploy applications onto the cluster
create service account imperatively
$ kubectl create serviceaccount webapp-sa
when a service account is created, a token is created automatically, and an external application uses this token to authenticate to the k8s API
note: if the application runs inside the cluster itself, all you need to do is mount the token secret as a volume inside the pod hosting that application
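A minimal sketch of assigning the service account to a pod (the pod name and image are placeholders); the token is then mounted into the container automatically:
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  serviceAccountName: webapp-sa   # use this SA instead of the default one
  containers:
  - name: webapp
    image: webapp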
what is the difference between a mebibyte (Mi) and a megabyte (MB); 1 Gi vs 1 G; 1 Ki vs 1 K?
1 G (gigabyte) = 10^9 bytes; 1 Gi (gibibyte) = 2^30 = 1,073,741,824 bytes
1 M (megabyte) = 10^6 bytes; 1 Mi (mebibyte) = 2^20 = 1,048,576 bytes
1 K (kilobyte) = 10^3 bytes; 1 Ki (kibibyte) = 2^10 = 1,024 bytes
The status 'OOMKilled' means?
The container ran out of memory: it tried to use more memory than its limit allows.
For example, if a pod consumes 15 Mi of memory but its resource limit is set to 10 Mi, it is killed and shows the status
OOMKilled
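A minimal sketch of where such a memory limit is declared (the numbers are only examples):
...
spec:
  containers:
  - name: webapp
    image: webapp
    resources:
      requests:
        memory: "5Mi"     # what the scheduler reserves for the container
      limits:
        memory: "10Mi"    # exceeding this gets the container OOMKilled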
Taints are set on ________ and Tolerations are set on ______ ?
Taints are set on nodes and tolerations are set on PODS
imperative commands :
set taint on a node
set toleration on a POD
set taint on Node :
1. $ kubectl taint nodes node-name key=value:taint-effect
taint-effect : 3 types
- NoSchedule
- NoExecute (no new pods are scheduled on the node, and any existing pods that do not tolerate the taint are evicted)
- PreferNoSchedule (the scheduler prefers not to place pods on the node,
but that is not guaranteed)
add tolerations to POD
suppose: taint on node is
$ kubectl taint nodes node1 app=blue:NoSchedule
then the tolerations on the pod are:
pod.yaml
...
spec:
  ...
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"
remove a taint from a node
remember the trailing hyphen (-), which tells kubectl to remove the taint
$ kubectl taint nodes node-name <key>=<value>:<taint-effect>-
for example, remove the taint on the master node:
$ kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
Create a taint on node01 with key of ‘spray’, value of ‘mortein’ and effect of ‘NoSchedule’
$ kubectl taint nodes node01 spray=mortein:NoSchedule
Node selectors: how to add a label to a node
suppose you want to add label size=Large on a node
you can add $ kubectl label nodes node01 size=Large
then you can add a nodeSelector on the pod:
…
nodeSelector:
size: Large
…
different types of multi-container pods
- Adapter
- Sidecar [ex: a logging agent]
- Ambassador
a sidecar container collects the logs from the app container; an adapter container takes/transforms the logs into a standard format and sends them to a central server
suppose there are DBs in different environments (dev, test, prod); we can add an ambassador container that the main app talks to on localhost, and the ambassador proxies the connection to the right DB
note: these are design patterns; in the pod definition file each one is simply another container added to the pod (see the sketch below)
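A minimal sketch of a multi-container pod (the webapp and log-agent images are hypothetical placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp        # main application container
    image: webapp
  - name: log-agent     # sidecar that ships the app's logs
    image: log-agent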
define the pod lifecycle
- when the pod is first created it is in the PENDING state; the kube-scheduler then schedules the pod onto an available node
- once the pod is scheduled, the images for all its containers are pulled: the ContainerCreating state
- once the images are pulled and the containers are running, the pod goes into the RUNNING state
what are liveness and readiness probes?
a liveness probe periodically checks whether the container is still healthy; if it fails, the container is restarted
a readiness probe checks whether the application is ready to serve requests; until it passes, the pod does not receive traffic routed through a service
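A minimal sketch of both probes on a container (the /healthz path and port 8080 are assumed examples):
...
spec:
  containers:
  - name: webapp
    image: webapp
    ports:
    - containerPort: 8080
    readinessProbe:            # pod gets traffic only after this passes
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
    livenessProbe:             # container is restarted if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10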
container logging command
if only one container is present
$ kubectl logs -f pod-name
if multiple containers are present
$ kubectl logs -f pod-name container-name
how monitoring works on k8s
- k8s doesn't have a full built-in monitoring solution
- it relies on third-party/add-on services like Metrics Server, Prometheus, Dynatrace, etc.
- every node runs the kubelet, which contains cAdvisor; cAdvisor collects performance metrics and exposes them through an API that any metrics server can consume
1. command for node metrics: $ kubectl top node
2. command for pod metrics: $ kubectl top pod
what is ClusterIP vs NodePort vs LoadBalancer vs targetPort?
ClusterIP: exposes a service on the cluster's internal IP (reachable only from inside the cluster)
NodePort: exposes a service on each node's IP at a static port
LoadBalancer: creates an external load balancer (e.g. an AWS classic LB); behind it a NodePort and a ClusterIP are created automatically, and in this way traffic is routed from the load balancer to a pod in the cluster
targetPort: the port the service forwards requests to; your application listens on this port
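A minimal NodePort service sketch tying these ports together (the numbers and the selector are assumed examples):
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp
  ports:
  - port: 80          # the service's own (ClusterIP) port
    targetPort: 8080  # the port the pod's container listens on
    nodePort: 30080   # static port opened on every node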
what is an endpoint in kubernetes?
An endpoint is an object that gets IP addresses of individual pods assigned to it. The endpoint object is then in turn referenced by a kubernetes service, so that the service has a record of the internal IPs of pods in order to be able to communicate with them
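To see the endpoints backing a service (db-service here is just the example name used earlier):
$ kubectl get endpoints db-service
$ kubectl describe svc db-service   # the Endpoints: line lists the pod IPs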
how to get a network policy
$ kubectl get networkpolicy
how to get a network policy and how to check which pods the network policy is applied to?
$ kubectl get networkpolicy
to check which pods it is applied to:
$ kubectl get networkpolicy and look at the pod selector (the POD-SELECTOR column)
create a network policy:
allow egress from the pod 'internal'
to mysql on port 3306 and payroll on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          name: payroll
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - podSelector:
        matchLabels:
          name: mysql
    ports:
    - protocol: TCP
      port: 3306
How would you load balance into different services without ingress
- you have an online store (www.online-store.com/store)
- a service exposes it to the outside world on port 30080
- a reverse proxy on port 80 lets users reach the application at www.online-store.com/store rather than www.online-store.com:30080/store [in the cloud, e.g. GCP, you get a cloud-native load balancer for this]
now suppose you are scaling your application:
-> you add another application (say a video streaming app) that needs to be accessed through the same DNS name
then you need a separate load balancer (costly) for it, you have to configure the proxy to route to it, and to reverse-proxy both you need to add yet another proxy layer on top
two types of ingress ?
Ingress controller: the underlying solution can be NGINX, HAProxy, Traefik, Istio, etc.; note that a controller does not come with k8s, you need to deploy one yourself
Ingress controllers also monitor the cluster for new Ingress definitions/resources and configure the underlying solution (NGINX, Istio, etc.) accordingly
in definition files, an Ingress controller is just a Deployment
Ingress resources:
the set of rules we configure, like TLS/SSL certs, URL routes, etc. [these are defined in .yaml definition files]
how do you configure all the settings in an Ingress controller?
underlying solution : NGINX
Ingress controller is just a deployment definition file .
you configure the nginx-ingress-controller image in the definition file
and to add NGINX configuration like
error-log path, session timeouts, keep-alive, ssl-protocols, etc., we can use a ConfigMap and keep the configuration settings there
then we have to expose this Ingress controller to the external world as a NodePort service
also, for the Ingress controller to monitor Ingress resources and changes to them, it needs the right set of permissions;
we grant these by adding a ServiceAccount (with Roles/ClusterRoles and RoleBindings)
So for an Ingress controller setup we need these objects:
- Deployment: the Ingress controller
- Service: to expose the controller
- ConfigMap: to store the config settings for the controller
- ServiceAccount: to give the controller the right set of permissions
what is the default backend in Ingress resources?
If a URL path doesn't match any of the configured rules, the request is sent to a default backend, e.g. a page returning HTTP 404 not found.
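A minimal sketch of an Ingress resource with only a default backend, using the same older serviceName/servicePort style as the examples below (the default-http-backend service name is an assumed placeholder):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-default
spec:
  backend:                 # catches any request that matches no rule
    serviceName: default-http-backend
    servicePort: 80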
difference between having a host and having only paths in an Ingress resource?
host: requests arriving with that hostname are matched by this rule and forwarded to its backend
...
spec:
  rules:
  - host: watch.online-service.com
    http:
      paths:
      - backend:
          serviceName: watch-service
          servicePort: 80
...
only `paths`: it just looks at the path in the URL and routes accordingly [no host here behaves like host: *, i.e. it accepts all hosts]
...
spec:
  rules:
  - http:
      paths:
      - path: /wear
        backend:
          serviceName: watch-service
          servicePort: 80
  - http:
      paths:
      - path: /store
        backend:
          serviceName: store-service
          servicePort: 80
…
1. get ingress resources in all namespaces
2. get ingress in the lobe namespace
1. $ kubectl get ingress --all-namespaces
2. $ kubectl get ingress -n lobe