Kubernetes Flashcards

study kubernetes

1
Q

[k8s] what are the different ways to create a k8s cluster?

A

A k8s cluster can be set up in various ways:

  1. minikube - runs a single-node cluster locally.
  2. Google Kubernetes Engine (GKE) - a managed service for setting up multi-master k8s clusters on Google Cloud.
  3. kubeadm - a tool for manually setting up a k8s cluster on physical or virtual machines.
  4. kops - a tool built on top of kubeadm and available on GitHub. It helps you deploy production-grade, highly available Kubernetes clusters on AWS, GCE, VMware vSphere and so on.
2
Q

[k8s] what is a pod

A

A pod is a group of one or more tightly related containers that always run together on the same worker node and in the same Linux namespace(s). Each pod is like a separate logical machine with its own IP, hostname, processes and so on, running a single application.

3
Q

[k8s] how is a pod created?

A

When you run the "kubectl run" command, kubectl creates a new ReplicationController object in the cluster by sending a REST HTTP request to the Kubernetes API server. The ReplicationController then creates a new pod, which is scheduled to one of the worker nodes by the Scheduler. The kubelet on that node sees that the pod has been scheduled to it and instructs Docker to pull the specified image from the registry if the image is not available locally. After downloading the image, Docker creates and runs the container.

4
Q

[k8s] what is a service and what are the different types of services available in k8s?

A

Pods are exposed within the cluster and to the outside world using a Service. A service gets a static IP address which never changes during the lifetime of the service.
Pods are ephemeral - a pod can disappear at any time because the node it is running on fails, because someone deleted the pod, or because the pod was evicted from an otherwise healthy node. When any of those occurs, the missing pod is replaced by a new one by the ReplicationController, and this new pod gets a different IP address from the pod it's replacing.

Instead of connecting to pods directly, clients should connect to the service through its constant IP address. The service makes sure one of the pods receives the connection, regardless of where the pod is currently running (and what its IP address is).

A useful property of the service object:
sessionAffinity: ClientIP - all requests made by a certain client are redirected to the same pod every time.

Following are the different types of services

  1. ClusterIP
  2. ExternalName
  3. NodePort
  4. LoadBalancer
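As a minimal sketch, a ClusterIP service with session affinity might look like this (the service name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: ClusterIP               # the default service type
  sessionAffinity: ClientIP     # pin each client to one pod
  selector:
    app: my-app                 # pods with this label back the service
  ports:
  - port: 80                    # the service's stable port
    targetPort: 8080            # the container's port
```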
5
Q

[k8s] A pod is a group of one or more tightly related containers. So do all the containers in a pod share the same filesystem?

A

No.
Because all containers of a pod run under the same network and UTS namespaces, they all share the same hostname and network interfaces. Similarly, all containers of a pod run under the same IPC namespace and can communicate through IPC.
But when it comes to the filesystem, things are a little different. Because most of a container's filesystem comes from the container image, by default the filesystem of each container is fully isolated from the others.

6
Q

[k8s] where do containerized applications log?

A

Containerized applications usually log to the standard output and standard error streams instead of writing to log files. The logs can then be viewed with:
$docker logs
$kubectl logs

7
Q

[k8s] what is a label

A

A label is an arbitrary key-value pair you attach to a resource, which is then used when selecting resources with label selectors: resources are filtered based on whether they include the label specified in the selector. A resource can have more than one label, as long as the label keys are unique within that resource.

8
Q

[k8s] How can a pod be scheduled to specific nodes?

A

Assign labels to the nodes (for example, gpu=true), then use the nodeSelector attribute in the pod spec to tell Kubernetes to deploy the pod only to nodes carrying the label gpu=true.
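A minimal sketch of such a pod spec (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpu: "true"        # schedule only onto nodes labeled gpu=true
  containers:
  - name: main
    image: example/gpu-app
```

The label would be applied with something like: $kubectl label node <node-name> gpu=true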

9
Q

[k8s] What is a namespace? What is its use?

A

Kubernetes namespaces provide a scope for object names. Instead of having all your resources in one single namespace, you can split them into multiple namespaces, which also allows you to use the same resource names multiple times (across different namespaces).
Using multiple namespaces allows you to split complex systems with numerous components into smaller distinct groups. They can also be used for separating resources in a multi-tenant environment, splitting up resources into production, development, and QA environments, or in any other way you may need.

$kubectl get ns

Besides isolating resources, namespaces are also used for allowing only certain users access to particular resources and even for limiting the amount of computational resources available to individual users.

10
Q

[k8s] what are managed pods?

A

Managed pods are pods created by a ReplicationController, ReplicaSet, or Deployment. Pods created directly (e.g. with the kubectl run command) are called unmanaged pods. In case of node failure, Kubernetes reschedules managed pods to other nodes. It will never reschedule unmanaged pods, because only the kubelet on that node knows about them; once the node is gone, there is nothing to recreate the pod elsewhere.

11
Q

[k8s] what is a liveness probe? why is it needed?

A

The kubelet on the node starts running a container as soon as the pod is scheduled to the node. If the container's main process crashes, the kubelet restarts the container.
But sometimes an application does not crash yet stops responding, say because it falls into an infinite loop or a deadlock. To make sure applications are restarted in such cases, an application's health must be checked from the outside. This can be done using a liveness probe. A liveness probe can be specified for each container in the pod's specification. Kubernetes will periodically execute the probe and restart the container if the probe fails.
Three types of liveness probe are supported by k8s:
1. An HTTP GET probe performs a GET request on the container's IP address, port and path you specify. If a response is received and the response code does not represent an error, the probe is successful.
2. A TCP socket probe tries to open a TCP connection to the given port.
3. An exec probe executes an arbitrary command inside the container and checks its exit status.
Additional properties can be set for a liveness probe, such as
1. delay: start probing only after this much time has passed since the container started
2. timeout: the time within which the container must return a response
3. period: the frequency of the probe
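A minimal sketch of an HTTP GET liveness probe (the image, path, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: app
    image: example/app
    livenessProbe:
      httpGet:
        path: /healthz          # endpoint the app exposes for health checks
        port: 8080
      initialDelaySeconds: 15   # delay before the first probe
      timeoutSeconds: 1         # timeout: response must arrive within 1s
      periodSeconds: 10         # period: probe every 10s
```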

12
Q

[k8s] What is a replication controller?

A

An RC is a k8s resource that ensures its pods are always kept running. It constantly monitors the list of running pods and makes sure their number matches the desired number. An RC has three essential parts:
1. label selector - determines which pods are in the RC's scope
2. replica count - specifies the desired number of pods that should be running
3. pod template - used for creating new pod replicas
An RC enables the following powerful features:
- Makes sure a pod is always running by starting a new pod when an existing one goes missing
- When a cluster node fails, it creates replacement replicas for all the pods that were running on the failed node
- Enables horizontal scaling of pods, manual or automatic
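A sketch showing all three parts together (names and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
spec:
  replicas: 3          # replica count: desired number of pods
  selector:            # label selector: which pods are in scope
    app: my-app
  template:            # pod template: used to create new replicas
    metadata:
      labels:
        app: my-app    # must match the selector above
    spec:
      containers:
      - name: main
        image: example/app
```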

13
Q

[k8s] What is a replica set? Why is it preferred over an RC?

A

A ReplicaSet is the new generation of the ReplicationController and replaces it completely. A ReplicaSet behaves exactly like an RC, but it has a more expressive pod selector: it can match pods that lack a certain label, or pods that include a certain label key regardless of its value.

14
Q

[k8s] What is a daemon set?

A

A DaemonSet runs exactly one pod replica on each node in the cluster. This is useful for infrastructure-related pods that perform system-level operations, for example a log collector or resource monitor that needs to run on every node.
The pods created by a DaemonSet have a target node specified up front and skip the scheduler.
It's possible to run the pods on only a subset of the nodes. This is done by specifying the nodeSelector property in the pod template.

$kubectl get ds #get all daemon sets
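A minimal DaemonSet sketch (names, image, and the node label are illustrative; older clusters may use a beta apiVersion):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      nodeSelector:
        disk: ssd              # optional: run only on a subset of nodes
      containers:
      - name: collector
        image: example/log-collector
```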

15
Q

[k8s] What is a job resource

A

ReplicaSets, ReplicationControllers, and DaemonSets are used when a pod needs to run continuously. The Job resource allows you to run a pod whose container isn't restarted when the process running inside finishes successfully. Once it does, the pod is considered complete.
Pods managed by a Job are rescheduled in case of node failure.
The pod is not deleted once the job completes, so you can still check its logs. The pod is deleted when the job is deleted or when you delete the pod explicitly.
A Job may be configured to create more than one pod instance and run them in parallel or sequentially. To run a job's pod more than once, set completions to the number of runs you want; the pods run one after another sequentially. The job can be made to run pods in parallel by specifying the parallelism property. For example
completions: 5
parallelism: 2
means run the job's pod 5 times in total, with at most 2 pods running in parallel.
A pod's run time can be limited by setting the activeDeadlineSeconds property in the pod spec. If the pod runs longer than that, the system will try to terminate it and will mark the job as failed.
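Putting those properties together in a sketch (name and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  completions: 5               # run the pod to completion 5 times
  parallelism: 2               # at most 2 pods at once
  activeDeadlineSeconds: 600   # fail the job if a pod runs longer than this
  template:
    spec:
      restartPolicy: OnFailure # Jobs must not use the default Always policy
      containers:
      - name: main
        image: example/batch
```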

16
Q

[k8s] what is the CronJob resource?

A

A CronJob resource allows you to run a job at a specified time or on a repeating schedule. It is the Kubernetes equivalent of the cron daemon on Linux.
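A minimal sketch (name, image, and schedule are illustrative; older clusters may require batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: periodic-job
spec:
  schedule: "0,15,30,45 * * * *"   # standard cron syntax: every 15 minutes
  jobTemplate:                     # the Job to create on each run
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: example/batch
```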

17
Q

[k8s] What is the use of a named port?

A

A name can be given to a port exposed by a pod in the pod's YAML definition, and that name can then be referenced in the service spec (as the targetPort).
This lets you change the pod's port numbers later without having to change the service spec.
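A sketch of the two halves (names, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: main
    image: example/web
    ports:
    - name: http           # the named port
      containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: http       # refers to the port name, not the number
```

If the pod later switches to port 9090, only the pod spec changes; the service keeps pointing at "http".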

18
Q

[k8s] What are the different service discovery mechanisms supported by k8s?

A
  1. Discovering services through environment variables - When a pod is started, Kubernetes initializes a set of environment variables pointing to each service that exists at that moment. If you create the service before creating the client pods, processes in those pods can get the IP address and port of the service by inspecting their environment variables.
  2. Using DNS - Kubernetes runs an internal DNS server in a pod named "kube-dns-xxx" and exposes it through a service of the same name, "kube-dns". All pods running in the cluster are automatically configured to use this internal DNS server; this is done by modifying each container's /etc/resolv.conf file. Any DNS query performed by a process running in a pod is handled by Kubernetes' own DNS server, which knows about all the services running in the system. Each service automatically gets a DNS entry in the internal DNS server.
19
Q

[k8s] How can a pod access a service that lives outside the cluster?

A

Services don't link to pods directly; an Endpoints resource sits between the pod(s) and the service. An Endpoints resource is a list of IP addresses and ports exposing a service.
$kubectl get endpoints
The Endpoints resource is normally created automatically by k8s using the pod selector specified in the service spec. If you create a service without a pod selector, k8s will not create the Endpoints resource at all. It is then up to you to create the Endpoints resource and specify the list of endpoints for the service.
This technique is used to create a service for something living outside the cluster:
1. Create the service spec without a pod selector.
2. Create an Endpoints spec that lists the external IP addresses and ports.

Note that the Endpoints resource must have the same name as the service; that is how an endpoint is associated with a service.
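A sketch of the pair (the name, IPs, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db      # no selector, so no Endpoints are auto-created
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db      # must match the service name exactly
subsets:
- addresses:             # the external servers behind the service
  - ip: 11.11.11.11
  - ip: 22.22.22.22
  ports:
  - port: 5432
```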

20
Q

[k8s] What is an External service?

A

An external service can be referred to by its FQDN instead of manually configuring the service's Endpoints:
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: someapi.somecompany.com
  ports:
  - port: 80
After the service is created, pods can connect to the external service through the external-service.default.svc.cluster.local domain name.
ExternalName services are implemented at the DNS level using a simple CNAME DNS record. Therefore, clients connecting to the service will connect to the external service directly, bypassing the service proxy completely. For this reason, these types of services don't even get a cluster IP.

21
Q

[k8s] How can a service be exposed to external clients?

A

Following are the ways to expose a service to external clients:

  1. NodePort service - each cluster node opens a port on the node itself and redirects traffic received on that port to the underlying service.
  2. LoadBalancer service - makes the service accessible through a dedicated load balancer. The load balancer redirects traffic to the node port across all the nodes; clients connect to the service through the load balancer's IP.
  3. Creating an Ingress resource - exposes multiple services through a single IP address.
22
Q

[k8s] What is a readiness probe?

A

The readiness probe is invoked periodically and determines whether the specific pod should receive client requests. When a container's readiness probe returns success, it is signaling that the container is ready to accept requests.
When a container starts, Kubernetes can be configured to wait a configurable amount of time before performing the first readiness check. After that, it invokes the probe periodically and acts based on the result. If a pod reports that it is not ready, it is removed from the service; if the pod becomes ready again, it is re-added.
Unlike with a liveness probe, if a container fails the readiness check it won't be killed or restarted.
Three types of readiness probe exist:
1. An exec probe - a process is executed and the container's status is determined by the process's exit code.
2. An HTTP GET probe - sends an HTTP GET request to the container, and the status code of the response determines whether the container is ready.
3. A TCP socket probe - opens a TCP connection to a specified port; the container is ready if the connection succeeds.
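A minimal sketch of an HTTP GET readiness probe (image, path, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: example/app
    readinessProbe:
      httpGet:
        path: /ready          # endpoint that reports readiness
        port: 8080
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 10       # then check every 10s
```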

23
Q

[k8s] What is a headless service

A

A service that does not have a cluster IP (its clusterIP field is set to None) is called a headless service. This kind of service is needed when a client wants to connect to all pods, or when the backing pods need to connect to each other.
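A minimal sketch (name and label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-svc
spec:
  clusterIP: None      # this is what makes the service headless
  selector:
    app: my-app
  ports:
  - port: 80
```

A DNS lookup of a headless service returns the IPs of all ready backing pods instead of a single service IP.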

24
Q

[k8s] What are storage volumes

A

Changes made to a container's filesystem do not persist across container restarts. In certain scenarios you might want containers to persist certain data across restarts; this is done using storage volumes. Volumes are not top-level resources like pods; they are defined as part of a pod and share the same lifecycle as the pod. This means a volume is created when the pod is started and destroyed when the pod is deleted (depending on the volume type), so a volume's contents can persist across container restarts. If a pod contains multiple containers, the volume can be used by all of them at once; the volume needs to be mounted into each container that needs access to it.
Depending on the volume type, the volume's files may remain intact even after the pod and volume disappear. Types of volume:
- emptyDir - a simple empty directory used for storing transient data. The volume's contents are lost when the pod is deleted.
- hostPath - points to a specific file or directory on the node's filesystem. Pods running on the same node and using the same path in their hostPath volume see the same files. It is a persistent storage volume.
- gitRepo - a volume initialized by checking out the contents of a Git repository; basically an emptyDir volume.
- nfs - an NFS share mounted into the pod. A persistent storage volume.
- gcePersistentDisk (Google Compute Engine Persistent Disk), awsElasticBlockStore (Amazon Web Services Elastic Block Store), azureDisk (Microsoft Azure Disk) - used for mounting cloud provider-specific storage.
- cinder, cephfs, iscsi, flocker, glusterfs, quobyte, rbd, flexVolume, vsphereVolume, photonPersistentDisk, scaleIO - used for mounting other types of network storage.
- configMap, secret, downwardAPI - special types of volumes used to expose certain Kubernetes resources and cluster information to the pod.
- persistentVolumeClaim - a way to use pre-provisioned or dynamically provisioned persistent storage.
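A sketch of an emptyDir volume shared between two containers of one pod (names, images, and mount paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  containers:
  - name: writer
    image: example/writer
    volumeMounts:
    - name: shared
      mountPath: /out        # writer's view of the volume
  - name: reader
    image: example/reader
    volumeMounts:
    - name: shared
      mountPath: /in         # reader sees the same files
      readOnly: true
  volumes:
  - name: shared
    emptyDir: {}             # transient; deleted with the pod
```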

25
Q

[k8s] What is sidecar container?

A

A sidecar container is a container that augments the operation of the main container of the pod. You add a sidecar to a pod so you can reuse an existing container image instead of cramming the additional logic into the main app's code, which would make it overly complex and less reusable.

26
Q

Why is there a need for PersistentVolumes and PersistentVolumeClaims?

A

The persistent volume types explored so far require the developer of the pod to know the actual network storage infrastructure available in the cluster. For example, to create an NFS-backed volume, the developer has to know the actual server the NFS export is located on. This goes against the basic idea of Kubernetes, which aims to hide the actual infrastructure from both the application and its developer, leaving them free from worrying about infrastructure specifics and making apps portable across a wide array of cloud providers and on-premises data centers.

To enable apps to request storage in a Kubernetes cluster without having to deal with infrastructure specifics, two resources were introduced: PersistentVolumes and PersistentVolumeClaims.

27
Q

What's the process of creating a PV and PVC, and how is a PVC added to a pod specification?

A

The cluster administrator sets up the underlying storage and then registers it in Kubernetes by creating a PersistentVolume resource through the Kubernetes API server. When creating the PersistentVolume, the admin specifies its size and the access modes it supports. Note that in this flow the PV is created statically by the administrator.

When a cluster user needs persistent storage in one of their pods, they first create a PersistentVolumeClaim manifest, specifying the minimum size and the access mode they require. The user then submits the PersistentVolumeClaim manifest to the Kubernetes API server, and Kubernetes finds an appropriate PersistentVolume and binds the volume to the claim.

The PersistentVolumeClaim can then be used as one of the volumes inside a pod. Other users cannot use the same PersistentVolume until it has been released by deleting the bound PersistentVolumeClaim.

PersistentVolumes don't belong to any namespace. They are cluster-level resources, like nodes.
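A sketch of all three steps (names, sizes, and the NFS backing store are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume        # created by the admin; cluster-level, no namespace
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                        # the actual backing storage, known to the admin
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim   # created by the user; no infrastructure details
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi            # minimum size required
---
apiVersion: v1
kind: Pod                     # the pod references only the claim
metadata:
  name: pv-demo
spec:
  containers:
  - name: main
    image: example/app
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc
```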

28
Q

[k8s] How is deletion of PVCs and PVs handled by k8s?

A

When a PVC is deleted, the PV goes into the 'Released' state if its persistentVolumeReclaimPolicy is set to 'Retain'. In that case a new PVC will not bind to the PV; the PV has to be cleaned up and made available again manually.
The 'Recycle' policy automatically deletes the volume's contents and makes the volume available to be claimed again, while the 'Delete' policy deletes the underlying storage.

29
Q

[k8s] How to provision PersistentVolumes dynamically

A

PersistentVolumes and PersistentVolumeClaims make it easy to obtain persistent storage without the developer having to deal with the actual storage technology used underneath, but they still require a cluster administrator to provision the actual storage up front.

Kubernetes can also perform this job automatically through dynamic provisioning of PersistentVolumes. Instead of creating PersistentVolumes, the cluster admin can deploy a PersistentVolume provisioner and define one or more StorageClass objects to let users choose what type of PersistentVolume they want. Users refer to a StorageClass in their PersistentVolumeClaims, and the provisioner takes it into account when provisioning the storage. A StorageClass can also be marked as the default, to be used for dynamically provisioning a PersistentVolume whenever a PersistentVolumeClaim doesn't say which storage class to use.
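A sketch using the GCE persistent-disk provisioner (class name, provisioner, and sizes are illustrative; the provisioner depends on your environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd   # provider-specific provisioner
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-claim
spec:
  storageClassName: fast            # triggers dynamic provisioning via "fast"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```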
30
Q

Is it possible to override a container's ENTRYPOINT and CMD attributes in k8s?

A

Yes. When specifying the container in the pod spec, use the 'command' attribute to override the entrypoint and the 'args' attribute to specify the arguments passed to the command (i.e. overriding CMD).
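A minimal sketch (the pod name, image, binary, and flags are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  containers:
  - name: main
    image: example/tool
    command: ["/bin/myapp"]      # overrides the image's ENTRYPOINT
    args: ["--interval", "30"]   # overrides the image's CMD
```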

31
Q

What are different ways to configure a pod?

A

Configuration information can be passed to the containers of a pod in different ways:

  1. By overriding the ENTRYPOINT and CMD attributes.
  2. By specifying a custom list of environment variables for each container of a pod.
  3. The above two methods hard-code configuration information into the pod spec; this can be avoided by using a ConfigMap resource.
32
Q

What is configMap?

A

Kubernetes allows separating configuration options into a separate object called a ConfigMap, which is a map containing key/value pairs with values ranging from short literals to full config files. An application doesn't need to read the ConfigMap directly or even know that it exists. The contents of the map are instead passed to containers as either environment variables or as files in a volume.

If the referenced ConfigMap doesn't exist when you create the pod, Kubernetes schedules the pod normally and tries to run its containers. The container referencing the non-existing ConfigMap will fail to start, but the other containers will start normally. If you then create the missing ConfigMap, the failed container is started without requiring you to recreate the pod.
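A sketch of a ConfigMap entry passed to a container as an environment variable (all names and the key are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  sleep-interval: "25"          # a short literal; values can also be whole files
---
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: main
    image: example/app
    env:
    - name: INTERVAL            # the variable the app reads
      valueFrom:
        configMapKeyRef:
          name: app-config      # which ConfigMap
          key: sleep-interval   # which entry in it
```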

33
Q

[k8s] How to update application’s configuration information without restarting it?

A

Using a ConfigMap and exposing it through a volume brings the ability to update the configuration without having to recreate the pod or even restart the container. When you update a ConfigMap, the files in all the volumes referencing it are updated. It is then up to the process to detect that they've changed and reload them.

34
Q

[k8s] What are secrets

A

To store and distribute sensitive information, Kubernetes provides a separate object called a Secret. Secrets are much like ConfigMaps - they are also maps that hold key-value pairs and can be used the same way as a ConfigMap. You can
- pass Secret entries to the container as environment variables
- expose Secret entries as files in a volume

Kubernetes helps keep your Secrets safe by making sure each Secret is only distributed to the nodes that run the pods that need access to it. Also, on the nodes themselves, Secrets are always stored in memory and never written to physical storage (which would require wiping the disks after deleting the Secrets from them).
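A sketch showing both consumption styles (names, key, and value are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:                    # plain values here; stored Base64-encoded
  db-password: s3cret
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: main
    image: example/app
    env:
    - name: DB_PASSWORD        # entry as an environment variable
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: db-password
    volumeMounts:
    - name: secrets            # entries as files in a volume
      mountPath: /etc/secrets
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: app-secret
```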

35
Q

[k8s] What is the use of default secret volume attached to each pod?

A

Every pod gets a default Secret mounted into it, containing the credentials the pod can use to talk to the Kubernetes API server.

36
Q

[k8s] what's the difference between Secrets and ConfigMaps?

A

Conceptually, Secrets and ConfigMaps are the same: they allow run-time configuration of the pod. But they have a few differences:

  • The contents of a Secret's entries are shown as Base64-encoded strings, whereas those of a ConfigMap are shown in cleartext.
  • The maximum size of a Secret is limited to 1 MB.
37
Q

What is downward API?

A

The Downward API enables you to expose the pod's own metadata to the processes running inside that pod. It allows you to pass metadata about the pod and its environment through environment variables or files (in a downwardAPI volume). Don't be confused by the name: the Downward API isn't a REST endpoint that your app needs to hit to get the data. It is a way of having environment variables or files populated with values from the pod's specification or status.
Currently it allows passing the following info to the processes running inside the pod:
- The pod's name
- The pod's IP address
- The namespace the pod belongs to
- The name of the node the pod is running on
- The name of the service account the pod is running under
- The CPU and memory requests for each container
- The CPU and memory limits for each container
- The pod's labels
- The pod's annotations
The Downward API allows you to keep the application Kubernetes-agnostic. This is especially useful when you're dealing with an existing application that expects certain data in environment variables. The Downward API lets you expose the data to the application without having to rewrite the application or wrap it in a shell script that collects the data and exposes it through environment variables.
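A sketch exposing a few of those fields as environment variables (pod name, image, and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  containers:
  - name: main
    image: example/app
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name      # the pod's name
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP       # the pod's IP address
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu        # this container's CPU request
          divisor: 1m                   # expressed in millicores
```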

38
Q

[k8s] how to call k8s API using “kubectl proxy”

A

The Kubernetes master hosts the Kubernetes API server component, which uses HTTPS and requires authentication. The URL of the API server can be obtained by running:
$kubectl cluster-info
Rather than dealing with authentication yourself, you can talk to the server through a proxy by running the 'kubectl proxy' command. It runs a proxy server that accepts HTTP connections on your local machine and proxies them to the API server while taking care of authentication, so you don't need to pass the authentication token in every request. It also makes sure you are talking to the actual API server and not a man in the middle, by verifying the server's certificate on each request.
$kubectl proxy

39
Q

[k8s] How to access k8s API server from within a pod? Can you use ‘kubectl proxy’ command to access API server from within pod?

A

To talk to the API server from inside a pod, you need to take care of three things:
- Find the location of the API server.
- Make sure you're talking to the API server and not something impersonating it.
- Authenticate with the server; otherwise it won't let you see or do anything.

Finding the location of the API server is easy, because a service called 'kubernetes' is automatically exposed in the default namespace and configured to point to the API server.
$kubectl get svc
Environment variables are configured for each service, so you can get both the IP and port of the API server by looking up the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT variables.

The default secret is mounted inside each container at /var/run/secrets/kubernetes.io/serviceaccount/.
$ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt namespace token
The ca.crt file holds the certificate of the certificate authority (CA) used to sign the Kubernetes API server's certificate. It can be passed to curl to verify that you are talking to the real API server:
$curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt https://kubernetes

The authentication token on the secrets volume can be used to authenticate to the API server:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

Using these details you can talk to the API server:
$curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN" https://kubernetes

40
Q

What is an ambassador container

A

Setting up the certificate options and using tokens to access the API server is cumbersome. Instead of talking to the API server directly, you can run 'kubectl proxy' in an ambassador container alongside the main container and communicate with the API server through it. Because all containers in a pod share the same loopback network interface, your app can access the proxy through a port on localhost.

41
Q

What is a deployment

A

A Deployment is a higher-level resource meant for deploying applications and updating them declaratively, instead of doing so through a ReplicationController or ReplicaSet, which are both considered lower-level concepts.
When you create a Deployment, a ReplicaSet resource is created underneath, so the actual pods are created and managed by the Deployment's ReplicaSets, not by the Deployment directly. A hash of the pod template is calculated and used in the names of the underlying ReplicaSet and its pods. To update the application, you just update the pod template and the Deployment resource takes care of rolling out the update. Updates are done using different strategies: with the RollingUpdate strategy, pods are replaced one by one without any downtime of the application, while the Recreate strategy removes all the old pods first and then creates new ones.
Note that the Deployment resource creates multiple ReplicaSets, one for each version of the pod template.
A Deployment can be rolled back or aborted midway. You can pause a deployment to inspect how a single instance of the new version behaves in production before allowing additional pod instances to replace the old ones. You can control the rate of a rolling update through the maxSurge and maxUnavailable properties.
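A Deployment sketch with an explicit rolling-update strategy (names, image, and counts are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 0    # never drop below the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: main
        image: example/app:v2   # changing this triggers a rollout
```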

42
Q

[k8s] What is a canary release

A

A canary release is a technique for minimizing the risk of rolling out a bad version of an application and having it affect all your users. Instead of rolling out the new version to everyone, you replace only one or a small number of old pods with new ones. This way only a small number of users initially hit the new version. You can then verify whether the new version is working fine and either continue the rollout across the remaining pods or roll back to the previous version.

43
Q

[k8s] What is blue-green deployment

A

Pods are usually fronted by a Service. It's possible to have the Service front only the initial version of the pods while you bring up the pods running the new version. Then, once all the new pods are up, you change the Service's label selector so the Service switches over to the new pods. This is called a blue-green deployment. After switching over, and once you're sure the new version functions correctly, you're free to delete the old pods by deleting the old ReplicationController.

44
Q

how to run multiple replicas of a pod and have each pod use its own storage volume?

A

ReplicaSets create exact copies of a pod: the replicas don't differ from each other apart from their name and IP address. If the pod template includes a volume, all replicas of the ReplicaSet will use the exact same volume.
StatefulSets can be used here; each pod created by a StatefulSet can have its own storage volume.

45
Q

Explain StatefulSets

A

When a stateful pod instance dies (or the node it is running on fails), the pod instance needs to be resurrected on another node, but the new instance must get the same name, network identity, and state as the one it is replacing. A StatefulSet provides exactly this: stable pod names, stable network identities, and per-pod storage.
Pods managed by a ReplicaSet, by contrast, are stateless and can be replaced with a completely new pod replica at any time.

46
Q

[k8s] What are different components of kubernetes?

A

A Kubernetes cluster is split into two parts:
- The Kubernetes Control Plane
- The (worker) nodes
The Control Plane is what controls and makes the whole cluster function. The components that make up the Control Plane are:
- The etcd distributed persistent storage
- The API server
- The Scheduler
- The Controller Manager
These components store and manage the state of the cluster, but they aren't what runs the application containers. The task of running your containers is up to the components running on each worker node:
- The Kubelet
- The Kubernetes Service Proxy (kube-proxy)
- The Container Runtime (Docker, rkt, or others)
Kubernetes system components communicate only with the API server. They don't talk to each other directly. The API server is the only component that communicates with etcd. None of the other components communicate with etcd directly; instead they modify the cluster state by talking to the API server.

# Get status of control plane components
$kubectl get componentstatuses
47
Q

[k8s] Which components of k8s need to be made highly available?

A

A single-master k8s cluster is not a good idea, as the master becomes a single point of failure in the system. Generally a multi-master k8s deployment is done to ensure a highly available system. For high availability of the control plane, multiple instances of etcd and the API server can be active at the same time and perform their jobs in parallel. Only one instance of the scheduler and the controller manager can be active at a given time, with the others in standby mode.
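The active/standby behavior of the scheduler and controller manager is typically achieved through leader election, enabled via the --leader-elect flag. A sketch of the relevant part of a kube-scheduler static pod manifest (the file path, version, and flags vary by distribution):

```yaml
# Excerpt of a hypothetical /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.21.0   # version is illustrative
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true   # only the elected leader actively schedules pods
```

With leader election on, the standby instances run but stay idle until the current leader fails to renew its lease.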

48
Q

[k8s] What are the functions of the API server?

A

The API server provides:
- CRUD operations on cluster resources through a REST API
- Validation of the objects stored in etcd
- Invocation of various registered plugins, such as authentication and authorization
The API server doesn't do anything else. For example, it doesn't create pods when you create a ReplicaSet resource, and it doesn't manage the endpoints of a service. That's what controllers in the Controller Manager do.
But the API server doesn't even tell these controllers what to do. All it does is enable those controllers and other components to observe changes to deployed resources. A Control Plane component can request to be notified when a resource is created, modified, or deleted. This enables the component to perform whatever task it needs in response to a change of the cluster state.

# Watch pods for changes, i.e. creation, deletion, etc.
$kubectl get pods --watch
49
Q

[k8s] How does the Kubernetes scheduler work?

A

The operation of the Scheduler is simple. All it does is wait for newly created pods through the API server's watch mechanism and assign a node to each new pod that doesn't already have one assigned.
The Scheduler doesn’t instruct the selected node (or the Kubelet running on that node) to run the pod. All the Scheduler does is update the pod definition through the
API server. The API server then notifies the Kubelet (again, through the watch mechanism described previously) that the pod has been scheduled. As soon as the Kubelet on the target node sees the pod has been scheduled to its node, it creates and runs the
pod’s containers.

50
Q

[k8s] What does the kubelet do?

A

The Kubelet is the component responsible for everything running on a worker node. Its initial job is to register the node it's running on by creating a Node resource in the API server. Then it needs to continuously monitor the API server for pods that have been scheduled to the node, and start the pod's containers. It does this
by telling the configured container runtime (which is Docker, CoreOS’ rkt, or something else) to run a container from a specific container image. The Kubelet then constantly monitors running containers and reports their status, events, and resource consumption to the API server.
The Kubelet is also the component that runs the container liveness probes, restarting containers when the probes fail. Lastly, it terminates containers when their Pod is deleted from the API server and notifies the server that the pod has terminated.

51
Q

[k8s] What is the role of kubernetes service proxy (kube-proxy)?

A

kube-proxy makes sure clients can connect to the services you define through Service objects. It makes sure connections to the service IP and port end up at one of the pods backing that service (or at other, non-pod service endpoints). When a service is backed by more than one pod, the proxy performs load balancing across those pods.

52
Q

[k8s] How can one observe cluster events to monitor Kubernetes cluster activity?

A

Both the Control Plane components and the Kubelet emit events to the API server as they perform actions. They do this by creating Event resources, which are like any other Kubernetes resource. You can retrieve events directly with "kubectl get events".
Watching events with the --watch option is much easier on the eyes and useful for seeing what is happening in the cluster.
$kubectl get events --watch

53
Q

[k8s] What is the pause container?

A

The pause container is the container that holds all the containers of a pod together. The pause container is an infrastructure container whose sole purpose is to hold all the pod's namespaces, i.e. the network and other Linux namespaces. All other user-defined containers of the pod then use the namespaces of the pod infrastructure container.
Actual application containers may die and get restarted. When such a container starts up again, it needs to become part of the same Linux namespaces as before. The infrastructure container makes this possible since its lifecycle is tied to that of the pod: the container runs from the time the pod is scheduled until the pod is deleted. If the infrastructure container is killed in the meantime, the Kubelet recreates it and all the pod's containers.

54
Q

[k8s] How does inter-pod networking work in Kubernetes?

A
Each pod gets its own unique IP address and can communicate with all other pods through a flat, NAT-less network. The network is set up by the system administrator or by a Container Network Interface (CNI) plugin, not by Kubernetes itself.
A number of CNI plugins are available:
* Calico
* Flannel
* Romana
* Weave Net

55
Q

[k8s] Why can't you ping a service IP address?

A

Each Service gets its own stable IP address and port. Clients (usually pods) use the service by connecting to this IP address and port. The IP address is virtual—it’s not assigned to any network interfaces and is never listed as either the source or the destination IP address in a network packet when the packet leaves the node. A key detail of Services is that they consist of an IP and port pair (or multiple IP and port pairs in the case of multi-port Services), so the service IP by itself doesn’t represent anything. That’s why you can’t ping them.