Kubernetes Architecture explained Flashcards
Each worker node needs 3 processes installed on it:
1) Container Runtime
2) Kubelet
3) Kube-proxy
These 3 processes are used to schedule and manage the Pods running on that node.
Container runtime
Because application Pods have containers inside them, a container runtime must be installed on every Node.
Kubelet is the process that
starts the Pods and their containers on the machine (the Node).
Why does Kubelet have an interface with both the container runtime and the Node (the machine) itself?
Kubelet needs to take resources like CPU, RAM, and storage from the worker node to create a Pod, and then it needs to talk to the container runtime (such as Docker) to start a container inside that Pod.
Kube-proxy must be installed on every Node. This process is responsible for
forwarding requests from Services to Pods.
Kube-proxy has built-in intelligent
forwarding logic that makes sure communication reaches Pods with low overhead, for example by forwarding a Service request to a replica running on the same Node instead of sending it across the network to another Node.
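A minimal sketch (not part of the original flashcards) of what kube-proxy forwards to: the Endpoints object of a Service lists the Pod IPs behind it. It uses the official Python kubernetes client and assumes a cluster reachable via ~/.kube/config; the Service name "my-service" and namespace "default" are made-up placeholders.

```python
from kubernetes import client, config

config.load_kube_config()          # read cluster credentials from kubeconfig
v1 = client.CoreV1Api()

# The Endpoints object holds the Pod IPs behind the Service; kube-proxy
# programs its forwarding rules toward exactly these addresses.
endpoints = v1.read_namespaced_endpoints(name="my-service", namespace="default")
for subset in endpoints.subsets or []:
    for address in subset.addresses or []:
        print("my-service forwards to Pod IP:", address.ip)
```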
When you as a user want to deploy a new application in a Kubernetes cluster, you interact with the
API server using a Kubernetes client, which could be a UI like the Kubernetes Dashboard, the command-line tool kubectl, or a client library.
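As a rough illustration (assuming a kubeconfig with valid credentials), every call made through a client library such as the Python kubernetes package is simply an HTTPS request to the API server:

```python
from kubernetes import client, config

config.load_kube_config()          # authenticate the same way kubectl does
v1 = client.CoreV1Api()

# Ask the API server for all Pods these credentials are allowed to see.
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```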
The API server is like a cluster gateway, which receives the requests for
creating or updating components in the cluster, as well as queries about these components.
The API server also acts as a gatekeeper to make sure that
only authenticated and authorized requests get through to the cluster
After validating your request, what does the API server do with it?
It hands it over to the Scheduler to start the application Pod on one of the Worker Nodes.
How does the scheduler intelligently decide which Worker Node the new Pod should be placed on?
- The Scheduler looks at your request and sees how many resources (CPU, RAM, etc.) the application you want to schedule will need.
- Then it goes through the Worker Nodes and checks the available resources on each one of them.
- The Node with the most free resources, i.e. the least load, is where the new Pod gets placed; the Scheduler only decides the placement, and the Kubelet on that Node actually starts the Pod (see the sketch below).
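The sketch below (not from the flashcards) shows the two inputs of that decision using the Python kubernetes client: the resource requests a Pod declares, and the allocatable resources each Node reports. The Pod name, image, and request sizes are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# 1) What the Pod asks for: the Scheduler reads these requests from the spec.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="app",
            image="nginx",
            resources=client.V1ResourceRequirements(
                requests={"cpu": "250m", "memory": "128Mi"},
            ),
        )
    ]),
)

# 2) What each Node can offer: allocatable CPU/RAM reported in Node status.
for node in v1.list_node().items:
    alloc = node.status.allocatable
    print(node.metadata.name, "cpu:", alloc["cpu"], "memory:", alloc["memory"])

# Creating the Pod hands the decision to the Scheduler, which compares (1)
# against (2) and binds the Pod to the least loaded Node that fits.
# v1.create_namespaced_pod(namespace="default", body=pod)
```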
The Controller Manager’s main job is detecting
state changes, such as Pods crashing.
When the Controller Manager detects that a Pod has died, it will make a
request to the Scheduler to redeploy that Pod. The Scheduler may not put the Pod back on the same Node it came from.
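A minimal sketch of the underlying "detect state changes" mechanism: controllers watch the API server for events about the objects they manage. This snippet only prints Pod events in a placeholder "default" namespace; the real Controller Manager reacts to such events, for example by creating replacement Pods.

```python
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

# Stream Pod events (ADDED / MODIFIED / DELETED) for a short while.
w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="default", timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
```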
etcd is a
key value store of the cluster state
etcd is often called the cluster brain, which means that
every change in the cluster, for example when a new Pod gets scheduled or when a Pod dies, gets saved or updated in this key-value store of etcd.
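To illustrate only the key-value idea, here is a tiny sketch using the third-party etcd3 Python package against a local, unsecured etcd. The host, port, key, and value are placeholders; a real cluster's etcd requires TLS client certificates, and Kubernetes stores its objects under /registry/... in a binary encoding, so this shows the concept rather than how you would inspect a production cluster.

```python
import etcd3

etcd = etcd3.client(host="127.0.0.1", port=2379)

# Every cluster change ultimately boils down to writes like this one...
etcd.put("/demo/pods/my-pod", "Running")

# ...and reads like this one, which is why the current cluster state can
# always be reconstructed from etcd.
value, metadata = etcd.get("/demo/pods/my-pod")
print(value.decode())
```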