Containers & k8s Flashcards
What is virtualization?
It is the emulation of a physical computer.
What is a bare metal hypervisor?
It is a piece of software that runs directly on the hardware. It doesn't rely on a host operating system, which lets it achieve better performance. However, the hardware it runs on is more expensive.
What are the benefits of virtual machines?
They are cheaper to run: many VMs share the same hardware, which allows much higher resource utilization, and they are easy to scale. Live-migration software can even move a running VM from one host to another without shutting it down.
What are the downsides of virtual machines?
They are vulnerable to the noisy neighbor problem: when our application shares a host machine with a resource-hungry neighbor, its performance can suffer. VMs on the same host also share the same physical cores, which makes them vulnerable to attacks exploiting design flaws in modern microprocessors. Side-channel attacks like Meltdown and Spectre are well-known examples.
What is containerization?
It is a lightweight form of virtualization. We still have hardware and a host operating system, but instead of virtualizing the underlying hardware, we virtualize the host operating system with special software called a container engine. On top of the container engine run many containers, each with its own application environment isolated from the others.
What are the advantages of containers?
The container engine provides even faster resource provisioning, and all the resources needed to run an application are packaged together, so applications can run anywhere. Containers are scalable and portable. They are lightweight and require fewer hardware resources than virtual machines, so a bare metal server can run significantly more containers than VMs. Because each container runs as a native process of the host operating system, containers are much faster to start, which makes them even easier to deploy and maintain at scale; all that is needed is the container engine software.
What are the downsides of containers?
Containers are less secure. They share the same underlying operating system, and isolation relies on OS-level primitives, which exposes containers to a wider class of security vulnerabilities at the OS level. One mitigation is to run containers inside separate VMs to reduce the attack surface; this is a tradeoff between flexibility and security.
What comes after containers?
Serverless and edge computing come to mind. They make the developer-productivity and deployment stories even more compelling.
What are the virtualization layers?
Hardware
Host Operating System
Hypervisor
Then in each virtual machine there is a guest OS, and on top of each guest OS runs an application for a tenant.
What is bare metal hardware?
A bare metal server is a physical computer dedicated to a single tenant. It has hardware, a host OS, and many applications on top of it. Once upon a time, all servers were bare metal. It gives us complete control over the hardware resources and the software stack, so for applications that require the absolute highest performance from the hardware, this can be a good option. Bare metal servers are physically isolated, so they are not affected by the noisy neighbor problem. This isolation also provides the highest level of security: there are no side-channel attacks, and no malicious neighbor who could steal secrets from other tenants. When an application has the strictest isolation, security, compliance, and regulatory requirements, bare metal is sometimes the only way to go.
What are the downsides of bare metal?
It is expensive, hard to manage, and hard to scale. Acquiring new hardware takes time, and managing it well requires a competent team.
What’s a hypervisor software?
A hypervisor, also known as a virtual machine monitor (VMM), is software that creates and runs virtual machines (VMs) by separating the physical hardware from the operating systems of the VMs. There are two types of hypervisors: Type 1 (or bare-metal), which runs directly on the hardware, like VMware ESXi, Microsoft Hyper-V, and Xen, and Type 2 (or hosted), which operates on top of a host operating system, like VMware Workstation and Oracle VirtualBox. Hypervisors allow multiple VMs, each with its own OS, to run concurrently on a single physical machine, effectively maximizing resource utilization and providing environment isolation.
What is Kubernetes, and what are some of its main features and benefits?
Kubernetes is an open-source platform designed to automate the deployment, scaling, and operation of application containers across clusters of hosts, providing container-centric infrastructure. Here are some of its main features and benefits:
- Orchestration and Management: Kubernetes efficiently manages clusters of containers, handling the deployment and ensuring that the state of containers matches user configurations.
- Scaling: It allows for manual or automatic scaling of applications based on demand, ensuring efficient use of resources.
- Load Balancing: Kubernetes can distribute network traffic so that deployments are stable and performant, automatically finding the best container instance for each request.
- Self-healing: It automatically restarts containers that fail, replaces and reschedules containers when nodes die, and kills containers that don’t respond to user-defined health checks.
- Automated Rollouts and Rollbacks: Kubernetes rolls out changes to the application or its configuration while monitoring application health to ensure it doesn’t kill all your instances at the same time.
- Service Discovery and Load Balancing: Kubernetes groups sets of containers into Pods for easy management and discovery, and provides them with IP addresses and a single DNS name, making it easy to set up load balancing.
- Secret and Configuration Management: Kubernetes can manage sensitive information like passwords and API keys using secrets, and it can deploy and update configuration settings without rebuilding container images.
- Storage Orchestration: It automatically mounts the storage system of your choice, whether from local storage, public cloud providers, or network storage systems.
- Resource Usage Monitoring: Kubernetes allows you to monitor the resource usage of applications through built-in tools like Kubernetes Dashboard or external solutions like Prometheus.
- Extensibility and Flexibility: It supports multiple container runtimes and provides extensive APIs, making it highly extensible and able to integrate with existing infrastructure.
These features make Kubernetes a powerful tool for modern application deployment and management in the cloud-native ecosystem.
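Several of these features (declarative desired state, scaling, self-healing via health checks) come together in a Deployment manifest. The sketch below is illustrative: the name `web`, the labels, and the `nginx` image are assumptions, not from the source.

```yaml
# A hypothetical Deployment: Kubernetes keeps 3 replicas of this pod
# template running, restarting or rescheduling them when they fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired state; controllers reconcile toward it
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          livenessProbe:       # self-healing: failing containers are restarted
            httpGet:
              path: /
              port: 80
```

Changing `replicas` and re-applying the file is all that manual scaling requires; autoscaling adjusts the same field automatically.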
What are the components of a Kubernetes cluster, and how do they interact with each other?
A Kubernetes cluster consists of several components that interact to manage the state of the cluster and run applications efficiently:
- Control Plane (Master Node): This central component manages the state of the Kubernetes cluster, making decisions about the cluster (e.g., scheduling) and reacting to cluster events (e.g., starting up a new container when one fails).
- etcd: A consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data. It stores the configuration and state of the cluster, ensuring that data is always available to the control plane components.
- API Server (kube-apiserver): The API server is a component of the Kubernetes control plane that exposes the Kubernetes API. It serves as the front end for the Kubernetes control plane, allowing interaction, management, and configuration through API calls.
- Scheduler (kube-scheduler): The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. It watches for newly created Pods that have no assigned node, and selects a node for them to run on based on resource availability, policy, affinity specifications, and other criteria.
- Controller Manager (kube-controller-manager): This component runs controller processes which regulate the state of the cluster, managing the lifecycle of different resources like nodes, jobs, endpoints, etc.
- Node (Worker Node): Nodes are the workers that run application containers. Each node has the services necessary to run Pods and is managed by the master components.
- Kubelet: Running on nodes, the kubelet is the primary node agent. It watches for tasks sent from the API Server, executes the container tasks, reports back to the master, and ensures that the containers running on the node are healthy.
- Kube-Proxy: This component runs on each node to maintain network rules that allow network communication to your Pods from network sessions inside or outside of your cluster.
- Container Runtime: The underlying software that is used to run containers. Kubernetes is compatible with various container runtimes such as Docker, containerd, and CRI-O.
- Add-ons: These provide cluster features like DNS, which gives a DNS service for the cluster, handling DNS resolution for Kubernetes services, or the Kubernetes Dashboard, a general-purpose, web-based UI for Kubernetes clusters.
These components interact through the Kubernetes API, which is exposed by the API Server, ensuring that the cluster maintains the desired state and responds appropriately to changes and failures.
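To make this interaction concrete, consider a minimal Service manifest (the `app: web` pod label is an assumption for illustration). Submitting it goes through the API server, its state is persisted in etcd, kube-proxy on every node programs the network rules that route traffic to matching Pods, and the DNS add-on makes it resolvable by name inside the cluster.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc         # resolvable via cluster DNS as "web-svc"
spec:
  selector:
    app: web            # endpoints: all Pods carrying this label
  ports:
    - port: 80          # the Service's cluster-internal port
      targetPort: 80    # the container port traffic is forwarded to
```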
How can you interact with Kubernetes API?
- kubectl
- directly accessing the REST API (`kubectl proxy --port=8080 &`, then `curl http://localhost:8080/api/`)
- programmatic access through the official client libraries for Go, Python, Java, .NET, etc.
https://kubernetes.io/docs/reference/using-api/client-libraries/
Explain these commands:
k create, k get, k describe, k delete, k apply, k logs, k exec, k scale, k rollout
Here’s an explanation of each of these Kubernetes `kubectl` commands:
- `kubectl create`: This command is used to create a resource in your Kubernetes cluster. You can specify the resource type and provide a YAML or JSON file for the resource configuration. For example, `kubectl create -f mypod.yaml` creates a pod as defined in the `mypod.yaml` file.
- `kubectl get`: This command retrieves information about one or more resources in your Kubernetes cluster. It can list various resources like pods, services, deployments, etc., and can be used to see their current state. For example, `kubectl get pods` lists all pods in the current namespace.
- `kubectl describe`: This command shows detailed information about a specific resource or group of resources. It includes status, readiness, and other important metadata and is more verbose than `kubectl get`. For instance, `kubectl describe pod mypod` gives detailed information about the pod named “mypod”.
- `kubectl delete`: This command removes resources from the cluster. You can delete pods, services, deployments, and more, either by specifying the resource type and name or by using a file. For example, `kubectl delete -f deploy.yaml` deletes the resources defined in the `deploy.yaml` file.
- `kubectl apply`: This command is used to apply a configuration to a resource created from a file or stdin. It is commonly used for updating existing resources or creating resources if they do not already exist. For example, `kubectl apply -f mydeployment.yaml` updates or creates the deployment as specified.
- `kubectl logs`: This command fetches the logs from a container in a pod. If a pod has multiple containers, you specify which container’s logs you want to view. For example, `kubectl logs mypod -c mycontainer` fetches logs from “mycontainer” within “mypod”.
- `kubectl exec`: This command executes a command in a container within a pod. It is often used for diagnostic purposes, such as checking the current environment or running interactive shells. For example, `kubectl exec mypod -- ls /app` runs the `ls /app` command inside “mypod”.
- `kubectl scale`: This command changes the number of replicas of a specified deployment or other scalable resource. It’s used for manually scaling the number of pods in a deployment. For example, `kubectl scale deployment mydeployment --replicas=4` scales “mydeployment” to 4 replicas.
- `kubectl rollout`: This command manages a deployment’s rollout process. It can be used to start a new rollout, undo a deployment, or pause/resume an ongoing rollout. For instance, `kubectl rollout restart deployment/mydeployment` restarts the rollout of “mydeployment”.
These commands form the backbone of most interactions with Kubernetes clusters, providing the tools needed to manage and maintain your applications and their environments.
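For reference, a pod like the “mypod” used in the examples above might be defined as follows. This is only a sketch: the `busybox` image and the looping command are assumptions chosen so the pod keeps running and produces output for `kubectl logs`.

```yaml
# mypod.yaml — a hypothetical pod matching the command examples above
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: busybox:1.36
      # print a timestamp every 5 seconds so `kubectl logs` has output
      command: ["sh", "-c", "while true; do date; sleep 5; done"]
```

With this file, `kubectl create -f mypod.yaml`, `kubectl describe pod mypod`, `kubectl logs mypod -c mycontainer`, and `kubectl delete -f mypod.yaml` form a complete round trip.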