KUBERNETES ARCHITECTURE Flashcards

1
Q

Components of Kubernetes

A

In its simplest form, Kubernetes is made of a central manager (aka master) and some worker nodes (we will see in a follow-on chapter how you can actually run everything on a single node for testing purposes). The manager runs an API server, a scheduler, various controllers and a storage system to keep the state of the cluster, container settings, and the networking configuration.

Kubernetes exposes an API (via the API server): you can communicate with the API using a local client called kubectl or you can write your own client. The kube-scheduler sees the requests for running containers coming to the API and finds a suitable node to run that container in. Each node in the cluster runs two processes: a kubelet and a kube-proxy. The kubelet receives requests to run the containers, manages any necessary resources and watches over them on the local node. The kube-proxy creates and manages networking rules to expose the container on the network.

2
Q

Pod

A

We have learned that Kubernetes is an orchestration system to deploy and manage containers. Containers are not managed individually; instead, they are part of a larger object called a Pod. A Pod consists of one or more containers which share an IP address, access to storage and namespace. Typically, one container in a Pod runs an application, while other containers support the primary application.
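As an illustration, a minimal Pod sketch with a primary container and a supporting container sharing the Pod's IP address (the names and images are illustrative, not from the source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod              # illustrative name
spec:
  containers:
  - name: app                # primary application container
    image: nginx:1.25
  - name: log-shipper        # supporting container; shares the Pod IP and volumes
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

Because both containers share one network namespace, a process in log-shipper can reach the app container on localhost.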

3
Q

Orchestration

A

Orchestration is managed through a series of watch-loops, also known as operators or controllers. Each controller interrogates the kube-apiserver for a particular object state, modifying the object until the declared state matches the current state. The default, newest, and most feature-filled controller for containers is a Deployment. A Deployment ensures that resources are available, such as IP address and storage, and then deploys a ReplicaSet. The ReplicaSet is a controller which deploys and restarts containers, Docker by default, until the requested number of containers is running. Previously, this function was handled by the ReplicationController, but it has been made obsolete by Deployments. There are also Jobs and CronJobs to handle single or recurring tasks.

4
Q

Deployment

A

The default, newest, and feature-filled controller for containers is a Deployment. A Deployment ensures that resources are available, such as IP address and storage, and then deploys a ReplicaSet.
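A minimal sketch of a Deployment manifest, which in turn creates a ReplicaSet from its Pod template (names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3                 # the ReplicaSet keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:                   # Pod template handed to the ReplicaSet
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25
```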

5
Q

ReplicaSet

A

The ReplicaSet is a controller which deploys and restarts containers, Docker by default, until the requested number of containers is running. Previously, this function was handled by the ReplicationController, but it has been made obsolete by Deployments.

6
Q

Jobs and CronJobs

A

There are also Jobs and CronJobs to handle single or recurring tasks.
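For example, a sketch of a CronJob running a recurring task (schedule, names, and image are illustrative; the batch/v1 CronJob API assumes Kubernetes v1.21+, earlier releases used batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"       # standard cron syntax: 02:00 every day
  jobTemplate:                # each run creates a Job from this template
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
```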

7
Q

DaemonSet

A

A DaemonSet will ensure that a single pod is deployed on every node. These are often used for logging and metrics pods.

8
Q

StatefulSet

A

A StatefulSet can be used to deploy pods in a particular order, such that following pods are only deployed if previous pods report a ready status.

9
Q

Labels

A

Managing thousands of Pods across hundreds of nodes can be a difficult task. To make management easier, we can use labels, arbitrary strings which become part of the object metadata. These can then be used when checking or changing the state of objects without having to know individual names or UIDs. Nodes can have taints to discourage Pod assignments, unless the Pod has a matching toleration in its metadata.
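To illustrate, a PodSpec fragment with labels in the metadata and a toleration matching an assumed node taint (all keys and values are illustrative):

```yaml
metadata:
  labels:
    app: web                # arbitrary key/value strings in object metadata
    tier: frontend
spec:
  tolerations:              # matches a node tainted with dedicated=web:NoSchedule
  - key: dedicated
    operator: Equal
    value: web
    effect: NoSchedule
```

The labels then let you act on Pods without knowing names or UIDs, e.g. kubectl get pods -l app=web,tier=frontend.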

10
Q

metadata for annotations

A

There is also space in metadata for annotations which remain with the object but cannot be used by Kubernetes commands. This information could be used by third-party agents or other tools.

11
Q

multi-tenancy

A

While using lots of smaller, commodity hardware could allow every user to have their very own cluster, often multiple users and teams share access to one or more clusters. This is referred to as multi-tenancy.

12
Q

namespace

A

A segregation of resources, upon which resource quotas and permissions can be applied. Kubernetes objects may be created in a namespace or be cluster-scoped. Users can be limited by the object verbs allowed per namespace. Also, the LimitRange admission controller constrains resource usage in that namespace. Two objects cannot have the same Name: value in the same namespace.
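A sketch of a Namespace plus a LimitRange constraining containers inside it (names and amounts are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev            # LimitRange is namespace-scoped
spec:
  limits:
  - type: Container
    default:                # applied when a container declares no limits
      cpu: "500m"
      memory: 256Mi
```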

13
Q

context

A

A combination of user, cluster name and namespace. A convenient way to switch between combinations of permissions and restrictions. For example you may have a development cluster and a production cluster, or may be part of both the operations and architecture namespaces. This information is referenced from ~/.kube/config.

14
Q

Resource Limits

A

A way to limit the amount of resources consumed by a pod, or to request a minimum amount of resources reserved, but not necessarily consumed, by a pod. Limits can also be set per-namespace, and these have priority over those in the PodSpec.
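In a PodSpec this looks like the following (amounts are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:             # reserved for scheduling, not necessarily consumed
        cpu: "250m"
        memory: 64Mi
      limits:               # hard cap enforced at runtime
        cpu: "500m"
        memory: 128Mi
```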

15
Q

Pod Security Policies

A

A policy to limit the ability of pods to elevate permissions or modify the node upon which they are scheduled. This wide-ranging limitation may prevent a pod from operating properly. The use of PSPs may be replaced by Open Policy Agent in the future.

16
Q

Network Policies

A

The ability to have an inside-the-cluster firewall. Ingress and Egress traffic can be limited according to namespaces and labels as well as typical network traffic characteristics.

17
Q

Master Node

A

The Kubernetes master runs various server and manager processes for the cluster. Among the components of the master node are the kube-apiserver, the kube-scheduler, and the etcd database. As the software has matured, new components have been created to handle dedicated needs, such as the cloud-controller-manager; it handles tasks once handled by the kube-controller-manager to interact with other tools, such as Rancher or DigitalOcean for third-party cluster management and reporting.

There are several add-ons which have become essential to a typical production cluster, such as DNS services. Others are third-party solutions where Kubernetes has not yet developed a local component, such as cluster-level logging and resource monitoring.

18
Q

Master Node Components

A

Master node-specific components:

kube-apiserver
kube-scheduler
etcd Database
Other Agents

Common node components:
kubelet
kube-proxy
Container Runtime

19
Q

kube-apiserver

A

The kube-apiserver is central to the operation of the Kubernetes cluster.

All calls, both internal and external traffic, are handled via this agent. All actions are accepted and validated by this agent, and it is the only agent which connects to the etcd database. As a result, it acts as a master process for the entire cluster, and acts as a frontend of the cluster’s shared state. Each API call goes through three steps: authentication, authorization, and several admission controllers.

20
Q

kube-scheduler

A

The kube-scheduler uses an algorithm to determine which node will host a Pod of containers. The scheduler will try to view available resources (such as volumes) to bind, and then try and retry to deploy the Pod based on availability and success.

There are several ways you can affect the algorithm, or a custom scheduler could be used instead. You can also bind a Pod to a particular node, though the Pod may remain in a pending state due to other settings.

One of the first settings referenced is whether the Pod can be deployed within the current quota restrictions. If so, then the taints, tolerations, and labels of the Pods are used along with those of the nodes to determine the proper placement.

21
Q

etcd Database

A

The state of the cluster, networking, and other persistent information is kept in an etcd database, or, more accurately, a b+tree key-value store. Rather than finding and changing an entry, values are always appended to the end. Previous copies of the data are then marked for future removal by a compaction process. It works with curl and other HTTP libraries, and provides reliable watch queries.

Simultaneous requests to update a value all travel via the kube-apiserver, which then passes them along to etcd in series. The first request would update the database. The second request would no longer have the same version number, in which case the kube-apiserver would reply with a 409 error to the requester. There is no logic past that response on the server side, meaning the client needs to expect this and act upon the denial to update.

There is a master database along with possible followers. They communicate with each other on an ongoing basis to determine which will be master, and determine another in the event of failure. While very fast and potentially durable, there have been some hiccups with some features like whole cluster upgrades. Starting with v1.15.1, kubeadm allows easy deployment of a multi-master cluster with stacked etcd or an external database cluster.

22
Q

Other Agents

A

The kube-controller-manager is a core control loop daemon which interacts with the kube-apiserver to determine the state of the cluster. If the state does not match, the manager will contact the necessary controller to match the desired state. There are several controllers in use, such as endpoints, namespace, and replication. The full list has expanded as Kubernetes has matured.

Remaining in beta as of v1.16, the cloud-controller-manager interacts with agents outside of the cloud. It handles tasks once handled by kube-controller-manager. This allows faster changes without altering the core Kubernetes control process. Each kubelet must be started with the --cloud-provider=external setting passed to the binary.

23
Q

Worker node

A

All worker nodes run the kubelet and kube-proxy, as well as the container engine, such as Docker or cri-o. Other management daemons are deployed to watch these agents or provide services not yet included with Kubernetes.

The kubelet interacts with the underlying Docker Engine also installed on all the nodes, and makes sure that the containers that need to run are actually running. The kube-proxy is in charge of managing the network connectivity to the containers. It does so through the use of iptables entries. It also has the userspace mode, in which it monitors Services and Endpoints using a random high-number port to proxy traffic. Use of ipvs can be enabled, with the expectation it will become the default, replacing iptables.

Kubernetes does not have cluster-wide logging yet. Instead, another CNCF project is used, called Fluentd. When implemented, it provides a unified logging layer for the cluster, which filters, buffers, and routes messages.

Cluster-wide metrics is not quite fully mature, so Prometheus is also often deployed to gather metrics from nodes and perhaps some applications.

24
Q

kubelet

A

The kubelet interacts with the underlying Docker Engine also installed on all the nodes, and makes sure that the containers that need to run are actually running.

It is the heavy lifter for changes and configuration on worker nodes. It accepts the API calls for Pod specifications (a PodSpec is a JSON or YAML file that describes a Pod). It will work to configure the local node until the specification has been met.

Should a Pod require access to storage, Secrets or ConfigMaps, the kubelet will ensure access or creation. It also sends back status to the kube-apiserver for eventual persistence.

25
Q

kube-proxy

A

The kube-proxy is in charge of managing the network connectivity to the containers. It does so through the use of iptables entries. It also has the userspace mode, in which it monitors Services and Endpoints using a random high-number port to proxy traffic. Use of ipvs can be enabled, with the expectation it will become the default, replacing iptables.

26
Q

Container Runtime

A

The container runtime is the software that is responsible for running containers. Kubernetes supports several container runtimes: Docker, containerd, cri-o, rktlet and any implementation of the Kubernetes CRI (Container Runtime Interface).

27
Q

Services

A

With every object and agent decoupled, we need a flexible and scalable operator which connects resources together and will reconnect, should something die and a replacement be spawned. Each Service is a microservice handling a particular bit of traffic, such as a single NodePort or a LoadBalancer to distribute inbound requests among many Pods.

A Service also handles access policies for inbound requests, useful for resource control, as well as for security.

A Service, as well as kubectl, uses a selector in order to know which objects to connect to.
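A minimal sketch of a Service selecting Pods by label (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                # traffic goes to Pods carrying this label
  ports:
  - port: 80                # Service port inside the cluster
    targetPort: 8080        # containerPort the application listens on
```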

28
Q

Selectors currently supported

A

equality-based

Filters by label keys and their values. Three operators can be used: =, ==, and !=. If multiple values or keys are used, all must be included for a match.

set-based

Filters according to a set of values. The operators are in, notin, and exists. For example, the use of status notin (dev, test, maint) would select resources with the key of status which did not have a value of dev, test, nor maint.
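The set-based example above, written in manifest form (e.g. in a Deployment or ReplicaSet selector; key and values are illustrative):

```yaml
selector:
  matchExpressions:
  - key: status
    operator: NotIn         # manifest form of: status notin (dev, test, maint)
    values: ["dev", "test", "maint"]
```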

29
Q

Operators\Controllers

A

https://kubernetes.io/docs/concepts/architecture/controller/

An important concept for orchestration is the use of operators. These are also known as watch-loops and controllers. They query the current state, compare it against the spec, and execute code based on how they differ. Various operators ship with Kubernetes, and you can create your own, as well. A simplified view of an operator is an agent, or Informer, and a downstream store. Using a DeltaFIFO queue, the source and downstream are compared. A loop process receives an obj or object, which is an array of deltas from the FIFO queue. As long as the delta is not of the type Deleted, the logic of the operator is used to create or modify some object until it matches the specification.

The Informer which uses the API server as a source requests the state of an object via an API call. The data is cached to minimize API server transactions. A similar agent is the SharedInformer; objects are often used by multiple other objects. It creates a shared cache of the state for multiple requests.

A Workqueue uses a key to hand out tasks to various workers. The standard Go workqueues of rate limiting, delayed, and time queue are typically used.

The endpoints, namespace, and serviceaccounts operators each manage the eponymous resources for Pods.

30
Q

Single IP per Pod

A

Each Pod gets a single IP address from a special container known as the pause container. The pause container is used to get an IP address, then all the containers in the pod use its network namespace. You won't see this container from the Kubernetes perspective, but you would by running sudo docker ps.

To communicate with each other, containers can use the loopback interface, write to files on a common filesystem, or via inter-process communication (IPC). As a result, co-locating applications in the same pod may have issues. There is a network plugin which will allow more than one IP address, but so far, it has only been used within HPE labs.

Support for dual-stack, IPv4 and IPv6 continues to increase with each release. For example, in a recent release kube-proxy iptables supports both stacks simultaneously.

31
Q

Where operators live

A

kube-controller-manager

kube-controller-manager compares states and makes decisions

32
Q

Container Runtime Interface (CRI)

A

The goal of the Container Runtime Interface (CRI) is to allow easy integration of container runtimes with kubelet. By providing a protobuf API, specifications, and libraries, new runtimes can easily be integrated without needing deep understanding of kubelet internals.

33
Q

containerd

A

The intent of the containerd project is not to build a user-facing tool; instead, it is focused on exposing highly decoupled, low-level primitives:

Defaults to runC to run containers according to the OCI Specifications
Intended to be embedded into larger systems
Minimal CLI, focused on debugging and development.

With a focus on supporting the low-level, or backend, plumbing of containers, this project is better suited to integration and operations teams building specialized products than to typical build, ship, and run applications.

34
Q

Running Commands in a Container

A

Use the -it options for an interactive shell; otherwise the command runs without interaction or terminal access.

If you have more than one container, declare which container:

kubectl exec -it <pod-name> -c <container-name> -- /bin/bash

35
Q

Multi-Container Pod

A

https://matthewpalmer.net/kubernetes-app-developer/articles/multi-container-pod-design-patterns.html

There are three common design patterns and use-cases for combining multiple containers into a single pod. We'll walk through the sidecar pattern, the adapter pattern, and the ambassador pattern.

36
Q

Probes

A

Liveness, Readiness and Startup Probes

The kubelet uses liveness probes to know when to restart a container. For example, liveness probes could catch a deadlock, where an application is running, but unable to make progress. Restarting a container in such a state can help to make the application more available despite bugs.

The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don’t interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.
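A container section combining the three probe types might look like this (paths, port, and timings are illustrative):

```yaml
containers:
- name: app
  image: nginx:1.25
  startupProbe:             # liveness/readiness checks wait until this succeeds
    httpGet: {path: /healthz, port: 8080}
    failureThreshold: 30
    periodSeconds: 10       # allows up to 300 s for a slow start
  livenessProbe:            # failure restarts the container
    httpGet: {path: /healthz, port: 8080}
  readinessProbe:           # failure removes the Pod from Service endpoints
    httpGet: {path: /ready, port: 8080}
```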

37
Q

Container Storage Interface (CSI)

A

An industry-standard interface for container orchestration to allow access to arbitrary storage systems. Currently, volume plugins are "in-tree", meaning they are compiled and built with the core Kubernetes binaries. This "out-of-tree" object will allow storage vendors to develop a single driver and allow the plugin to be containerized. This will replace the existing Flex plugin, which requires elevated access to the host node, a large security concern.

38
Q

PersistentVolumeClaim (pvc).

A

Keeping acquired data or ingesting it into other containers is a common task, typically requiring the use of a PersistentVolumeClaim (pvc).

39
Q

Access modes are

A

The three access modes are RWO (ReadWriteOnce), which allows read-write by a single node; ROX (ReadOnlyMany), which allows read-only by multiple nodes; and RWX (ReadWriteMany), which allows read-write by many nodes.

40
Q

Persistent Volumes and Claims

A

A PersistentVolume (PV) is a storage abstraction used to retain data longer than the Pod using it. Pods define a volume of type PersistentVolumeClaim (PVC) with various parameters for size and possibly the type of backend storage known as its StorageClass. The cluster then attaches the PersistentVolume.

Kubernetes will dynamically use volumes that are available, irrespective of its storage type, allowing claims to any backend storage.

$ kubectl get pv
$ kubectl get pvc
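For instance, a PVC requesting storage from an assumed StorageClass (names and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard    # assumed to exist in the cluster
  resources:
    requests:
      storage: 1Gi
```

A Pod then references the claim under volumes with persistentVolumeClaim.claimName, and the cluster attaches a matching PersistentVolume.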

41
Q

Phases to Persistent Storage

A

Provisioning
Binding
Using
Releasing

Reclaiming
The reclaim phase has three options:

Retain, which keeps the data intact, allowing for an administrator to handle the storage and data.

Delete tells the volume plugin to delete the API object, as well as the storage behind it.

The Recycle option runs an rm -rf /mountpoint and then makes it available to a new claim. With the stability of dynamic provisioning, the Recycle option is planned to be deprecated.

42
Q

Persistent Volume and Persistent Volume Claim scopes

A

Persistent volumes are cluster-scoped, but persistent volume claims are namespace-scoped. An alpha feature since v1.11, this allows for static provisioning of Raw Block Volumes, which currently support the Fibre Channel plugin. There is a lot of development and change in this area, with plugins adding dynamic provisioning.

43
Q

K8S secret

A

A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. Such information might otherwise be put in a Pod specification or in an image. Users can create Secrets and the system also creates some Secrets.
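A sketch of an Opaque Secret; stringData accepts plain text, which the API server stores base64-encoded under data (values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                 # written as plain text, stored base64-encoded
  username: admin
  password: example-only    # never put real credentials in committed manifests
```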

44
Q

Types of Secret

A

Opaque - arbitrary user-defined data

+ 7 k8s specified types
https://kubernetes.io/docs/concepts/configuration/secret/#secret-types

45
Q

ConfigMap

A

A similar API resource to Secrets is the ConfigMap, except the data is not encoded. In keeping with the concept of decoupling in Kubernetes, using a ConfigMap decouples a container image from configuration artifacts.

They store data as sets of key-value pairs or plain configuration files in any format. The data can come from a collection of files or all files in a directory. It can also be populated from a literal value.

46
Q

ConfigMaps consumption

A

ConfigMaps can be consumed in various ways:

Pod environmental variables from single or multiple ConfigMaps

Use ConfigMap values in Pod commands
Populate Volume from ConfigMap

Add ConfigMap data to specific path in Volume

Set file names and access mode in Volume from ConfigMap data
Can be used by system components and controllers.
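Two of the consumption styles above, sketched in a PodSpec (the ConfigMap name app-config is illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx:1.25
    envFrom:
    - configMapRef:
        name: app-config    # every key becomes an environment variable
    volumeMounts:
    - name: config-vol
      mountPath: /etc/app
  volumes:
  - name: config-vol
    configMap:
      name: app-config      # each key becomes a file under /etc/app
```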

47
Q

Scaling and Rolling Updates

A

A common update is to change the number of replicas running. If this number is set to zero, there would be no containers, but there would still be a ReplicaSet and Deployment. This is the backend process when a Deployment is deleted.

48
Q

Ambassador pattern

A

The ambassador pattern is a useful way to connect containers with the outside world. An ambassador container is essentially a proxy that allows other containers to connect to a port on localhost while the ambassador container can proxy these connections to different environments depending on the cluster’s needs.

49
Q

K8S security objectives

A

Learning Objectives
By the end of this section, you should be able to:

Explain the flow of API requests.​​
Configure authorization rules.
Examine authentication policies.
Restrict network traffic with network policies.​

50
Q

Authentication

A

There are three main points to remember with authentication in Kubernetes:

In its straightforward form, authentication is done with certificates, tokens or basic authentication (i.e. username and password).
Users are not created by the API, but should be managed by an external system.
System accounts are used by processes to access the API (to learn more read “Configure Service Accounts for Pods”).

51
Q

The type of authentication used

A

The type of authentication used is defined in the kube-apiserver startup options. Below are four examples of a subset of configuration options that would need to be set depending on what choice of authentication mechanism you choose:

--basic-auth-file

--oidc-issuer-url

--token-auth-file

--authorization-webhook-config-file

52
Q

three main authorization modes and two global Deny/Allow settings.

A

ABAC - Attribute-Based Access Control (ABAC)
RBAC - Role-Based Access Control (RBAC)
Webhook.

They can be configured as kube-apiserver startup options:

--authorization-mode=ABAC

--authorization-mode=RBAC

--authorization-mode=Webhook

--authorization-mode=AlwaysDeny

--authorization-mode=AlwaysAllow

53
Q

Admission Controller

A

Admission controllers are pieces of software that can access the content of the objects being created by the requests. They can modify the content or validate it, and potentially deny the request.

Admission controllers are needed for certain features to work properly. Controllers have been added as Kubernetes matured. Starting with the 1.13.1 release of the kube-apiserver, the admission controllers are now compiled into the binary, instead of a list passed during execution. To enable or disable, you can pass the following options, changing out the plugins you want to enable or disable:

  • --enable-admission-plugins=Initializers,NamespaceLifecycle,LimitRanger
  • --disable-admission-plugins=PodNodeSelector

The first controller is Initializers which will allow the dynamic modification of the API request, providing great flexibility. Each admission controller functionality is explained in the documentation. For example, the ResourceQuota controller will ensure that the object created does not violate any of the existing quotas.

54
Q

Security Contexts

A

Pods and containers within pods can be given specific security constraints to limit what processes running in containers can do. For example, the UID of the process, the Linux capabilities, and the filesystem group can be limited.

This security limitation is called a security context. It can be defined for the entire pod or per container, and is represented as additional sections in the resources manifests. The notable difference is that Linux capabilities are set at the container level.

spec:
  securityContext:
    runAsNonRoot: true

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

55
Q

Pod Security Policies

A

To automate the enforcement of security contexts, you can define PodSecurityPolicies (PSP). A PSP is defined via a standard Kubernetes manifest following the PSP API schema. An example is presented below.

These policies are cluster-level rules that govern what a pod can do, what they can access, what user they run as, etc.

For instance, if you do not want any of the containers in your cluster to run as the root user, you can define a PSP to that effect. You can also prevent containers from being privileged or use the host network namespace, or the host PID namespace.

For Pod Security Policies to be enabled, the PodSecurityPolicy admission controller must be enabled on the kube-apiserver.
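A minimal sketch of such a policy, rejecting privileged containers, host namespaces, and root users (the PSP API lived at policy/v1beta1 while the feature existed; the name and field choices are illustrative):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false           # no privileged containers
  hostNetwork: false          # no host network namespace
  hostPID: false              # no host PID namespace
  runAsUser:
    rule: MustRunAsNonRoot    # reject containers running as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["*"]
```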

56
Q

The Open Policy Agent (OPA)

A

While PSP has been helpful, there are other methods gaining popularity.
The Open Policy Agent (OPA), often pronounced as “oh-pa”, provides a unified set of tools and policy framework. This allows a single point of configuration for all of your cloud deployments.

OPA can be deployed as an admission controller inside of Kubernetes, which allows OPA to enforce or mutate requests as they are received. Using OPA Gatekeeper, it can be deployed and configured via Custom Resource Definitions.

57
Q

Network Security Policies

A

By default, all pods can reach each other; all ingress and egress traffic is allowed. This has been a high-level networking requirement in Kubernetes. However, network isolation can be configured and traffic to pods can be blocked. In newer versions of Kubernetes, egress traffic can also be blocked. This is done by configuring a NetworkPolicy. As all traffic is allowed by default, you may want to implement a policy that drops all traffic, then other policies which allow desired ingress and egress traffic.

Not all network providers support the NetworkPolicies kind. A non-exhaustive list of providers with support includes Calico, Romana, Cilium, Kube-router, and WeaveNet.
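A default-deny ingress policy of the kind described above (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: dev              # illustrative namespace
spec:
  podSelector: {}             # empty selector: applies to every Pod in the namespace
  policyTypes:
  - Ingress                   # no ingress rules listed, so all inbound traffic is dropped
```

Additional policies can then allow specific traffic by namespace, label, or port.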

58
Q

Service Update Pattern

A

Labels are used to determine which Pods should receive traffic from a service. Labels can be dynamically updated for an object, which may affect which Pods continue to connect to a service.

The default update pattern is for a rolling deployment, where new Pods are added, with different versions of an application, and due to automatic load balancing, receive traffic along with previous versions of the application.

Should there be a difference in applications deployed, such that clients would have issues communicating with different versions, you may consider a more specific label for the deployment, which includes a version number. When the deployment creates a new replication controller for the update, the label would not match. Once the new Pods have been created, and perhaps allowed to fully initialize, we would edit the labels for which the Service connects. Traffic would shift to the new and ready version, minimizing client version confusion.

59
Q

Service Without a Selector

A

Typically, a service creates a new endpoint for connectivity. Should you want to create the service, but later add the endpoint, such as connecting to a remote database, you can use a service without selectors. This can also be used to direct the service to another service, in a different namespace or cluster.

60
Q

NodePort, Port, TargetPort

A
  • Port exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with this service on the specified port.
  • TargetPort is the port to which the service will send requests, and on which your pod will be listening. Your application in the container will need to listen on this port as well.
  • NodePort exposes a service externally to the cluster by means of the target node's IP address and the NodePort. NodePort is the default setting if the port field is not specified.
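The three fields together in a NodePort Service (the numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80            # cluster-internal Service port
    targetPort: 8080    # containerPort the application listens on
    nodePort: 30080     # exposed on every node's IP (default range 30000-32767)
```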
61
Q

LoadBalancer

A

Creating a LoadBalancer service generates a NodePort, which then creates a ClusterIP. It also sends an asynchronous call to an external load balancer, typically supplied by a cloud provider. The External-IP value will remain in a <pending> state until the load balancer returns. Should it not return, the NodePort created acts as it would otherwise.

62
Q

Troubleshooting

A

More Resources
There are several things that you can do to quickly diagnose potential issues with your application and/or cluster.

The official Documentation offers additional materials to help you get familiar with troubleshooting:

“Troubleshooting” https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/

“Troubleshooting Applications” https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/
“Troubleshoot Cluster” https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/
“Debug Pods and ReplicationControllers” https://kubernetes.io/docs/tasks/debug-application-cluster/debug-pod-replication-controller/
“Debug Services” https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/
You can also follow:

Kubernetes GitHub resources for issues and bug tracking
Kubernetes Slack channel