Config - Resource Requirements Flashcards
What happens to a node's resources when a pod gets deployed on it?
- the pod consumes part of the resources that are available on the node
Who decides where pods are deployed? What is part of this consideration?
- kube-scheduler decides this
- considers the current resource utilization of the node
- identifies the best node for pod deployment
What happens if nodes do not have sufficient resources available?
- kube-scheduler avoids these nodes
- uses a node with sufficient resources
- if there are no nodes with sufficient resources available, it holds back the pod (Status: Pending), as sketched below
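A minimal sketch of what the status of such a held-back pod can look like (field names are from the core Pod API; the message wording is illustrative):

```yaml
# Excerpt of `kubectl get pod <name> -o yaml` for an unschedulable pod
status:
  phase: Pending
  conditions:
  - type: PodScheduled
    status: "False"
    reason: Unschedulable
    message: '0/3 nodes are available: insufficient cpu.'
```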
What is the amount of CPU/memory/… a pod needs called?
Resource Request
How do we add a Resource Request to a pod?
In the pod YAML:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name:
  labels:
spec:
  containers:
  - name:
    image:
    ports:
    resources:
      requests:
        memory: "4Gi"
        cpu: 2
```
What are valid cpu values for resource requests?
fractional values (e.g. 0.1) up to full integer values
0.1 equals 100m
lowest value is 1m
1 count of CPU is equivalent to 1 vCPU (AWS, GCP, Azure, …); see the snippet below for equivalent notations
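A small sketch of the two equivalent CPU notations inside a resources block (values chosen for illustration):

```yaml
resources:
  requests:
    cpu: 0.1    # fractional notation: one tenth of a CPU
    # cpu: 100m # millicore notation, the same amount
```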
What are valid memory values for resource requests?
256Mi
268M
1G
When specifying resource requests for memory, what is the difference between G/M/K and Gi/Mi/Ki?
G = Gigabyte = 1,000,000,000 bytes
Gi = Gibibyte = 1,073,741,824 bytes (2^30); see the snippet below
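A sketch contrasting the two suffixes in a resources block (values chosen for illustration):

```yaml
resources:
  requests:
    memory: "1G"    # decimal: 1,000,000,000 bytes
    # memory: "1Gi" # binary: 1,073,741,824 bytes, roughly 7% more
```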
What are the default limits on the resources a pod can consume on a node?
By default no limit
How can we limit what a pod is allowed to consume on a node?
by setting/using Resource Limits
in pod-yaml, under resources
```yaml
spec:
  containers:
  - name:
    image:
    resources:
      requests:
        cpu: 2
        memory: "1Gi"
      limits:
        cpu: 3
        memory: "2Gi"
```
What happens when a pod tries to exceed its set resource limits?
- cpu: consumption gets throttled so it cannot exceed the limit
- memory: the container can temporarily consume more memory than its limit
- if this happens constantly, the container gets terminated with an Out Of Memory (OOM) error, as sketched below
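A minimal sketch of the container status after such a kill (field names are from the Pod API; exit code 137 corresponds to SIGKILL):

```yaml
status:
  containerStatuses:
  - restartCount: 3
    lastState:
      terminated:
        reason: OOMKilled
        exitCode: 137
```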
What is an implication of the default Kubernetes behavior regarding resource requests and limits?
- by default no limits or requests
- pods can starve each other by raising their resource utilization
What happens if we only set limits for a pod, but no resource requests?
- the limit value is then also assumed as the request value, as sketched below
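A sketch of this behavior (the comments describe what Kubernetes derives; values chosen for illustration):

```yaml
resources:
  limits:
    cpu: 2
    memory: "2Gi"
  # no requests block: Kubernetes assumes requests equal to the limits,
  # i.e. requests.cpu: 2 and requests.memory: "2Gi"
```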
How do we let one pod consume more CPU than it requested, as long as the other pods on the node are not using all that is left?
Only setting requests, no limits
Usually the most ideal setup; a sketch follows below
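A sketch of a requests-only resources block (values chosen for illustration):

```yaml
resources:
  requests:
    cpu: 1
    memory: "1Gi"
  # no limits block: the pod is guaranteed its requests and may burst
  # into CPU that other pods on the node are not currently using
```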
How do we ensure that every pod has some limit set?
By using LimitRanges
They help you define default values to be set for pods
Set at the namespace level
It is an object, created via YAML
What does a LimitRange YAML look like?
```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default:
      cpu: 500m
    defaultRequest:
      cpu: 500m
    max:
      cpu: "1"
    min:
      cpu: 100m
    type: Container
```
default and max refer to the limit; defaultRequest refers to the request (see the sketch below)
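A sketch of what a container created without a resources block ends up with once the LimitRange above injects its defaults at admission:

```yaml
# Container spec after pod creation, with defaults from the LimitRange
resources:
  limits:
    cpu: 500m    # from `default`
  requests:
    cpu: 500m    # from `defaultRequest`
```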
When are the limits set by a LimitRange enforced?
When a pod gets created; changing a LimitRange does not affect existing pods
How can we limit the total amount of resources that can be consumed by applications deployed in a Kubernetes cluster?
With a ResourceQuota, at the namespace level
e.g., all pods together may consume at most this amount of memory
What is a ResourceQuota and what does it look like?
- namespace level object
- limits the total amount of resources used by all pods
its own YAML:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota
spec:
  hard:
    requests.cpu: 4
    requests.memory: 4Gi
    limits.cpu: 10
    limits.memory: 10Gi
```