CKA Study Flashcards
Command: Apply configuration changes to a resource
kubectl apply -f FILENAME
Command: Access a web server via a NodePort
curl http://<Node>:<NodePort>
e.g. http://192.168.1.2:30008
Command: Create a resource
kubectl create -f FILENAME
Command: Create a Deployment YAML file without deploying it
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > nginx-deployment.yaml
Command: Edit and update the definition of a resource
kubectl edit (-f FILENAME | TYPE NAME)
Command: Get documentation for a resource type
kubectl explain RESOURCE-TYPE
Command: Replace a resource
kubectl replace --force -f FILENAME
Command: Update the size of the specified replication controller
kubectl scale --replicas=COUNT -f FILENAME
Command: Change the ETCD API version for commands
If you get the message, “No help topic for …”
export ETCDCTL_API=3 (default is 2)
Command: Execute a command against a resource
kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only
Command: List Pods
kubectl get pods
Command: Display the detailed state of a Pod
kubectl describe pod <Pod> -n NAMESPACE
Command: View a running resource and its effective options
ps aux | grep <Process-Name>
If a specific Controller doesn’t seem to work or exist
Look at the Kube-Controller-Manager options
Kubeadm: /etc/kubernetes/manifests/kube-controller-manager.yaml
Non-Kubeadm: /etc/systemd/system/kube-controller-manager.service
Location: Where is the Pod Definition file located
Kubeadm: /etc/kubernetes/manifests/kube-apiserver.yaml
Non-Kubeadm: /etc/systemd/system/kube-apiserver.service
Object: ETCD-Master
Key/Value data store
Runs on Port 2379
Can be accessed at https://<IP>:2379
Set Value: ./etcdctl set key1 value1
Get Value: ./etcdctl get key1
Object: Kube-APIServer
Authenticate User
Validate Request
Retrieve Data
Update ETCD
Used by the Kube-Scheduler and the Kubelet
Object: Kube-Proxy
Runs on each Node in the Cluster
Watches for new Services
Creates a new rule on each Node to forward traffic to those Services
Object: Kube-Scheduler
Decides which Pod goes on which Node
Kubeadm: /etc/kubernetes/manifests/kube-scheduler.yaml
Non-Kubeadm: /etc/systemd/system/kube-scheduler.service
Object: Kubelet
Registers Node on the K8S cluster
Creates Pods
Monitors Nodes and Pods
Configuration located at /var/lib/kubelet/config.yaml
Object: Master Node
ETCD: Information on the cluster
Kube-Scheduler: Schedule applications or containers
Kube-Controller-Manager: Takes care of all controllers
Kube-APIServer: Orchestrating operations on the cluster
Object: Kube-Controller-Manager
Monitors services and brings them to the desired state
Object: Node-Controller
Monitors Nodes and keeps Pods running
Object: Namespace Kube-System
Namespace for system resources
Object: Namespace Kube-Public
Namespace for shared resources
Object: ResourceQuota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
Object: ClusterIP Service
Creates a virtual IP to enable communication between Services and Pods within the cluster
Object: NodePort Service
Listens to a port and forwards requests on that port to another port, across the cluster
- the port is required; the nodePort must be between 30000 and 32767
- if the TargetPort is not specified, it will be the same value as Port
- if the NodePort is not specified, it will be automatically assigned
A NodePort Service also exposes the application externally on each Node's IP address
Object: LoadBalancer Service
Provisions a Load Balancer for applications
Object: Pods
A single instance of an Application
Helper Containers, supporting the Application, can be in the same Pod
A Node can run many Pods
Object: Replication-Controller
Ensures the specified number of replica Pods are running
- Pods per set
- High availability and resiliency
Object: Worker Node
Container Engine: Docker (other engines are available)
Kubelet: Listens to Kube-API-Server and carries out instructions
Kube-Proxy: enables Service communication across nodes
YAML: Service NodePort
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
  selector:
    app: myapp
    type: frontend
In the Selector, app and type are copied from the Pod’s Labels section
YAML: Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    tier: frontend
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      tier: frontend   # must match the Pod template labels
YAML: Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
YAML: Pod
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
  labels:
    app: nginx
    tier: frontend
spec:
  containers:
  - name: my-nginx
    image: nginx
YAML: ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:            # these labels are for the ReplicaSet itself
    app: myapp
    tier: frontend
spec:
  template:
    metadata:
      name: myapp-pod
      labels:        # these labels are for filtering the specific Pods
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:     # these matchLabels must match the Pod filtering labels
      tier: frontend
YAML: ReplicationController
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    tier: frontend
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
YAML: Service ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: backend
In the Selector, app and type are copied from the Pod’s Labels section
YAML: Service LoadBalancer
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30008
Command: Create a Pod from the command line
kubectl run nginx --image=nginx --port=8080
Command: Create a Service to expose a Deployment or Pod
kubectl expose deployment nginx --port 80 --name nginx-service
kubectl expose pod redis --port 80 --name redis-service
Command: Update an Image on a Deployment
kubectl set image deployment nginx nginx=nginx:1.18
Command: Getting Help
kubectl create service clusterip --help
Command: Check if the K8S components are executing
kubectl get pods -n kube-system
Command: Filter by Labels
kubectl get pods --selector LABEL1=VALUE1,LABEL2=VALUE2
Command: Taints
Add: kubectl taint nodes NODE-NAME key=value:taint-effect
taint-effects: NoSchedule | PreferNoSchedule | NoExecute
Remove: kubectl taint nodes NODE-NAME key=value:taint-effect- (note the trailing hyphen)
Command: Labels
kubectl label nodes NODE-NAME KEY=VALUE
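To steer a Pod to labeled Nodes, a minimal nodeSelector sketch (assumes a Node labeled size=Large):
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx
  nodeSelector:
    size: Large   # assumes the Node was labeled with size=Large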
YAML: Node Affinity
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: my-nginx
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium
Command: Create YAML from running resource
kubectl get pod PODNAME -o yaml > PODNAME.yaml
YAML: DaemonSet
Same YAML as a ReplicaSet, except the kind is DaemonSet and there is no replicas field (see the sketch below)
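A minimal sketch with illustrative names:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon      # illustrative name
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: nginx           # placeholder image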
Location: Path of the Static Pod Manifests
Usually, these are in /etc/kubernetes/manifests
If not, view /var/lib/kubelet/config.yaml
Look for “staticPodPath”
Command: Viewing Docker processes
docker ps
Command: Verify if a Pod is Static
Execute kubectl get pod NAME -o yaml
Investigate the ownerReferences section
If “kind: Node”, then it’s a static pod
YAML: Create a Pod with an embedded command
kubectl run <…> --command -- sleep 100
Make sure “command” is the last parameter
YAML: Pod with tolerations
Using a Pod YAML, under the “spec” section, at the same indent as “containers”, add:
tolerations:
- key: "app"
  operator: "Equal"
  value: "blue"
  effect: "NoSchedule"
Command: Get Events
kubectl get events
Command: View Scheduler Logs
kubectl logs SCHEDULERNAME --namespace=kube-system
YAML: Additional Scheduler
apiVersion: v1
kind: Pod
metadata:
  name: my-custom-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --scheduler-name=my-custom-scheduler
    - --lock-object-name=my-custom-scheduler
    image: k8s.gcr.io/…
    name: kube-scheduler
In /etc/kubernetes/manifests/kube-scheduler.yaml
Either change "--leader-elect=true" to false
- or -
Add "--lock-object-name=CustomSchedulerName"
YAML: Using a custom scheduler
Using a Pod YAML, under the “spec” section, at the same indent as “containers”, add:
schedulerName: SCHEDULERNAME
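Assembled, a minimal sketch:
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  schedulerName: my-custom-scheduler   # must match the scheduler's --scheduler-name
  containers:
  - name: my-nginx
    image: nginx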
Command: View Performance
kubectl top node
kubectl top pod
Command: Install K8S Metrics Server
git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
kubectl create -f kubernetes-metrics-server/.
Command: Display Logs for a Pod
kubectl logs -f PODNAME CONTAINERNAME --previous
Command: App Rollout Status
kubectl rollout status deployment/DEPLOYMENTNAME
Command: App Rollout History
kubectl rollout history deployment/DEPLOYMENTNAME
Command: App Rollback
kubectl rollout undo deployment/DEPLOYMENTNAME
YAML: Pod Command and Arguments
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
command: ["COMMAND"]
args: ["ARGUMENTS"]
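For example, to run a container that sleeps for 100 seconds (command overrides the image's ENTRYPOINT, args overrides its CMD):
command: ["sleep"]
args: ["100"]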
YAML: Pod Environment Variables
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
env:
- name: myVar
  value: pink
Command: Configmap
kubectl create configmap CONFIGNAME \
  --from-literal=KEY1=VALUE1 \
  --from-literal=KEY2=VALUE2
- OR -
kubectl create configmap CONFIGNAME --from-file=PATHANDFILE
YAML: Configmap File
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  KEY1: VALUE1
  KEY2: VALUE2
YAML: Pod and Configmap
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
envFrom:
- configMapRef:
    name: CONFIGMAPNAME
- OR -
env:
- name: ENVNAME
  valueFrom:
    configMapKeyRef:
      name: CONFIGMAPNAME
      key: ENVNAME
- OR - (volumes go at the same indent as "containers")
volumes:
- name: app-config-volume
  configMap:
    name: CONFIGMAPNAME
Command: Secret
kubectl create secret generic SECRETNAME \
  --from-literal=KEY1=VALUE1 \
  --from-literal=KEY2=VALUE2
- OR -
kubectl create secret generic SECRETNAME --from-file=PATHANDFILE
YAML: Secret File
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  KEY1: VALUE1   # values under data must be base64-encoded
  KEY2: VALUE2
Command: Base64 Encode and Decode
Encode: echo -n 'VALUE' | base64
- OR -
cat <FILENAME> | base64 -w 0
Decode: echo -n 'ENCODEDVALUE' | base64 --decode
YAML: Pod and Secret File
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
envFrom:
- secretRef:
    name: SECRETNAME
- OR -
env:
- name: SECRETKEYNAME
  valueFrom:
    secretKeyRef:
      name: SECRETNAME
      key: SECRETKEYNAME
- OR - (volumes go at the same indent as "containers")
volumes:
- name: app-secret-volume
  secret:
    secretName: SECRETNAME
Command: Execute within the Pod
kubectl exec -it PODNAME -- sh
- OR -
kubectl exec PODNAME -- <command>
SYSParm: Pod Eviction Timeout
kube-controller-manager --pod-eviction-timeout
Command: Drain Node
kubectl drain NODENAME --ignore-daemonsets
then, kubectl uncordon NODENAME once the Node is ready again
Command: Cordon/Uncordon Node
Cordon: kubectl cordon NODENAME
Uncordon: kubectl uncordon NODENAME
Command: Alias kubectl
alias k=kubectl
Command: Latest Stable K8S Version
kubeadm upgrade plan
Command: Snapshot ETCD
Values for the flags below can be found in the etcd Pod definition.
etcdctl snapshot save FILENAME.db \
  --endpoints=LISTEN-CLIENT-URLS \
  --cacert=TRUSTED-CA-FILE \
  --cert=CERT-FILE \
  --key=KEY-FILE
View Status: etcdctl snapshot status FILENAME.db
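A worked example, assuming the default kubeadm certificate paths (verify them against your etcd Pod definition before running):
ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key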
Command: ETCD Restore
a) etcdctl snapshot restore SNAPSHOTFILENAME --data-dir NEWDIRECTORYNAME
b) vi /etc/kubernetes/manifests/etcd.yaml
c) change the data directory (hostPath) in the volumes section to NEWDIRECTORYNAME
d) saving the file recreates the etcd Pod
Command: Switching Context
1) kubectl config view
2) kubectl config set-context --current --namespace=<NewNamespace>
YAML: Pod and Volumes
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
volumeMounts:
- mountPath: CONTAINERDIRPATH
  name: VOLUMENAME
Also, at the same indent as "containers", add:
volumes:
- name: VOLUMENAME
  hostPath:
    path: HOSTDIRPATH
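Assembled, a minimal sketch of a Pod with a hostPath volume (paths and names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: my-webapp
spec:
  containers:
  - name: my-webapp
    image: nginx
    volumeMounts:
    - mountPath: /log            # illustrative container path
      name: log-volume
  volumes:
  - name: log-volume
    hostPath:
      path: /var/log/webapp      # illustrative host path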
YAML: Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persist-vol
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce   # must match the access mode in the PVC
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /pv/log
YAML: Persistent Volume Claims
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: CLAIMNAME   # must match the claimName in the Pod
spec:
  accessModes:
  - ReadWriteOnce   # must match the access mode in the PV
  resources:
    requests:
      storage: 500Mi
YAML: Pods and Persistent Volume Claims
Using a Pod YAML, under the “spec” section, under the “containers” section,
at the same indent as “image”, add:
volumeMounts:
- mountPath: CONTAINERDIRPATH
  name: VOLUMENAME
Also, at the same indent as "containers", add:
volumes:
- name: VOLUMENAME
  persistentVolumeClaim:
    claimName: CLAIMNAME   # must match the name in the PVC
YAML: StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Object: TLS Certificates
Three Server Certificates:
1) API Server
2) ETCD Server
3) Kubelet Server
Four Client Certificates:
1) Admin
2) Kube Scheduler
3) Kube Controller Manager
4) Kube Proxy
Command: Inspect Service Logs
Kubeadm: kubectl logs etcd-master -n kube-system
- OR -
journalctl -u etcd.service -l
- OR -
docker ps -a
docker logs <CONTAINERID>
- OR -
crictl ps -a
crictl logs <CONTAINERID>
Command: Approve CSR
kubectl certificate approve <CSRNAME>
Location: Where is the default kubeconfig file?
$HOME/.kube/config
Command: API Groups URL
Available API Groups: curl https://localhost:6443 -k
Supported Resource Groups: curl https://localhost:6443/apis -k | grep "name"
Tip: start the kubectl proxy service so you don't need to supply certificate info:
1) kubectl proxy
2) curl http://localhost:8001
Location: API Server Authorization Mode
In /etc/kubernetes/manifests/kube-apiserver.yaml, set:
"--authorization-mode=" (e.g. Node,RBAC)
Command: RBAC
1) kubectl get roles
2) kubectl get rolebindings
3) kubectl describe role <ROLENAME>
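To grant access, a minimal Role and RoleBinding sketch (the user jane and the verbs are illustrative):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader    # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods     # illustrative name
  namespace: default
subjects:
- kind: User
  name: jane          # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io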
Command: Check My Access
kubectl auth can-i create deployments --as <USERNAME>
Command: Cluster Roles
1) kubectl get clusterroles
2) kubectl get clusterrolebindings
3) kubectl describe clusterrole <CLUSTERROLENAME>
Command: List All Resources
kubectl api-resources
Command: Service Account
1) kubectl create serviceaccount <SVCACCTNAME>
2) kubectl create token <SVCACCTNAME>
Command: Private Repository
1) kubectl create secret docker-registry <DOCKERREGREDS> \
  --docker-server= \
  --docker-username= \
  --docker-password= \
  --docker-email=
2) Using a Pod YAML, under the "spec" section, at the same indent as "containers", add:
imagePullSecrets:
- name: <DOCKERREGREDS>
Command: Install Weave Net
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Object: CoreDNS
DNS solution
Configuration file is located at: /etc/coredns/Corefile
Command: Return IP Address of Service or Pod
Service: host <SERVICENAME> (may be partially qualified)
Pod: host 10-244-2-5.default.pod.cluster.local
Command: View Ingress Resources
kubectl get ingress
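An Ingress can also be created imperatively; a sketch with illustrative host and service values:
kubectl create ingress my-ingress --rule="myapp.com/=myapp-service:80"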
Kubeadm Steps
1) Provision VMs
2) Designate Master node
3) Install Docker on all nodes
4) Install kubeadm on all nodes
5) Initialize Master node (see the init/join sketch after this list)
- verify the IP address (ifconfig eth0) when building the kubeadm init command
6) Set up networking solution (Pod Network)
7) Join Worker nodes to Master node
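A sketch of steps 5 and 7, assuming illustrative addresses (kubeadm init prints the exact join command to run on each worker):
kubeadm init --apiserver-advertise-address=192.168.1.2 --pod-network-cidr=10.244.0.0/16
kubeadm join 192.168.1.2:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH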
Command: Check Control Plane Services
service kube-apiserver status
Kubectl Cheat Sheet
K8S Doc >> Reference >> Command Line Tool >> kubectl Cheat Sheet
Command: Using JSON Path Queries
kubectl get deployments.apps \
  -o=custom-columns='COLHEADER:JSONPATH,COLHEADER:JSONPATH' --sort-by=JSONPATH
(remember to exclude the ".items" portion of the JSONPATH)
kubectl get deployments.apps \
  -o=jsonpath='{range .items[*]}{JSONPATH}{"\t"}{JSONPATH}{"\n"}{end}'
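A worked example against Nodes (the column headers are illustrative):
kubectl get nodes -o=custom-columns='NODE:.metadata.name,CPU:.status.capacity.cpu' --sort-by=.metadata.name
kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\n"}{end}'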