12. Practice Exam I Flashcards

1
Q
  1. Manage storage for application configuration and data
    Create a project called storage-test

Create new app called storage-test-app

$ oc new-app --name storage-test-app --image quay.io/redhattraining/hello-world-nginx

Check available storage classes in cluster
$ oc get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
lvms-vg1 topolvm.io Delete WaitForFirstConsumer …
nfs-storage (default) k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate …
Add NFS volume to running app

delete pod, and log into the new pod to make sure the data still exists

A

Manage storage for application configuration and data
Create a project called storage-test

Create new app called storage-test-app

$ oc new-app --name storage-test-app --image quay.io/redhattraining/hello-world-nginx

We will be using a given NFS storage filer (IP address-based)

Create a PV in the following format

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /
    server: 172.17.0.2
  persistentVolumeReclaimPolicy: Retain
(from the documentation)

It is easy to do this with the GUI; otherwise, you need to create a yaml file from scratch. Storage->PersistentVolumeClaims screenshot
Create a PersistentVolumeClaim that binds to the created PV screenshot

Fill out form with the following info

PersistentVolumeClaim name: storage-test-pvc
Access mode: Single user (RWO)
Size: 1Gi
Volume mode: Filesystem
Note that to ensure the PVC binds to the correct PV, you need to add the volumeName tag to the yaml

yaml looks like this

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storage-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pv0001   # <--- VERY IMPORTANT!!!!!
  storageClassName: ""
(from the documentation)

add PVC to storage-test-app deployment (Using GUI)
From the deployment:
Actions->Add storage

Select "Use existing claim": storage-test-pvc

Mount path: /mnt/storage-test

[Save]
App will redeploy
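
CLI alternative (a sketch, assuming the PVC and mount path above; the volume name storage-test-volume is arbitrary):

$ oc set volume deployment/storage-test-app --add --name=storage-test-volume \
  --type=persistentVolumeClaim --claim-name=storage-test-pvc --mount-path=/mnt/storage-test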

log into pod and test storage
$ oc rsh storage-test-app-54bdc95c84-tq4zx /bin/bash
bash-4.4$ ls /mnt
storage-test

$ echo "hello" > /mnt/storage-test/hello.txt

$ cat /mnt/storage-test/hello.txt
hello
delete pod, and log into the new pod to make sure the hello.txt file still exists
$ oc delete pod storage-test-app-54bdc95c84-tq4zx
pod "storage-test-app-54bdc95c84-tq4zx" deleted

$ oc get pods
NAME READY STATUS RESTARTS AGE
storage-test-app-54bdc95c84-sllnm 1/1 Running 0 12s

$ oc rsh storage-test-app-54bdc95c84-sllnm cat /mnt/storage-test/hello.txt
hello

The above is very simple, but if you are only given a storage class and dynamically provisioned PVs are not available in your environment, look at the information around its FQDN/IP and mount point, create a static PV with that information first, and point the PVC at it. Make sure your PVC spec includes the storageClassName field.
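
For example, a static NFS PV tied to the given storage class plus a PVC that binds to it might look like this (a sketch; the names, server, path, and storage class are placeholders):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: given-storageclass
  nfs:
    path: /exports/data
    server: 192.0.2.10
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: given-storageclass
  volumeName: static-pv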


2
Q
  1. Configure applications for reliability
    This exercise assumes you have an application with existing readiness and health endpoints.
    I found a project that can be deployed with this functionality

Create new project and deploy application
oc new-project health-and-readyness
Now using project "health-and-readyness" on server "https://api.crc.testing:6443".

oc new-app \
--name probes --context-dir probes --build-env \
npm_config_registry=http://nexus-common.apps.na410.prod.nextcle.com/repository/nodejs \
nodejs:16-ubi8~https://github.com/tsrana/DO288-apps
Wait for app to start (can take > 4 minutes in CRC)

oc get pods

NAME READY STATUS RESTARTS AGE
pod/probes-1-build 0/1 Completed 0 7m40s
pod/probes-5f84f4b7f9-jschr 1/1 Running 0 6m29s
Expose service and test the ready and healthz endpoints

❯ oc expose service/probes
route.route.openshift.io/probes exposed

❯ curl probes-health-and-readyness.apps-crc.testing
Hello! This is the index page for the app.

❯ curl probes-health-and-readyness.apps-crc.testing/ready
Ready for service requests…

❯ curl probes-health-and-readyness.apps-crc.testing/healthz
OK

❯ curl probes-health-and-readyness.apps-crc.testing/ready -i
HTTP/1.1 200 OK
x-powered-by: Express
content-type: text/html; charset=utf-8
content-length: 30
etag: W/"1e-ANVsNjd6wx5bS7ZhAUO+mQ"
date: Fri, 08 Sep 2023 17:34:28 GMT
keep-alive: timeout=5
set-cookie: fbc6880d0c2a839105b7441f06647354=d910f70e9695219c85a4bb38b9f325f7; path=/; HttpOnly
cache-control: private

Ready for service requests…

❯ curl probes-health-and-readyness.apps-crc.testing/healthz -i
HTTP/1.1 200 OK
x-powered-by: Express
content-type: text/html; charset=utf-8
content-length: 3
etag: W/"3-02+PlCXEqAAK2cSpcYWspQ"
date: Fri, 08 Sep 2023 17:34:34 GMT
keep-alive: timeout=5
set-cookie: fbc6880d0c2a839105b7441f06647354=d910f70e9695219c85a4bb38b9f325f7; path=/; HttpOnly
cache-control: private

OK
Add health and readiness probes to deployment

Test probes by going to the url
http://probes-health-and-readyness.apps-crc.testing/flip?op=kill

This will put the app in an unhealthy state and it will redeploy in a few seconds.
Monitor with the watch oc get pods command

A
  1. Configure applications for reliability
    This exercise assumes you have an application with existing readiness and health endpoints.
    I found a project that can be deployed with this functionality

Create new project and deploy application
oc new-project health-and-readyness
Now using project "health-and-readyness" on server "https://api.crc.testing:6443".

oc new-app \
--name probes --context-dir probes --build-env \
npm_config_registry=http://nexus-common.apps.na410.prod.nextcle.com/repository/nodejs \
nodejs:16-ubi8~https://github.com/tsrana/DO288-apps
Wait for app to start (can take > 4 minutes in CRC)

oc get pods

NAME READY STATUS RESTARTS AGE
pod/probes-1-build 0/1 Completed 0 7m40s
pod/probes-5f84f4b7f9-jschr 1/1 Running 0 6m29s
Expose service and test the ready and healthz endpoints

❯ oc expose service/probes
route.route.openshift.io/probes exposed

❯ curl probes-health-and-readyness.apps-crc.testing
Hello! This is the index page for the app.

❯ curl probes-health-and-readyness.apps-crc.testing/ready
Ready for service requests…

❯ curl probes-health-and-readyness.apps-crc.testing/healthz
OK

❯ curl probes-health-and-readyness.apps-crc.testing/ready -i
HTTP/1.1 200 OK
x-powered-by: Express
content-type: text/html; charset=utf-8
content-length: 30
etag: W/"1e-ANVsNjd6wx5bS7ZhAUO+mQ"
date: Fri, 08 Sep 2023 17:34:28 GMT
keep-alive: timeout=5
set-cookie: fbc6880d0c2a839105b7441f06647354=d910f70e9695219c85a4bb38b9f325f7; path=/; HttpOnly
cache-control: private

Ready for service requests…

❯ curl probes-health-and-readyness.apps-crc.testing/healthz -i
HTTP/1.1 200 OK
x-powered-by: Express
content-type: text/html; charset=utf-8
content-length: 3
etag: W/"3-02+PlCXEqAAK2cSpcYWspQ"
date: Fri, 08 Sep 2023 17:34:34 GMT
keep-alive: timeout=5
set-cookie: fbc6880d0c2a839105b7441f06647354=d910f70e9695219c85a4bb38b9f325f7; path=/; HttpOnly
cache-control: private

OK
Add health and readiness probes to deployment
From the web console: Deployments->Actions->Edit Health Checks screenshot
add the readiness probe with the /ready Path and click the check mark screenshot

add the liveness probe with the /healthz Path and click the check mark screenshot

Click [Save]
App will redeploy with probes

oc describe deployment probes
Name: probes
Namespace: health-and-readyness

Pod Template:
Labels: deployment=probes
Annotations: openshift.io/generated-by: OpenShiftNewApp
Containers:
probes:
Image: image-registry.openshift-image-registry.svc:5000/health-and-readyness/probes@sha256:a1ae413af592e1cde5bb573b25ac0f3cfc4fc6ed5ff01a4ede4e381a0a1131e8
Port: 8080/TCP
Host Port: 0/TCP
Liveness: http-get http://:8080/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8080/ready delay=0s timeout=1s period=10s #success=1 #failure=3

Test probes by going to the url
http://probes-health-and-readyness.apps-crc.testing/flip?op=kill

This will put the app in an unhealthy state and it will redeploy in a few seconds.
Monitor with the watch oc get pods command

Command line operations

$ oc set probe deployment probes --liveness \
--get-url=http://:8080/healthz \
--initial-delay-seconds=2 --timeout-seconds=2

deployment.apps/probes probes updated

$ oc set probe deployment probes --readiness \
--get-url=http://:8080/ready \
--initial-delay-seconds=2 --timeout-seconds=2
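
The equivalent probe stanzas in the deployment's container spec look roughly like this (a sketch matching the oc set probe settings above):

readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 2
  timeoutSeconds: 2
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 2
  timeoutSeconds: 2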

3
Q
  1. Manage authentication and authorization (Can be done in CRC)
    Tasks
    Configure the cluster to use an HTPasswd identity provider
    create user accounts for: admin, leader, developer and qa-engineer
    Create a secret called cluster-users-secret using the htpasswd credentials
    create an identity provider called cluster-users that reads the cluster-users-secret secret
    Account permissions
    admin should be able to modify the cluster
    leader should be able to create projects
    developer and qa-engineer should not be able to modify the cluster
    No other user should be able to create a project
    Default account cleanup
    Remove the kubeadmin account
    Project creation
    Create three projects: front-end, back-end and app-db
    leader user will be the admin of the projects
    qa-engineer user will have view access to the app-db project
    Group management
    As admin create three user groups: leaders, developers and qa

Add the leader user to the leaders group

Add the developer user to the developers group

Add the qa-engineer to the qa group

The leaders group should have edit access to back-end and app-db

The qa group should be able to view the front-end project but not edit it


A
  1. Manage authentication and authorization (Can be done in CRC)
    Tasks
    Configure the cluster to use an HTPasswd identity provider
    create user accounts for: admin, leader, developer and qa-engineer
    login as kubeadmin
    oc login -u kubeadmin

Use htpasswd to create account logins
❯ htpasswd -c -B -b users.htpasswd admin redhat
Adding password for user admin

❯ htpasswd -B -b users.htpasswd leader redhat
Adding password for user leader

❯ htpasswd -B -b users.htpasswd developer redhat
Adding password for user developer

❯ htpasswd -B -b users.htpasswd qa-engineer redhat
Adding password for user qa-engineer
Create a secret called cluster-users-secret using the htpasswd credentials
❯ oc create secret generic cluster-users-secret --from-file htpasswd=users.htpasswd -n openshift-config
secret/cluster-users-secret created
create an identity provider called cluster-users that reads the cluster-users-secret secret
Note: I like to use the web console (as kubeadmin) for this step, since the command line requires creating a yaml file and I can't remember the format, so GUI it is!

Administration->Cluster Settings->Configuration->Oauth

Identity providers->Add->HTPasswd

use cluster-users as the name, type something random in the bottom window, and click Add

Edit the yaml in the GUI so the block looks like this


- htpasswd:
    fileData:
      name: cluster-users-secret
  mappingMethod: claim
  name: cluster-users
  type: HTPasswd

and click Save
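
If you prefer the CLI, the full OAuth resource looks roughly like this (a sketch; apply it with oc replace -f <file>, or paste the identityProviders entry into oc edit oauth cluster):

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: cluster-users
      mappingMethod: claim
      type: HTPasswd
      htpasswd:
        fileData:
          name: cluster-users-secret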

watch oc get pods -n openshift-authentication to make sure the pod relaunches

Every 2.0s: oc get pods -n openshift-authentication server-name: Thu Sep 7 14:07:05 2023

NAME READY STATUS RESTARTS AGE
oauth-openshift-7874464f79-6kpvq 1/1 Terminating 0 37m
oauth-openshift-86dd74b64-tzbs6 0/1 Pending 0 23s
.
.
.
NAME READY STATUS RESTARTS AGE
oauth-openshift-86dd74b64-tzbs6 1/1 Running 0 61s
The above can take a few minutes to relaunch

login to each account to make sure authentication works

oc login -u admin -p redhat
Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

Account permissions
log back in as kubeadmin

admin should be able to modify the cluster
oc adm policy add-cluster-role-to-user cluster-admin admin
clusterrole.rbac.authorization.k8s.io/cluster-admin added: "admin"
log back in as admin, and you should be able to perform the following

leader should be able to create projects
oc adm policy add-cluster-role-to-user self-provisioner leader
clusterrole.rbac.authorization.k8s.io/self-provisioner added: "leader"
developer and qa-engineer should not be able to modify the cluster
❯ oc adm policy add-role-to-user view developer -n openshift-config
clusterrole.rbac.authorization.k8s.io/view added: "developer"

❯ oc adm policy add-role-to-user view qa-engineer -n openshift-config
clusterrole.rbac.authorization.k8s.io/view added: "qa-engineer"
No other user should be able to create a project
oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
(this removes the default self-provisioners binding from authenticated users; leader keeps project creation through the user-level binding added above)
Default account cleanup
Remove the kubeadmin account
(for 4.12) oc delete secrets -n kube-system kubeadmin

(for 4.13)

❯ oc delete user kubeadmin
user.user.openshift.io "kubeadmin" deleted

❯ oc delete clusterrolebinding kubeadmin
clusterrolebinding.rbac.authorization.k8s.io "kubeadmin" deleted
Project creation
Create three projects: front-end, back-end and app-db
oc new-project …

leader user will be the admin of the projects
❯ oc adm policy add-role-to-user admin leader -n front-end
clusterrole.rbac.authorization.k8s.io/admin added: "leader"

❯ oc adm policy add-role-to-user admin leader -n back-end
clusterrole.rbac.authorization.k8s.io/admin added: "leader"

❯ oc adm policy add-role-to-user admin leader -n app-db
clusterrole.rbac.authorization.k8s.io/admin added: "leader"
qa-engineer user will have view access to the app-db project
❯ oc adm policy add-role-to-user view qa-engineer -n app-db
clusterrole.rbac.authorization.k8s.io/view added: "qa-engineer"
Group management
As admin create three user groups: leaders, developers and qa
❯ oc adm groups new leaders
group.user.openshift.io/leaders created

❯ oc adm groups new developers
group.user.openshift.io/developers created

❯ oc adm groups new qa
group.user.openshift.io/qa created
Add the leader user to the leaders group
❯ oc adm groups add-users leaders leader

Add the developer user to the developers group
❯ oc adm groups add-users developers developer

Add the qa-engineer to the qa group
❯ oc adm groups add-users qa qa-engineer

The leaders group should have edit access to back-end and app-db

❯ oc adm policy add-role-to-group edit leaders -n back-end
clusterrole.rbac.authorization.k8s.io/edit added: "leaders"

❯ oc adm policy add-role-to-group edit leaders -n app-db
clusterrole.rbac.authorization.k8s.io/edit added: "leaders"
The qa group should be able to view the front-end project but not edit it
oc adm policy add-role-to-group view qa -n front-end
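
A quick way to spot-check the permissions with oc auth can-i (a sketch; --as impersonates only the user, so add --as-group to exercise the group-based bindings):

❯ oc auth can-i create pods -n front-end --as leader                   # expect yes (admin on the project)
❯ oc auth can-i get pods -n app-db --as qa-engineer                    # expect yes (view)
❯ oc auth can-i delete pods -n app-db --as qa-engineer                 # expect no
❯ oc auth can-i get pods -n front-end --as qa-engineer --as-group qa   # expect yes (group-based view)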

4
Q
  1. Configure network security (edge with cert/key and maybe CA cert?)

Configure service account
creating a service account with scc to allow app to run
log in as developer

create new project appsec-scc

deploy an app without exposing a route: oc new-app --name nginx --image nginx

note that the application fails to start

❯ oc get pods
NAME READY STATUS RESTARTS AGE
nginx-69567d67f8-nwvp2 0/1 CrashLoopBackOff 2 (21s ago) 42s
check logs pod logs
❯ oc logs pod/nginx-69567d67f8-nwvp2
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
.
.
.
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2023/09/07 23:19:46 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
we need to create a service account and assign the anyuid SCC to it to get the app to run

login as admin

create the nginx-sa service account

assign the anyuid SCC to nginx-sa

log back in as developer (or leader)

assign the service account to the deployment

This will cause the app to re-deploy and the app should run now

❯ oc get pods
NAME READY STATUS RESTARTS AGE
nginx-58c5496cd8-69s58 1/1 Running 0 87s
expose the service
create an edge route (https)
create a secure route (edge)
create a tls passthrough route
create a tls secret

create passthrough route

A
  1. Configure network security (edge with cert/key and maybe CA cert?)

Configure service account
creating a service account with scc to allow app to run
log in as developer

create new project appsec-scc

deploy an app without exposing a route: oc new-app --name nginx --image nginx

note that the application fails to start

❯ oc get pods
NAME READY STATUS RESTARTS AGE
nginx-69567d67f8-nwvp2 0/1 CrashLoopBackOff 2 (21s ago) 42s
check logs pod logs
❯ oc logs pod/nginx-69567d67f8-nwvp2
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
.
.
.
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2023/09/07 23:19:46 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
we need to create a service account and assign the anyuid SCC to it to get the app to run
login as admin
create the nginx-sa service account
❯ oc create sa nginx-sa
serviceaccount/nginx-sa created
assign the anyuid SCC to nginx-sa
❯ oc adm policy add-scc-to-user anyuid -z nginx-sa
clusterrole.rbac.authorization.k8s.io/system:openshift:scc:anyuid added: "nginx-sa"
log back in as developer (or leader)

assign the service account to the deployment

❯ oc set serviceaccount deployment/nginx nginx-sa
deployment.apps/nginx serviceaccount updated
This will cause the app to re-deploy and the app should run now

❯ oc get pods
NAME READY STATUS RESTARTS AGE
nginx-58c5496cd8-69s58 1/1 Running 0 87s
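
Optionally, check which SCC admitted the pod via the openshift.io/scc annotation (pod name from the output above):

❯ oc get pod nginx-58c5496cd8-69s58 -o yaml | grep openshift.io/scc
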
expose the service
oc expose service/nginx
route.route.openshift.io/nginx exposed
create an edge route (https)
prereq: generate tls .crt and .key from inside the provided certs directory (run openssl-comands.sh)
create a secure route (edge)
❯ oc create route edge nginx-https --service nginx --hostname nginx-https.apps-crc.testing --key <file>.key --cert <file>.crt
route.route.openshift.io/nginx-https created
this route will be accessible with https://nginx-https.apps-crc.testing; test with the curl -s -k https://<URL> command
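
The question also calls for a TLS secret and a passthrough route. A sketch, reusing the same cert/key files; note that with passthrough the router does not terminate TLS, so the application itself must serve the certificate (the secret would be mounted into the pod for that):

❯ oc create secret tls nginx-tls --cert <file>.crt --key <file>.key
❯ oc create route passthrough nginx-passthrough --service nginx --hostname nginx-passthrough.apps-crc.testing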


5
Q
  1. Enable developer self-service: create a default project template with a limit range (max, min, default, defaultRequest) and make it the default when someone creates a new project

Enable developer self-service
Create a project template-builder

The workloads in the project cannot request a total of more than 1 GiB of RAM, and they cannot use more than 2 GiB of RAM.

Each workload in the project has the following properties:

Default memory request of 256 MiB

Default memory limit of 512 MiB

Minimum memory request of 128 MiB

Maximum memory usage of 1 GiB
Create a project template definition with the same properties

Create and configure the project template

Create a project to make sure it works as intended and check resourcequotas and limitranges to see if present

A
  1. Enable developer self-service: create a default project template with a limit range (max, min, default, defaultRequest) and make it the default when someone creates a new project

Enable developer self-service
Create a project template-builder
The workloads in the project cannot request a total of more than 1 GiB of RAM, and they cannot use more than 2 GiB of RAM.
create a quota for this:
oc create quota memory --hard=requests.memory=1Gi,limits.memory=2Gi -n template-builder
Each workload in the project has the following properties:
Default memory request of 256 MiB

Default memory limit of 512 MiB

Minimum memory request of 128 MiB

Maximum memory usage of 1 GiB
Create a limit range named memory using the web console
apiVersion: v1
kind: LimitRange
metadata:
  name: memory
  namespace: template-builder
spec:
  limits:
    - defaultRequest:
        memory: 256Mi
      default:
        memory: 512Mi
      max:
        memory: 1Gi
      min:
        memory: 128Mi
      type: Container
test by deploying an app and scaling past quota
oc create deployment -n template-builder test-limits --image quay.io/redhattraining/hello-world-nginx

oc scale --replicas=10 deployment/test-limits

oc get all
Create a project template definition with the same properties
create bootstrap template
$ oc adm create-bootstrap-project-template -o yaml >template.yaml
output the created quota and limitrange named memory as yaml and append (>>) them to template.yaml (see the sketch after this list)
move the quota and limit range objects above parameters and replace all instances of template-builder with ${PROJECT_NAME}
be sure to indent these lines 2 spaces
be sure the apiVersion object entries have the - in front and are in alignment with the included - apiVersion: rbac… object
remove creationTimestamp and other unnecessary tags/values
creationTimestamp

resourceVersion

uid

status
(see: template-example.yaml for example results)
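
A sketch of the append step and the rough shape of the edited objects section (trimmed fields omitted; note that oc get wraps the output in a List, so some hand-editing is still needed):

$ oc get resourcequota/memory limitrange/memory -n template-builder -o yaml >> template.yaml

objects:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  …
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: memory
    namespace: ${PROJECT_NAME}
  spec:
    hard:
      requests.memory: 1Gi
      limits.memory: 2Gi
- apiVersion: v1
  kind: LimitRange
  metadata:
    name: memory
    namespace: ${PROJECT_NAME}
  spec:
    limits:
    - type: Container
      defaultRequest:
        memory: 256Mi
      default:
        memory: 512Mi
      min:
        memory: 128Mi
      max:
        memory: 1Gi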

Create and configure the project template
create in openshift-config namespace
$ oc create -f template.yaml -n openshift-config
template.template.openshift.io/project-request created
use the oc edit command to change the cluster project configuration
oc edit projects.config.openshift.io cluster
(note: name will tab-complete)

add template information under spec: tag
spec:
  projectRequestTemplate:
    name: project-request
once saved, watch the pods in openshift-apiserver for an update (can take over a minute)
watch oc get pod -n openshift-apiserver
Create a project to make sure it works as intended and check resourcequotas and limitranges to see if present
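
For example (a sketch; the test project name is arbitrary):

$ oc new-project template-check
$ oc get resourcequotas,limitranges -n template-check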

6
Q
  1. Configure application security
    This is an exercise where you configure security between pods.

Scenario A: Pods within a namespace/project can only talk to one another
Scenario B: Pods between namespaces/projects can talk to one another
create a project called network-policy
create two deployments/apps (hello, test) with the same image
oc new-app --name hello --image quay.io/redhattraining/hello-world-nginx

oc new-app --name test --image quay.io/redhattraining/hello-world-nginx

expose the hello service
❯ oc expose service/hello
route.route.openshift.io/hello exposed

❯ curl hello-network-policy.apps-crc.testing

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

get wide output from pods,services,routes to get internal pod IP address and cluster IP addresses

Access the hello pod from the test pod using hello's pod and cluster IP

create a second project called different-project and deploy the same image with the app name sample-app

❯ oc new-project different-project
Now using project "different-project" on server "https://api.crc.testing:6443".

❯ oc new-app --name sample-app --image quay.io/redhattraining/hello-world-nginx
access the hello pod from the sample-app pod via pod and cluster IP

switch back to network-policy project and create a "deny-all" network policy

test to make sure the pods can't access one another in the project (test trying to connect to the hello pod IP and cluster IP)

validate pod from different-project can no longer access network-policy/hello

switch back to network-policy project and create a policy that will allow different-project/sample-app to access pods

create label network=different-project for project different-project

access the hello pod from the sample-app pod via pod and cluster IP

A
  1. Configure application security
    This is an exercise where you configure security between pods.

Scenario A: Pods within a namespace/project can only talk to one another
Scenario B: Pods between namespaces/projects can talk to one another
create a project called network-policy
create two deployments/apps (hello, test) with the same image
oc new-app --name hello --image quay.io/redhattraining/hello-world-nginx

oc new-app --name test --image quay.io/redhattraining/hello-world-nginx

expose the hello service
❯ oc expose service/hello
route.route.openshift.io/hello exposed

❯ curl hello-network-policy.apps-crc.testing

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

get wide output from pods,services,routes to get internal pod IP address and cluster IP addresses
❯ oc get pods,services,routes -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hello-76bfc67544-ldwmj 1/1 Running 0 10m 10.217.0.171 crc-2zx29-master-0 <none> <none>
pod/test-b4d8668db-b2l8s 1/1 Running 0 10m 10.217.0.172 crc-2zx29-master-0 <none> <none>

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hello ClusterIP 10.217.5.160 <none> 8080/TCP 10m deployment=hello
service/test ClusterIP 10.217.4.111 <none> 8080/TCP 10m deployment=test

NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
route.route.openshift.io/hello hello-network-policy.apps-crc.testing hello 8080-tcp None
Access the hello pod from the test pod using hello's pod and cluster IP
❯ oc rsh test-b4d8668db-b2l8s curl 10.217.0.171:8080

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

❯ oc rsh test-b4d8668db-b2l8s curl 10.217.5.160:8080

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

create a second project called different-project and deploy the same image with the app name sample-app
❯ oc new-project different-project
Now using project "different-project" on server "https://api.crc.testing:6443".

❯ oc new-app --name sample-app --image quay.io/redhattraining/hello-world-nginx
access the hello pod from the sample-app pod via pod and cluster IP
❯ oc rsh sample-app-5fc755d58-bgbcc curl 10.217.0.171:8080

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

❯ oc rsh sample-app-5fc755d58-bgbcc curl 10.217.5.160:8080

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

switch back to network-policy project and create a "deny-all" network policy via the GUI (otherwise create the yaml from scratch…good luck); log in as leader
switch to Administrator view
Networking->NetworkPolicies
screen shot
Create NetworkPolicy, call it deny-all, and click Create screen shot

(yaml, if you are so inclined)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: network-policy
spec:
  podSelector: {}
test to make sure the pods can't access one another in the project (test trying to connect to the hello pod IP and cluster IP)
❯ oc rsh test-b4d8668db-b2l8s curl 10.217.5.160:8080
^Ccommand terminated with exit code 130

❯ oc rsh test-b4d8668db-b2l8s curl 10.217.0.171:8080
^Ccommand terminated with exit code 130
validate pod from different-project can no longer access network-policy/hello
❯ oc project different-project
Now using project "different-project" on server "https://api.crc.testing:6443".

❯ oc rsh sample-app-5fc755d58-bgbcc curl 10.217.0.171:8080
^Ccommand terminated with exit code 130
switch back to network-policy project and create a policy that will allow different-project/sample-app to access pods
From GUI: Networking->NetworkPolicies->Create NetworkPolicy (screenshots); resulting yaml:

…(output omitted)
spec:
  podSelector:
    matchLabels:
      deployment: hello
  ingress:
    - ports:
        - protocol: TCP
          port: 8080
      from:
        - podSelector:
            matchLabels:
              deployment: sample-app
          namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: different-project
  policyTypes:
    - Ingress
(note: the kubernetes.io/metadata.name value comes from the label shown by the oc describe project <project> command)
create label network=different-project for project different-project
(as admin)

❯ oc login -u admin

Using project "different-project".

❯ oc label namespace different-project network=different-project
namespace/different-project labeled
access the hello pod from the sample-app pod via pod and cluster IP
oc rsh sample-app-5fc755d58-bgbcc curl 10.217.0.171:8080

<html>
<body>
<h1>Hello, world from nginx!</h1>
</body>
</html>

7
Q
  1. Quotas - Resource Limits - Scale - Autoscale
    Deploy hello-nginx app to test with in project limits-scale
    ❯ oc new-project limits-scale

❯ oc new-app --name hello --image quay.io/redhattraining/hello-world-nginx
scale deployment to 2 pods

Create an autoscale policy so that the deployment has a minimum of 2 pods and a max of 4 pods and scales when CPU is over 75%

Create resource limits for deployment
[] request cpu 100m
[] limit cpu 200m
[] request memory 20Mi
[] limit memory 100Mi

Create quota: cpu 3, memory 1G, configmaps 2 in the current project

A
  1. Quotas - Resource Limits - Scale - Autoscale
    Deploy hello-nginx app to test with in project limits-scale
    ❯ oc new-project limits-scale

❯ oc new-app --name hello --image quay.io/redhattraining/hello-world-nginx
scale deployment to 2 pods
❯ oc scale deployment hello --replicas 2
deployment.apps/hello scaled

❯ oc get pods
NAME READY STATUS RESTARTS AGE
hello-76bfc67544-9tf98 1/1 Running 0 2m15s
hello-76bfc67544-xzpkl 1/1 Running 0 7s
Create an autoscale policy so that the deployment has a minimum of 2 pods and a max of 4 pods and scales when CPU is over 75%
❯ oc autoscale deployment hello --min 2 --max 4 --cpu-percent 75
horizontalpodautoscaler.autoscaling/hello autoscaled

❯ oc get horizontalpodautoscalers.autoscaling
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hello Deployment/hello <unknown>/75% 2 4 2 82s
(TARGETS shows <unknown> until the deployment defines a CPU request; the resource limits step below takes care of that)
Create resource limits for deployment
[] request cpu 100m
[] limit cpu 200m
[] request memory 20Mi
[] limit memory 100Mi
❯ oc set resources deployment hello --limits=cpu=200m,memory=100Mi --requests=cpu=100m,memory=20Mi

(note: if the app isn't running due to cpu request exceeded, lower the request amount)

Create quota: cpu 3, memory 1G, configmaps 2 in the current project
login as admin

❯ oc create quota project-quota --hard=cpu=3,memory=1Gi,configmaps=2 -n limits-scale
resourcequota/project-quota created

❯ oc get quota
NAME AGE REQUEST LIMIT
project-quota 12m configmaps: 2/2, cpu: 200m/3, memory: 40Mi/1Gi
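
To see the quota enforced, try creating one more configmap (a sketch; the name is arbitrary). The quota above already shows configmaps at 2/2 (the default CA configmaps count against it), so the request should be rejected with an exceeded-quota error:

❯ oc create configmap quota-check --from-literal=key=value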

8
Q
  1. cron jobs in OpenShift
    Create a cron job called say-hello in a project cron-test that runs at 6:02 am every Sunday

cron job runs in a pod created from the image quay.io/redhattraining/hello-world-nginx

cron job runs the command echo Hello from the OpenShift cluster

A
  1. cron jobs in OpenShift
    Create a cron job called say-hello in a project cron-test that runs at 6:02 am every Sunday
    cron job runs in a pod created from the image quay.io/redhattraining/hello-world-nginx
    cron job runs the command echo Hello from the OpenShift cluster
    Create the cron-test project
    From the project, in Administrator view, go to Workloads->CronJobs
    Click [Create Cronjob]
    Fill out the yaml with the following config
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: say-hello
      namespace: cron-test
    spec:
      schedule: '2 6 * * 0'
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: hello
                  image: quay.io/redhattraining/hello-world-nginx
                  args:
                    - /bin/sh
                    - '-c'
                    - date; echo Hello from the OpenShift cluster
              restartPolicy: OnFailure
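
    A CLI alternative (a sketch; same schedule and command as above):

    $ oc create cronjob say-hello -n cron-test \
      --image=quay.io/redhattraining/hello-world-nginx \
      --schedule='2 6 * * 0' \
      -- /bin/sh -c 'date; echo Hello from the OpenShift cluster'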

Once deployed, run the command oc get all to see the job in the project. Note that you can change the schedule to see it run more often: */1 * * * * will make it run every minute; then look at the pod logs for the message.

You can also add service accounts to a cronjob.
You can edit successful job run history in the yaml once you have created a job with the default values.
