Pluralsight - Reliable Google Cloud Infrastructure: Design and Process Flashcards
SLO / SLI abbreviations
Service Level Indicators
- measurable and time-bound
- latency / requests per second
- availability etc
- it’s basically a KPI
Service Level Objectives
- a target value or range for an SLI that you aim to meet (e.g. 99.9% of requests served under 100 ms)
- must be achievable
Service Level Agreement
- contract to deliver a service that specifies consequences if the service isn’t delivered
Key questions to ask
Who are the users?
Who are the developers?
Who are the stakeholders?
What does the system do?
What are the main features?
Why is the system needed?
When do the users need and/or want the solution?
When can the developers be done?
How will the system work?
How many users will there be?
How much data will there be?
Stateless service
Means the service keeps no persistent data of its own - it reads and writes state through stateful services.
Stateless services work in combination with stateful ones, i.e. those attached to persistent disks, for example.
The stateless backend can scale up or down depending on demand, with the load balancer directing front-end traffic across its instances.
12 factor app
- Codebase - should be tracked in a version control environment
- Dependency declaration / isolation - dependencies are declared and tracked by tools such as Maven (Java) or pip (Python). Dependencies can be isolated by packing the app into a container. Container Registry can store the images.
- Configuration - every app has different configurations per environment (dev, test, production); store config in the environment, not in the code
- Backing service - eg DB, caches should be accessed by URL
- Build, release, run - software development process should be split into these parts.
- Processes - execute app as one or more stateless process
- Port-binding - services should be exposed to a port
- Concurrency - apps should be able to scale up/down
- Disposability - apps should be able to handle failures
- Dev/prod parity - keep them as similar as possible
- Logs - awareness of health of your apps
- Admin processes - usually one-off or automated
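The Configuration factor above can be sketched in shell - a minimal example where the variable names (APP_ENV, DB_HOST) are assumptions, showing config read from the environment rather than baked into the code:

```shell
#!/bin/sh
# Hypothetical config names (APP_ENV, DB_HOST) - the 12-factor point is that
# values come from the environment, so the same image runs in dev, test, prod.
APP_ENV="${APP_ENV:-dev}"          # default to dev if unset
DB_HOST="${DB_HOST:-localhost}"    # each environment injects its own value
echo "env=${APP_ENV} db=${DB_HOST}"
```

Deploying to production then just means injecting different values, e.g. `APP_ENV=production DB_HOST=10.0.0.5 ./app.sh`, with no code change.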
REST - representational state transfer
Design of microservices based on REST
Done to achieve loosely coupled, independent services (i.e. services that interact only through well-defined interfaces and know little about each other's internals).
Versioned contracts concept - each time you change your service's interface, you publish a new version but keep the contract for the older one in case other apps rely on the old version –> so you ensure backward compatibility
At low levels apps communicate via HTTPS using text-based payloads:
- Client makes GET, POST, PUT or DELETE requests
- Body of the request is formatted as JSON or XML
- Results returned as JSON, XML or HTML
REST supports loosely coupled logic, but it needs a lot of engineering: if clients have customised requests, custom-made REST APIs must be built to correctly expose the requested elements.
- Uniform interface is a key
- Paging should be consistent
- URIs consistency
Batch APIs
- return a collection of resource representations to the requester, typically formatted as JSON
HTTP (the protocol): design of microservices based on HTTP
HTTP requests consist of:
- VERB: GET, PUT, POST, DELETE
- Uniform Resource Identifier (URI)
- Header (metadata about the msg)
- Request body
GET - retrieve a resource
POST - request a creation of a new resource
PUT - create a resource at a known URI or replace/alter existing data; idempotent, so repeating the same PUT has the same effect
DELETE - remove a resource
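A quick sketch of those verbs against a hypothetical pets API (the URL and JSON fields are made up); the JSON request body is checked locally with python3 before it would ever be sent:

```shell
# Request body for POST/PUT (hypothetical fields), validated locally.
BODY='{"name": "rex", "kind": "dog"}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "valid JSON body"
# GET    https://api.example.com/v1/pets/42   - retrieve pet 42
# POST   https://api.example.com/v1/pets      - create a new pet (body above)
# PUT    https://api.example.com/v1/pets/42   - replace/alter pet 42
# DELETE https://api.example.com/v1/pets/42   - remove pet 42
```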
API / services - each API makes available some collection of data
gRPC
OpenAPI
- Each Google Cloud service exposes a REST API
- Service endpoint example: https://compute.googleapis.com
- collections include instances, instanceGroups, instanceTemplates
- verbs (GET…)
Use OpenAPI to expose apps to clients (i.e. make the apps available to the clients).
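The Compute Engine REST surface can be exercised directly - a sketch assuming a project named my-project and using the gcloud CLI for a token (not runnable without GCP credentials):

```shell
# List instances in one zone via the REST API (my-project is a placeholder).
TOKEN=$(gcloud auth print-access-token)
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://compute.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances"
```

This shows the pattern from the notes: service endpoint + collection (`instances`) + verb (GET).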
gRPC
- binary protocol developed at Google
- useful for internal microservice communications
- based on HTTP/2
- supported by many gcloud services such as Global Load Balancer, cloud endpoints for microservices, GKE (using Envoy Proxy)
————————————————-
Tools for managing APIs
Cloud Endpoints
- helps develop, deploy and manage API
Apigee
- built for enterprises (on-premises, hybrid, or any public cloud usage)
- Both of the above provide user authentication, monitoring, securing, OpenAPI and gRPC
Google Cloud services are themselves exposed as APIs:
Compute Engine API:
- has collections (instances, instance groups, networks, subnetworks etc)
– for each collection various methods are used to manage the data
Continuous Integration
- Code is written, pushed to GitHub
- Unit tests are passed
- Build deployment package - create docker image / dockerfile
- The image is saved in the Container Registry where it can be deployed
- Additional step: Quality analysis by tools such as SonarQube
Note: each microservice should have its own repo
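The pipeline above can be sketched as a few shell commands (a sketch: the project, image name, and pytest layout are all assumptions, and the commands need Docker plus a GCP project):

```shell
# 1. Code pushed to the repo triggers this pipeline (or run it manually).
# 2. Run unit tests (hypothetical pytest layout for a Python service).
python3 -m pytest tests/
# 3. Build the deployment package as a container image from the Dockerfile.
docker build -t gcr.io/my-project/my-service:v1 .
# 4. Save the image in Container Registry, from where it can be deployed.
docker push gcr.io/my-project/my-service:v1
```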
CI provided by Google
Cloud Source Repository
- like GitHub on gcloud
- these are managed git repos
Cloud Build
- building software quickly
- Docker-build service, an alternative to running the docker build command locally
- gcloud builds submit --tag gcr.io/PROJECT_ID/IMAGE_NAME
- builds can fetch dependencies, run unit tests…
- executes build steps you define, like executing commands in a script
Build triggers
- watch your repo or build container
- support Maven, custom builds and Docker
- the triggers start builds automatically when changes are made to the source code
Container/Artifact Registry
- provides a secure private Docker image repo on Gcloud. The images are stored in Cloud Storage. Can use IAM rules to set who can access.
- can use docker push or docker pull
Binary Authorisation
- allows you to enforce deploying only trusted containers into GKE
- for example an ‘attestor’ verifies that the image is coming from a trusted repo
- uses Kritis Signer for the vulnerabilities assessment
Note: You can create a VM to test the container/image. In the VM options select container and in the ‘name’ put the name of the container from the ‘History’ in Cloud Build.
Note: Build configs can be specified in the Dockerfile or Cloud Build file.
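A minimal Cloud Build config file for the note above might look like this (a sketch - the builder step follows the common docker-builder pattern, and my-image is a made-up name):

```shell
# Write a minimal cloudbuild.yaml: build from the Dockerfile, push the result.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-image', '.']
images:
- 'gcr.io/$PROJECT_ID/my-image'
EOF
# gcloud builds submit --config cloudbuild.yaml   # then submit (needs a GCP project)
```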
Note: To connect to Artifact Registry with git, first set your identity with git config --global user.email and git config --global user.name
Storage options
Relational
1. Cloud SQL - vertical scaling (a bigger machine); regional
2. Cloud Spanner - horizontal scaling (e.g. adding nodes); global scalability
3. AlloyDB - PostgreSQL-compatible; transactional and analytical processing, gen AI
File
1. Filestore - latency sensitive - unstructured data
NoSQL
1. Firestore - scaling with no limits, good for user profiles, game state; non-relational data analytics without caching
2. Cloud Bigtable - horizontal scaling - eventual consistency (updates are not immediately visible to all readers); good for heavy read/write, financial services, low latency - unlike BigQuery
Object
1. Cloud Storage - scaling with no limits, binary project data; unstructured data
Block
1. Persistent Disk (snapshots are backups of persistent disk)
Warehouse
1. BigQuery
In memory
1. Memorystore - vertical scaling - eventual consistency; caching, gaming; non-relational data analytics with caching
HTTPS + Cloud CDN
When global load balancers are used, it's best to use an SSL certificate to protect data.
With an HTTPS LB you should use Cloud CDN - it caches content closest to the user.
Network Connectivity (VPNs)
VPC Peering
- good for connecting different VPCs, whether or not they are in the same org, but the subnet ranges CANNOT overlap
Cloud VPN
- connects your on-premises network to Gcloud VPC through an IPsec VPN Tunnel
- one VPN gateway encrypts the traffic, another decrypts
- good for low volumes of data
- Classic VPN
– has 99.9% availability
– single interface
– single external IP
– static routes are available
- HA VPN
– has 99.99% SLA availability
– 2 interfaces
– 2 external IPs
– must use BGP routing = dynamic routing –> can create active/active or active/passive routing configs
- static routes (Classic VPN) or dynamic routes using Cloud Router - lets you change which tunnel traffic goes through without changing the configuration
Note:
Besides VPN, Cloud Interconnect offers Dedicated Interconnect (a direct connection at a colocation facility) and Partner Interconnect (for lower bandwidth requirements). These options use internal IPs and require Cloud Router = BGP! Can also use Private Google Access.
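The HA VPN pieces above map onto gcloud resources - a sketch with made-up names (my-net, my-router) that is not runnable without a project:

```shell
# HA VPN gateway - the 2 interfaces / 2 external IPs are allocated automatically.
gcloud compute vpn-gateways create my-ha-gateway \
  --network=my-net --region=us-central1
# Cloud Router for the mandatory BGP (dynamic) routing; the ASN is an assumption.
gcloud compute routers create my-router \
  --network=my-net --region=us-central1 --asn=65001
```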
Gcloud Deployment methods (GKE vs VMs vs App Engine etc)
If you have specific machine requirements
- use VMs
No specific OS requirements, do you need containers?
- YES: use GKE (Kubernetes) if you want to customise; Cloud Run if not - then Google manages LBs, autoscaling, clusters, health checks
No specific OS and no containers, is your service event-driven?
- YES: Cloud Functions
- NO: App Engine (LBs, autoscaling, all the infrastructure - are all managed by Google, you just focus on the code)
Dockerfiles in:
App Engine
Kubernetes
Cloud Run
App Engine
- when we have a Dockerfile - can build it:
docker build -t test-python . –> the dot at the end uses the current working directory as the build context
- Create the App Engine application:
gcloud app create --region=us-west1
- Deploy the image:
gcloud app deploy --version=one --quiet
- Now if you go to App Engine, you will see a link in the top right corner –> if clicked, the app will run on App Engine in a new window
- If I create a new version, App Engine will deploy this too; a new version can be created with the same deploy command after I change whatever it is that I wanna change. Can add the --no-promote flag, which ensures that the first version is still the one running, not v2 - maybe we just wanna test v2 for now.
- In the ‘Versions’ section can click ‘Split Traffic’ and divert traffic to whichever version we want.
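The console 'Split Traffic' step also has a CLI equivalent - a sketch assuming two versions named one and two on the default service:

```shell
# Send half the traffic to each version (version names are assumptions).
gcloud app services set-traffic default --splits=one=0.5,two=0.5
```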
Kubernetes
- Once you create a cluster from the console, connect to it either through console or cloud shell:
gcloud container clusters get-credentials cluster-1 --zone us-central1-c --project PROJECT_NAME
- Show machines in the cluster:
kubectl get nodes
- We will need a new file called kubernetes-config.yaml with certain settings
- To push the image:
gcloud builds submit --tag us-west1-docker.pkg.dev/$DEVSHELL_PROJECT_ID/devops-demo/devops-image:v0.2
- Once the image is built, copy the image name that Cloud Shell gives and paste it into the image field of kubernetes-config.yaml
- Then we apply the changes:
kubectl apply -f kubernetes-config.yaml
- Can check:
kubectl get pods / kubectl get services
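The kubernetes-config.yaml mentioned above might look roughly like this minimal Deployment (a sketch - the names, image path, and replica count are assumptions):

```shell
# Write a minimal Deployment manifest; my-project is a placeholder project ID.
cat > kubernetes-config.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: devops-demo
  template:
    metadata:
      labels:
        app: devops-demo
    spec:
      containers:
      - name: devops-demo
        image: us-west1-docker.pkg.dev/my-project/devops-demo/devops-image:v0.2
EOF
# kubectl apply -f kubernetes-config.yaml   # then apply (needs a cluster)
```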
—————————————
Cloud Run
- Create a new image:
gcloud builds submit --tag us-west1-docker.pkg.dev/$DEVSHELL_PROJECT_ID/devops-demo/cloud-run-image:v0.1 .
- In the console, go to Cloud Run –> Create Service
- Create a service based on the most recent image in ‘select’ image part
- make the service publicly available by ticking ‘Allow unauthenticated invocations’
- the service should start and we can click the newly available link
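The console steps above can also be done entirely from the CLI - a sketch with an assumed service name and the image path from the build step:

```shell
# Deploy the image as a publicly reachable Cloud Run service.
gcloud run deploy cloud-run-demo \
  --image=us-west1-docker.pkg.dev/$DEVSHELL_PROJECT_ID/devops-demo/cloud-run-image:v0.1 \
  --region=us-west1 --allow-unauthenticated
# The command prints the service URL once the service is ready.
```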