GCP Cloud Associate Udemy Flashcards
What is App Engine?
App Engine is a fully managed, serverless platform for developing and hosting web applications at scale.
What is App Engine’s built-in traffic splitting feature?
By deploying a new version of the application within the same App Engine environment and using the GCP Console to configure traffic splitting, you can easily direct a specified percentage of requests to the new version. This approach allows for gradual rollout and A/B testing without affecting the overall infrastructure or moving to a different compute service. It’s a straightforward and efficient way to test new versions with a subset of users, adhering to best practices for safe deployment and iteration.
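For example, a minimal sketch using the gcloud CLI (the service name "default" and version IDs "v1"/"v2" are placeholders):

```sh
# Send 90% of traffic to v1 and 10% to v2 of the default service.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1

# --split-by controls how requests are assigned: ip, cookie, or random.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=random
```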
Main difference between the Compute Engine model and the App Engine model?
Compute Engine provides IaaS (Infrastructure as a Service), requiring more manual setup and management of the compute resources compared to the PaaS (Platform as a Service) model of App Engine.
What would splitting traffic between two app engine applications require (as opposed to splitting traffic between versions of the same app engine)?
App Engine’s traffic splitting is designed to work within a single application across different versions, not between separate App Engine applications. Splitting traffic between separate apps would require a custom solution or an external load balancer, complicating the process beyond the intended simplicity and efficiency of using App Engine’s built-in traffic management features.
What is a Kubernetes volume snapshot?
- Kubernetes volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume.
- A volume snapshot in Kubernetes is equivalent to taking a backup of your data in other storage systems.
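A minimal sketch of a VolumeSnapshot manifest, assuming a CSI driver with snapshot support (as on GKE) and placeholder names:

```sh
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class  # VolumeSnapshotClass for the CSI driver
  source:
    persistentVolumeClaimName: my-pvc         # the PVC to snapshot
EOF
```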
What is a persistent volume in GKE?
PersistentVolume resources are used to manage durable storage in a cluster. In GKE, a PersistentVolume is typically backed by a persistent disk.
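For example, a sketch of a PersistentVolumeClaim; on GKE the default StorageClass dynamically provisions a persistent disk to back it (names and sizes are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce   # mountable read-write by a single node
  resources:
    requests:
      storage: 30Gi
EOF
```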
What is NFS?
Network File System (NFS) is a distributed file system protocol for shared storage. The NFS shared storage protocol defines the way files are stored and retrieved from storage devices across networks. Filestore is an NFS solution on Google Cloud.
What is Filestore?
Filestore instances are fully managed NFS file servers on Google Cloud for use with applications running on Compute Engine virtual machine (VM) instances, Google Kubernetes Engine clusters, external datastores such as Google Cloud VMware Engine, or your on-premises machines.
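A sketch of creating a basic instance with the gcloud CLI (instance name, zone, and file-share values are placeholders):

```sh
gcloud filestore instances create my-filestore \
  --zone=us-central1-a \
  --tier=BASIC_HDD \
  --file-share=name=share1,capacity=1TB \
  --network=name=default
```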
What is a node in GKE?
A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the control plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.
What is a pod in GKE?
Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.
A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. Those resources include:
- Shared storage, as Volumes
- Networking, as a unique cluster IP address
- Information about how to run each container, such as the container image version or specific ports to use
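As an illustration, a sketch of a Pod whose two containers share an emptyDir volume (all names and images are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}            # shared storage, as a Volume
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80   # a specific port to use
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/index.html && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
EOF
```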
What are local SSDs in GKE?
Local solid-state drives (SSDs) are fixed-size SSDs that can be attached to a single Compute Engine VM. You can use local SSDs on GKE to get highly performant, ephemeral (non-persistent) storage attached to every node in your cluster. Local SSDs also provide higher throughput and lower latency than standard disks.
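One way to get them, sketched with placeholder names, is to create a node pool whose nodes each carry a local SSD:

```sh
gcloud container node-pools create ssd-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --local-ssd-count=1   # one 375 GB local SSD per node
```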
What is a Kubernetes NodePort Service?
A NodePort Service in Kubernetes exposes an application outside the cluster by opening a static port on every node. If you create a NodePort Service, Kubernetes assigns the port from the range 30000-32767 (unless you specify one). End users can then access the application using a node's IP address and that port.
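A minimal NodePort Service sketch (labels, ports, and names are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app         # pods this Service routes to
  ports:
    - port: 80          # Service port inside the cluster
      targetPort: 8080  # container port on the pods
      nodePort: 30080   # optional; otherwise assigned from 30000-32767
EOF
```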
What is Kubernetes Ingress?
Kubernetes Ingress is an API object that helps developers expose their applications and manage external access by providing HTTP and HTTPS routing rules to the services within a Kubernetes cluster.
What are the advantages of Kubernetes Ingress?
It can simplify production environments: it gives you a single place to define traffic-routing rules, rather than creating specialized load balancers or manually exposing each service within a node.
How does Kubernetes Ingress allow you to expose your application to the public using HTTPS on a public IP address in Google Kubernetes Engine (GKE)?
Using a Kubernetes Ingress allows you to define HTTP and HTTPS routes to your services and enables SSL termination, ensuring secure communication. The Ingress controller automatically configures a Cloud Load Balancer to route external traffic to the appropriate service endpoints.
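A sketch of such an Ingress, assuming a kubernetes.io/tls Secret already holds the certificate (hostname, Secret, and Service names are placeholders; on GKE the backing Service is typically of type NodePort):

```sh
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls   # TLS cert and key for HTTPS termination
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
EOF
```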
What is a Kubernetes ClusterIP?
ClusterIP is the default Service type in Kubernetes, and it provides internal connectivity between different components of your application. When the Service is created, Kubernetes assigns it a virtual IP address that can only be reached from within the cluster. ClusterIP Services are an excellent choice for internal communication between components of your application that don't need to be exposed to the outside world.
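For example, a Service with no explicit type defaults to ClusterIP (names are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-internal-service
spec:
  selector:
    app: my-backend
  ports:
    - port: 8080        # reachable only from inside the cluster
      targetPort: 8080
EOF
```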
What is kubernetes DNS?
DNS stands for Domain Name System. Kubernetes DNS is a built-in service within the Kubernetes platform, designed to provide name resolution for services within a Kubernetes cluster. It simplifies the communication process between different services and pods within the cluster by allowing the use of hostnames instead of IP addresses. It plays a crucial role in enabling service discovery, letting pods locate and communicate with other services within the cluster.
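For instance, a ClusterIP Service named my-service in the default namespace resolves as sketched below (the Service name is a placeholder):

```sh
# Run a throwaway pod and resolve the Service's cluster DNS name.
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-service.default.svc.cluster.local
```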
What is HAProxy Ingress in Kubernetes?
HAProxy Ingress is an ingress controller that adds and removes routes in its underlying HAProxy load balancer configuration when it detects that pods have been added to or removed from the cluster.
What is VPC Network Peering?
A VPC peering connection is a networking connection between two VPC networks that enables you to route traffic between them using private IPv4 or IPv6 addresses. Instances in either network can communicate with each other as if they were within the same network. Each project keeps its own VPC network; peering exchanges internal routes between the two networks rather than sharing one network across projects (which is what Shared VPC does).
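Peering is configured from both sides; a sketch with placeholder project and network names:

```sh
# In project-a, peer vpc-a with vpc-b:
gcloud compute networks peerings create peer-ab \
  --project=project-a \
  --network=vpc-a \
  --peer-project=project-b \
  --peer-network=vpc-b

# In project-b, create the matching peering back:
gcloud compute networks peerings create peer-ba \
  --project=project-b \
  --network=vpc-b \
  --peer-project=project-a \
  --peer-network=vpc-a
```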
To enable traffic between multiple groups of Compute Engine instances running in different GCP projects, each group within its own VPC, why would this not work: verify that both projects are in a GCP Organization, then create a new VPC and add all instances?
Creating a new VPC and adding all instances to it won’t enable communication between instances in different projects and VPCs. VPCs are isolated network environments within a project and cannot span multiple projects.
Difference between IAM service viewer and IAM project viewer?
The IAM project Viewer role provides read-only access to all project resources without the ability to modify them.
The IAM service Viewer role provides read-only access to specific Google Cloud services rather than the entire project.
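For example, granting the project-wide Viewer role (project ID and member are placeholders):

```sh
gcloud projects add-iam-policy-binding my-project \
  --member="user:alice@example.com" \
  --role="roles/viewer"
```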
What is a GKE node pool?
A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification. Each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which has the node pool's name as its value.
You can add a new node pool to a GKE Standard cluster using the gcloud CLI, the Google Cloud console, or Terraform. GKE also supports node auto-provisioning, which automatically manages the node pools in your cluster based on scaling requirements.
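A sketch of adding a node pool with the gcloud CLI (all names and values are placeholders):

```sh
gcloud container node-pools create my-pool \
  --cluster=my-cluster \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --num-nodes=3
```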
How can you deploy services to specific node pools?
When you define a Service, you can indirectly control which node pool it is deployed into. The node pool is not dependent on the configuration of the Service, but on the configuration of the Pod.
You can explicitly deploy a Pod to a specific node pool by setting a nodeSelector in the Pod manifest. This forces the Pod to run only on nodes in that node pool; see the sketch below for an example.
You can specify resource requests for the containers. The Pod then only runs on nodes that satisfy the resource requests. For example, if the Pod definition includes a container that requires four CPUs, the Pod is not scheduled onto nodes with only two CPUs.
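A sketch of pinning a Pod to a node pool via the cloud.google.com/gke-nodepool label (pool and image names are placeholders):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: my-pool  # only schedule onto this pool's nodes
  containers:
    - name: app
      image: nginx:1.25
EOF
```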
While creating a separate Kubernetes cluster with GPU-enabled nodes is a valid approach, it introduces unnecessary complexity and overhead. Managing multiple clusters increases administrative overhead and may result in underutilization of resources. Leveraging GKE’s capabilities to add GPU-enabled node pools to the existing cluster provides a more streamlined and cost-effective solution.
What does gcloud compute networks subnets expand-ip-range do?
The gcloud compute networks subnets expand-ip-range command allows you to increase the IP range of an existing subnet in Google Cloud without needing to delete or recreate it. This ensures that all VMs within the subnet can still reach each other without additional routes, as they remain within the same subnet but with an expanded address range. It's a straightforward process that minimizes disruption and maintains network connectivity.
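For example (subnet and region are placeholders; the new prefix must be shorter than, and contain, the existing range):

```sh
gcloud compute networks subnets expand-ip-range my-subnet \
  --region=us-central1 \
  --prefix-length=20   # widen the primary range to /20
```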
While Shared VPC allows for resources in different projects to communicate over the same network, creating a new project is an unnecessary step when you can simply expand the current subnet’s IP range.
You cannot overwrite an existing subnet by creating a new one with the same starting IP address. Instead, you should expand the IP range of the existing subnet.