GCP Cloud Associate Udemy Flashcards

1
Q

What is App Engine?

A

App Engine is a fully managed, serverless platform for developing and hosting web applications at scale.

2
Q

What is App Engine’s built-in traffic splitting feature?

A

By deploying a new version of the application within the same App Engine environment and using the GCP Console to configure traffic splitting, you can easily direct a specified percentage of requests to the new version. This approach allows for gradual rollout and A/B testing without affecting the overall infrastructure or moving to a different compute service. It’s a straightforward and efficient way to test new versions with a subset of users, adhering to best practices for safe deployment and iteration.
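The rollout described above can be sketched with one gcloud command — a minimal example, assuming a service named default with an existing version v1 and a newly deployed v2 (both version names are hypothetical):

```shell
# Send 90% of requests to v1 and 10% to the new v2 of the
# "default" App Engine service; --split-by=random distributes
# requests randomly rather than by IP or cookie.
gcloud app services set-traffic default \
    --splits=v1=0.9,v2=0.1 \
    --split-by=random
```

Gradually shifting the split toward v2 (e.g. 0.5/0.5, then 0/1) completes the rollout without redeploying.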

3
Q

Main difference between the Compute Engine model and the App Engine model?

A

Compute Engine provides IaaS (Infrastructure as a Service), requiring more manual setup and management of the compute resources compared to the PaaS (Platform as a Service) model of App Engine.

4
Q

What would splitting traffic between two app engine applications require (as opposed to splitting traffic between versions of the same app engine)?

A

App Engine’s traffic splitting is designed to work within a single application across different versions, not between separate App Engine applications. Splitting traffic between separate apps would require a custom solution or an external load balancer, complicating the process beyond the intended simplicity and efficiency of using App Engine’s built-in traffic management features.

5
Q

What is a Kubernetes volume snapshot?

A
  • Kubernetes volume snapshots let you create a copy of your volume at a specific point in time. You can use this copy to bring a volume back to a prior state or to provision a new volume.
  • A volume snapshot in Kubernetes is equivalent to taking a backup of your data in other storage systems.
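A snapshot is declared as a VolumeSnapshot object referencing an existing claim — a minimal sketch, assuming a PVC named my-pvc exists and the cluster has a CSI snapshot controller and a default VolumeSnapshotClass (as GKE does):

```shell
# Create a point-in-time snapshot of the "my-pvc" claim
# (both names are hypothetical).
kubectl apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
spec:
  source:
    persistentVolumeClaimName: my-pvc
EOF
```

A new PVC can later name this snapshot as its dataSource to restore or clone the volume.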
6
Q

What is a persistent volume in GKE?

A

PersistentVolume resources are used to manage durable storage in a cluster. In GKE, a PersistentVolume is typically backed by a persistent disk.
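In practice you rarely create PersistentVolumes directly; you request one through a PersistentVolumeClaim and GKE dynamically provisions a persistent disk to back it. A minimal sketch (claim name and size are hypothetical):

```shell
# Request 30 GiB of durable storage; GKE's default StorageClass
# provisions a Compute Engine persistent disk behind the scenes.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi
EOF
```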

7
Q

What is NFS?

A

Network File System (NFS) is a distributed file system protocol for shared storage. The NFS protocol defines the way files are stored and retrieved from storage devices across networks. Filestore is an NFS solution on Google Cloud.

8
Q

What is Filestore?

A

Filestore instances are fully managed NFS file servers on Google Cloud for use with applications running on Compute Engine virtual machine (VM) instances, Google Kubernetes Engine clusters, external datastores such as Google Cloud VMware Engine, or your on-premises machines.

9
Q

What is a node in GKE?

A

A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the control plane. A Node can have multiple pods, and the Kubernetes control plane automatically handles scheduling the pods across the Nodes in the cluster.

10
Q

What is a pod in GKE?

A

Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.

A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker), and some shared resources for those containers. Those resources include:

Shared storage, as Volumes
Networking, as a unique cluster IP address
Information about how to run each container, such as the container image version or specific ports to use

11
Q

What are local SSDs for GKE?

A

Local solid-state drives (SSDs) are fixed-size SSD drives that can be mounted to a single Compute Engine VM. You can use local SSDs on GKE to get high-performance, ephemeral (non-persistent) storage attached to every node in your cluster. Local SSDs also provide higher throughput and lower latency than standard disks.

12
Q

What is a Kubernetes NodePort?

A

A NodePort Service in Kubernetes exposes an application on a static port on every node, so end users outside the cluster can reach it. When you create a NodePort Service, Kubernetes assigns a port within the range 30000–32767. The application can then be accessed using any node’s IP address and that port.
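A minimal manifest sketch, assuming a Deployment whose Pods carry the label app: web (all names and ports are hypothetical):

```shell
# Expose Pods labeled app=web on port 30080 of every node.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80         # Service port inside the cluster
      targetPort: 8080 # container port the traffic is forwarded to
      nodePort: 30080  # must fall within 30000-32767
EOF
```

If nodePort is omitted, Kubernetes picks a free port from the range automatically.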

13
Q

What is Kubernetes Ingress?

A

Kubernetes Ingress is an API object that helps developers expose their applications and manage external access by providing http/s routing rules to the services within a Kubernetes cluster.
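A minimal Ingress sketch routing all HTTP paths to one backend Service — the Service name and port are hypothetical:

```shell
# Route every request path to the "web" Service on port 80.
# On GKE, the built-in Ingress controller provisions a
# Cloud Load Balancer for this object.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
EOF
```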

14
Q

What are advantages of kubernetes ingress?

A

It can simplify production environments because it facilitates a simple method of establishing rules to route traffic rather than creating specialized load balancers or manually exposing each service within a node.

15
Q

How does Kubernetes Ingress allow you to expose your application to the public using HTTPS on a public IP address in Google Kubernetes Engine (GKE)?

A

Using a Kubernetes Ingress allows you to define HTTP and HTTPS routes to your services and enables SSL termination, ensuring secure communication. The Ingress controller automatically configures a Cloud Load Balancer to route external traffic to the appropriate service endpoints.

16
Q

What is a Kubernetes ClusterIP?

A

ClusterIP is the default service type in Kubernetes, and it provides internal connectivity between different components of our application. Kubernetes assigns a virtual IP address to a ClusterIP service that can solely be accessed from within the cluster during its creation. ClusterIP services are an excellent choice for internal communication between different components of our application that don’t need to be exposed to the outside world.

17
Q

What is Kubernetes DNS?

A

DNS stands for Domain Name System. Kubernetes DNS is a built-in service within the Kubernetes platform, designed to provide name resolution for services within a Kubernetes cluster. It simplifies the communication process between different services and pods within the cluster by allowing the use of hostnames instead of IP addresses. It plays a crucial role in enabling service discovery, letting pods locate and communicate with other services within the cluster.
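Cluster DNS names follow the pattern service.namespace.svc.cluster.local. A sketch you could run from inside any Pod, assuming a Service named web in the prod namespace (both names are hypothetical):

```shell
# Fully qualified lookup from any Pod in the cluster:
nslookup web.prod.svc.cluster.local

# From a Pod in the same "prod" namespace, the short name suffices:
curl http://web/
```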

18
Q

What is the Kubernetes HAProxy ingress controller?

A

The HAProxy ingress controller is a Kubernetes ingress controller that adds and removes routes in its underlying HAProxy load balancer configuration when it detects that pods have been added to or removed from the cluster.

19
Q

What is VPC Network Peering?

A

A VPC peering connection is a networking connection between two VPC networks that enables you to route traffic between them using private IPv4 or IPv6 addresses. Instances in either VPC can communicate with each other as if they were within the same network. The peered networks can belong to the same project, to different projects, or even to different organizations.

20
Q

To enable traffic between multiple groups of Compute Engine instances running in different GCP projects, each group within its own VPC, why would this not work: verify that both projects are in a GCP Organization, then create a new VPC and add all instances?

A

Creating a new VPC and adding all instances to it won’t enable communication between instances in different projects and VPCs. VPCs are isolated network environments within a project and cannot span multiple projects.

21
Q

Difference between IAM service viewer and IAM project viewer?

A

The IAM project Viewer role provides read-only access to all project resources without the ability to modify them

The IAM service Viewer role provides read-only access to specific Google Cloud services rather than the entire project.

22
Q

What is a GKE node pool?

A

A node pool is a group of nodes within a cluster that all have the same configuration. Node pools use a NodeConfig specification. Each node in the pool has a Kubernetes node label, cloud.google.com/gke-nodepool, which has the node pool’s name as its value.

You can add a new node pool to a GKE Standard cluster using the gcloud CLI, the Google Cloud console, or Terraform. GKE also supports node auto-provisioning, which automatically manages the node pools in your cluster based on scaling requirements.
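Adding a node pool with the gcloud CLI can be sketched as follows — cluster name, pool name, zone, machine type, and accelerator are all hypothetical, and GPU node pools additionally need the NVIDIA drivers installed:

```shell
# Add a GPU-enabled node pool to an existing Standard cluster.
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --machine-type=n1-standard-4 \
    --accelerator=type=nvidia-tesla-t4,count=1 \
    --num-nodes=2
```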

23
Q

How can you deploy services to specific node pools?

A

When you define a Service, you can indirectly control which node pool it is deployed into. The node pool is not dependent on the configuration of the Service, but on the configuration of the Pod.

You can explicitly deploy a Pod to a specific node pool by setting a nodeSelector in the Pod manifest. This forces a Pod to run only on nodes in that node pool. For an example, see Deploying a Pod to a specific node pool.

You can specify resource requests for the containers. The Pod only runs on nodes that satisfy the resource requests. For example, if the Pod definition includes a container that requires four CPUs, the Service does not select Pods running on nodes with two CPUs.

While creating a separate Kubernetes cluster with GPU-enabled nodes is a valid approach, it introduces unnecessary complexity and overhead. Managing multiple clusters increases administrative overhead and may result in underutilization of resources. Leveraging GKE’s capabilities to add GPU-enabled node pools to the existing cluster provides a more streamlined and cost-effective solution.
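The nodeSelector mechanism described above can be sketched as a Pod manifest — the pool name, image path, and CPU request are hypothetical:

```shell
# Pin this Pod to nodes in the "gpu-pool" node pool using the
# label GKE applies automatically, and request 4 CPUs so only
# adequately sized nodes are eligible.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  nodeSelector:
    cloud.google.com/gke-nodepool: gpu-pool
  containers:
    - name: app
      image: us-docker.pkg.dev/my-project/my-repo/app:latest
      resources:
        requests:
          cpu: "4"
EOF
```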

24
Q

What does gcloud compute networks subnets expand-ip-range do?

A

The gcloud compute networks subnets expand-ip-range command allows you to increase the IP range of an existing subnet in Google Cloud without needing to delete or recreate it. This method ensures that all VMs within the subnet can still reach each other without additional routes, as they remain within the same subnet but with an expanded address range. It’s a straightforward process that minimizes disruptions and maintains network connectivity.

While Shared VPC allows for resources in different projects to communicate over the same network, creating a new project is an unnecessary step when you can simply expand the current subnet’s IP range.

You cannot overwrite an existing subnet by creating a new one with the same starting IP address. Instead, you should expand the IP range of the existing subnet.
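A command sketch for the expansion described above, with a hypothetical subnet name and region:

```shell
# Widen an existing subnet in place, e.g. from /24 to /20.
# The prefix length can only decrease (the range can only grow);
# a subnet's range can never be shrunk.
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=20
```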

25
Q

You want to configure 10 Compute Engine instances for availability when maintenance occurs. Your requirements state that these instances should attempt to automatically restart if they crash. Also, the instances should be highly available including during system maintenance. What should you do?

A
  • Option A: Create an instance template for the instances. Set the ‘Automatic Restart’ to on. Set the ‘On-host maintenance’ to Migrate VM instance. Add the instance template to an instance group.
    • Why Correct: This option aligns with the requirement of configuring Compute Engine instances for availability during maintenance. Enabling ‘Automatic Restart’ ensures that instances attempt to restart automatically if they crash, enhancing availability. Setting ‘On-host maintenance’ to ‘Migrate VM instance’ ensures that instances are migrated to other hosts during maintenance events, minimizing downtime. Additionally, using an instance group allows for easier management and scaling of instances.
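Option A can be sketched with gcloud — template name, group name, machine type, and zone are hypothetical:

```shell
# Template with automatic restart and live migration during
# host maintenance.
gcloud compute instance-templates create ha-template \
    --machine-type=e2-medium \
    --maintenance-policy=MIGRATE \
    --restart-on-failure

# Managed instance group of 10 instances built from the template.
gcloud compute instance-groups managed create ha-group \
    --template=ha-template \
    --size=10 \
    --zone=us-central1-a
```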
26
Q

What is GCP autohealing?

A

Autohealing enables you to select which health check service will be used to determine if an Instance needs to be replaced due to unhealthiness. If an instance fails the health check selected, it is automatically replaced with a new instance.

27
Q

What does setting Content-Type metadata to application/pdf on PDF file objects do?

A

Setting the Content-Type metadata to application/pdf on the PDF file objects instructs the browser on how to handle the file. When the correct Content-Type is specified, modern web browsers will attempt to display the PDF file inline within the browser window rather than prompting the user to save the file locally. This ensures a seamless user experience where users can view PDF files directly within the browser.
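For objects already in Cloud Storage, the metadata can be set with gsutil — a sketch with a hypothetical bucket and path:

```shell
# Set Content-Type on existing PDFs so browsers render them
# inline instead of prompting a download.
gsutil setmeta -h "Content-Type:application/pdf" \
    gs://my-bucket/docs/*.pdf
```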

28
Q

What is Cloud CDN?

A

Enabling Cloud CDN (Content Delivery Network) improves website performance by caching content closer to users. Cloud CDN is primarily used for caching and delivering static content more efficiently, rather than controlling how browsers handle specific file types.

29
Q

What is live migration?

A

Live migration is a feature that Google Cloud uses to migrate your VMs from one host to another for maintenance and infrastructure management without downtime. It is not a tool that customers can use to upgrade the memory or other resources of their VMs. This process is automatic and transparent, not user-initiated for resource upgrades.

30
Q

What is metadata in Google Cloud VMs used for?

A

Metadata in Google Cloud VMs is used to store information about the instance or to configure how instances interact with other Google Cloud services. Adjusting metadata will not change the actual hardware or resource allocation of the VM, such as its memory capacity.

31
Q

What is a CIDR?

A

Classless Inter-Domain Routing (CIDR) is an IP address allocation method that improves data routing efficiency on the internet. Every machine, server, and end-user device that connects to the internet has a unique number, called an IP address, associated with it. Devices find and communicate with one another by using these IP addresses. Organizations use CIDR to allocate IP addresses flexibly and efficiently in their networks.

32
Q

Advantages of sharing VPC between resources?

A

The critical aspect here is the single VPC setup, which inherently allows all resources within it to communicate using internal IP addresses without the need for additional routing setup.

33
Q

What do different CIDR ranges ensure?

A

Different CIDR ranges ensure that the IP address spaces do not overlap, preventing any potential addressing conflicts. Using the same CIDR range for both subnets is not possible within a single VPC. Each subnet must have a unique CIDR block to prevent IP address conflicts and to ensure proper network segmentation and routing within the VPC.

34
Q

How do VMs in different VPCs communicate?

A

Creating two separate VPCs for production and test environments might seem like a good idea for isolation. However, this setup complicates internal communication as VMs in different VPCs cannot directly communicate using internal IP addresses without setting up VPC peering or additional routing.

35
Q

What is a health check on port 443 commonly used for?

A

Monitoring the health of VMs serving HTTPS traffic. By configuring the managed instance group to use this health check, it will continuously monitor the health of the VMs serving the HTTPS web application. Unhealthy VMs will be detected, and the managed instance group will automatically recreate them to maintain the desired instance count, ensuring high availability and reliability of the application.

36
Q

What are Google’s best practices for managing IAM roles and permissions at scale?

A
  1. By creating a Google group, you simplify the management of access permissions, making it easier to add or remove members as the team changes.
  2. The BigQuery dataViewer role grants sufficient permissions to view datasets and perform queries. This role does not, however, allow for job management or dataset modifications, focusing on query execution and data viewing, fitting the requirement for members to perform queries. This setup enhances security and manageability by grouping permissions and managing them through a single group assignment.
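The group-based grant above can be sketched as — project ID and group address are hypothetical:

```shell
# Bind the role once, to the group; membership changes are then
# handled in Google Groups, not in IAM policy.
gcloud projects add-iam-policy-binding my-project \
    --member="group:data-science@example.com" \
    --role="roles/bigquery.dataViewer"
```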
37
Q

What is the BigQuery jobUser role?

A

The BigQuery jobUser role primarily allows for managing and running jobs, which might not provide sufficient permissions for viewing or querying data within datasets. This role is more restrictive in terms of accessing data directly, which might not fully meet the data science team’s needs for querying and data analysis. The role’s focus on job management over data viewing makes it less suitable for the stated requirement.

38
Q

Deploying a new instance in the europe-west1 region while ensuring access to the existing application hosted on a Compute Engine instance in the us-central1 region, following Google-recommended practices:

A
  1. Create a VPC and a subnetwork in europe-west1.
  2. Expose the application with an internal load balancer.
  3. Create the new instance in the new subnetwork and use the load balancer’s address as the endpoint.
39
Q

How do you quickly disable logs from a development GKE container with the minimum number of steps?

A
  1. Go to the Logs ingestion window in Stackdriver Logging. - Accessing the Logs ingestion window in Stackdriver Logging allows you to manage log sources and configurations.
  2. Disable the log source for the GKE container resource. - Disabling the log source specifically for the GKE container resource ensures that logs from the container are no longer ingested, addressing the issue quickly and directly.
39
Q

Gradually deploying a new version of a web application deployed as a managed instance group while ensuring that the available capacity does not decrease:

A
  1. Perform a rolling-action start-update with maxSurge set to 1 and maxUnavailable set to 0.

Using a rolling update with maxSurge set to 1 ensures that the new version of the application is gradually rolled out while maintaining the current capacity. With maxSurge set to 1, each new instance is added before the old one is removed, preventing any decrease in available capacity. Setting maxUnavailable to 0 ensures that there is no decrease in the number of available instances during the update process.
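A command sketch for this rolling update, with hypothetical group, template, and zone names:

```shell
# Roll out template v2 with zero capacity loss: one extra instance
# (max-surge=1) is created before any old instance is removed
# (max-unavailable=0).
gcloud compute instance-groups managed rolling-action start-update my-group \
    --version=template=my-template-v2 \
    --max-surge=1 \
    --max-unavailable=0 \
    --zone=us-central1-a
```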

40
Q

What does this do? gcloud compute instance-groups managed rolling-action start-update.

A

gcloud compute instance-groups managed rolling-action start-update updates instances in a managed instance group, according to the given versions and the given update policy.

41
Q

What does maxSurge control?

A

maxSurge controls the maximum number of additional Pods that can be created during a rolling update. It specifies the number or percentage of Pods above the desired replica count that can be temporarily created.

42
Q

Implementing a database solution that can scale with user growth with minimum configuration changes:

A

Cloud Spanner

43
Q

What is Cloud Spanner?

A

Cloud Spanner is a horizontally scalable, globally distributed, and strongly consistent relational database service offered by Google Cloud. It is designed to scale seamlessly with user growth without requiring significant configuration changes. Cloud Spanner offers automatic scaling capabilities, allowing it to handle increasing user loads and data volumes effortlessly. It provides relational semantics, making it suitable for applications that store relational data from users. Additionally, Cloud Spanner offers high availability and strong consistency guarantees, making it a robust choice for global applications.

44
Q

What is Cloud SQL?

A

Cloud SQL is a fully managed relational database service. While it offers scalability, it may require more manual configuration changes to handle significant increases in user growth compared to Cloud Spanner. Cloud SQL instances have limitations in terms of scalability and geographic distribution, which might necessitate more frequent adjustments as the user base expands globally.

45
Q

What is Cloud Firestore?

A

Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud. It is a NoSQL document database that can scale with user growth, but it’s not specifically designed for relational data storage. While it offers scalability, it may not be the optimal choice for storing relational data from users, especially if the application’s primary data model relies heavily on relational structures.

46
Q

What is Cloud Datastore?

A

Cloud Datastore is a NoSQL document database service that can scale horizontally to handle user growth. However, it is not specifically designed for relational data storage, and it may require more effort to model relational data compared to Cloud Spanner. While it can scale with user growth, Cloud Datastore may not offer the same level of relational semantics and consistency guarantees as Cloud Spanner.

47
Q

To enable a Compute Engine instance in a different Virtual Private Cloud (VPC) to connect to an application running in Google Kubernetes Engine (GKE) with the least effort and complexity, let’s review the options provided:

A
  1. In GKE, create a Service of type LoadBalancer that uses the application’s Pods as backend.
  2. Add an annotation to this Service: cloud.google.com/load-balancer-type: Internal.
  3. Peer the two VPCs together.
  4. Configure the Compute Engine instance to use the address of the load balancer that has been created.
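The internal load balancer Service can be sketched as — the Service name, selector label, and ports are hypothetical:

```shell
# Internal LoadBalancer Service: GKE provisions an internal
# passthrough load balancer reachable over VPC peering.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: app-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```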
48
Q

Give me an example in which you would use a coldline storage bucket

A

Storing logs from Cloud Audit in a Coldline Storage bucket is a cost-effective solution for long-term retention. Coldline Storage offers low storage costs, especially for data that is accessed infrequently but needs to be retained for an extended period, such as audit logs. By exporting logs to a Coldline Storage bucket, you can fulfill the requirement to store audit log files for three years while minimizing storage costs.

49
Q

What is Cloud Pub/Sub?

A

Google Cloud Pub/Sub is a fully managed, scalable, global, and secure messaging service that allows you to send and receive messages among applications and services. You can use Cloud Pub/Sub to integrate decoupled systems and components hosted on Google Cloud Platform or elsewhere on the Internet.

50
Q

Running a caching HTTP reverse proxy on GCP while minimizing costs:

A

Option A: Create a Cloud Memorystore for Redis instance with 32-GB capacity.

51
Q

What is Redis on GCP?

A

Redis Cloud on Google Cloud is a fully managed, real-time data platform. Built for speed, Redis is the most popular NoSQL database among developers.

52
Q

What is Cloud Memorystore for Redis?

A

Cloud Memorystore for Redis is a fully managed in-memory data store service that provides a highly available and scalable solution for caching. This option minimizes costs by utilizing a managed service, eliminating the need for managing infrastructure and ensuring optimal performance for a latency-sensitive website.

53
Q

What is a bare metal server?

A

In computer networking, a bare-metal server is a physical computer server that is used by one consumer, or tenant, only.

54
Q

To provide access to Cloud Storage for an application hosted on bare-metal servers in a data center without public IP addresses or internet access, let’s evaluate the options based on Google-recommended practices and the constraints provided:

A

This option leverages Google Cloud’s Private Google Access feature, which allows resources in a VPC that do not have public IP addresses to access Google Cloud services like Cloud Storage without exposing these resources to the internet. By creating a VPN tunnel or using Interconnect, you establish a private connection between your on-premises network and Google Cloud. The use of Cloud Router for custom route advertisement enables on-premises resources to route traffic destined for Google Cloud services through this private connection. Configuring your DNS to resolve Google Cloud service URLs to restricted.googleapis.com ensures that access to Cloud Storage and other Google services is restricted to internal, private access, adhering to your security policies.

55
Q

What is Cloud Run?

A

Cloud Run is a managed compute platform that lets you run containers directly on top of Google’s scalable infrastructure. You can deploy code written in any programming language on Cloud Run if you can build a container image from it.

56
Q

To deploy an application on Cloud Run that processes messages from a Cloud Pub/Sub topic while adhering to Google-recommended practices, let’s analyze the provided options:

A
  • Option C:
    1. Create a service account.
    2. Give the Cloud Run Invoker role to that service account for your Cloud Run application.
    3. Create a Cloud Pub/Sub subscription that uses that service account and uses your Cloud Run application as the push endpoint.
57
Q

Deploying a containerized application in a new project, with the goal of minimizing costs:

A

Cloud Run is a serverless platform that allows you to run containers without managing the underlying infrastructure. It automatically scales your application based on incoming traffic, ensuring that you only pay for the resources consumed during request processing. Since the application receives very few requests per day, Cloud Run’s pay-as-you-go pricing model makes it cost-effective, as you are billed only for the CPU and memory resources used during request handling. Additionally, Cloud Run has a generous free tier, which can accommodate low-traffic applications without incurring any costs.

App Engine Flexible Environment allows you to deploy containerized applications and automatically manages the underlying infrastructure. However, it may not be the most cost-effective option for a low-traffic application compared to Cloud Run. App Engine Flexible instances are billed based on instance hours, which can lead to higher costs if the application receives very few requests per day.

58
Q

To address the question regarding the correct approach for granting the support team the necessary permissions to monitor the environment in Google Cloud Platform without accessing table data, specifically when using Cloud Spanner, let’s analyze each option individually.

A

Add the support team group to the roles/monitoring.viewer role.

This is the correct approach because the roles/monitoring.viewer role grants access to view monitoring data for all Google Cloud services, which is exactly what’s needed for the support team. This role allows the team to monitor the operational health of the environment without granting them access to the actual data within the tables. It aligns with Google’s best practices for least privilege, ensuring team members have only the permissions necessary for their role.

59
Q

To consolidate all GCP costs of both organizations onto a single invoice effectively and as of tomorrow, we need to select an option that allows for immediate and straightforward consolidation without unnecessary complexity. Let’s evaluate the options:

A

Link the acquired company’s projects to your company’s billing account.

This option is the most direct and efficient way to consolidate billing across the two companies. By linking the acquired company’s projects to your existing company’s billing account, all charges would be consolidated onto a single invoice. This action can be taken relatively quickly, making it feasible to start consolidating costs by tomorrow. It does not require the migration of projects between organizations, which can be a complex and time-consuming process. Additionally, this approach maintains the existing project and organization structures, minimizing disruption.

60
Q

To send all logs from Compute Engine instances to a BigQuery dataset called platform-logs efficiently and cost-effectively, let’s evaluate the options provided:

A

  1. In Stackdriver Logging (now called Google Cloud Logging), create a filter to view only Compute Engine logs.
  2. Click Create Export.
  3. Choose BigQuery as Sink Service, and the platform-logs dataset as Sink Destination.

This option directly leverages Google Cloud’s native logging capabilities to filter and export logs from Compute Engine instances to BigQuery. By creating a logs export in Cloud Logging with a specific filter for Compute Engine logs and setting BigQuery as the sink, all logs are automatically forwarded to the specified dataset. This approach minimizes cost by eliminating the need for intermediate services or manual processing. It is efficient, leveraging Google Cloud’s built-in mechanisms for logs management and analysis, and aligns with best practices for log analysis workflows.
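The same sink can be created from the CLI — a sketch with a hypothetical project ID; note that BigQuery dataset IDs only allow letters, numbers, and underscores, so the dataset is written as platform_logs:

```shell
# Route all Compute Engine instance logs to a BigQuery dataset.
gcloud logging sinks create compute-to-bq \
    bigquery.googleapis.com/projects/my-project/datasets/platform_logs \
    --log-filter='resource.type="gce_instance"'
```

After creation, the sink's service account must be granted write access on the dataset for logs to flow.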

61
Q

What is a DaemonSet?

A

DaemonSet is a Kubernetes feature that lets you run a Kubernetes pod on all cluster nodes that meet certain criteria. Every time a new node is added to a cluster, the pod is added to it, and when a node is removed from the cluster, the pod is removed.

62
Q

What is a kube-system namespace?

A

A Kubernetes namespace provides the scope for Pods, Services, and Deployments in the cluster; users interacting with one namespace do not see the content in another namespace. The kube-system namespace is the namespace for objects created by the Kubernetes system itself, such as cluster add-ons and control-plane components.

63
Q

To address the question regarding the deployment of a DaemonSet in the kube-system namespace of a Google Kubernetes Engine cluster using Deployment Manager, let’s evaluate each option:

A

Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet. This option leverages Deployment Manager’s capability to interact with the Kubernetes Engine cluster’s API directly, allowing for the creation of resources within the cluster. By adding the cluster’s API as a Type Provider in Deployment Manager, you can define the DaemonSet resource within the same deployment configuration used for creating the cluster. This approach ensures simplicity and consistency by keeping all resource definitions centralized within Deployment Manager, minimizing the need for additional services or manual interventions.

64
Q

To enable authentication for your on-premises application using Google Cloud Platform services like AutoML, it is essential to securely manage and utilize service account credentials. Here’s the most suitable approach based on the options provided:

A

Use gcloud to create a key file for the service account that has appropriate permissions.

This approach is the standard practice for authenticating on-premises applications with Google Cloud services. By using the gcloud command-line tool to create a key file for the service account, you generate a JSON key file that your application can use to authenticate API requests. This method ensures that your on-premises application has the necessary permissions to access Google Cloud services like AutoML, while keeping the authentication process secure. The key file should be stored and used securely to prevent unauthorized access.
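A sketch of the full flow, with placeholder project, account, and file names; the AutoML Predictor role is shown as an example of an "appropriate permission":

```shell
# Create a dedicated service account for the on-prem application.
gcloud iam service-accounts create automl-caller \
  --display-name="On-prem AutoML caller"

# Grant it only the role the application needs (example role).
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:automl-caller@my-project.iam.gserviceaccount.com" \
  --role="roles/automl.predictor"

# Generate a JSON key file to copy to the on-prem environment.
gcloud iam service-accounts keys create key.json \
  --iam-account=automl-caller@my-project.iam.gserviceaccount.com

# The application finds the key through this environment variable.
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json
```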

65
Q

To address the requirement of allowing Google Kubernetes Engine (GKE) cluster nodes to download container images from Container Registry stored in a separate project, let’s examine each option.

A

In the project where the images are stored, grant the Storage Object Viewer IAM role to the service account used by the Kubernetes nodes.

This option grants the necessary permissions to the service account used by the Kubernetes nodes to access objects (container images) stored in Cloud Storage within the project where the images are stored. By assigning the Storage Object Viewer IAM role, the Kubernetes nodes’ service account gains read access to the container images in Container Registry, allowing them to pull the images during pod deployment in the GKE cluster. This approach follows the principle of least privilege by granting only the required permissions for accessing container images.
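A sketch of the grant, assuming the nodes run as the Compute Engine default service account and the images live in a project called images-project:

```shell
# Find the service account the GKE nodes run as.
gcloud container clusters describe my-cluster --zone us-central1-a \
  --format='value(nodeConfig.serviceAccount)'

# In the project that hosts the images, grant that account read access.
gcloud projects add-iam-policy-binding images-project \
  --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```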

66
Q

To address the requirement of setting up a Windows VM on Compute Engine and enabling Remote Desktop Protocol (RDP) access, let’s examine each option:

A
  • Option B: After the VM has been created, use gcloud compute reset-windows-password to retrieve the login credentials for the VM.
    • Why Correct: This option allows you to reset the password for the Windows VM using the gcloud compute reset-windows-password command. When you create a Windows VM on Compute Engine, a random password is generated for the initial login. Using this command, you can securely retrieve the automatically generated password without compromising security. This approach ensures that you have the necessary credentials to log in to the Windows VM via RDP after it has been provisioned.
67
Q

To address the requirement of configuring SSH access to a single Compute Engine instance for users in the dev1 group, let’s examine each option:

A
  • Option A: Set metadata to enable-oslogin=true for the instance. Grant the dev1 group the compute.osLogin role. Direct them to use the Cloud Shell to SSH to that instance.
    • Why Correct: This option leverages Google Cloud’s OS Login feature, which allows users to authenticate using their Google Cloud credentials rather than SSH keys. By setting the instance metadata to enable-oslogin=true, you enable OS Login for the instance. Granting the dev1 group the compute.osLogin role ensures that members of the dev1 group have permission to use OS Login on the instance. Users can then SSH to the instance using the Cloud Shell or any SSH client that supports OS Login.
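The two steps can be sketched as follows, with placeholder instance, project, and group names:

```shell
# Enable OS Login on the single instance via instance metadata.
gcloud compute instances add-metadata dev-instance \
  --zone us-central1-a \
  --metadata enable-oslogin=TRUE

# Grant the dev1 group permission to log in to instances in the project.
gcloud projects add-iam-policy-binding my-project \
  --member="group:dev1@example.com" \
  --role="roles/compute.osLogin"
```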
68
Q

How do you list the enabled Google Cloud Platform (GCP) APIs for a project using the gcloud command line?

A
  • Option A: Run gcloud projects list to get the project ID, and then run gcloud services list --project <project ID>.
    • Why Correct: The gcloud projects list command lists all the projects available to your current account. Once you have identified the correct project ID, you can use the gcloud services list --project <project ID> command to list all the enabled services and APIs for the specified project. It’s a straightforward method for retrieving the information needed without any additional configuration steps.
69
Q

To address the need for an automated process to list all compute instances in both development and production projects on a daily basis, let’s examine each option:

A
  • Option A: Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
  • Why Correct: This option involves using gcloud, the command-line interface for Google Cloud Platform, to set up configurations for both development and production projects. By writing a script to switch between these configurations and execute the gcloud compute instances list command for each, you can retrieve a list of compute instances from both projects. This approach is straightforward, scriptable, and aligns well with the requirement of automating the process to list compute instances in multiple projects.
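A minimal version of such a script, assuming configurations named dev and prod were created beforehand with gcloud config configurations create and pointed at the respective projects; scheduling it daily (for example via cron) is left out:

```shell
#!/bin/bash
# Iterate over the two named configurations and list instances in each.
for cfg in dev prod; do
  gcloud config configurations activate "$cfg"
  echo "Instances in configuration: $cfg"
  gcloud compute instances list
done
```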
70
Q

To enable bursting to Google Cloud while ensuring direct communication between workloads on Google Cloud and on-premises infrastructure using a private IP range, let’s analyze each option:

A
  • Option D: Set up Cloud VPN between the infrastructure on-premises and Google Cloud.
    • Why Correct: Cloud VPN establishes a secure, encrypted connection between on-premises networks and Google Cloud Virtual Private Cloud (VPC) networks, allowing for private communication using private IP ranges. With Cloud VPN, you can extend your on-premises network to Google Cloud, enabling direct communication between workloads without exposing them to the public internet.
71
Q

What is dataproc?

A

Dataproc is a managed Spark and Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc automation helps you create clusters quickly, manage them easily, and save money by turning clusters off when you don’t need them.

72
Q

Advantages of creating a project using Cloud SDK

A

Creating a new project using the Cloud SDK allows for programmatic and consistent project creation, which is suitable for automated or scripted workflows.

Enabling the required API explicitly in the new project ensures that the necessary services are available for the instance creation process.

Specifying the new project during instance creation ensures that the instance is created within the intended project context.

73
Q

What is a Preemptible VM instance?

A

Preemptible VM instances are available at a much lower price—a 60–91% discount—compared to the price of standard VMs. However, Compute Engine might stop (preempt) these instances if it needs to reclaim the compute capacity for allocation to other VMs.

Since the batch process can be restarted if interrupted and runs in an offline mode, a Preemptible VM is suitable for this scenario. Preemptible VMs offer significant cost savings compared to standard Compute Engine VMs, making them an ideal choice for intermittent, non-critical batch processing tasks like the one described. The ability to restart the task if it’s interrupted aligns well with the nature of preemptible instances, which Google may terminate if it requires access to those resources.
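Creating such an instance is a one-flag change; the instance and machine-type names below are placeholders (newer gcloud versions express the same thing as --provisioning-model=SPOT):

```shell
gcloud compute instances create batch-worker \
  --zone us-central1-a \
  --machine-type e2-standard-4 \
  --preemptible
```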

74
Q

What is cloud bigtable?

A
  • Cloud Bigtable is a fully managed, scalable NoSQL database service designed for high throughput and low-latency applications.
  • By ingesting the data into Cloud Bigtable, you can efficiently handle high volumes of events per hour per device.
  • Creating a row key based on the event timestamp allows for efficient and atomic retrieval of data based on the time of the event.
    • Cloud Bigtable is optimized for time-series data storage and retrieval, making it well-suited for scenarios where event timestamps are critical for data analysis and billing purposes.
    • With Cloud Bigtable, you can achieve the required atomicity for storing and retrieving individual signals efficiently.
    • Cloud Bigtable offers scalability, reliability, and consistent performance, making it suitable for handling the demands of a large-scale construction equipment rental business.
75
Q

For setting up application performance monitoring (APM) across multiple Google Cloud projects A, B, and C, while monitoring CPU, memory, and disk metrics, the most suitable option is:

A
  • Creating a workspace under project A in an APM tool such as Stackdriver or Cloud Monitoring allows you to consolidate monitoring data from multiple projects into a single pane of glass.
    • Enabling the API is a prerequisite for accessing and collecting monitoring data from the projects.
    • By creating a workspace under project A and adding projects B and C to it, you can centralize monitoring and view metrics for CPU, memory, and disk across all projects seamlessly.
    • This approach provides a unified view of performance metrics, making it easier to analyze and troubleshoot issues across multiple projects.
    • It allows for efficient management and monitoring of resources without the need to switch between different project contexts.
76
Q

To ensure that no public Internet traffic can be routed to the new Compute Engine instance while it’s connected to your WAN over a Virtual Private Network (VPN), the most appropriate option is:

A
  • By creating the instance without a public IP address, you prevent it from being accessible from the public Internet.
    • Instances without public IP addresses can only communicate within the VPC network or through the VPN connection to your on-premises network, ensuring that external Internet traffic cannot reach them.
    • This approach aligns with the principle of least privilege, as it restricts access to the instance to only the necessary communication channels.
77
Q

To share proposed infrastructure changes with your team while following Google’s recommended best practices, the most appropriate option is:

A
  • Option B: Use Deployment Manager templates to describe the proposed changes and store them in Cloud Source Repositories.
  • Why Correct:
    • Deployment Manager templates are ideal for describing infrastructure changes in a declarative format, providing a clear and reproducible way to manage infrastructure as code.
    • Storing the Deployment Manager templates in Cloud Source Repositories enables version control, collaboration, and tracking changes over time.
    • Cloud Source Repositories provides Git repositories that integrate seamlessly with popular version control workflows and tools, facilitating team collaboration and code reviews.
    • Using Deployment Manager with Cloud Source Repositories aligns with Google’s recommended best practices for managing infrastructure changes in a scalable, collaborative, and version-controlled manner.
78
Q

The optimal approach to deploy additional pods requiring n2-highmem-16 nodes without downtime in Google Kubernetes Engine (GKE) is:

A
  • Option B: Create a new Node Pool and specify machine type n2-highmem-16. Deploy the new pods.
  • Why Correct:
    • Creating a new Node Pool allows you to add nodes with the desired machine type (n2-highmem-16) to the existing cluster without disrupting the running pods.
    • By deploying the new pods to the new Node Pool, you ensure that they utilize the desired machine type without affecting the existing pods running on the current node pool.
    • This approach enables seamless scalability and resource allocation for different workload requirements without downtime or disruption to the existing application.
    • After verifying that the new pods are running successfully on the new node pool, you can safely decommission the old node pool if no longer needed.
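A sketch of the node-pool creation, with illustrative cluster and pool names; the new pods can then be steered to the pool with a nodeSelector on the cloud.google.com/gke-nodepool label:

```shell
# Add a node pool with the larger machine type to the existing cluster.
gcloud container node-pools create highmem-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --machine-type n2-highmem-16 \
  --num-nodes 3
```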
79
Q

The most efficient way to join data from Cloud Spanner and Cloud Bigtable for specific users for an ad hoc request is:

A
  • Option D: Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
  • Why Correct:
    • BigQuery provides seamless integration with Cloud Storage and Cloud Bigtable, allowing you to create external tables that reference data stored in these services without the need for data movement or replication.
    • By creating external tables in BigQuery for both Cloud Storage and Cloud Bigtable, you can query the data directly from these sources without the overhead of copying or transferring data.
    • You can perform efficient joins between the two external tables based on user fields using SQL queries in BigQuery, enabling you to combine data from Cloud Spanner and Cloud Bigtable for specific users.
    • Applying filters in the SQL query allows you to retrieve only the relevant data, optimizing query performance and reducing processing overhead.
    • This approach minimizes data movement, reduces latency, and provides flexibility in querying and analyzing data across different Google Cloud services.
80
Q

To adjust the design of an application hosted on Compute Engine VMs in us-central1-a for zonal failure tolerance, reduce downtime, and minimize costs, let’s evaluate the provided options:

A
  • Option A: Create Compute Engine resources in us-central1-b, and balance the load across both us-central1-a and us-central1-b.
  • Why Correct: Creating Compute Engine instances in an additional zone (us-central1-b) and using load balancing across instances in both zones (us-central1-a and us-central1-b) provides high availability by distributing the risk between two different zones. If one zone fails, the other can continue to serve the application, thereby eliminating downtime due to a single zone failure. Load balancing also allows for seamless traffic management between zones, which helps maintain application uptime in the event of a zone disruption. This approach is cost-effective because it doesn’t require provisioning resources in multiple regions and can be done within the same region, minimizing network egress costs.
81
Q

To review who has been granted the Project Owner role in a Google Cloud Platform project, you should:

A
  • Option D: Use the command gcloud projects get-iam-policy to view the current role assignments.
  • Why Correct:
    • The gcloud projects get-iam-policy command allows you to retrieve the IAM policy for a project, which includes information about who has been granted the Project Owner role.
    • By using this command, you can directly query the IAM policy and view the current role assignments, including Project Owner role assignments, in a structured format.
    • This approach provides a straightforward and efficient way to review permissions without the need for navigating through the console or enabling audit logs.
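The output can be narrowed to just the owners; this sketch assumes a project named my-project:

```shell
# List only the members holding the Project Owner role.
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/owner" \
  --format="value(bindings.members)"
```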
82
Q

To address the question regarding managing IP address exhaustion in a subnet with multiple VPC-native Google Kubernetes Engine (GKE) clusters, let’s evaluate each option:

A
  • Option B: Add an alias IP range to the subnet used by the GKE clusters.
    • Why Correct: When the IP addresses in a subnet are exhausted, adding an alias IP range allows for the creation of additional IP addresses within the same subnet. By adding an alias IP range, you can expand the available IP address pool without creating new subnets or VPCs, ensuring that the existing GKE clusters can grow in nodes when needed. This solution minimizes management overhead and maintains the simplicity of managing resources within the same network configuration.
83
Q

To address the question about configuring egress ports in a new VPC behind a firewall while minimizing open ports, let’s evaluate each option:

A
  • Option A: Set up a low-priority (65534) rule that blocks all egress and a high-priority rule (1000) that allows only the appropriate ports.
    • Why Correct: This option implements the principle of least privilege by default, blocking all egress traffic with a low-priority rule and only allowing necessary egress ports with a high-priority rule. By blocking all egress traffic initially, you reduce the attack surface and potential for unauthorized data egress. Then, by allowing only specific ports with a higher priority rule, you ensure that only essential outbound connections are permitted, minimizing security risks.
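The rule pair can be sketched like this, assuming a network named my-vpc and taking TCP 443 as an example of an "appropriate port":

```shell
# Default-deny all egress at the lowest priority.
gcloud compute firewall-rules create deny-all-egress \
  --network my-vpc \
  --direction EGRESS \
  --action DENY \
  --rules all \
  --destination-ranges 0.0.0.0/0 \
  --priority 65534

# Allow only the required egress ports at a higher priority.
gcloud compute firewall-rules create allow-required-egress \
  --network my-vpc \
  --direction EGRESS \
  --action ALLOW \
  --rules tcp:443 \
  --destination-ranges 0.0.0.0/0 \
  --priority 1000
```

Lower priority numbers win in VPC firewalls, so the 1000-priority allow rule is evaluated before the 65534 catch-all deny.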
84
Q

To ensure that users can query datasets in BigQuery without the risk of accidentally deleting them, the focus should be on assigning the appropriate roles or creating custom roles that align with these specific permissions. Let’s evaluate the options:

A
  • Option C: Create a custom role by removing delete permissions, and add users to that role only.
    • Why Correct: This approach is the most flexible and secure way to precisely control access permissions, adhering to the principle of least privilege. By creating a custom role and explicitly removing delete permissions (such as bigquery.datasets.delete), you can tailor the permissions to allow querying of datasets without the ability to delete them. This method ensures that users have only the permissions they need to perform their tasks, minimizing the risk of accidental or unauthorized actions. It follows Google-recommended practices for managing access control by customizing roles to meet specific organizational needs and security policies.
85
Q

To test your application locally on your laptop with Cloud Datastore, follow these steps:

A
  • Option D: Install the cloud-datastore-emulator component using the gcloud components install command.
    • Why Correct: The Cloud SDK provides a Datastore emulator that allows you to run Datastore locally on your laptop for testing purposes. Installing the cloud-datastore-emulator component using the gcloud components install command will set up the emulator environment on your machine. This emulator replicates the Cloud Datastore environment, enabling you to develop and test your application locally without interacting with the actual Cloud Datastore service. It provides a convenient way to validate your application’s behavior without incurring costs or affecting the production environment.
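Installing and wiring up the emulator takes three commands; env-init exports the environment variables (such as DATASTORE_EMULATOR_HOST) that redirect client libraries to the local instance:

```shell
# Install and start the local Datastore emulator.
gcloud components install cloud-datastore-emulator
gcloud beta emulators datastore start &

# Point the application at the emulator instead of the live service.
$(gcloud beta emulators datastore env-init)
```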
86
Q

What is Datastore?

A

Datastore is a schemaless database, which allows you to worry less about making changes to your underlying data structure as your application evolves. Datastore provides a powerful query engine that allows you to search for data across multiple properties and sort as needed.

87
Q

To address the question of how to monitor spending across multiple sandbox projects in Google Cloud Platform (GCP) and receive notifications if any individual project exceeds $500 per month, let’s evaluate each option:

A
  • Option C: Create a budget per project and configure budget alerts on all of these budgets.
    • Why Correct: This option allows for granular monitoring and notification of spending for each individual developer’s sandbox project. By creating a budget specifically for each project and configuring budget alerts accordingly, you can receive notifications when any project exceeds the specified spending threshold, in this case, $500 per month. This approach ensures that you can promptly identify and address any excessive spending at the project level while providing autonomy to individual developers in managing their budgets.
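A sketch of scripting one budget per project; the billing account ID and project IDs are placeholders, and the threshold rule fires an alert at 100% of the $500 amount:

```shell
# One budget and alert per sandbox project.
for project in sandbox-alice sandbox-bob; do
  gcloud billing budgets create \
    --billing-account=0X0X0X-0X0X0X-0X0X0X \
    --display-name="$project monthly budget" \
    --budget-amount=500USD \
    --filter-projects="projects/$project" \
    --threshold-rule=percent=1.0
done
```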
88
Q

What is GCP Pub/Sub?

A

Google Cloud Pub/Sub is a fully managed, scalable, global, and secure messaging service that allows you to send and receive messages among applications and services. You can use Cloud Pub/Sub to integrate decoupled systems and components hosted on Google Cloud Platform or elsewhere on the Internet.

89
Q

What is Cloud SQL and what are its properties?

A
  • Option B: Cloud SQL
    • Why Correct: Cloud SQL is Google Cloud’s fully managed relational database service that supports PostgreSQL, among other database engines like MySQL and SQL Server. Since the first version of the application is implemented in PostgreSQL, Cloud SQL would allow for a straightforward migration with minimal code changes due to its compatibility with PostgreSQL. Cloud SQL supports ACID transactions, strong consistency, and is optimized for transactional workloads with relational data models, making it the most appropriate choice for the application’s requirements. Additionally, Cloud SQL provides the performance needed for fast queries and transactional updates, ensuring that the application’s performance and consistency requirements are met.
90
Q

To deploy a Docker image as a workload on Google Kubernetes Engine (GKE), let’s evaluate each option:

A
  • Option D: Upload the image to Container Registry and create a Kubernetes Deployment referencing the image.
    • Why Correct: Google Cloud’s Container Registry is designed specifically for storing Docker container images and integrates seamlessly with GKE. By uploading the Docker image to Container Registry, you ensure that it’s easily accessible and managed within Google Cloud Platform. Creating a Kubernetes Deployment referencing the image allows you to define the desired state for the application, such as the number of replicas and resource requirements, and Kubernetes will manage the deployment and scaling of the application pods based on the specified configuration.
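A sketch of the push-and-deploy flow, with placeholder image and project names (on newer projects, Artifact Registry paths of the form REGION-docker.pkg.dev would replace gcr.io):

```shell
# Tag and push the local image to Container Registry.
docker tag my-app gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1

# Create a Kubernetes Deployment referencing the pushed image.
kubectl create deployment my-app --image=gcr.io/my-project/my-app:v1
```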
91
Q

To visualize CPU and network metrics for all three Google Cloud projects together while following Google-recommended practices, let’s evaluate each option:

A
  • Option B:
    1. Create a Cloud Monitoring Dashboard.
    2. Select the CPU and Network metrics from the three projects.
    3. Add CPU and network charts for each of the three projects.
    • Why Correct: This option aligns with Google’s recommended practices for visualizing metrics across multiple projects using Cloud Monitoring. By creating a single dashboard and selecting metrics from all three projects, you can aggregate and visualize CPU and network metrics together, providing a comprehensive view of resource usage across the projects. This approach is efficient and allows for centralized monitoring without the need for additional projects or complex setups.
92
Q

To implement the policy of requiring users to log into Google Cloud with their Active Directory (AD) identity while following Google-recommended practices, let’s evaluate each option:

A
  • Option A: Sync Identities with Cloud Directory Sync, and then enable SAML for single sign-on.
    • Why Correct: This option aligns with Google-recommended practices for integrating Active Directory with Google Cloud. Google Cloud Directory Sync (formerly known as Google Apps Directory Sync) allows for the synchronization of user identities from on-premises Active Directory to Google Cloud. Enabling SAML (Security Assertion Markup Language) for single sign-on (SSO) ensures that users can authenticate with their AD credentials and access Google Cloud services seamlessly, maintaining security and compliance.
93
Q

To immediately change the storage class of an existing Google Cloud bucket to reduce service costs for infrequently accessed files and for all future files, let’s evaluate each option:

A
  • Option B: Use the gsutil to rewrite the storage class for the bucket. Set up Object Lifecycle management on the bucket.
    • Why Correct: This option allows you to immediately change the storage class of the existing bucket using gsutil, which is the Google Cloud command-line tool. By rewriting the storage class, you can efficiently transition infrequently accessed files to a lower-cost storage class. Additionally, setting up Object Lifecycle management enables automated rules for transitioning objects to different storage classes based on defined criteria, such as age or frequency of access. This combination ensures cost-effective storage management for both existing and future files in the bucket.
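A sketch of both steps, using a placeholder bucket name and Nearline as the example target class with a 30-day age rule:

```shell
# Change the storage class of all existing objects in place.
gsutil -m rewrite -s nearline gs://my-bucket/**

# Lifecycle rule: move future objects to Nearline 30 days after creation.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-bucket
```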
94
Q

To set up the billing configuration for a new Google Cloud customer who wants to group resources based on common IAM policies, let’s evaluate each option:

A
  • Option B: Use folders to group resources that share common IAM policies.
    • Why Correct: Google Cloud’s folder hierarchy allows you to organize and manage resources hierarchically, which aligns well with the requirement to group resources based on common IAM policies. Folders enable you to apply IAM policies at the folder level, allowing you to efficiently manage permissions for multiple resources within the same organizational unit. This approach provides a structured and scalable way to manage billing and access control based on organizational requirements.
95
Q

To deploy a managed MongoDB environment on Google Kubernetes Engine (GKE) with a support SLA, let’s evaluate each option:

A
  • Option B: Deploy MongoDB Atlas from the Google Cloud Marketplace
    • Why Correct: MongoDB Atlas is a fully managed MongoDB service that provides support SLAs, automated backups, monitoring, and scalability features. Deploying MongoDB Atlas from the Google Cloud Marketplace ensures seamless integration with Google Cloud Platform (GCP) and simplifies the management of MongoDB clusters. This option aligns with the requirement for a managed MongoDB environment with support SLA.
96
Q

To address the question about adding new users to Cloud Identity while avoiding conflicting accounts, let’s examine each option individually:

A
  • Option A: Invite the user to transfer their existing account
    • Why Correct: This option aligns with Google’s recommended practice of allowing users to transfer their existing Google accounts to Cloud Identity. By transferring their accounts, users can retain their existing data, settings, and preferences while seamlessly integrating with the organization’s Cloud Identity domain. It ensures continuity for users and simplifies the account management process.
97
Q

To address the question about preparing to create a Cloud Spanner instance for a globally distributed application, let’s examine each option individually:

A
  • Option D: Enable the Cloud Spanner API
    • Why Correct: Before creating a Cloud Spanner instance, you need to ensure that the Cloud Spanner API is enabled in your project. Enabling the API allows you to interact with Cloud Spanner programmatically and perform tasks such as creating and managing instances. This step is essential as it enables the necessary functionality for provisioning and configuring Cloud Spanner resources.
98
Q

To address the question about ensuring that future CLI commands by default address a specific Google Kubernetes Engine (GKE) cluster, let’s examine each option individually:

A
  • Option A: Use the command gcloud config set container/cluster dev.
    • Why Correct: This command sets the default GKE cluster for future CLI commands to the cluster named “dev”. By using gcloud config set, you can specify configuration properties for the Google Cloud CLI, including the default GKE cluster. This ensures that all subsequent commands targeting GKE will apply to the specified cluster by default.
99
Q

To address the question about specifying the service account each Compute Engine instance uses when calling Google Cloud APIs, let’s examine each option individually:

A
  • Option A: When creating the instances, specify a Service Account for each instance
    • Why Correct: This option allows you to explicitly define the service account that each Compute Engine instance will use when interacting with Google Cloud APIs during the instance creation process. By specifying the service account at creation time, you ensure that the instance is provisioned with the correct identity and permissions from the outset.
100
Q

To address the question about connecting applications in Google Cloud to an on-premises database server without needing to change the IP configuration in all applications when the database IP changes, let’s examine each option individually:

A
  • Option B: Create a private zone on Cloud DNS, and configure the applications with the DNS name.
    • Why Correct: This option involves creating a private DNS zone in Cloud DNS specifically for the on-premises database server. By assigning a DNS name to the database server, applications in Google Cloud can reference the database using the DNS name instead of the IP address. This approach abstracts the underlying IP configuration, allowing the database IP to change without affecting the applications. When the database IP changes, you only need to update the DNS record in Cloud DNS, ensuring seamless connectivity between the applications and the database server.
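A sketch of the zone and record setup; the zone name, DNS name, network, and IP address are all placeholders:

```shell
# Private zone visible only to the given VPC network.
gcloud dns managed-zones create onprem-zone \
  --dns-name="corp.internal." \
  --visibility=private \
  --networks=my-vpc \
  --description="Private zone for on-prem services"

# A record for the on-prem database; when the database IP changes,
# only this record needs updating, not the applications.
gcloud dns record-sets create db.corp.internal. \
  --zone=onprem-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=10.10.0.25
```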
101
Q

What is an external load balancer designed for?

A

The External Network Load Balancer in Google Cloud is designed to handle both TCP and UDP traffic, making it suitable for applications that require direct access via an IP address from the Internet, like a multiplayer mobile game using UDP for communication. This type of load balancer allows you to scale your application across multiple VMs while presenting a single frontend IP address to the clients. It is ideal for scenarios where low latency and high performance are critical, as it operates at the network layer (Layer 4) and provides simple pass-through of packets without modifying them, ensuring efficient delivery of UDP packets for gaming purposes.

102
Q

Best practice in deploying containerized applications:

A
  • Option D: Create and deploy a Deployment per microservice.
    • Why Correct: Deploying a Deployment resource for each microservice is a common and recommended approach in Kubernetes for managing containerized applications. Deployments provide declarative updates, rollbacks, and scaling of applications. By creating a Deployment for each microservice, you can easily manage the lifecycle of each service, including scaling them individually based on demand. Deployments also support advanced features such as rolling updates, health checks, and resource constraints, making them well-suited for microservices architecture.
103
Q
A