PCA Review Deck Flashcards

1
Q

How can you expand the set of metrics that a project can access by adding other Google Cloud projects?

A

By default, a Google Cloud project has visibility only to the metrics it stores. However, you can expand the set of metrics that a project can access by adding other Google Cloud projects to the project’s metrics scope. The metrics scope defines the set of Google Cloud projects whose metrics the current Google Cloud project can access.

2
Q

What are the best practices for scoping projects when you have multiple projects you want to monitor?

A

We recommend that you use a new Cloud project or one without resources as the scoping project when you want to view metrics for multiple Cloud projects or AWS accounts.

When a metrics scope contains monitored projects, to chart or monitor only those metrics stored in the scoping project, you must specify filters that exclude metrics from the monitored projects. The requirement to use filters increases the complexity of charts and alerting policies, and it increases the possibility of a configuration error. The recommendation ensures that these scoping projects don’t generate metrics, so there are no metrics in the projects to chart or monitor.

The previous example follows our recommendation. The scoping project, AllEnvironments, was created and then the Staging and Production projects were added as monitored projects. To view or monitor the combined metrics for all projects, you use the metrics scope for the AllEnvironments project. To view or monitor the metrics stored in the Staging project, you use the metrics scope for that project.

3
Q

How do you organize Cloud Operations Workspaces by environment?

A

Organizing by environment means that Workspaces are aligned to environments such as development, staging, and production. In this case, projects are included in separate Workspaces based on their function in the environment. For example, splitting the projects along development and staging/production environments would result in two Workspaces: one for development and one for staging/production.

4
Q

What is a metric?

A

Operations Suite supports creating alerts based on predefined metrics.

A metric is a defined measurement of a resource, collected at regular intervals. Metrics apply mathematical aggregations to raw measurements.
Examples of aggregations available through Operations Suite (formerly the Stackdriver API) include maximum, minimum, mean, and sum. Each of these aggregations might be applied to CPU utilization, memory usage, or network activity.

5
Q

What is a cloud monitoring workspace?

A

A Cloud Monitoring workspace in Google Cloud is a central place where you can monitor and manage metrics, dashboards, logs, uptime checks, alerts, and other observability data across one or more Google Cloud projects or AWS accounts.

Key Features:
Centralized monitoring: You can view and analyze metrics from multiple projects or AWS accounts in one place.

Dashboards: Create custom dashboards to visualize metrics.

Alerting policies: Define conditions to trigger alerts and route notifications.

Logs-based metrics: Create metrics from log data using Cloud Logging.

Uptime checks & SLOs: Monitor service availability and define service level objectives (SLOs).

Integration with Cloud Logging and Cloud Trace.

6
Q

What are the rules regarding provisioning a workspace?

A

A Workspace can manage and monitor data for one or more GCP projects.

A project, however, can only be associated with a single Workspace.

Before you create a new Workspace, you need to identify who in the organization holds one of the following roles in a given project:

*Monitoring Editor

*Monitoring Admin

*Project Owner

7
Q

What are the GCP best practices for workspaces when you have to monitor multiple projects?

A

✅ Best Practices for GCP Monitoring Workspaces

1. Use a Dedicated Monitoring Project (Scoping Project)
Create a separate project just for the Monitoring workspace (e.g., monitoring-central).
This keeps monitoring resources isolated from your workloads and simplifies IAM permissions.

2. Link Monitored Projects to the Central Workspace
Add all workload-related GCP projects as monitored projects in the workspace.
You can monitor up to 100 projects per workspace.

3. Apply the Principle of Least Privilege
Grant read-only roles (e.g., roles/monitoring.viewer) to users who only need dashboard or alert access.
Use roles/monitoring.admin carefully; it’s powerful.
Limit write access to alerts, dashboards, and notification channels.

4. Standardize Monitoring and Alerting Across Projects
Use templates or automation (Terraform, Deployment Manager) to standardize dashboards and alerts.
Define SLOs and uptime checks with consistent naming and structure.

5. Integrate Logging for Context
Combine Cloud Logging with Monitoring by enabling logs-based metrics.
Store logs in centralized logging buckets for analysis and correlation.

6. Use Groups and Resource Labels
Use resource groups or labels (e.g., env=prod, team=payments) to filter and organize resources in dashboards and alerts.
This makes multi-project dashboards easier to manage.

7. Plan for Growth
For large orgs, consider multiple workspaces, e.g., by business unit or environment, with automation to manage consistency.
8
Q

What are the 3 types of zonal/regional clusters?

A

Single-zone clusters
A single-zone cluster has a single control plane running in one zone. This control plane manages workloads on nodes running in the same zone.

Multi-zonal clusters
A multi-zonal cluster has a single replica of the control plane running in a single zone, and has nodes running in multiple zones. During an upgrade of the cluster or an outage of the zone where the control plane runs, workloads still run. However, the cluster, its nodes, and its workloads cannot be configured until the control plane is available. Multi-zonal clusters balance availability and cost for consistent workloads. If you want to maintain availability and the number of your nodes and node pools is changing frequently, consider using a regional cluster.

Regional clusters
A regional cluster has multiple replicas of the control plane, running in multiple zones within a given region. Nodes in a regional cluster can run in multiple zones or a single zone depending on the configured node locations. By default, GKE replicates each node pool across three zones of the control plane’s region. When you create a cluster or when you add a new node pool, you can change the default configuration by specifying the zone(s) in which the cluster’s nodes run. All zones must be within the same region as the control plane.
https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters
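The three cluster types map directly onto flags of gcloud container clusters create. A dry-run sketch (cluster name, region, and zones are placeholders; the commands are only composed and printed, not run):

```shell
# Placeholder cluster name for illustration.
CLUSTER="demo-cluster"

# Single-zone: control plane and nodes live in one zone.
ZONAL="gcloud container clusters create ${CLUSTER} --zone us-central1-a"

# Multi-zonal: one control plane, nodes spread across several zones.
MULTI="gcloud container clusters create ${CLUSTER} --zone us-central1-a --node-locations us-central1-a,us-central1-b,us-central1-c"

# Regional: control plane replicated across the region's zones.
REGIONAL="gcloud container clusters create ${CLUSTER} --region us-central1"

printf '%s\n' "${ZONAL}" "${MULTI}" "${REGIONAL}"
```

Note how the multi-zonal case still anchors the control plane with --zone while --node-locations spreads the nodes; only the regional case uses --region.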

9
Q

What Cloud Storage systems are there for granting users permission to access your buckets and objects?

A

Cloud Storage offers two systems for granting users permission to access your buckets and objects: IAM and Access Control Lists (ACLs). These systems act in parallel - in order for a user to access a Cloud Storage resource, only one of the systems needs to grant the user permission.
IAM - grant permissions at the bucket and project levels.
ACLs - used only by Cloud Storage and have limited permission options, per-object basis.

uniform bucket permissioning system
disables ACLs
Resources granted exclusively through IAM. After you enable uniform bucket-level access, you can reverse your decision for 90 days.
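Enabling (and, within the 90-day window, reversing) uniform bucket-level access can be done with gcloud storage. A dry-run sketch with a placeholder bucket name; the commands are composed and printed, not executed:

```shell
# Placeholder bucket for illustration.
BUCKET="gs://my-example-bucket"

# Turn on uniform bucket-level access, which disables ACLs:
ENABLE="gcloud storage buckets update ${BUCKET} --uniform-bucket-level-access"

# Reverse the decision (allowed for 90 days after enabling):
REVERT="gcloud storage buckets update ${BUCKET} --no-uniform-bucket-level-access"

printf '%s\n' "${ENABLE}" "${REVERT}"
```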

10
Q

What do you need to do to protect your org after you create a billing account and set up projects?
Why?

A

When an organization resource is created, all users in your domain are granted the Billing Account Creator and Project Creator roles by default. These default roles allow your users to start using Google Cloud immediately, but they are not intended for use in regular operation of the organization resource.

Removing default roles from the organization resource:
After you designate your own Billing Account Creator and Project Creator roles, you can remove these default roles from the organization resource to restrict those permissions to specifically designated users.

11
Q

You want to deploy an application to a Kubernetes Engine cluster using a manifest file called my-app.yaml.

What command would you use?

A

kubectl apply -f my-app.yaml
kubectl apply -k dir

Explanation
Part of the app management commands.

The correct answer is to use the “kubectl apply -f” with the name of the deployment file. Deployments are Kubernetes abstractions and are managed using kubectl, not gcloud. The other options are not valid commands. For more information, see https://kubernetes.io/docs/reference/kubectl/overview/.

The command set kubectl apply is used at a terminal’s command-line window to create or modify Kubernetes resources defined in a manifest file. This is called a declarative usage. The state of the resource is declared in the manifest file, then kubectl apply is used to implement that state.

In contrast, the command set kubectl create is the command you use to create a Kubernetes resource directly at the command line. This is an imperative usage. You can also use kubectl create against a manifest file to create a new instance of the resource. However, if the resource already exists, you will get an error.
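The declarative/imperative contrast can be sketched dry-run style (manifest name taken from the question; nothing is actually applied here):

```shell
MANIFEST="my-app.yaml"

# Declarative: create the resource if absent, otherwise reconcile it
# toward the state declared in the manifest. Safe to re-run.
APPLY="kubectl apply -f ${MANIFEST}"

# Imperative: create only; errors out if the resource already exists.
CREATE="kubectl create -f ${MANIFEST}"

printf '%s\n' "${APPLY}" "${CREATE}"
```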

12
Q

Kubernetes Engine collects application logs by default when the log data is written where?

A

app logs: STDOUT and STDERR

In addition to cluster audit logs and logs for the worker nodes, GKE automatically collects application logs written to either STDOUT or STDERR. If you’d prefer not to collect application logs, you can choose to collect only system logs. Collecting system logs is critical for production clusters, as it significantly accelerates the troubleshooting process. No matter how you plan to use logs, GKE and Cloud Logging make it simple: start your cluster, deploy your applications, and your logs appear in Cloud Logging!

13
Q

Where does GKE collect Cluster logs?

A

By default, GKE clusters are natively integrated with Cloud Logging (and Monitoring). When you create a GKE cluster, both Monitoring and Cloud Logging are enabled by default. That means you get a monitoring dashboard specifically tailored for Kubernetes and your logs are sent to Cloud Logging’s dedicated, persistent datastore, and indexed for both searches and visualization in the Cloud Logs Viewer.

If you have an existing cluster with Cloud Logging and Monitoring disabled, you can still enable logging and monitoring for the cluster. That’s important because with Cloud Logging disabled, a GKE-based application temporarily writes logs to the worker node, which may be removed when a pod is removed, or overwritten when log files are rotated. Nor are these logs centrally accessible, making it difficult to troubleshoot your system or application.

14
Q

Where would you view your GKE logs?

A

Cloud Logging, and its companion tool Cloud Monitoring, are full featured products that are both deeply integrated into GKE. In this blog post, we’ll go over how logging works on GKE and some best practices for log collection. Then we’ll go over some common logging use cases, so you can make the most out of the extensive logging functionality built into GKE and Google Cloud Platform.

Cloud Logging console – You can see your logs directly from the Cloud Logging console by using the appropriate logging filters to select the Kubernetes resources such as cluster, node, namespace, pod or container logs. Here are some sample Kubernetes-related queries to help get you started.

GKE console – In the Kubernetes Engine section of the Google Cloud Console, select the Kubernetes resources listed in Workloads, and then the Container or Audit Logs links.

Monitoring console – In the Kubernetes Engine section of the Monitoring console, select the appropriate cluster, nodes, pod or containers to view the associated logs.

gcloud command line tool – Using the gcloud logging read command, select the appropriate cluster, node, pod and container logs.
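The gcloud logging read route can be sketched as follows. The cluster name and filter are placeholders; the command is composed and printed rather than run:

```shell
# Placeholder cluster name for illustration.
CLUSTER="my-cluster"

# Read the newest 10 container log entries for one GKE cluster:
READ_CMD="gcloud logging read 'resource.type=\"k8s_container\" AND resource.labels.cluster_name=\"${CLUSTER}\"' --limit=10"

printf '%s\n' "${READ_CMD}"
```

Swapping resource.type for k8s_node or k8s_cluster narrows the same query to node or cluster-level logs.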

15
Q

What is the difference between Regional and global IP addresses?

A

When you list or describe IP addresses in your project, Google Cloud labels addresses as global or regional, which indicates how a particular address is being used. When you associate an address with a regional resource, such as a VM, Google Cloud labels the address as regional. Regions are Google Cloud regions, such as us-east4 or europe-west2. When you associate an address with a global resource, such as a forwarding rule for a global external HTTP(S) load balancer, Google Cloud labels the address as global.

For more information about global and regional resources, see Global, regional, and zonal resources in the Compute Engine documentation.

16
Q

As a developer using GCP, you will need to set up a local development environment. You will want to authorize the use of gcloud commands to access resources. What commands could you use to authorize access?

A

gcloud init
Explanation
gcloud init will authorize access and perform other common setup steps. gcloud auth login will authorize access only. gcloud login and gcloud config login are not valid commands.

You can also run gcloud init to change your settings or create a new configuration.

gcloud init performs the following setup steps:

Authorizes the gcloud CLI to use your user account credentials to access Google Cloud, or lets you select an account if you have previously authorized access
Sets up a gcloud CLI configuration and sets a base set of properties, including the active account from the step above, the current project, and if applicable, the default Compute Engine region and zone
https://cloud.google.com/sdk/docs/initializing

17
Q

gcloud auth login
Authorize with a user account without setting up a configuration.

A

gcloud auth login [ACCOUNT] [--no-activate] [--brief] [--no-browser] [--cred-file=CRED_FILE] [--enable-gdrive-access] [--force] [--no-launch-browser] [--update-adc] [GCLOUD_WIDE_FLAG ...]

Obtains access credentials for your user account via a web-based authorization flow. When this command completes successfully, it sets the active account in the current configuration to the account specified. If no configuration exists, it creates a configuration named default.
If valid credentials for an account are already available from a prior authorization, the account is set to active without rerunning the flow.

18
Q

You have a Cloud Datastore database that you would like to backup. You’d like to issue a command and have it return immediately while the backup runs in the background. You want the backup file to be stored in a Cloud Storage bucket named my-datastore-backup. What command would you use?

A

gcloud datastore export gs://my-datastore-backup --async

Explanation
The correct command is gcloud datastore export gs://my-datastore-backup --async. Export, not backup, is the datastore command to save data to a Cloud Storage bucket. gsutil is used to manage Cloud Storage, not Cloud Datastore. For more information, see https://cloud.google.com/datastore/docs/export-import-entities.
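The asynchronous export and the follow-up status check can be sketched dry-run style (bucket name from the question; the commands are printed, not executed):

```shell
BUCKET="gs://my-datastore-backup"

# Start the export and return immediately; it runs in the background:
EXPORT_CMD="gcloud datastore export ${BUCKET} --async"

# Later, inspect the background operation's progress:
CHECK_CMD="gcloud datastore operations list"

printf '%s\n' "${EXPORT_CMD}" "${CHECK_CMD}"
```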

19
Q

How do you set up a database for export?

A

Before you begin
Before you can use the managed export and import service, you must complete the following tasks.

Enable billing for your Google Cloud project. Only Google Cloud projects with billing enabled can use the export and import functionality.

Create a Cloud Storage bucket in the same location as your Firestore in Datastore mode database. You cannot use a Requester Pays bucket for export and import operations.

Assign an IAM role to your user account that grants the datastore.databases.export permission, if you are exporting data, or the datastore.databases.import permission, if you are importing data. The Datastore Import Export Admin role, for example, grants both permissions.

If the Cloud Storage bucket is in another project, give your project’s default service account access to the bucket.

20
Q

Authorize with a user account
Use the following gcloud CLI commands to authorize access with a user account:

A

gcloud init: authorizes access and performs other common setup steps.
gcloud auth login: authorizes access only.

During authorization, these commands obtain account credentials from Google Cloud and store them on the local system.
The specified account becomes the active account in your configuration.
The gcloud CLI uses the stored credentials to access Google Cloud. You can have any number of accounts with stored credentials for a single gcloud CLI installation, but only one account is active at a time.
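Switching between multiple credentialed accounts can be sketched dry-run style (the account address is a placeholder; commands are printed, not run):

```shell
# Show all credentialed accounts; the active one is starred:
LIST_CMD="gcloud auth list"

# Make a different (placeholder) account the active one:
SWITCH_CMD="gcloud config set account alice@example.com"

printf '%s\n' "${LIST_CMD}" "${SWITCH_CMD}"
```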

21
Q

A manager in your company is having trouble tracking the use and cost of resources across several projects. In particular, they do not know which resources are created by different teams they manage. What would you suggest the manager use to help better understand which resources are used by which team?

A

Labels are key-value pairs attached to resources and used to manage them. The manager could use a key-value pair with the key ‘team-name’ and the value the name of the team that created the resource. Audit logs do not necessarily have the names of teams that own a resource. Traces are used for performance monitoring and analysis. IAM policies are used to control access to resources, not to track which team created them.
For more information, see
https://cloud.google.com/resource-manager/docs/creating-managing-labels
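The team-name labeling suggestion can be sketched dry-run style (VM name, zone, and team value are placeholders; commands are composed and printed only):

```shell
# Placeholder instance name for illustration.
VM="web-frontend"

# Attach a team-name label to an existing VM:
LABEL_CMD="gcloud compute instances update ${VM} --zone us-central1-a --update-labels team-name=payments"

# Later, list only that team's resources:
FILTER_CMD="gcloud compute instances list --filter='labels.team-name=payments'"

printf '%s\n' "${LABEL_CMD}" "${FILTER_CMD}"
```

The same labels flow through to billing export, so per-team cost breakdowns become a simple group-by on the label key.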

22
Q

You have created a target pool with instances in two zones which are in the same region. The target pool is not functioning correctly. What could be the cause of the problem?

A

The target pool is missing a health check.
Target pools must have a health check to function properly. Nodes can be in different zones but must be in the same region. Cloud Monitoring and Cloud Logging are useful but they are not required for the target pool to function properly. Nodes in a pool have the same configuration. For more information, see https://cloud.google.com/load-balancing/docs/target-pools

In summary, choose Target Pools for simpler, region-specific load balancing tasks with Network Load Balancers, and opt for Instance Groups when your requirements include HTTP(S) routing, global distribution, or autoscaling capabilities. This decision largely hinges on the complexity of your application’s deployment and the specific features of Google Cloud’s load balancing that you intend to leverage.

23
Q

What does an external NLB (target pool-based load balancer) look like?

A

Google Cloud external TCP/UDP Network Load Balancing (after this referred to as Network Load Balancing) is a regional, pass-through load balancer. A network load balancer distributes external traffic among virtual machine (VM) instances in the same region.

You can configure a network load balancer for TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.

A network load balancer can receive traffic from:

Any client on the internet
Google Cloud VMs with external IPs
Google Cloud VMs that have internet access through Cloud NAT or instance-based NAT

24
Q

What is a target pool?

A

Target pools
A target pool resource defines a group of instances that should receive incoming traffic from forwarding rules. When a forwarding rule directs traffic to a target pool, Cloud Load Balancing picks an instance from these target pools based on a hash of the source IP and port and the destination IP and port. Each target pool operates in a single region and distributes traffic to the first network interface (nic0) of the backend instance. For more information about how traffic is distributed to instances, see the Load distribution algorithm section in this topic.

The network load balancers are not proxies. Responses from the backend VMs go directly to the clients, not back through the load balancer. The load balancer preserves the source IP addresses of packets. The destination IP address for incoming packets is the regional external IP address associated with the load balancer’s forwarding rule.

For architecture details, see network load balancer with a target pool backend.
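Building a working target pool (including the health check that card 22 flags as mandatory) can be sketched dry-run style. Pool, check, instance names, and region are placeholders; the commands are printed, not executed:

```shell
REGION="us-central1"

# Target pools use legacy HTTP health checks:
HC_CMD="gcloud compute http-health-checks create basic-check"

# Create the pool in one region and attach the health check:
POOL_CMD="gcloud compute target-pools create my-pool --region ${REGION} --http-health-check basic-check"

# Add backend instances (all must be in the pool's region):
ADD_CMD="gcloud compute target-pools add-instances my-pool --instances vm-a,vm-b --instances-zone ${REGION}-a"

printf '%s\n' "${HC_CMD}" "${POOL_CMD}" "${ADD_CMD}"
```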

25
Q

What are Health checks?

A

Health checks ensure that Compute Engine forwards new connections only to instances that are up and ready to receive them. Compute Engine sends health check requests to each instance at the specified frequency. After an instance exceeds its allowed number of health check failures, it is no longer considered an eligible instance for receiving new traffic.

To allow for graceful shutdown and closure of TCP connections, existing connections are not actively terminated. However, existing connections to an unhealthy backend are not guaranteed to remain viable for long periods of time. If possible, you should begin a graceful shutdown process as soon as possible for your unhealthy backend.

The health checker continues to query unhealthy instances, and returns an instance to the pool when the specified number of successful checks occur. If all instances are marked as UNHEALTHY, the load balancer directs new traffic to all existing instances.

Network Load Balancing relies on legacy HTTP health checks to determine instance health. Even if your service does not use HTTP, you must run a basic web server on each instance that the health check system can query.

Legacy HTTPS health checks aren’t supported for network load balancers and cannot be used with most other types of load balancers.

26
Q

A client has asked for your advice about building a data transformation pipeline. The pipeline will read data from Cloud Storage and Cloud Spanner, merge data from the two sources and write the data to a BigQuery data set. The client does not want to manage servers or other infrastructure, if possible. What GCP service would you recommend?

A

Cloud Data Fusion

Cloud Data Fusion is a managed service that is designed for building data transformation pipelines. https://cloud.google.com/data-fusion/docs/how-to
What is Cloud Data Fusion?

Cloud Data Fusion is a fully managed, cloud-native, enterprise data integration service for quickly building and managing data pipelines.

The Cloud Data Fusion web UI lets you to build scalable data integration solutions to clean, prepare, blend, transfer, and transform data, without having to manage the infrastructure.

Cloud Data Fusion is powered by the open source project CDAP. Throughout this page, there are links to the CDAP documentation site, where you can find more detailed information.

Choosing Dataflow over Data Fusion:
Complex data processing needs: If the pipeline requires complex, custom processing logic or needs to handle a mix of batch and real-time data efficiently, Dataflow is generally the better choice.
Performance considerations: For high-volume, performance-sensitive applications that require extensive scalability and rapid processing, Dataflow's auto-scaling and processing capabilities make it more suitable.
Developer control: If the team has strong programming capabilities and prefers to maintain control over every aspect of the pipeline's behavior and performance, Dataflow's programming model offers more flexibility.
In summary, if your client requires a robust, scalable solution capable of handling complex transformations and real-time processing with precise control over pipeline execution, Google Cloud Dataflow would be the recommended choice over Google Cloud Data Fusion.
27
Q

What is Firewall Rules Logging for?
How do you enable it?

A

Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule designed to deny traffic is functioning as intended. Firewall Rules Logging is also useful if you need to determine how many connections are affected by a given firewall rule.

You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log. Firewall Rules Logging is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule.

Firewall Rules Logging logs traffic to and from Compute Engine virtual machine (VM) instances. This includes Google Cloud products built on Compute Engine VMs, such as Google Kubernetes Engine (GKE) clusters and App Engine flexible environment instances.

When you enable logging for a firewall rule, Google Cloud creates an entry called a connection record each time the rule allows or denies traffic. You can view these records in Cloud Logging, and you can export logs to any destination that Cloud Logging export supports.

Each connection record contains the source and destination IP addresses, the protocol and ports, date and time, and a reference to the firewall rule that applied to the traffic.

Firewall Rules Logging is available for both VPC firewall rules and hierarchical firewall policies.
https://cloud.google.com/vpc/docs/firewall-rules-logging

How to Enable Firewall Rules Logging
Enabling firewall rules logging in GCP involves modifying the configuration of the firewall rules to include logging. Here’s how you can enable logging for firewall rules:

Via Google Cloud Console
Navigate to the VPC Network:

Open the Google Cloud Console.

Go to the “VPC network” section under “Networking”.

Firewall Rules:

Click on “Firewall” to view the list of existing firewall rules.

Edit or Create a Rule:

To edit an existing rule, click on the name of the rule and then click the “EDIT” button at the top of the page.

To create a new rule, click on “CREATE FIREWALL RULE”.

Enable Logging:

Scroll down to the “Logs” section.

Check the box for “Log” to enable logging. You can choose either:

Log all sessions: Logs every session the rule applies to.

Log denied sessions only: Only logs sessions that the firewall rule denied.

Save the Changes:

Click “Save” if editing or “Create” if you are creating a new rule with logging enabled.

Via gcloud Command Line
You can also enable logging for firewall rules using the gcloud command-line tool. Here’s how to update an existing rule to enable logging:

gcloud compute firewall-rules update [FIREWALL_RULE_NAME] --enable-logging --logging-metadata=[LOGGING_OPTION]

Replace [FIREWALL_RULE_NAME] with the name of your firewall rule.

The --logging-metadata option controls how much metadata is included in the logs; it can be set to include-all or exclude-all.

Considerations
Costs: Enabling logging for firewall rules can generate a large volume of data, especially if you log all sessions for high-traffic rules. This can lead to increased costs related to log ingestion and storage.

Performance: While generally minimal, logging can impact the performance of your network, especially under high load or when logging extensively.

By carefully enabling and managing firewall rules logging, you can significantly enhance your network’s security posture and operational transparency.

28
Q

What is the difference between cloud logging and cloud monitoring?

A

Cloud Logging and Cloud Monitoring provide your IT Ops/SRE/DevOps teams with the out-of-the-box observability needed to monitor your infrastructure and applications.
Cloud Logging automatically ingests Google Cloud audit and platform logs so that you can get started right away.

Cloud Logging
Purpose:

Cloud Logging is primarily focused on capturing, storing, and managing log data from Google Cloud services, virtual machines, and applications. It provides a way to view, search, and analyze log data generated by resources running on GCP and even from external sources.

Key Features:

Log Storage: Logs data from GCP resources, including Compute Engine, App Engine, Cloud SQL, and others. It supports custom log data from your applications.

Log Management: Provides tools for managing log data retention, access, and organization. You can create logs-based metrics to monitor log data and trigger alerts.

Log Analysis: Offers powerful querying capabilities through a built-in query language, allowing you to filter, search, and analyze log entries.

Integration with BigQuery: Allows exporting logs to BigQuery for advanced analytics and extended storage.

Cloud Monitoring
Purpose:

Cloud Monitoring focuses on collecting metrics, events, and metadata from Google Cloud services. It provides visualization and alerting tools that help you understand your application’s performance and health.

Key Features:

Metrics Collection: Gathers data across GCP resources and applications. This includes system metrics (like CPU usage, network traffic), custom metrics, and external metrics.

Dashboards and Visualization: Provides customizable dashboards to visualize metrics and understand trends, system behaviors, and performance in real-time.

Alerting: Offers alerting capabilities based on specific conditions in the monitored data. You can set up notifications via email, SMS, or other methods if metrics cross certain thresholds.

Uptime Checks: Can configure regular checks to monitor the availability and responsiveness of web applications and public URLs from around the globe.

Interaction and Use Cases
Log Data in Monitoring: Cloud Logging and Cloud Monitoring are integrated in a way that you can use logs-based metrics (created in Cloud Logging) in Cloud Monitoring for visualization and alerting. This helps in observing the occurrences of specific log entries over time.

Incident Management: When an incident occurs (like an application error), Cloud Logging provides the detailed logs that help diagnose the issue, whereas Cloud Monitoring might be the tool that alerts you to the anomaly in the metrics that indicates an issue is occurring.

Conclusion
While Cloud Logging is more about the detailed diagnostic data (logs), Cloud Monitoring focuses on the operational health and performance (metrics) of your services. Together, they provide a comprehensive view of your applications and infrastructure, helping you maintain performance and troubleshoot issues more effectively. In practice, they often operate in tandem to ensure that you can not only detect issues through monitoring but also drill down into detailed log data to investigate and resolve those issues.

Cloud Monitoring provides a view of all Google Cloud metrics at zero cost and integrates with a variety of providers for non-Google Cloud monitoring.
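The bridge between the two products, a logs-based metric, can be sketched dry-run style. Metric name, description, and filter are placeholders; the command is printed, not run:

```shell
# Create a counter metric from a log filter, usable in Monitoring
# dashboards and alerting policies (placeholder names throughout):
METRIC_CMD="gcloud logging metrics create error_count --description='Count of ERROR entries' --log-filter='severity>=ERROR'"

printf '%s\n' "${METRIC_CMD}"
```

Once created, the metric appears in Cloud Monitoring under the logging/user/ prefix, where charts and alerts can reference it like any other metric.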

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
29
Q

A client of yours wants to deploy a stateless application to Kubernetes cluster. The replication controller is named my-app-rc. The application should scale based on CPU utilization; specifically when CPU utilization exceeds 80%. There should never be fewer than 2 pods or more than 6. What command would you use to implement autoscaling with these parameters?

A

kubectl autoscale rc my-app-rc --min=2 --max=6 --cpu-percent=80

The correct command is to use kubectl autoscale specifying the appropriate min, max, and cpu percent.
When you use kubectl autoscale, you specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target.
For example, to set the maximum number of replicas to six and the minimum to four, with a CPU utilization target of 50% utilization, run the following command:
kubectl autoscale deployment my-app --max 6 --min 4 --cpu-percent 50
In this command, the --max flag is required. The --cpu-percent flag is the target CPU utilization over all the Pods. This command does not immediately scale the Deployment to six replicas, unless there is already a systemic demand.

After running kubectl autoscale, the HorizontalPodAutoscaler object is created and targets the application. When there is a change in load, the object increases or decreases the application’s replicas.
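The imperative command above can also be expressed declaratively. A minimal sketch of the roughly equivalent HorizontalPodAutoscaler manifest (the metadata name here is hypothetical) might look like:

```yaml
# Hypothetical manifest roughly equivalent to:
# kubectl autoscale rc my-app-rc --min=2 --max=6 --cpu-percent=80
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-rc-hpa
spec:
  scaleTargetRef:
    apiVersion: v1
    kind: ReplicationController
    name: my-app-rc
  minReplicas: 2
  maxReplicas: 6
  targetCPUUtilizationPercentage: 80
```

Applying this with kubectl apply -f has the same effect as running the autoscale command, but keeps the configuration in version control.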

30
Q

Horizontal Pod Autoscaling

A

In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.

Horizontal scaling means that the response to increased load is to deploy more Pods. This is different from vertical scaling, which for Kubernetes would mean assigning more resources (for example: memory or CPU) to the Pods that are already running for the workload.

If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale back down.

Horizontal pod autoscaling does not apply to objects that can’t be scaled (for example: a DaemonSet.)
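The scaling decision described above can be sketched with the documented proportional-scaling rule. This is a simplified model; the real controller also applies tolerances and stabilization windows:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization: float,
                     target_utilization: float,
                     min_replicas: int,
                     max_replicas: int) -> int:
    """Simplified HPA rule: scale in proportion to observed/target
    utilization, then clamp the result to the configured bounds."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# With a target of 80% CPU, two pods averaging 160% utilization
# would be scaled out to four pods.
print(desired_replicas(2, 160, 80, min_replicas=2, max_replicas=6))  # → 4
```

Note how the min/max bounds win over the proportional rule: even a large spike never scales past max_replicas, and a quiet period never scales below min_replicas.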

31
Q

How do you scale a deployed application in Google Kubernetes Engine (GKE)?

A

When you deploy an application in GKE, you define how many replicas of the application you’d like to run. When you scale an application, you increase or decrease the number of replicas.

Each replica of your application represents a Kubernetes Pod that encapsulates your application’s container(s).

32
Q

How do you create a cloud billing account - what are the prerequisites?

A

Creating a Google Cloud Platform (GCP) Cloud Billing Account involves several steps and prerequisites. It’s important to ensure that you meet all requirements and understand the billing process to manage your GCP resources effectively. Here’s a guide on how to set up a Cloud Billing account in GCP and the prerequisites you’ll need:

Prerequisites
Google Account: You need a Google Account to access the Google Cloud Platform. This can be your existing Google account, like a Gmail account, or a Google Workspace account if you’re using it within an organization.

Organization: Although not mandatory for all use cases, it’s beneficial to have a GCP Organization set up if you’re managing billing for multiple projects or for enterprise use. This requires a Google Workspace or Cloud Identity account.

Payment Method: You must have a valid credit card, bank account (in some countries), or other payment methods accepted by Google to set up a billing account. Payment methods might vary by country.

Permissions: If you are not the owner of the GCP account but an admin or a manager, you need the appropriate permissions to create or manage billing accounts. Typically, this would be the Billing Account Administrator role (roles/billing.admin).

Steps to Create a Cloud Billing Account
Sign in to the Google Cloud Console:

Visit the Google Cloud Console.

Sign in with your Google Account.

Navigate to the Billing Section:

In the Cloud Console, navigate to the hamburger menu (three horizontal lines in the top left corner) and select “Billing”.

Create a New Billing Account:

If you don’t have an existing billing account, click on “Create Account”.

If you already have a billing account but need to set up a new one, first select “Manage billing accounts”, and then click “Add billing account” or “Create account”.

Choose Account Type:

Choose whether this billing account will be for self-service or invoiced billing. Self-service billing is typical and requires a credit card. Invoiced billing usually applies to larger organizations with special arrangements.

Enter Account Information:

Provide the necessary details for your billing account such as name, address, and payment method. This information is critical for tax calculations and invoice details.

Confirm Payment Method:

Enter your payment details. Your credit card or bank details will be verified. Google might make a small, refundable charge to verify your card.

Set Up Billing for Projects:

Once the billing account is created, you can link it to one or more GCP projects. Go to “My Projects”, select a project, and then link it to your billing account from the project settings.

Review and Complete:

Review all details and complete the setup. Make sure that all information is correct as this affects billing and service delivery.

After Setup
Once your billing account is established, you can manage budgets, monitor costs, and analyze spending through the Billing section of the Google Cloud Console. Setting alerts and budget thresholds is highly recommended to avoid unexpected charges.

Creating and managing a GCP billing account is straightforward but requires attention to detail, especially concerning payment information and account management roles.

33
Q

A photographer wants to share images they have stored in a Cloud Storage bucket called free-photos-on-gcp. What command would you use to allow all users to read these files?

A

gsutil iam ch allUsers:objectViewer gs://free-photos-on-gcp

gsutil, not gcloud, is the tool used with Cloud Storage here, so the gcloud ch option is wrong. objectViewer is the correct shorthand for granting read access to objects in a bucket (it expands to the roles/storage.objectViewer role).

https://cloud.google.com/storage/docs/gsutil/commands/iam

iam ch
The iam ch command incrementally updates Cloud IAM policies. You can specify multiple access grants or removals in a single command. The access changes are applied as a batch to each url in the order in which they appear in the command line arguments. Each access change specifies a principal and a role that is either granted or revoked.

You can use gsutil -m to handle object-level operations in parallel.
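Newer versions of the Google Cloud CLI replace gsutil with gcloud storage; a sketch of the equivalent grant (same bucket, same role) would be:

```
gcloud storage buckets add-iam-policy-binding gs://free-photos-on-gcp \
    --member=allUsers \
    --role=roles/storage.objectViewer
```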

34
Q

An auditor is reviewing your GCP use. They have asked for access to any audit logs available in GCP. What audit logs are available for each project, folder, and organization?

A

Types of audit logs
Cloud Audit Logs provides the following audit logs for each Cloud project, folder, and organization:

Admin Activity audit logs
Data Access audit logs
System Event audit logs
Policy Denied audit logs

Cloud Audit Logs maintains four audit logs: Admin Activity logs, Data Access logs, System Event logs, and Policy Denied logs. There is no such thing as a Policy Access log, a User Login log, or a Performance Metric log in GCP Audit Logs. For more information, see https://cloud.google.com/logging/docs/audit

Types of Audit Logs in GCP
Admin Activity Logs:

Description: These logs record operations that modify the configuration or metadata of a resource. Admin activity logs are always enabled and do not incur any charges.

Typical Entries: Changes in service settings, VM management operations (starts, stops), and modifications to IAM roles or permissions.

Availability: Available by default for all GCP resources across projects, folders, and organizations.

Data Access Logs:

Description: These logs record access to user-provided data. For most services, Data Access logs are not enabled by default due to their potential volume and sensitivity.

Subcategories:

Read: Accesses that may return user data.

Write: Operations that modify user data.

Admin Read: Administrative read operations that may return metadata.

Enablement: Must be explicitly enabled for most services except for BigQuery, where they are automatically enabled.

Note: Data Access logs may incur costs, so it’s essential to manage them considering the potential volume.

System Event Logs:

Description: These logs record actions taken by the Google Cloud system (not user-initiated) that modify the configuration or metadata of resources.

Typical Entries: Automated system maintenance actions, like auto-scaling events.

Availability: Always enabled and available for all resources, similar to Admin Activity logs.

Policy Denied Logs:

Description: These logs record attempts to perform operations that are denied by service-specific policies.

Availability: Generated by default and cannot be disabled (though you can configure exclusions); particularly useful for security and compliance monitoring.

How to Access Audit Logs
To provide an auditor access to these logs, you can use several approaches depending on their needs:

Google Cloud Console: For manual inspection and basic queries, auditors can use the Logs Explorer in the Google Cloud Console. Ensure they have the necessary permissions, typically roles like roles/logging.viewer or roles/logging.privateLogViewer for private Data Access logs.

BigQuery Export: For more extensive analysis, audit logs can be exported to BigQuery. This allows auditors to run complex queries and perform in-depth analysis of the audit data over time.

Cloud Storage Export: For long-term storage and archival, logs can be exported to Cloud Storage. This is useful for compliance with regulations that require retaining audit logs for extended periods.

Pub/Sub: For real-time access or integration with external tools, audit logs can be exported to Pub/Sub and then to a SIEM (Security Information and Event Management) system or other analysis tools.

Permissions and Roles for Auditors
To access these logs, auditors will need specific IAM roles. At a minimum, the roles/logging.viewer role allows viewing all non-private logs. For access to Data Access logs or other sensitive logs, consider roles like roles/logging.privateLogViewer. If the auditor needs to audit across the organization, ensure they have these roles at the organization level or appropriately delegated down to folders and projects.

Providing this access and understanding what each type of audit log contains are crucial for an effective auditing process in GCP.

35
Q

Before you can use your domain with Cloud Identity, you need to verify that you own it.
What is a domain, why verify, how verify?

A

Cloud Identity provides domain verification records, which are added to DNS settings for the domain. IAM is used to control access granted to identities, it is not a place to manage domains. The billing account is used for payment tracking, it is not a place to manage domains. Resources do have metadata, but that metadata is not used to manage domains. For more information on verifying domains, see https://cloud.google.com/identity/docs/verify-domain.

Your domain is your web address, as in your-company.com. Verifying your domain prevents anyone else from using it with Cloud Identity.

Why verify?
Verifying your domain is the first step in setting up Cloud Identity for your business. If you are the person who signed up for Cloud Identity, this makes you the administrator of your new account. You need to verify that you own your business domain before you can use Cloud Identity. This ensures your account is secure and that no one else can use services from your domain.

How do I verify?
You verify your domain through your domain host (typically where you purchased your domain name). Your domain host maintains records (DNS settings) that direct internet traffic to your domain name. (Go to Identify your domain host.)

Cloud Identity gives you a verification record to add to your domain’s DNS settings. When Cloud Identity sees the record exists, your domain ownership is confirmed. The verification record doesn’t affect your website or email.

36
Q

How can you setup an organizational policy restriction on geographic location?

A

Restricting Resource Locations

Create a policy at the organization level of the resource hierarchy
that includes a constraint using a Resource Location Restriction.

This guide describes how to set an organization policy that includes the resource locations constraint.

You can limit the physical location of a new resource with the Organization Policy Service resource locations constraint.

You can use the location property of a resource to identify where it is deployed and maintained by the service. For data-containing resources of some Google Cloud services, this property also reflects the location where data is stored. This constraint allows you to define the allowed Google Cloud locations where the resources for supported services in your hierarchy can be created.

After you define resource locations, this limitation will apply only to newly-created resources. Resources you created before setting the resource locations constraint will continue to exist and perform their function.
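As a sketch, a policy file for the resource locations constraint (the organization ID and value group below are placeholders) could look like the following, applied with gcloud org-policies set-policy:

```yaml
# policy.yaml - hypothetical example restricting new resources to US locations
name: organizations/ORGANIZATION_ID/policies/gcp.resourceLocations
spec:
  rules:
  - values:
      allowedValues:
      - in:us-locations   # a predefined value group; specific regions also work
```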

37
Q

A startup is implementing an IoT application that will ingest data at high speeds. The architect for the startup has decided that data should be ingested in a queue that can store the data until the processing application is able to process it. The architect also wants to use a managed service in Google Cloud. What service would you recommend?

A

Cloud Pub/Sub is a queuing service that is used to ingest data and store it until it can be processed. Bigtable is a NoSQL database, not a queueing service. Cloud Dataflow is a stream and batch processing service, not a queueing service. Cloud Dataproc is a managed Spark/Hadoop service.

For more information, see https://cloud.google.com/pubsub/docs/overview.

38
Q

You have a set of snapshots that you keep as backups of several persistent disks. You want to know the source disk for each snapshot.
What commands would you use to get that information?

A

gcloud compute snapshots list (find the name of the snapshot)
gcloud compute snapshots describe SNAPSHOT_NAME

To run gcloud compute snapshots describe, you’ll need the name of a snapshot. To list existing snapshots by name, run:

gcloud compute snapshots list
To display specific details of an existing Compute Engine snapshot (like its creation time, status, and storage details), run:

gcloud compute snapshots describe SNAPSHOT_NAME --format="table(creationTimestamp, status, storageBytesStatus)"

The correct command is gcloud compute snapshots describe which shows information about the snapshot, including source disk, creation time, and size. The other options are not valid gcloud commands. For more information, see https://cloud.google.com/sdk/gcloud/reference/compute/snapshots/describe
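Since the question asks specifically for the source disk, the --format flag can narrow the output to just that field; a sketch (the snapshot name is a placeholder):

```
gcloud compute snapshots list --format="table(name, sourceDisk)"
gcloud compute snapshots describe SNAPSHOT_NAME --format="value(sourceDisk)"
```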

39
Q

You have deployed a sole tenant node in Compute Engine. How will this restrict what VMs run on that node?

A

Only VMs from the same project will run on the node.

Explanation
On a sole tenant node, only VMs from the same project will run on that node. They do not need to use the same operating system. Sole tenant nodes are not restricted to a single VM. VMs from the same organization but different projects will not run on the same sole tenant instance. For more information, see https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes
Sole-tenancy lets you have exclusive access to a sole-tenant node, which is a physical Compute Engine server that is dedicated to hosting only your project’s VMs. Use sole-tenant nodes to keep your VMs physically separated from VMs in other projects, or to group your VMs together on the same host hardware, as shown in the following diagram.

40
Q

A group of developers are creating a multi-tiered application. Each tier is in its own project. The developer would like to work with a common VPC network. What would you use to implement this?

A

Create a shared VPC

A shared VPC allows projects to share a common VPC network. VPNs are used to link VPCs to on-premises networks. Routes and firewall rules are not sufficient for implementing a common VPC. Firewall rules are not used to load balance; they are used to control the ingress and egress of traffic on a network.

https://cloud.google.com/vpc/docs/shared-vpc and https://cloud.google.com/composer/docs/how-to/managing/configuring-shared-vpc.
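A sketch of the commands involved (project IDs are placeholders): designate a host project, then attach each tier's project as a service project:

```
gcloud compute shared-vpc enable HOST_PROJECT_ID

gcloud compute shared-vpc associated-projects add SERVICE_PROJECT_ID \
    --host-project HOST_PROJECT_ID
```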

41
Q

A new team member has just created a new project in GCP. What role is automatically granted to them when they create the project?

A

roles/owner

Explanation
When you create a project, you are automatically granted the roles/owner role. The owner role includes permissions granted by roles/editor, roles/viewer, and roles/browser. For more information, see
https://cloud.google.com/resource-manager/docs/access-control-proj

42
Q

What is Cloud Functions?

A

Cloud Functions lets you deploy snippets of code (functions), written in a limited set of programming languages, to natively handle HTTP requests or events from many GCP sources.

Cloud Functions lets you establish triggers on a wide variety of events that can come from a variety of Cloud and Firebase products.

Cloud Functions are limited with respect to the libraries, languages, and runtimes supported.

43
Q

How are Cloud Functions different from Cloud Run and App Engine?

A

Cloud Functions server instances handle requests serially, and this is not configurable, whereas Cloud Run instances handle requests in parallel, and the level of parallelism is configurable.

Cloud Functions lets you choose from a fixed set of programming languages and runtimes and requires nothing beyond deploying your code, whereas Cloud Run lets you use any kind of backend configuration but requires that you supply a Docker configuration that creates the runtime environment (which is more work).

App Engine is more suitable for applications that have numerous functionalities, whether interrelated or unrelated to each other (e.g., microservices), while Cloud Functions are event-based and perform a single-purpose action.

It is easy to replicate Cloud Functions on Google App Engine, but replicating an App Engine application on Cloud Functions would be complicated.

44
Q

What is Auto Scaling?

A

Let’s understand autoscaling with the help of an example. Imagine you are a web developer who has built a web application, and you are ready to go live on a single front-end server.
Your application has several layers: the web layer (front end), the business layer, and the database layer. On day 1, you assume 10 concurrent users, which would ideally use 50% of your CPU utilization. As demand increases, the number of users might grow from 10 to 20 or more during peak times, while at other times there might be very few users. Adding and removing front-end servers manually is a huge overhead for a large application. Autoscaling solves this: you define an instance template (the configuration of every server) and a managed instance group in which you define your scaling policy; in this example, the policy scales out when CPU utilization exceeds 80%. Autoscaling is usually combined with a load balancer so that all running instances are reachable through a single IP.

Compute Engine offers both managed and unmanaged instance groups, only managed instance groups can be used for Autoscaling.
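The 80% CPU policy described above can be sketched for a managed instance group as follows (the group name and zone are hypothetical; note that the target is expressed as a fraction of utilization):

```
gcloud compute instance-groups managed set-autoscaling my-app-group \
    --zone=us-central1-a \
    --min-num-replicas=2 \
    --max-num-replicas=6 \
    --target-cpu-utilization=0.8
```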

45
Q

What are the different autoscaling policies available for the different instance groups?

A

While creating an Instance group, you must specify which autoscaling policy and utilization level the Autoscaler should use to determine when to scale the group. There are three policies:

Average CPU utilization.

HTTP load balancing.

Cloud Monitoring metrics.

The Autoscaler continually collects usage information based on the chosen policy, compares actual utilization to your target utilization, and uses this information to determine whether the instance group needs to add or remove instances.

46
Q

High availability in Compute Engine is ensured by several different mechanisms and practices, what are they?

A

Hardware Redundancy and Live Migration
Live migration is not available for preemptible VMs; however, preemptible VMs are not designed to be highly available. At the time of this writing, VMs with attached GPUs cannot be live migrated.
Managed Instance Groups
High availability also comes from the use of redundant VMs. Managed instance groups are the best way to create a cluster of VMs, all running the same services in the same configuration. A managed instance group uses an instance template to specify the configuration of each VM in the group. Instance templates specify machine type, boot disk image, and other VM configuration details.
Multiple Regions and Global Load Balancing
Beyond the regional instance group level, you can further ensure high availability by running your application in multiple regions and using a global load balancer to distribute workload. This would have the added advantage of allowing users to connect to an application instance in the closest region, which could reduce latency. You would have the option of using the HTTP(S), SSL Proxy, or TCP Proxy load balancers for global load balancing.

47
Q

High Availability in Kubernetes Engine
Kubernetes Engine is a managed Kubernetes service and how is it highly available?

A

VMs in a GKE Kubernetes cluster are members of a managed instance group, so they have all the high availability features described previously.
Kubernetes continually monitors the state of containers and pods. Pods are the smallest unit of deployment in Kubernetes; they usually have one container, but in some cases a pod may have two or more tightly coupled containers. If pods are not functioning correctly, they are shut down and replaced.
Kubernetes Engine clusters can be zonal or regional. To improve availability, you can create a regional cluster, in which GKE distributes the underlying VMs across multiple zones within a region and replicates control plane servers and nodes across zones.
Control plane servers run several services including the API server, scheduler, and resource controller and, when deployed to multiple zones, provide for continued availability in the event of a zone failure.

48
Q

High Availability in App Engine and Cloud Functions
App Engine and Cloud Functions are fully managed compute services how do they become highly available?

A

Users of these services are not responsible for maintaining the availability of the computing resources. The Google Cloud Platform ensures the high availability of these services.

49
Q

AVAILABILITY VS. DURABILITY

A

Availability should not be confused with durability, which is a measure of the probability that a stored object will remain intact and retrievable in the future. A storage system can be highly available but not durable.

For example, in Compute Engine, locally attached storage is highly available because of the way Google manages VMs. If there were a problem with the local storage system, VMs would be live migrated to other physical servers. Locally attached drives are not durable, though. If you need durable drives, you could use Persistent Disk or Cloud Filestore, the fully managed file storage service.

50
Q

How are persistent disks made highly available?
Persistent disks (PDs) are SSDs and hard disk drives that can be attached to VMs.

A

These disks provide block storage so that they can be used to implement filesystems and database storage.

Persistent disks continue to exist even after the VMs shut down.

One of the ways in which persistent disks enable high availability is by supporting online resizing.

GCP offers both zonal persistent disks and regional persistent disks. Regional persistent disks are replicated in two zones within a region.

51
Q

How are self-managed databases made highly available?

A

When running and managing a database, you will need to consider how to maintain availability if the database server or underlying VM fails.
Redundancy is the common approach to ensuring availability in databases. How you configure multiple database servers will depend on the database system you are using.
Cloud SQL uses replicas, including read replicas and replicas in additional regions.
Bigtable has support for regional replication, which improves availability.
EHR Healthcare uses a combination of relational and NoSQL databases.
Cloud Memorystore is a high-availability cache service in Google Cloud that supports both Memcached and Redis. This managed cache service can be used to improve availability of data that requires low-latency access.
Cloud Spanner improves availability by adding additional nodes.

52
Q

Network Availability
When network connectivity is down, applications are unavailable. What are the two primary ways to improve network availability?

A

Use redundant network connections
Use Premium Tier networking
Redundant network connections can be used to increase the availability of the network between an on-premises data center and Google’s data center.
One type of connection is a Dedicated Interconnect, which can be used with a minimum of 10 Gbps throughput and does not traverse the public internet.
A Dedicated Interconnect is possible when both your network and the Google Cloud network have a point of presence in a common location, such as a data center.
Partner Interconnect. When your network does not share a common point of presence with the Google Cloud network, you have the option of using a Partner Interconnect. When using a Partner Interconnect, you provision a network link between your data center and a Google network point of presence.
Data within the GCP can be transmitted among regions using the public internet or Google’s internal network. The latter is available as the Premium Network Tier, which costs more than the Standard Network Tier, which uses the public internet.


53
Q

How can you scale applications?

A

Scalability
Scalability is the process of adding and removing infrastructure resources to meet workload demands efficiently. Different kinds of resources have different scaling characteristics.
VMs in a managed instance group scale by adding or removing instances from the group.
Autoscaling can be configured to scale based on several attributes, including the following:

  • Average CPU utilization
  • HTTP load balancing utilization
  • Custom monitoring metrics

Kubernetes scales pods based on load and configuration parameters.
NoSQL databases scale horizontally, but this introduces issues around consistency.
Relational databases can scale horizontally, but that requires server clock synchronization if strong consistency is required among all nodes.
Cloud Spanner uses the TrueTime service, which depends on atomic clocks and GPS signals to ensure a low upper bound on the difference in time reported by clocks in a distributed system.
54
Q

What is a GKE Deployment?

A

A deployment specifies updates for pods and ReplicaSets, which are sets of identically configured pods running at some point in time.

An application may be run in more than one deployment at a time. This is commonly done to roll out new versions of code. A new deployment can be run in a cluster, and a small amount of traffic can be sent to it to test the new code in a production environment without exposing all users to the new code.

55
Q

How can you scale managed data?

A

Managed services, such as Cloud Storage and BigQuery, ensure that storage is available as needed.
In the case of BigQuery, even if you do not scale storage directly, you may want to consider partitioning data to improve query performance. Partitioning organizes data in a way that allows the query processor to scan smaller amounts of data to answer a query.

56
Q

How do we manage reliability in GCP?

A

Reliability is a measure of the likelihood of a system being available and able to meet the needs of the load on the system. When analyzing technical requirements, it is important to look for reliability requirements. As with availability and scalability, these requirements may be explicit or implicit.

Designing for reliability requires that you consider how to minimize the chance of system failures. For example, we employ redundancy to mitigate the risk of a hardware failure leaving a crucial component unavailable. We also use DevOps best practices to manage risks with configuration changes and when managing infrastructure as code. These are the same practices that we employ to ensure availability.

57
Q

What is reliability engineering?

A

As an architect, you should consider ways to support reliability early in the design stage. This should include the following:

Identifying how to monitor services. Will they require custom metrics?

Considering alerting conditions. How do you balance the need for early indication that a problem may be emerging with the need to avoid overloading DevOps teams with unactionable alerts?

Using existing incident response procedures with the new system. Does this system require any specialized procedures during an incident? For example, if this is the first application to store confidential, personally identifying information, you may need to add procedures to notify the information security team if an incident involves a failure in access controls.

Implementing a system for tracking outages and performing post-mortems to understand why a disruption occurred.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

What are the differences between availability, scalability, and reliability?

A

• High availability is the continuous operation of a system at sufficient capacity to meet the demands of ongoing workloads. Availability is usually measured as a percentage of time that a system is available.

• Scalability is the process of adding and removing infrastructure resources to meet workload demands efficiently.

• Reliability is a measure of how likely it is that a system will be available and capable of meeting the needs of the load on the system.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

Understand how redundancy is used to improve availability.

A

Compute, storage, and network services all use redundancy combined with autohealing or other forms of autorepair to improve availability.
A cluster of identically configured VMs behind a load balancer is an example of using redundancy to improve availability.
Making multiple copies of data is an example of redundancy used to improve storage availability.
Using multiple direct connections between a data center and Google Cloud is an example of redundancy in networking.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
60
Q

What predefined roles are available for Monitoring?

A

Monitoring Viewer
View Monitoring data and configuration information. For example, principals with this role can view custom dashboards and alerting policies.

Monitoring Editor
View Monitoring data, and create and edit configurations. For example, principals with this role can create custom dashboards and alerting policies.

Monitoring Admin
View Monitoring data, create and edit configurations, and modify the metrics scope.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Why would you choose a TCP/UDP internal load balancer?

A

First, it isn't a website; that would call for an HTTP(S) load balancer.
It is some type of service that exposes an open TCP/UDP port, for example a database.
Finally, an internal TCP/UDP load balancer is not a proxy; it passes traffic through to the backends.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

List different services in Kubernetes

A

Use the kubectl get services command to list services in the current namespace (add --all-namespaces to list services across all namespaces).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

How could you create a compute resource to take on a temporary job?

A

Create a cluster or node pool with preemptible VMs
You can use the Google Cloud CLI to create a cluster or node pool with preemptible VMs.

To create a cluster with preemptible VMs, run the following command:

gcloud container clusters create CLUSTER_NAME \
  --preemptible
Replace CLUSTER_NAME with the name of your new cluster.

To create a node pool with preemptible VMs, run the following command:

gcloud container node-pools create POOL_NAME \
  --cluster=CLUSTER_NAME \
  --preemptible
Replace POOL_NAME with the name of your new node pool.

Preemptible VM instances are available at a much lower price (a 60-91% discount) compared to the price of standard VMs. However, Compute Engine might stop (preempt) these instances if it needs to reclaim the compute capacity for allocation to other VMs. Preemptible instances use excess Compute Engine capacity, so their availability varies with usage.

If your apps are fault-tolerant and can withstand possible instance preemptions, then preemptible instances can reduce your Compute Engine costs significantly. For example, batch processing jobs can run on preemptible instances. If some of those instances stop during processing, the job slows but does not completely stop. Preemptible instances complete your batch processing tasks without placing additional workload on your existing instances and without requiring you to pay full price for additional normal instances.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

How does a Horizontal Pod Autoscaler manage a workload in GKE?

A

The Horizontal Pod Autoscaler changes the shape of your Kubernetes workload by automatically increasing or decreasing the number of Pods in response to the workload’s CPU or memory consumption, or in response to custom metrics reported from within Kubernetes or external metrics from sources outside of your cluster.

Horizontal Pod autoscaling cannot be used for workloads that cannot be scaled, such as DaemonSets.

Overview
When you first deploy your workload to a Kubernetes cluster, you may not be sure about its resource requirements and how those requirements might change depending on usage patterns, external dependencies, or other factors. Horizontal Pod autoscaling helps to ensure that your workload functions consistently in different situations, and allows you to control costs by only paying for extra capacity when you need it.

It’s not always easy to predict the indicators that show whether your workload is under-resourced or under-utilized. The Horizontal Pod Autoscaler can automatically scale the number of Pods in your workload based on one or more metrics of the following types:

Actual resource usage: when a given Pod’s CPU or memory usage exceeds a threshold. This can be expressed as a raw value or as a percentage of the amount the Pod requests for that resource.

Custom metrics: based on any metric reported by a Kubernetes object in a cluster, such as the rate of client requests per second or I/O writes per second.

This can be useful if your application is prone to network bottlenecks, rather than CPU or memory.

External metrics: based on a metric from an application or service external to your cluster.

For example, your workload might need more CPU when ingesting a large number of requests from a pipeline such as Pub/Sub. You can create an external metric for the size of the queue, and configure the Horizontal Pod Autoscaler to automatically increase the number of Pods when the queue size reaches a given threshold, and to reduce the number of Pods when the queue size shrinks.

You can combine a Horizontal Pod Autoscaler with a Vertical Pod Autoscaler, with some limitations.

How horizontal Pod autoscaling works
Each configured Horizontal Pod Autoscaler operates using a control loop. A separate Horizontal Pod Autoscaler exists for each workload. Each Horizontal Pod Autoscaler periodically checks a given workload’s metrics against the target thresholds you configure, and changes the shape of the workload automatically.
Limitations
Do not use the Horizontal Pod Autoscaler together with the Vertical Pod Autoscaler on CPU or memory. You can use the Horizontal Pod Autoscaler with the Vertical Pod Autoscaler for other metrics.
If you have a Deployment, don’t configure horizontal Pod autoscaling on the ReplicaSet or Replication Controller backing it. When you perform a rolling update on the Deployment or Replication Controller, it is effectively replaced by a new Replication Controller. Instead configure horizontal Pod autoscaling on the Deployment itself.
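As a sketch, a minimal HorizontalPodAutoscaler manifest targeting CPU utilization might look like the following; the Deployment name web-server and the thresholds are illustrative, not from the source:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-server-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-server        # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60   # scale out when average CPU requests usage exceeds 60%
```

Note that the autoscaler is configured on the Deployment itself, as the limitations above recommend, not on its underlying ReplicaSet.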

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

What is Dataflow SQL?

A

Dataflow SQL lets you use your SQL skills to develop streaming Dataflow pipelines right from the BigQuery web UI. You can join streaming data from Pub/Sub with files in Cloud Storage or tables in BigQuery, write results into BigQuery, and build real-time dashboards using Google Sheets or other BI tools.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

How do you write a command to create a Cloud Function?

A

gcloud functions deploy helloGreeting --trigger-http --region=us-central1 --runtime=nodejs6

The general form is: gcloud functions deploy <name> --runtime <runtime> --trigger-topic <topic>
Once the function is deployed, we can invoke it with data as given below:

$ gcloud functions call helloGreeting --data '{"name":"Romin"}'
executionId: 36hzafyyt8cj
result: Hello Romin

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
67
Q

How would you manage a requirement to create an application that performs repetitive tasks on the cloud?

A

Create a service account in IAM for the specific project.
Assign the necessary roles to the specific service account.
Create the VM with the service account attached:

gcloud compute instances create <instance> \
  --service-account=<service-account-email> \
  --scopes=<scopes>

Google’s best practice is not to use the default Compute Engine service account when utilizing service accounts with a VM instance. You should create a custom service account with only the necessary permissions required. The command line offered in this example also demonstrates the necessary second step once the custom service account is created. This answer illustrates that best practices are followed.
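The steps above can be sketched end to end with the gcloud CLI; the names my-task-sa, my-project, and task-runner, and the role chosen, are placeholders for illustration:

```shell
# 1. Create a custom service account (names are illustrative)
gcloud iam service-accounts create my-task-sa \
    --display-name="Repetitive task runner"

# 2. Grant only the roles the task needs, e.g. write access to Cloud Storage objects
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:my-task-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectCreator"

# 3. Create the VM with that service account attached
gcloud compute instances create task-runner \
    --service-account=my-task-sa@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```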

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
68
Q

What is point-in-time recovery, and what does a Cloud SQL for MySQL database use for point-in-time recovery?

A

Point-in-time recovery refers to recovery of data changes made since a given point in time. Typically, this type of recovery is performed after restoring a full backup that brings the server to its state as of the time the backup was made.
Point-in-time recovery uses binary logs. These logs update regularly and use storage space. The binary logs are automatically deleted with their associated automatic backup, which generally happens after about 7 days.

If the size of your binary logs is causing an issue for your instance:

You can increase the instance storage size, but the increase in disk usage from binary logs might be temporary.

We recommend enabling automatic storage increase to avoid unexpected storage issues.

You can disable point-in-time recovery if you want to delete logs and recover storage. Decreasing the storage used does not shrink the size of the storage provisioned for the instance.

Logs are purged once daily, not continuously. Setting log retention to two days means that at least two days of logs, and at most three days of logs, are retained. We recommend setting the number of backups to one more than the days of log retention to guarantee a minimum of specified days of log retention.
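A couple of related gcloud commands, sketched with a placeholder instance name:

```shell
# Enable binary logging on the instance (required for point-in-time recovery)
gcloud sql instances patch my-instance --enable-bin-log

# Recover by cloning the instance to a specific point in time
gcloud sql instances clone my-instance my-instance-recovered \
    --point-in-time='2024-01-15T10:00:00Z'
```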

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
69
Q

Why do you need load balancer health checks and Managed Instance Group autohealing?

A

Managed instance group health checks proactively signal to delete and recreate instances that become UNHEALTHY.
Load balancing health checks help direct traffic away from non-responsive instances and toward healthy instances; these health checks do not cause Compute Engine to recreate instances.

You need both to get the job done.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
70
Q

What are the Google Cloud Offerings?

A

Google Cloud offerings can be broadly categorised as compute, storage, big data, machine learning, and application services for web, mobile, analytics, and back-end solutions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
71
Q

Explain the difference between IaaS and PaaS.

A

To put it simply, IaaS and PaaS are two different models of cloud computing that offer varying levels of control and responsibility. Here’s a breakdown:

IaaS (Infrastructure as a Service):
What it is:
IaaS provides you with fundamental computing infrastructure—servers, storage, networking—over the internet.
You essentially rent virtualized IT resources.
What you manage:
You have a high degree of control. You manage the operating systems, applications, data, and middleware.
The cloud provider manages the underlying infrastructure (servers, storage, networking).
Analogy:
Think of IaaS as renting an empty apartment. You get the space, and you’re responsible for furnishing it and maintaining everything inside.

Use cases:
Ideal for organizations that need maximum control over their infrastructure.
Useful for testing and development, website hosting, and big data processing.

PaaS (Platform as a Service):
What it is:
PaaS provides a platform that allows you to develop, run, and manage applications without the complexity of building and maintaining the underlying infrastructure.
The cloud provider handles the infrastructure and the platform.
What you manage:
You manage the applications and data.
The cloud provider manages the operating systems, middleware, and underlying infrastructure.
Analogy:
Think of PaaS as renting a furnished apartment with included utilities. You have the tools and environment to live there, but you don’t worry about the building’s maintenance.
Use cases:
Ideal for developers who want to focus on building applications.
Useful for streamlining development workflows, deploying applications quickly, and scaling applications easily.
Key Differences Summarized:

Control: IaaS gives you more control over the infrastructure, while PaaS gives you more control over the applications.
Responsibility: With IaaS, you’re responsible for more of the IT management, while with PaaS, the cloud provider takes on more of the responsibility.
Complexity: IaaS tends to be more complex to manage, while PaaS simplifies application development and deployment.
In essence, IaaS gives you the building blocks, and PaaS gives you a ready-made platform.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
73
Q

What is the primary authentication method used in Google Cloud?

A

OAuth 2.0

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
74
Q

True or False: Google Cloud provides Identity and Access Management (IAM) to control access to resources.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
75
Q

Fill in the blank: Google Cloud’s ________ feature allows users to manage their encryption keys.

A

Cloud Key Management

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
76
Q

Which Google Cloud service provides DDoS protection?

A

Cloud Armor

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
77
Q

What is the purpose of Google Cloud’s Security Command Center?

A

To provide visibility and control over security and data risks

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
78
Q

True or False: Google Cloud supports two-factor authentication (2FA).

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
79
Q

What type of encryption does Google Cloud use for data at rest?

A

AES-256

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
80
Q

Which Google Cloud feature allows for real-time threat detection?

A

Event Threat Detection, part of Security Command Center. (Cloud Security Scanner performs vulnerability scans rather than real-time threat detection.)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
81
Q

What is the main purpose of Google Cloud’s VPC Service Controls?

A

To define a security perimeter around Google Cloud resources

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
82
Q

Fill in the blank: Google Cloud offers ________ to help monitor and respond to security incidents.

A

Cloud Logging

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
83
Q

Which Google Cloud service helps secure applications running on Google Kubernetes Engine?

A

Binary Authorization

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
84
Q

True or False: Google Cloud’s IAM allows for both role-based and attribute-based access control.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
85
Q

What is the function of Google Cloud’s Data Loss Prevention (DLP) API?

A

To discover, classify, and protect sensitive data

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
86
Q

Which Google Cloud feature provides automated security assessments?

A

Security Health Analytics

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
87
Q

What does the Google Cloud Security Center offer?

A

Centralized security management and insights

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
88
Q

Fill in the blank: Google Cloud uses ________ to protect data in transit.

A

TLS (Transport Layer Security)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
89
Q

What is the purpose of Google Cloud’s Identity-Aware Proxy?

A

To provide secure access to applications without a VPN

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
90
Q

True or False: Google Cloud allows users to set up custom security policies.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
91
Q

Which Google Cloud service provides a firewall for virtual machines?

A

Google Cloud Firewall

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
92
Q

What does Google Cloud’s Compliance Manager help organizations with?

A

Managing compliance with regulations and standards

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
93
Q

Fill in the blank: Google Cloud offers ________ for protecting sensitive data in databases.

A

Data encryption

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
94
Q

What is the function of Google Cloud’s Access Transparency?

A

To provide visibility into Google personnel access to customer data

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
95
Q

True or False: Google Cloud provides a built-in security incident response plan.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
96
Q

Which feature in Google Cloud helps in managing user identities?

A

Cloud Identity

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
97
Q

What type of assessments does Google Cloud’s Security Scanner perform?

A

Vulnerability assessments

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
98
Q

Fill in the blank: Google Cloud’s ________ feature helps in monitoring API usage and security.

A

Apigee (API management)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
99
Q

What is the role of Google Cloud’s Threat Detection service?

A

To identify and respond to potential security threats

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
101
Q

What does IAM stand for in the context of GCP?

A

Identity and Access Management

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
102
Q

True or False: IAM allows you to manage who has access to GCP resources.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
103
Q

What are the three main components of IAM in GCP?

A

Roles, Permissions, and Policies

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
104
Q

Fill in the blank: In GCP, a _____ is a collection of permissions.

A

Role

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
105
Q

Which IAM role grants the least privilege necessary to perform an action?

A

Predefined or custom roles. Primitive (basic) roles such as Owner, Editor, and Viewer are broad and grant far more than the least privilege needed.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
106
Q

What is the purpose of a service account in GCP?

A

To provide an identity for applications and virtual machines to interact with GCP services.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
107
Q

True or False: Service accounts can be assigned roles just like user accounts.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
108
Q

What is a resource hierarchy in GCP?

A

A structure that organizes resources in a parent-child relationship.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
109
Q

List the four levels of resource hierarchy in GCP.

A

Organization, Folder, Project, and Resource

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
110
Q

What is the highest level in the GCP resource hierarchy?

A

Organization

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
111
Q

Fill in the blank: Policies in IAM are defined at the _____ level.

A

Resource

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
112
Q

What command can be used to view IAM policies in GCP?

A

gcloud projects get-iam-policy
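The command returns the policy as YAML. A simplified illustration of the bindings structure is shown below; the members and etag are made up:

```yaml
bindings:
- members:
  - user:alice@example.com
  role: roles/editor
- members:
  - serviceAccount:ci-builder@my-project.iam.gserviceaccount.com
  role: roles/viewer
etag: BwWKmjvelug=
version: 1
```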

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
113
Q

True or False: IAM roles can be customized in GCP.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
114
Q

What is the difference between predefined roles and custom roles in GCP?

A

Predefined roles are created by Google, while custom roles are defined by users.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
115
Q

What type of IAM role is best suited for granting access to a specific resource?

A

Custom role

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
116
Q

Fill in the blank: A _____ is a set of permissions that can be assigned to users or service accounts.

A

Role

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
117
Q

What is the purpose of a Google Cloud Organization?

A

To manage multiple projects and resources in a centralized manner.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
118
Q

True or False: Users cannot have multiple roles in GCP.

A

False

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
119
Q

What does the ‘Viewer’ role allow a user to do?

A

View resources but not make changes.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
120
Q

What is the function of the ‘Owner’ role in GCP?

A

To have full control over a project, including managing roles and permissions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
121
Q

Which IAM role would you assign to a user who needs to deploy applications but not manage IAM roles?

A

Editor

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
122
Q

Fill in the blank: Service accounts are identified by their _____ email address.

A

unique

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
123
Q

What is the command to create a service account in GCP?

A

gcloud iam service-accounts create
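A fuller invocation might look like this; the account ID, display name, and description are illustrative:

```shell
# Creates app-backend@PROJECT_ID.iam.gserviceaccount.com
gcloud iam service-accounts create app-backend \
    --display-name="App backend service account" \
    --description="Identity for the backend VMs"
```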

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
124
Q

True or False: IAM policies can be inherited from parent resources.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
125
Q

What is the main benefit of using service accounts over user accounts in applications?

A

Service accounts can be automated and do not require user interaction.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
126
Q

What is the purpose of IAM audit logs in GCP?

A

To track changes and access to IAM policies and resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
127
Q

What does VPC stand for?

A

Virtual Private Cloud

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
128
Q

True or False: A VPC allows you to create a logically isolated section of the cloud.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
129
Q

What is the primary purpose of a firewall in a network?

A

To monitor and control incoming and outgoing network traffic based on predetermined security rules.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
130
Q

Fill in the blank: A _____ balances incoming traffic across multiple servers to ensure reliability and performance.

A

load balancer

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
131
Q

What is the main function of a VPN?

A

To create a secure and encrypted connection over a less secure network.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
132
Q

Multiple Choice: Which of the following is NOT a function of DNS? A) Domain name resolution B) Load balancing C) Caching D) URL forwarding

A

D) URL forwarding

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
133
Q

What does CDN stand for?

A

Content Delivery Network

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
134
Q

True or False: A CDN can help reduce latency by caching content closer to users.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
135
Q

What protocol is commonly used for secure VPN connections?

A

IPsec

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
136
Q

Fill in the blank: A _____ is a device that forwards data packets between computer networks.

A

router

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
137
Q

Multiple Choice: Which of the following is a benefit of using a load balancer? A) Increased security B) Improved fault tolerance C) Reduced bandwidth D) All of the above

A

B) Improved fault tolerance

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
138
Q

What is the role of a DNS server?

A

To translate domain names into IP addresses.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
139
Q

True or False: Firewalls can be hardware-based, software-based, or both.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
140
Q

What is a common use case for a VPN?

A

Remote access for users to securely connect to a private network.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
141
Q

Fill in the blank: A _____ can distribute traffic to multiple servers to ensure no single server becomes overwhelmed.

A

load balancer

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
142
Q

Multiple Choice: Which of the following protocols is commonly used for DNS? A) HTTP B) FTP C) UDP D) TCP

A

C) UDP

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
143
Q

What is the primary benefit of using a CDN?

A

To improve the delivery speed of content to users globally.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
144
Q

True or False: A VPC can span multiple regions in a cloud service provider’s infrastructure.

A

True. Google Cloud VPC networks are global resources; their subnets are regional, so a single VPC network can span multiple regions.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
145
Q

What mechanism does a firewall use to filter traffic?

A

Access control lists (ACLs)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
146
Q

Fill in the blank: A _____ can prevent DDoS attacks by distributing incoming traffic.

A

load balancer

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
147
Q

Multiple Choice: Which of the following is NOT a type of VPN? A) Remote-access VPN B) Site-to-site VPN C) Cloud VPN D) Local VPN

A

D) Local VPN

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
148
Q

What does a CDN use to cache content?

A

Edge servers

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
149
Q

True or False: DNS can also provide load balancing capabilities.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
150
Q

What is the difference between a public and private VPC?

A

A public VPC can be accessed from the internet, while a private VPC cannot.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
151
Q

Fill in the blank: A _____ allows users to connect securely over the internet to a private network.

A

VPN

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
152
Q

Multiple Choice: Which of the following is a common firewall type? A) Stateful B) Stateless C) Application-layer D) All of the above

A

D) All of the above

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
153
Q

What does GKE stand for?

A

Google Kubernetes Engine

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
154
Q

True or False: GKE is a managed Kubernetes service provided by Google Cloud.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
155
Q

Fill in the blank: GKE simplifies the management of __________ clusters.

A

Kubernetes

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
156
Q

Which service allows you to run containers without managing the underlying infrastructure?

A

Cloud Run

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
157
Q

What type of applications is Cloud Run designed for?

A

Stateless applications

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
158
Q

True or False: Cloud Build is used for continuous integration and continuous delivery (CI/CD).

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
159
Q

What is the primary purpose of Cloud Build?

A

To automate the building and testing of applications.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
160
Q

Which command-line tool is commonly used to interact with GKE?

A

kubectl

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
161
Q

What is the default compute engine used by GKE for nodes?

A

Google Compute Engine

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
162
Q

Multiple Choice: What is a key feature of Cloud Run?
A) Supports only Java applications
B) Automatic scaling
C) Requires manual deployment
D) None of the above

A

B) Automatic scaling

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
163
Q

Fill in the blank: GKE can manage __________ for you, including upgrades and scaling.

A

Kubernetes clusters

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
164
Q

What is a container?

A

A lightweight, standalone, executable package that includes everything needed to run a piece of software.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
165
Q

True or False: Cloud Run can only deploy applications packaged in Docker containers.

A

True

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
166
Q

What is the pricing model for Cloud Run?

A

Pay-as-you-go based on the resources used.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
167
Q

Which of the following is NOT a benefit of using GKE?
A) High availability
B) Manual scaling
C) Integrated logging
D) Security features

A

B) Manual scaling

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
168
Q

What is the purpose of a Kubernetes pod?

A

To group one or more containers that share the same storage and network resources.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
169
Q

Multiple Choice: What is a main advantage of using Cloud Build?
A) It is free
B) It integrates with Git repositories
C) It only supports Python
D) None of the above

A

B) It integrates with Git repositories

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
170
Q

Fill in the blank: Cloud Run can automatically scale down to __________ when there are no incoming requests.

A

zero instances

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
171
Q

What is a service in the context of Cloud Run?

A

A configuration that specifies how to run a container.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
172
Q

True or False: GKE provides built-in monitoring and logging through Stackdriver.

A

True. GKE integrates with Cloud Monitoring and Cloud Logging, the services formerly branded as Stackdriver.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
173
Q

What is the role of a Kubernetes node?

A

To run the containers that make up the application.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
174
Q

What is the difference between Cloud Run and GKE?

A

Cloud Run is serverless and abstracts infrastructure management, while GKE provides more control over Kubernetes clusters.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
175
Q

Multiple Choice: Which of the following is a deployment strategy in Kubernetes?
A) Rolling update
B) Complete refresh
C) Instant swap
D) None of the above

A

A) Rolling update

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
176
Q

What is the command to create a new Kubernetes cluster in GKE?

A

gcloud container clusters create
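A fuller example with common flags; the cluster name, zone, node count, and machine type are illustrative:

```shell
gcloud container clusters create demo-cluster \
    --zone=us-central1-a \
    --num-nodes=3 \
    --machine-type=e2-standard-4
```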

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
177
Q

Fill in the blank: GKE supports __________ for managing application deployments.

A

Helm charts
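As a sketch of typical Helm usage against a GKE cluster; the repository, chart, and release names are illustrative:

```shell
# Add a chart repository and install a release into the cluster
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx --namespace web --create-namespace

# Upgrade the release with a changed value, or roll back to a prior revision
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
helm rollback my-nginx 1
```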

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
178
Q

Which of the following Google Cloud tools provides a browser-based, interactive shell environment with the Google Cloud CLI pre-installed?

a) Google Cloud Console
b) Cloud Shell
c) Google Cloud CLI
d) Cloud Mobile App

A

b) Cloud Shell
Explanation: Cloud Shell is a virtual machine accessible directly from your browser. It comes pre-configured with the Google Cloud CLI and other essential tools, making it ideal for command-line interaction with Google Cloud.

179
Q

Question: To list your virtual machines and their details using the command-line, which tool would you use?

a) Google Cloud Console
b) Cloud Shell
c) Google Cloud CLI
d) Cloud Mobile App

A

Answer: c) Google Cloud CLI
Explanation: The Google Cloud CLI (gcloud) is a command-line tool that allows you to manage Google Cloud resources. The command gcloud compute instances list is a specific example of its use for listing VM instances. Cloud Shell is an environment in which the Google Cloud CLI comes preinstalled.

180
Q

Question: Which method of interacting with Google Cloud is most suitable for building your own automated resource management tools?

a) Google Cloud Console
b) Cloud Shell
c) Google Cloud CLI
d) Admin APIs

A

Answer: d) Admin APIs
Explanation: Admin APIs provide programmatic access to Google Cloud’s management functions. They are designed for developers who need to automate resource provisioning, configuration, and monitoring.

181
Q

Question: What is the primary function of the Cloud Mobile App?

a) To provide a web-based graphical user interface.
b) To offer a command-line interface.
c) To manage Google Cloud services from a mobile device.
d) To provide client libraries for application development.

A

Answer: c) To manage Google Cloud services from a mobile device.
Explanation: The Cloud Mobile App allows users to monitor and manage their Google Cloud resources directly from their Android or iOS devices. This includes tasks like managing Compute Engine instances, viewing logs, and monitoring billing.

182
Q

Question: Cloud Shell provides 10 GB of persistent disk storage. (True/False)

A

Answer: False (It provides 5 GB)
Explanation: Cloud Shell offers a temporary virtual machine with 5 GB of persistent disk storage for user files and configurations.

183
Q

Question: The Google Cloud console can be accessed through the Cloud Mobile App. (True/False)

A

Answer: False
Explanation: The Cloud Mobile App is a separate application designed for mobile devices. The Google Cloud console is a web-based interface accessed through a browser.

184
Q

Question: App APIs are optimized for supported languages like Node.js and Python. (True/False)

A

Answer: True
Explanation: App APIs are client libraries that provide language-specific interfaces for accessing Google Cloud services from applications. They are designed to be user-friendly and efficient for developers.

185
Q

Question: Describe the steps to navigate to “VM instances” within the Google Cloud console.

A

Answer:
Click the Navigation menu (three horizontal lines).
Hover over “Compute Engine” to open the submenu.
Click “VM instances” on the submenu.
Explanation: This describes the hierarchical navigation structure of the Google Cloud console, where services are organized into menus and submenus.

186
Q

Question: What are two key functionalities that the Cloud Mobile App provides?

A

Answer:
Managing Compute Engine instances (start, stop, SSH).
Viewing logs.
or setting up customizable graphs for key metrics.
or Alerts and incident management.
or Viewing billing information.
Explanation: The Cloud Mobile App offers a range of features for on-the-go management and monitoring of Google Cloud resources.

187
Q

Question: What is the difference between App APIs and Admin APIs?

A

Answer:
App APIs provide access to services and are optimized for application development in supported languages.
Admin APIs offer functionality for resource management and are used for building automated tools.
Explanation: App APIs are for integrating Google Cloud services into applications, while Admin APIs are for automating infrastructure management tasks.

188
Q

Question: Explain the difference between auto mode and custom mode VPC networks, and describe the default network’s characteristics.

A

Answer:

Auto mode networks automatically create a subnet in each region using predefined IP ranges.
Custom mode networks provide complete control over subnet creation, allowing users to specify regions and IP ranges.
The default network is an auto mode network, with preset subnets and firewall rules, allowing ingress traffic for ICMP, RDP, and SSH traffic from anywhere, as well as ingress traffic from within the default network for all protocols and ports.

189
Q

Question: How do virtual machines in different regions within the same VPC network communicate, and what is the implication for VMs in different networks?

A

Answer:

Virtual machines in different regions within the same VPC network communicate using their internal IP addresses, leveraging Google’s global fiber network.
VMs in different networks must communicate using their external IP addresses by default, even if they are in the same region.

190
Q

Question: How do subnetworks relate to regions and zones, and what IP addresses are reserved within a subnet?

A

Answer:

Subnetworks are regional and can span multiple zones within a region.
The first and second addresses (.0 and .1), and the second-to-last and last addresses are reserved within a subnet.
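The four reserved addresses can be checked with Python's `ipaddress` module; this is only an illustrative sketch, and the subnet range used below is an arbitrary example, not one from the cards:

```python
import ipaddress

# Sketch: the four addresses Google Cloud reserves in every subnet.
subnet = ipaddress.ip_network("10.2.0.0/24")  # example range
reserved = [
    subnet.network_address,        # first address (.0): network address
    subnet.network_address + 1,    # second address (.1): default gateway
    subnet.broadcast_address - 1,  # second-to-last: reserved for future use
    subnet.broadcast_address,      # last: broadcast address
]
print([str(a) for a in reserved])
```

Everything between `.1` and the second-to-last address remains assignable to VMs.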

191
Q

Question: What are the rules and limitations when expanding the IP address space of a subnet, and what are the implications for auto mode subnets?

A

Answer:

The new range must not overlap with other subnets in the network, each IP range must be a unique valid CIDR block, and the new range must be larger than the original. A subnet range also cannot match, be narrower than, or be broader than a restricted range; cannot span both a valid RFC 1918 range and a privately used public IP address range; and cannot span multiple RFC 1918 ranges.
Auto mode subnets start with a /20 IP range and can be expanded to a /16 IP range, but no larger. To expand larger than a /16 the auto mode subnet must be converted to a custom mode subnet.
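A minimal sketch of the expansion checks above, using Python's `ipaddress` module. This is a simplified assumption-laden model: it covers only containment, strict growth, and overlap, not the restricted-range or RFC-range rules:

```python
import ipaddress

def can_expand(current, proposed, other_subnets):
    """Simplified check of the subnet-expansion rules: the proposed range
    must contain the current one, be strictly larger, and not overlap any
    other subnet. (Real validation also rejects restricted/mixed ranges.)"""
    cur = ipaddress.ip_network(current)
    new = ipaddress.ip_network(proposed)
    if not cur.subnet_of(new) or new.prefixlen >= cur.prefixlen:
        return False  # must contain the old range and be strictly larger
    return not any(new.overlaps(ipaddress.ip_network(o)) for o in other_subnets)

# An auto mode /20 can grow to a /16 if no other subnet is in the way:
print(can_expand("10.128.0.0/20", "10.128.0.0/16", ["10.2.0.0/16"]))     # True
print(can_expand("10.128.0.0/20", "10.128.0.0/16", ["10.128.32.0/24"]))  # False
```

The second call fails because 10.128.32.0/24 sits inside the proposed /16 but outside the original /20, so the expansion would overlap it.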

192
Q

Question: What are the two types of IP addresses that can be assigned to a virtual machine in Google Cloud, and how do they differ in terms of assignment and usage?

A

Answer:

Internal IP address: Assigned via DHCP; automatically given to every VM and to services that rely on VMs (such as App Engine and GKE). It is used for internal communication within the same network, and names are resolved via internal DNS.
External IP address: Optional, used for externally facing VMs. It can be ephemeral (assigned from a pool) or static (reserved).

193
Q

Question: What are the implications of reserving a static external IP address in Google Cloud, and what is the requirement for using your own publicly routable IP address prefixes?

A

Answer:

Reserving a static external IP address without assigning it to a resource results in higher charges compared to in-use static or ephemeral external IP addresses.
To use your own publicly routable IP address prefixes, you must own and bring a /24 block or larger.

194
Q

Question: When creating a Compute Engine VM instance, what are the options for assigning internal and external IP addresses, and what happens to these addresses when the instance is stopped and restarted?

A

Answer:

Internal IP Address: Can be an ephemeral address (automatically assigned or custom selected within the subnet range) or a reserved static internal IP address. When the instance is stopped and restarted, the internal IP address remains the same.
External IP Address: Can be an ephemeral address (automatically assigned), a reserved static external IP address, or none (no external IP). When the instance is stopped and restarted, an ephemeral external IP address changes.

195
Q

Question: What are some important considerations regarding IP address ranges and instance quotas when planning your Compute Engine deployments, and what is the default behavior of external IP addresses?

A

Answer:

While a subnet may have a large IP address range, there are limits on the number of instances that can be created per network (quotas). Additionally, physical hardware limitations within a region or zone can affect instance availability.
By default, external IP addresses assigned to Compute Engine instances are ephemeral, meaning they change when the instance is stopped and restarted.

196
Q

Question: How does Google Cloud handle the mapping between external and internal IP addresses for a VM, and how does this affect the VM’s operating system?

A

Answer:

The external IP address is mapped to the VM’s internal IP address transparently by VPC.
The operating system of the VM is unaware of the external IP address, only recognizing the internal IP address.

197
Q

Question: What are Alias IP Ranges in Google Cloud, and what are their primary benefits for managing multiple services running on a VM?

A

Answer:

Alias IP Ranges allow you to assign a range of internal IP addresses as aliases to a VM’s network interface.
They enable you to assign different IP addresses to multiple services running on a VM without defining separate network interfaces, simplifying the management of multiple applications or containers hosted on a single VM.

198
Q

Question: Explain how Google Cloud routes traffic both by default and with custom routes, and describe the relationship between routes and firewall rules.

A

Answer:

Default Routing: Every network has default routes for internal instance communication and external traffic.
Custom Routes: You can create custom routes to override default routes.
Routes and Firewall Rules: Routes determine the path of traffic based on destination IP addresses, but firewall rules must also allow the traffic for it to be delivered.

199
Q

Question: Describe the functionality and key aspects of Google Cloud firewall rules, including the concepts of ingress and egress rules, stateful behavior, and the implied “Deny all” ingress and “Allow all” egress rules.

A

Answer:

Firewall Rules: VPC networks act as distributed firewalls, controlling inbound (ingress) and outbound (egress) connections at the instance level.
Stateful Behavior: Once a connection is allowed, bidirectional traffic is permitted.
Implied Rules: If all firewall rules are deleted, an implied “Deny all” ingress and “Allow all” egress rule remains in effect.
Rule Components: Rules are defined by direction (ingress/egress), source/destination, protocol/port, action (allow/deny), priority, and rule assignment.
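The rule components and implied rules can be sketched as a tiny in-memory evaluator. The field names below are illustrative only, not the actual gcloud/API field names:

```python
# Hypothetical model of VPC firewall rule components (illustrative names).
rules = [
    {"direction": "INGRESS", "action": "allow", "priority": 1000,
     "protocol_ports": {"tcp": [22, 3389]}},
]

def evaluate_ingress(rules, port, proto="tcp"):
    """Among matching rules, the lowest priority number wins; if nothing
    matches, the implied 'deny all ingress' rule applies (sketch)."""
    matches = [r for r in rules
               if r["direction"] == "INGRESS"
               and port in r["protocol_ports"].get(proto, [])]
    if not matches:
        return "deny"  # implied deny-all ingress
    return min(matches, key=lambda r: r["priority"])["action"]

print(evaluate_ingress(rules, 22))  # SSH matches the allow rule
print(evaluate_ingress(rules, 80))  # no match: implied deny-all applies
```

Note the asymmetry the card describes: deleting every rule still leaves the implied deny-all ingress (modeled here by the empty-match fallback) and allow-all egress in effect.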

200
Q

Question: Under what circumstances is egress traffic from a Google Cloud Platform (GCP) virtual machine charged, and what are some examples of egress traffic that are not charged?

A

Answer:

Charged Egress: Egress between zones in the same region, egress within a zone via the external IP address, and egress between regions.
Uncharged Egress: Egress to the same zone via the internal IP address, egress to Google products (YouTube, Maps, Drive), and egress to other GCP services within the same region.
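The charged/uncharged cases above can be condensed into a small decision function. This is a simplified sketch of the rules as stated on the card; real pricing has more cases and rates vary by destination:

```python
def egress_charged(same_zone, same_region, via_external_ip=False,
                   to_google_product=False):
    """Simplified model of the egress-charging rules listed above."""
    if to_google_product:
        return False  # e.g. YouTube, Maps, Drive
    if same_zone:
        return via_external_ip  # internal-IP traffic within a zone is free
    return True  # cross-zone (same region) and cross-region egress is charged

print(egress_charged(same_zone=True, same_region=True))    # free: internal IP
print(egress_charged(same_zone=True, same_region=True,
                     via_external_ip=True))                # charged
print(egress_charged(same_zone=False, same_region=True))   # charged
```

The practical takeaway: keep chatty services in the same zone and talking over internal IPs.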

201
Q

Question: How does Google Cloud handle the pricing of static and ephemeral external IP addresses, and what is the cost difference between reserved but unassigned static IPs and those in use?

A

Answer:

Google Cloud charges for both static and ephemeral external IP addresses.
Reserved but unassigned static external IP addresses are charged at a higher rate than static and ephemeral external IP addresses that are actively in use.

202
Q

Question: What is the purpose of the GCP pricing calculator, and what are some of its key features and functionalities?

A

Answer:

The GCP pricing calculator is a web-based tool used to estimate the cost of a collection of Google Cloud resources.
Key features include the ability to specify resource consumption (e.g., instance type, region, egress traffic), adjust currency and time frame, and save or email cost estimates for future reference.

203
Q

Availability (Multiple Zones, Single Subnet)

Question: How does deploying virtual machines across multiple zones within a single subnetwork contribute to improved application availability and simplify security management?

A

Content: “If your application needs increased availability, you can place two virtual machines into multiple zones, but within the same subnetwork, as shown on this slide. Using a single subnetwork allows you to create a firewall rule against the subnetwork, in this case, 10.2.0.0/16. Therefore, by allocating VMs on a single subnet to separate zones, you get improved availability without additional security complexity. A regional managed instance group contains instances from multiple zones across the same region, which provides increased availability.”

Answer: Deploying VMs across multiple zones within the same subnet improves availability by protecting against zone-level failures, as the application is distributed across different physical locations. Security management is simplified because a single firewall rule can be applied to the entire subnetwork, rather than managing rules for individual zones.

204
Q

Question: What are the advantages of deploying resources across multiple regions, and how does a global load balancer enhance the performance and cost-effectiveness of this design?

A

Globalization (Multiple Regions, Load Balancing)

Content: “In the previous design we placed resources in different zones in a single region, which provides isolation for many types of infrastructure, hardware and software failures. Putting resources in different regions as shown on this slide provides an even higher degree of failure independence. This allows you to design robust systems with resources spread across different failure domains. When using a global load balancer like the HTTP load balancer, you can route traffic to the region that is closest to the user. This can result in better latency for users and lower network traffic costs for your project.”

Answer: Deploying resources across multiple regions increases failure independence, creating highly robust systems. A global load balancer enhances performance by routing traffic to the closest region, reducing latency for users. It also lowers network traffic costs by minimizing long-distance data transfers.

205
Q

Question: How does Cloud NAT improve security for virtual machines without public IP addresses, and what are its limitations regarding inbound connections?

A

Cloud NAT (Outbound Access, Security)

Content: “Now, as a general security best practice, I recommend only assigning internal IP addresses to your VM instances whenever possible. Cloud NAT is Google’s managed network address translation service. It lets you provision your application instances without public IP addresses, while also allowing them to access the internet in a controlled and efficient manner. This means your private instances can access the internet for updates, patching, configuration management, and more. However, Cloud NAT does not implement inbound NAT. In other words, hosts outside your VPC network cannot directly access any of the private instances behind the Cloud NAT gateway. This helps you keep your VPC networks isolated and secure.”

Answer: Cloud NAT improves security by allowing VMs without public IP addresses to access the internet for necessary tasks like updates, while preventing direct inbound access from external hosts. Its limitation is that it does not implement inbound NAT, meaning external hosts cannot initiate connections to VMs behind the Cloud NAT gateway.

206
Q

Question: What is the purpose of Private Google Access, and how does it differ from Cloud NAT in terms of enabling VM access to external resources?

A

Private Google Access (API Access, Subnet Basis)

Content: “Similarly, you should enable Private Google Access to allow VM instances that only have internal IP addresses to reach the external IP addresses of Google APIs and services. For example, if your private VM instance needs to access a Cloud Storage bucket, you need to enable Private Google Access. You enable Private Google Access on a subnet-by-subnet basis. As you can see in this diagram, subnet A has Private Google Access enabled and subnet B has it disabled. This allows VM A1 to access Google APIs and services, even though it has no external IP address. Private Google Access has no effect on instances that have external IP addresses; that’s why VMs A2 and B2 can access Google APIs and services. The only VM that can’t access those APIs and services is VM B1. This VM has no public IP address and it is in a subnet where Private Google Access is disabled.”

Answer: The purpose of Private Google Access is to allow VMs with only internal IP addresses to access Google APIs and services. It differs from Cloud NAT in that it specifically enables access to Google services, whereas Cloud NAT provides general internet access. Private Google Access is enabled on a subnet basis and does not affect VMs that already have external IP addresses.
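The access logic in the subnet A/B diagram reduces to a one-line predicate. A sketch, assuming only the two factors the card describes (an external IP, or Private Google Access on the subnet):

```python
def can_reach_google_apis(has_external_ip, subnet_pga_enabled):
    """An external IP always suffices; otherwise the VM's subnet must
    have Private Google Access enabled (per the card's diagram)."""
    return has_external_ip or subnet_pga_enabled

# The diagram's four VMs:
print(can_reach_google_apis(False, True))   # VM A1: internal-only, PGA on
print(can_reach_google_apis(True, True))    # VM A2: external IP
print(can_reach_google_apis(True, False))   # VM B2: external IP
print(can_reach_google_apis(False, False))  # VM B1: internal-only, PGA off
```

Only VM B1 is cut off, matching the card's conclusion.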

207
Q

Question: What are the benefits of using Cloud NAT for VM instances without external IP addresses, and what are its limitations regarding inbound connections?

A

Answer:

Benefits: Cloud NAT allows VM instances without external IP addresses to access the internet for updates, patches, and bootstrapping, providing a managed and highly available service.
Limitations: Cloud NAT implements outbound NAT but not inbound NAT, meaning external hosts cannot initiate connections to instances behind the Cloud NAT gateway.

208
Q

How did the lab demonstrate the functionality of Private Google Access, and what was the impact of enabling it on the vm-internal instance’s ability to access Google Cloud Storage?

A

Answer:

The lab demonstrated Private Google Access by first showing that the vm-internal instance (without an external IP address) could not copy an image from a public bucket.
Enabling Private Google Access on the subnet allowed the vm-internal instance to successfully copy the image, demonstrating that it could now access Google APIs and services.

209
Q

What steps were taken in the lab to create a private instance, and how was access to this instance achieved?

A

Answer:

A private instance (vm-internal) was created by selecting “none” for the external IP address during VM creation.
Access to the instance was achieved using an IAP tunnel through Cloud Shell, as direct SSH access was not possible due to the lack of an external IP address.

210
Q

What is the process to create a private network?

A

Answer:

Navigate to VPC networks
Create a network
Give the network a name
Set the subnet creation mode to custom
Create a subnet
Give the subnet a name
Select a region
Give the subnet an IP address range

211
Q

Question: What is the primary service in Google Cloud Platform (GCP) used to create and manage virtual machine instances (VMs), and what are the basic components of a VM?

A

Introduction to VMs and Compute Engine (00:00 - 00:23)

Content: “In this module, we cover virtual machine instances, or VMs. VMs are the most common infrastructure component, and in GCP they’re provided by Compute Engine. A VM is similar but not identical to a hardware computer. VMs consist of a virtual CPU, some amount of memory, disk storage, and an IP address. Compute Engine is GCP’s service to create VMs.”

Answer: The primary service is Compute Engine. The basic components of a VM are a virtual CPU, memory, disk storage, and an IP address.

212
Q

Question: What are two examples of unique features provided by Compute Engine that are not typically found in physical hardware, and how do they benefit users?

A

Compute Engine Flexibility and Unique Features (00:23 - 00:51)

Content: “Compute Engine is GCP’s service to create VMs. It is very flexible and offers many options including some that can’t exist in physical hardware. For example, a micro VM shares a CPU with other virtual machines, so you can get a VM with less capacity at a lower cost. Another example of a function that can’t exist in hardware is that some VMs offer burst capability, meaning that the virtual CPU will run above its rated capacity for a brief period, using the available shared physical CPU.”

Answer:
Micro VMs: These share a CPU with other VMs, offering a lower-cost option for applications with low resource requirements.
Burst Capability: This allows virtual CPUs to temporarily exceed their rated capacity, utilizing available shared physical CPU for short periods of increased demand.

213
Q

Question: What are the four main configuration options for virtual machines in Compute Engine, and what is the general structure of this module?

A

VM Options and Module Overview (00:51 - 01:37)

Content: “The main VM options are CPU, memory, discs, and networking. Now, this is going to be a very robust module; there’s a lot of detail to cover here with how virtual machines work on GCP. First, we’ll start with the basics of Compute Engine, followed by a quick little lab to get you more familiar with creating virtual machines. Then, we’ll look at the different CPU and memory options that enable you to create different configurations. Next, we’ll look at images and the different disk options available with Compute Engine. After that, we will discuss very common Compute Engine actions that you might encounter in your day-to-day job. This will be followed by an in-depth lab that explores many of the features and services covered in this module. Let’s get started with an overview of Compute Engine.”

Answer: The four main configuration options are CPU, memory, disks, and networking. The module will cover Compute Engine basics, hands-on labs, CPU/memory options, images/disks, common actions, and an in-depth lab.

214
Q

Question: What is the primary advantage of using Compute Engine as an IaaS offering, and what types of workloads are best suited for it?

A

Compute Engine Overview and Use Cases (00:00 - 01:05)

Content: “As mentioned in the introduction to the course, there is a spectrum of different options in Google Cloud for compute and processing. We will focus on the traditional virtual machine instances. Now the difference is, Compute Engine gives you the utmost in flexibility: run whatever language you want—it’s your virtual machine. This is purely an infrastructure as a service or IaaS model. You have a VM and an operating system, and you can choose how to manage it and how to handle aspects, such as autoscaling, where you’ll configure the rules about adding more virtual machines in specific situations. The primary use case of Compute Engine is any generic workload, especially an enterprise application that was designed to run on a server infrastructure. This makes Compute Engine very portable and easy to run in the cloud. Other services, like Google Kubernetes Engine, which consists of containerized workloads, may not be as easily transferable as what you’re used to from on-premises.”

Answer: The primary advantage is its flexibility, allowing users to run any language and manage their VMs and operating systems. Compute Engine is best suited for generic workloads, especially enterprise applications designed for server infrastructure, due to its portability and ease of migration from on-premises environments.

215
Q

Question: What are some key configuration options available in Compute Engine, and what are some of the features that will be covered in this module?

A

Content: “So what is Compute Engine? At its heart, it’s physical servers that you’re used to, running inside the Google Cloud environment, with a number of different configurations. Both predefined and custom machine types allow you to choose how much memory and how much CPU you want. You choose the type of disk you want, whether you want to use persistent disks backed up by standard hard drives or solid-state drives, local SSDs, Cloud Storage, or a mix. You can even configure the networking interfaces and run a combination of Linux and Windows machines. Several different features will be covered throughout this module, such as machine rightsizing, startup and shutdown scripts, metadata, availability policies, OS patch management, and pricing and usage discounts.”

Answer: Key configuration options include predefined and custom machine types, disk types (persistent disks, local SSDs, Cloud Storage), and networking interfaces. Features covered in the module include machine rightsizing, startup/shutdown scripts, metadata, availability policies, OS patch management, and pricing/usage discounts.

216
Q

Question: What are Tensor Processing Units (TPUs), and what are their primary advantages over CPUs and GPUs for machine learning workloads?

A

Tensor Processing Units (TPUs) (01:59 - 03:24)

Content: “It is important to mention that hardware manufacturers have run up against limitations, and CPUs, which are central processing units, and GPUs, which are graphics processing units, can no longer scale to adequately reach the rapid demand for ML. To help overcome this challenge, in 2016 Google introduced the Tensor Processing Unit, or TPU. TPUs are Google’s custom-developed application-specific integrated circuits (ASICs) used to accelerate machine learning workloads. TPUs act as domain-specific hardware, as opposed to general-purpose hardware with CPUs and GPUs. This allows for higher efficiency by tailoring architecture to meet the computation needs in a domain, such as the matrix multiplication in machine learning. TPUs are generally faster than current GPUs and CPUs for AI applications and machine learning. They are also significantly more energy-efficient. Cloud TPUs have been integrated across Google products, making this state-of-the-art hardware and supercomputing technology available to Google Cloud customers. TPUs are mostly recommended for models that train for long durations and for large models with large effective batch sizes.”

Answer: TPUs are Google’s custom-developed ASICs designed to accelerate machine learning workloads. Their primary advantages include higher speed and energy efficiency compared to CPUs and GPUs for AI applications, due to their domain-specific architecture tailored for tasks like matrix multiplication.

217
Q

Question: What are the three disk options available in Compute Engine, and how does CPU selection affect network throughput?

A

Compute Options, Disk Types, and Networking (03:24 - 06:37)

Content: “Let’s start by looking at the compute options. Compute Engine provides several different machine types that we’ll discuss later in this module. If those machines don’t meet your needs, you can also customize your own machine. Your choice of CPU will affect your network throughput. Specifically, your network will scale at 2 gigabits per second for each CPU core, except for instances with 2 or 4 CPUs, which receive up to 10 gigabits per second of bandwidth. There is a theoretical maximum throughput of 200 gigabits per second for an instance with 176 vCPUs, when you choose a C3 machine series. When you’re migrating from an on-premises setup, you’re used to physical cores, which have hyperthreading. On Compute Engine, each virtual CPU (or vCPU) is implemented as a single hardware hyper-thread on one of the available CPU platforms. After you pick your compute options, you want to choose your disk. You have three options: Standard, SSD, or local SSD. So basically, do you want the standard spinning hard disk drives (HDDs), or flash memory solid-state drives (SSDs)? Both of these options provide the same amount of capacity in terms of disk size when choosing a persistent disk. Therefore, the question really is about performance versus cost, because there’s a different pricing structure. Basically, SSDs are designed to give you a higher number of IOPS per dollar versus standard disks, which will give you a higher amount of capacity for your dollar. Local SSDs have higher throughput and lower latency than SSD persistent disks, because they are attached to the physical hardware. However, the data that you store on local SSDs persists only until you stop or delete the instance. Typically, a local SSD is used as a swap disk, just like you would do if you want to create a ramdisk, but if you need more capacity, you can store those on a local SSD. Standard and non-local SSD disks can be sized up to 257 TB for each instance. The performance of these disks scales with each GB of space allocated. As for networking, we have already seen networking features applied to Compute Engine in the previous module’s lab. We looked at the different types of networks and created firewall rules using IP addresses and network tags. You’ll also notice that you can do regional HTTPS load balancing and network load balancing. This doesn’t require any pre-warming because a load balancer isn’t a hardware device that needs to analyze your traffic. A load balancer is essentially a set of traffic engineering rules that are coming into the Google network, and VPC is applying your rules destined to your IP address subnet range.”

Answer: The three disk options are Standard (HDDs), SSDs, and local SSDs. Network throughput scales at 2 gigabits per second per CPU core, with exceptions for 2 and 4 CPU instances (up to 10 Gbps) and a theoretical maximum of 200 Gbps for 176 vCPU C3 machine series.
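The throughput scaling rule from the lecture is mechanical enough to express as a function. A sketch of the rule of thumb only (actual bandwidth depends on machine series and tier):

```python
def egress_cap_gbps(vcpus):
    """Rule of thumb from the lecture: ~2 Gbps per vCPU, up to 10 Gbps
    for 2- and 4-vCPU shapes, with a 200 Gbps ceiling (176-vCPU C3)."""
    if vcpus in (2, 4):
        return 10
    return min(2 * vcpus, 200)

print(egress_cap_gbps(1))    # 2
print(egress_cap_gbps(4))    # 10
print(egress_cap_gbps(8))    # 16
print(egress_cap_gbps(176))  # 200
```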

218
Q

Question: How does the initial access to Linux and Windows VM instances differ, and what are the first two stages of a VM’s lifecycle after creation?

A

Section 1: VM Access and Initial Lifecycle (00:00 - 01:21)

Content: “For accessing a VM, the creator of an instance has full root privileges on that instance. On a Linux instance, the creator has SSH capability and can use the Google Cloud console to grant SSH capability to other users. On a Windows instance, the creator can use the console to generate a username and password. After that, anyone who knows the username and password can connect to the instance using a Remote Desktop Protocol, or RDP, client. We listed the required firewall rules for both SSH and RDP here, but you don’t need to define these if you are using the default network that we covered in the previous module. The lifecycle of a VM is represented by different statuses. We will cover this lifecycle on a high level, but we recommend returning to this diagram as a reference. When you define all the properties of an instance and click Create, the instance enters the provisioning state. Here the resources such as CPU, memory, and disks are being reserved for the instance, but the instance itself isn’t running yet. Next, the instance moves to the staging state where resources have been acquired and the instance is prepared for launch. Specifically, in this state, Compute Engine is adding IP addresses, booting up the system image, and booting up the system.”

Answer:
For Linux, the creator has SSH access and can grant it to others.
For Windows, the creator generates a username and password for RDP access.
The first two stages are provisioning (resource reservation) and staging (resource acquisition and preparation).

219
Q

Question: What are some actions that can be performed on a running VM, and what is the difference between stopping and resetting a VM?

A

Section 2: Running and Stopping VMs (01:21 - 02:44)

Content: “After the instance starts running, it will go through pre-configured startup scripts and enable SSH or RDP access. Now, you can do several things while your instance is running. For example, you can live migrate your virtual machine to another host in the same zone instead of requiring your instance to be rebooted. This allows Google Cloud to perform maintenance that is integral to keeping the infrastructure protected and reliable, without interrupting any of your VMs. While your instance is running, you can also move your VM to a different zone, take a snapshot of the VM’s persistent disk, export the system image, or reconfigure metadata. We will explore some of these tasks in later labs. Some actions require you to stop your virtual machine; for example, if you want to upgrade your machine by adding more CPU. When the instance enters this state, it will go through pre-configured shutdown scripts and end in the terminated state. From this state, you can choose to either restart the instance, which would bring it back to its provisioning state, or delete it. You also have the option to reset a VM, which is similar to pressing the reset button on your computer. This action wipes the memory contents of the machine and resets the virtual machine to its initial state. The instance remains in the running state through the reset.”

Answer:
Actions on a running VM include live migration, zone movement, snapshotting, image exporting, and metadata reconfiguration.
Stopping a VM terminates it, allowing for restarting or deletion. Resetting a VM wipes its memory and returns it to its initial state while keeping it running.
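
These lifecycle actions map to gcloud commands roughly as follows (a sketch; the instance name and zone are placeholders):

```shell
# Stop a running VM: runs shutdown scripts, ends in the terminated state
gcloud compute instances stop my-vm --zone=us-central1-a

# Restart a terminated VM: goes back through the provisioning state
gcloud compute instances start my-vm --zone=us-central1-a

# Reset a VM: wipes memory and reboots; the instance stays in the running state
gcloud compute instances reset my-vm --zone=us-central1-a
```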

220
Q

Question: What causes a VM to enter a repairing state, and what happens when a VM is suspended?

A

Repairing and Suspending VMs (02:44 - 03:20)

Content: “The VM may also enter a repairing state. Repairing occurs when the VM encounters an internal error or the underlying machine is unavailable due to maintenance. During this time, the VM is unusable. You are not billed when a VM is in repair. VMs are not covered by the Service Level Agreement (SLA) while they are in repair. If repair succeeds, the VM returns to one of the above states. Finally, when you suspend the VM, it enters the suspending state before being suspended. You can then resume the VM or delete it.”
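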

Answer:
A VM enters a repairing state due to internal errors or underlying machine maintenance.
Suspending a VM pauses it, allowing for later resumption or deletion.

221
Q

Question: What are the two main components of OS patch management in Google Cloud, and what resources are still charged for when a VM is in the terminated state?

A

OS Patch Management and Terminated VMs (04:45 - 06:54)

Content: “OS updates are a part of managing an infrastructure. Let’s see how we can manage the updates to a fleet of Windows VMs. When you provision a premium image, there is a cost associated with the image. This cost includes both the usage of the OS but also the patch management of the OS. Using Google Cloud, we can easily manage the patching of your OSes. Managing patches effectively is a great way to keep your infrastructure up-to-date and reduce the risk of security vulnerabilities. But without the right tools, patching can be daunting and labor intensive. Use OS patch management to apply operating system patches across a set of Compute Engine VM instances. Long-running VMs require periodic system updates to protect against defects and vulnerabilities. The OS patch management service has two main components: Patch compliance reporting, which provides insights on the patch status of your VM instances across Windows and Linux distributions. Along with the insights, you can also view recommendations for your VM instances. Patch deployment, which automates the operating system and software patch update process. A patch deployment schedules patch jobs. A patch job runs across VM instances and applies patches. There are several tasks that can be performed with patch management. You can: … Create patch approvals. You can select what patches to apply to your system from the full set of updates available for the specific operating system. Set up flexible scheduling. You can choose when to run patch updates (one-time and recurring schedules). Apply advanced patch configuration settings. You can customize your patches by adding configurations such as pre and post patching scripts. And you can manage these patch jobs or updates from a centralized location. When a VM is terminated, you do not pay for memory and CPU resources. However, you are charged for any attached disks and reserved IP addresses. 
In the terminated state, you can perform any of the actions listed here, such as changing the machine type, but you cannot change the image of a stopped VM. Also, not all of the actions listed here require you to stop a virtual machine. For example, VM availability policies can be changed while the VM is running, as discussed previously.”

Answer:
The two components are patch compliance reporting and patch deployment.
Charges persist for attached disks and reserved IP addresses.
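
The two components surface in the gcloud CLI roughly as follows (a sketch; the display name is illustrative):

```shell
# Patch compliance reporting: list patch jobs and inspect their per-VM status
gcloud compute os-config patch-jobs list

# Patch deployment: run a one-off patch job across all VMs in the project
gcloud compute os-config patch-jobs execute \
    --display-name="monthly-security-patch" \
    --instance-filter-all
```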

223
Q

Question: What is the default maintenance behavior for Compute Engine VMs, and how can it be changed?

A

VM State Changes and Availability Policies (03:20 - 04:45)

Answer: The default behavior is live migration. It can be changed to terminate the instance during maintenance events through availability policies configured during creation or while running.
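
A minimal sketch of changing the availability policy on a running VM with gcloud (instance name and zone are placeholders); MIGRATE is the default, TERMINATE opts out of live migration:

```shell
# Change maintenance behavior: terminate during maintenance instead of live-migrating
gcloud compute instances set-scheduling my-vm \
    --zone=us-central1-a \
    --maintenance-policy=TERMINATE
```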

225
Q

Question: What are the three methods for creating and configuring VMs in Google Cloud, and what is the advantage of using the Google Cloud Console before utilizing the command-line or RESTful API?

A

VM Configuration Options and Machine Types (00:00 - 01:40)

Content: “Now that you have completed the lab, let’s dive deeper into the compute options that are available to you in Google Cloud, by focusing on CPU and memory. You have three options for creating and configuring a VM. You can use the Cloud Console as you did in the previous lab, the Cloud Shell command line, or the RESTful API. If you’d like to automate and process very complex configurations, you might want to programmatically configure these through the RESTful API by defining all the different options for your environment. If you plan on using the command line or RESTful API, I recommend that you first configure the instance through the Google Cloud console and then ask Compute Engine for the equivalent REST request or command line, as shown in the demo earlier. This way you avoid any typos and get dropdown lists of all the available CPU and memory options. Speaking of CPU and memory options, let’s look at the different machine types that are currently available. When you create a VM, you select a machine type from a machine family that determines the resources available to that VM. There are several machine families you can choose from and each machine family is further organized into machine series and predefined machine types within each series. A machine family is a curated set of processor and hardware configurations optimized for specific workloads. When you create a VM instance, you choose a predefined or custom machine type from your preferred machine family. Alternatively, you can create custom machine types. These let you specify the number of vCPUs and the amount of memory for your instance.”

Answer: The three methods are the Google Cloud Console, the Cloud Shell command line, and the RESTful API. Using the Google Cloud Console first helps to avoid typos and provides dropdown lists of available CPU and memory options.
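
For example, the command-line equivalent that the Console can generate for a simple VM looks roughly like this (name, zone, and machine type are placeholders):

```shell
# gcloud equivalent of a Console-configured instance
gcloud compute instances create my-vm \
    --zone=us-central1-a \
    --machine-type=e2-medium
```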

226
Q

Question: What are the key characteristics of the general-purpose machine family, and what are some of the specific use cases for E2, N2/N2D, and Tau T2D/T2A machine series?

A

General-Purpose Machine Family (01:40 - 05:52)

Content: “There are four Compute Engine machine families. General-purpose, Compute-optimized, Memory-optimized, and Accelerator-optimized. Let’s look at each in more detail. The general-purpose machine family has the best price-performance with the most flexible vCPU to memory ratios, and provides features that target most standard and cloud-native workloads. The E2 machine series is suited for day-to-day computing at a lower cost, especially where there are no application dependencies on a specific CPU architecture. E2 VMs provide a variety of compute resources for the lowest price on Compute Engine, especially when paired with committed-use discounts. You simply pick the amount of vCPU and memory you want, and Google provisions it for you. The Standard E2 VMs have between 2 to 32 vCPUs with a ratio of 0.5 GB to 8 GB of memory per vCPU. They are a great choice for web servers, small to medium databases, development and test environments, and many applications that don’t have strict performance requirements. They offer a compatible performance baseline with the current N1 VMs for those of you who have been using them. The E2 machine series also contains shared-core machine types that use context-switching to share a physical core between vCPUs for multitasking. Different shared-core machine types sustain different amounts of time on a physical core. In general, shared-core machine types can be more cost-effective for running small, non-resource intensive applications than standard, high-memory, or high-CPU machine types. Shared-core E2 machine types have 0.25 to 1 vCPUs with 0.5 GB to 8 GB of memory. The N2 and N2D are the next generation following the N1 VMs, offering a significant performance jump. N2 and N2D are the most flexible VM types and provide a balance between price and performance across a wide range of VM shapes, including enterprise applications, medium-to-large databases, and many web and app-serving workloads. 
Committed use and sustained use discounts are supported. N2 VMs support the latest second generation scalable processor from Intel with up to 128 vCPUs and 0.5 to 8 GB of memory per vCPU. Cascade Lake is the default processor for machine types with up to 80 vCPUs. For larger machine types, Ice Lake is the default processor for specific regions and zones. N2D are AMD-based general purpose VMs. They leverage the latest EPYC Milan and EPYC Rome processors, and provide up to 224 vCPUs per node. Tau T2D and Tau T2A VMs are optimized for cost-effective performance of demanding scale-out workloads. T2D VMs are built on the latest 3rd Gen AMD EPYC processors and offer full x86 compatibility. They are suited to scale-out workloads including web servers, containerized microservices, media transcoding, and large-scale Java applications. T2D VMs come in predefined VM shapes, with up to 60 vCPUs per VM and 4 GB of memory per vCPU. Tau T2A machine series is the first machine series in Google Cloud to run on Arm processors. The Tau T2A machine series runs on a 64 core Ampere Altra processor with an Arm instruction set and an all-core frequency of 3 GHz. If you have containerized workloads, Tau VMs are supported by Google Kubernetes Engine to help optimize price-performance. You can add T2D nodes to your GKE clusters by specifying the T2D machine type in your GKE node pools.”

Answer: The general-purpose machine family offers the best price-performance with flexible vCPU to memory ratios. E2 is for cost-effective day-to-day computing. N2/N2D provides a balance between price and performance for enterprise applications and databases. Tau T2D/T2A is optimized for cost-effective performance of scale-out workloads.
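
As a sketch, choosing a predefined versus a custom machine type with gcloud (names, zone, and sizes are illustrative):

```shell
# Predefined machine type from the N2 series
gcloud compute instances create n2-vm \
    --zone=us-central1-a \
    --machine-type=n2-standard-4

# Custom machine type: specify the vCPU count and memory directly
gcloud compute instances create custom-vm \
    --zone=us-central1-a \
    --custom-cpu=4 \
    --custom-memory=8GB
```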

227
Q

Question: What is the minimum billing increment for Compute Engine resources, and how does the resource-based pricing model work?

A

Compute Engine Pricing Basics (00:00 - 00:35)

Content: “Google Cloud offers a variety of different options to keep the prices low for Compute Engine resources. All vCPUs, GPUs, and GB of memory are charged a minimum of 1 minute. For example, if you run your virtual machine for 30 seconds, you will be billed for 1 minute of usage. After 1 minute, instances are charged in 1-second increments. Compute Engine uses a resource-based pricing model, where each vCPU and each GB of memory on Compute Engine is billed separately rather than as a part of a single machine type. You still create instances using predefined machine types, but your bill reports them as individual vCPUs and memory used.”

Answer: The minimum billing increment is 1 minute. After 1 minute, instances are charged in 1-second increments. The resource-based pricing model bills vCPUs and memory separately, even when using predefined machine types.

228
Q

Question: What are the main discount options available for Compute Engine, and what is the key difference between preemptible VMs and Spot VMs?

A

Discount Options (00:35 - 01:51)

Content: “There are several discounts available but the discount types cannot be combined. Resource-based pricing allows Compute Engine to apply sustained use discounts to all of your predefined machine types usage in a region collectively rather than to individual machine types. If your workload is stable and predictable, you can purchase a specific amount of vCPUs and memory for a discount off of normal prices in return for committing to a usage term of 1 year or 3 years. The discount is up to 57% for most machine types or custom machine types. The discount is up to 70% for memory-optimized machine types. Preemptible and Spot VMs are instances that you can create and run at a much lower price than normal instances. For both types of VM, Compute Engine might terminate (or preempt) these instances if it needs to access those resources for other tasks. Both preemptible VMs and Spot VMs are excess Compute Engine capacity, so their availability varies with usage. Importantly, preemptible VMs can only run for up to 24 hours at a time, but Spot VMs do not have a maximum runtime.”

Answer: The main discount options are sustained use discounts, committed use discounts, and preemptible/Spot VMs. Preemptible VMs have a maximum runtime of 24 hours, while Spot VMs do not.
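
The two models are selected with different flags at creation time (a hedged sketch; names and zone are placeholders):

```shell
# Legacy preemptible VM (24-hour maximum runtime)
gcloud compute instances create batch-vm-1 \
    --zone=us-central1-a --preemptible

# Spot VM (no maximum runtime); choose what happens on preemption
gcloud compute instances create batch-vm-2 \
    --zone=us-central1-a \
    --provisioning-model=SPOT \
    --instance-termination-action=STOP
```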

229
Q

Question: How does Compute Engine help optimize VM sizing, and what are sustained use discounts?

A

VM Sizing and Free Usage Limits (02:11 - 03:03)

Content: “The ability to customize the amount of memory and CPU through custom machine types allows for further pricing customization. Speaking of sizing your machine, Compute Engine provides VM sizing recommendations to help you optimize the resource usage of your virtual machine instances. When you create a new instance, recommendations for the new instance will appear 24 hours after the instance has been created. Compute Engine also has Free Usage Limits. Sustained use discounts are automatic discounts that you get for running specific Compute Engine resources (vCPUs, memory, and GPU devices) for a significant portion of the billing month. For example, when you run one of these resources for more than 25% of a month, Compute Engine automatically gives you a discount for every incremental minute you use for that instance. The discount increases with usage, and you can get up to 30% net discount for instances that run the entire month.”

Answer: Compute Engine provides VM sizing recommendations 24 hours after instance creation. Sustained use discounts are automatic discounts for running Compute Engine resources for a significant portion of the billing month, with discounts increasing up to 30% for full-month usage.

233
Q

Question: How do sustained use discounts increase with usage, and what tool can be used to estimate these discounts?

A

Sustained Use Discount Details (03:03 - 04:27)

Answer: Sustained use discounts increase with usage, reaching up to 30% for full-month usage. The Google Cloud Pricing Calculator can be used to estimate these discounts.

234
Q

Question: How does Compute Engine calculate sustained use discounts when using multiple instances with different machine types and usage durations?

A

Sustained Use Discount Example (04:27 - 05:13)

Answer: Compute Engine breaks down the vCPUs and memory usage of each instance and combines them to maximize sustained use discounts, even if the instances have different machine types and run at different times.

236
Q

Question: What are the key characteristics and limitations of preemptible VMs, and what is a common use case for them?

A

Preemptible VMs (00:00 - 01:21)

Content: “As I mentioned earlier, a preemptible VM is an instance that you can create and run at a much lower cost than normal instances. See whether you can make your application function completely on preemptible VMs, because a 60 to 91% discount is a significant investment in your application. Now, just to reiterate, these VMs might be preempted at any time, and there is no charge if that happens within the first minute. Also, preemptible VMs are only going to live for up to 24 hours, and you only get a 30-second notification before the machine is preempted. It’s also worth noting that there are no live migrations nor automatic restarts in preemptible VMs, but something that we might want to highlight is that you can actually create monitoring and load balancers that can start up new preemptible VMs in case of a failure. In other words, there are external ways to keep restarting preemptible VMs if you need to. One major use case for preemptible VMs is running batch processing jobs. If some of those instances terminate during processing, the job slows but it does not completely stop. Therefore, preemptible instances complete your batch processing tasks without placing additional workload on your existing instances, and without requiring you to pay full price for additional normal instances.”

Answer:
Preemptible VMs offer significant cost savings but can be preempted at any time with a 30-second notification, have a 24-hour lifespan, and do not support live migrations or automatic restarts.
A common use case is batch processing jobs, where interruptions are tolerable.
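
To make use of the 30-second preemption notice, a shutdown script can checkpoint in-flight work. A minimal sketch (the script filename is a placeholder):

```shell
# Attach a shutdown script that runs when the VM is preempted or stopped
gcloud compute instances create batch-worker \
    --zone=us-central1-a \
    --preemptible \
    --metadata-from-file=shutdown-script=checkpoint.sh
```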

237
Q

Question: What are the key differences between Spot VMs and preemptible VMs, and what factors affect the availability of Spot VMs?

A

Spot VMs (01:21 - 02:37)

Content: “Spot VMs are the latest version of preemptible VMs. Spot VMs are virtual machine (VM) instances with the spot provisioning model. New and existing preemptible VMs continue to be supported, and preemptible VMs use the same pricing model as Spot VMs. However, Spot VMs provide new features that preemptible VMs do not support. For example, preemptible VMs can only run for up to 24 hours at a time, but Spot VMs do not have a maximum runtime. Like preemptible VMs, Compute Engine might preempt Spot VMs if it needs to reclaim those resources for other tasks. The probability that Compute Engine stops Spot VMs for a system event is generally low, but might vary from day to day and from zone to zone depending on current conditions. Spot VMs are finite Compute Engine resources, so they might not always be available. Like preemptible VMs, it’s worth noting that Spot VMs can’t live-migrate to become standard VMs while they are running or be set to automatically restart when there is a maintenance event. There are many best practices which can help you get the most out of using Spot VMs. For example, resources for Spot VMs come out of excess and backup Google Cloud capacity. Capacity for Spot VMs is often easier to get for smaller machine types, meaning machine types with fewer resources like vCPU and memory.”

Answer:
Spot VMs do not have a 24-hour runtime limit, unlike preemptible VMs.
Availability of Spot VMs varies based on excess Google Cloud capacity, which can fluctuate daily and by zone. Smaller machine types are typically more available.

238
Q

Question: What are sole-tenant nodes, and what are their primary use cases?

A

Sole-Tenant Nodes (02:37 - 04:10)

Content: “If you have workloads that require physical isolation from other workloads or virtual machines in order to meet compliance requirements, you want to consider sole-tenant nodes. A sole-tenant node is a physical Compute Engine server that is dedicated to hosting VM instances only for your specific project. Use sole-tenant nodes to keep your instances physically separated from instances in other projects, or to group your instances together on the same host hardware, for example if you have a payment processing workload that needs to be isolated to meet compliance requirements. The diagram on the left shows a normal host with multiple VM instances from multiple customers. A sole-tenant node is shown on the right and it also has multiple VM instances, but they all belong to the same project. You can also fill the node with multiple smaller VM instances of varying sizes, including custom machine types and instances with extended memory. Also, if you have existing operating system licenses, you can bring them to Compute Engine using sole-tenant nodes while minimizing the physical core usage with the in-place restart feature. To learn how to create nodes and place your instances on those nodes, see the link section of this video.”

Answer:
Sole-tenant nodes are dedicated physical Compute Engine servers for a single project, providing physical isolation.
They are used for compliance requirements, such as isolating payment processing workloads, and for bringing existing OS licenses to Compute Engine.
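
The provisioning flow in gcloud, sketched with placeholder names and an illustrative node type:

```shell
# 1. Define the node hardware via a node template
gcloud compute sole-tenancy node-templates create my-template \
    --region=us-central1 --node-type=n1-node-96-624

# 2. Create a node group (the dedicated physical servers)
gcloud compute sole-tenancy node-groups create my-group \
    --zone=us-central1-a --node-template=my-template --target-size=1

# 3. Schedule a VM onto the node group
gcloud compute instances create isolated-vm \
    --zone=us-central1-a --node-group=my-group
```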

239
Q

Question: What are the primary security benefits of Shielded VMs and Confidential VMs, and how do they differ?

A

Shielded VMs and Confidential VMs (04:10 - 06:17)

Content: “Another compute option is to create a shielded VM. Shielded VMs offer verifiable integrity to your VM instances, so you can be confident that your instances haven’t been compromised by boot or kernel-level malware or rootkits. Shielded VMs are the first offering in the Shielded Cloud Initiative. The Shielded Cloud Initiative is meant to provide an even more secure foundation for all of Google Cloud by providing verifiable integrity and offering features, like vTPM shielding or sealing, that help prevent data exfiltration. In order to use the shielded VM features, you need to select a shielded image. We’ll learn more about images in the next section. Confidential VMs are a breakthrough technology that allows you to encrypt data in use, while it’s being processed. Google Cloud’s approach to encrypting data in use is a simple, easy-to-use deployment without making any code changes to your applications or having to compromise performance. You can collaborate with anyone, all while preserving the confidentiality of your data. Confidential Virtual Machine (Confidential VM) is a type of N2D Compute Engine VM instance running on hosts based on the second generation of AMD Epyc processors, code-named “Rome”. Using AMD Secure Encrypted Virtualization (SEV), Confidential VM features built-in optimization of both performance and security for enterprise-class high memory workloads, as well as inline memory encryption that doesn’t introduce significant performance penalties on those workloads. The AMD Rome processor family is specifically optimized for compute-heavy workloads, with high memory capacity, high throughput, and support for parallel workloads. In addition, AMD SEV provides for Confidential Computing support. With the confidential execution environments provided by Confidential VM and AMD SEV, Google Cloud keeps customers’ sensitive code and other data encrypted in memory during processing. Google does not have access to the encryption keys. 
You can select the Confidential VM service when creating a new VM using the Google Cloud Console, the Compute Engine API, or the gcloud command-line tool.”

Answer:
Shielded VMs provide verifiable integrity against boot and kernel-level malware.
Confidential VMs encrypt data in use during processing, using AMD SEV, without code changes or performance penalties.
Shielded VMs focus on integrity, and Confidential VMs focus on data encryption while in use.
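
Both options are flags at creation time. A hedged sketch (names, zone, and machine type are illustrative; Shielded VM requires a Shielded image, and Confidential VM requires an N2D machine type):

```shell
# Shielded VM: Secure Boot, vTPM, and integrity monitoring (needs a Shielded image)
gcloud compute instances create shielded-vm \
    --zone=us-central1-a \
    --shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring

# Confidential VM: AMD SEV memory encryption on an N2D machine type
gcloud compute instances create confidential-vm \
    --zone=us-central1-a \
    --machine-type=n2d-standard-4 \
    --confidential-compute \
    --maintenance-policy=TERMINATE
```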

240
Q

Differences Between Physical and Cloud Persistent Disks

Question: What are the major differences between managing physical hard disks and Google Cloud persistent disks?

A

Answer:
Cloud persistent disks abstract away much of the manual management required for physical disks.
Cloud persistent disks offer features such as dynamic resizing, built-in redundancy and snapshotting, and automatic encryption, that are not as easily handled with physical drives.
Physical drives require manual partitioning, redundancy configurations through RAID, and file level encryption.

241
Q

Disk Attachment Limits and Performance Considerations

Question: How many persistent disks can be attached to a VM, and what performance consideration should be taken into account when adding many disks?

A

Answer:
The number of attachable persistent disks depends on the machine type (e.g., up to 128 for standard machine types).
Disk IO throughput shares bandwidth with network traffic, so high disk IO can impact network performance.
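
Attaching an additional persistent disk is a two-step sketch (names, size, type, and zone are placeholders):

```shell
# Create a standalone persistent disk...
gcloud compute disks create data-disk \
    --zone=us-central1-a --size=200GB --type=pd-balanced

# ...then attach it to a VM (format and mount it inside the guest afterwards)
gcloud compute instances attach-disk my-vm \
    --zone=us-central1-a --disk=data-disk
```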

242
Q

RAM Disks and Disk Selection

Question: When would you use a RAM disk, and what are the key factors to consider when choosing a disk type?

A

Answer:
RAM disks are used for very high-performance, temporary data storage.
Consider performance needs (HDD vs. SSD), data redundancy, and data volatility (persistent vs. ephemeral).

243
Q

Encryption and Local SSDs

Question: How does Google Cloud handle disk encryption, and what are the characteristics of local SSDs?

A

Answer:
Google Cloud automatically encrypts data at rest, with options for customer-managed or customer-supplied encryption keys.
Local SSDs are physically attached, ephemeral, offer high IOPS, and data survives a reset but not a stop or terminate.
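
Supplying a customer-managed key at disk creation looks roughly like this (the Cloud KMS key resource path is a placeholder):

```shell
# Create a disk encrypted with a customer-managed Cloud KMS key (CMEK)
gcloud compute disks create encrypted-disk \
    --zone=us-central1-a --size=100GB \
    --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```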

244
Q

Zonal and Regional Persistent Disks & Disk Types

Question: What is the difference between zonal and regional persistent disks, and what are the various types of persistent disks available, and what are their use cases?

A

Answer:
Zonal disks reside in a single zone, while regional disks replicate across two zones for high availability.
Disk types include:
Standard (HDD): For large, sequential I/O workloads.
Performance SSD: For high-performance, low-latency applications.
Balanced SSD: For a cost-effective balance of price and performance.
Extreme SSD: For high-end database workloads, with provisioned IOPS.
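
Creating a zonal versus a regional disk, as a sketch (names, sizes, types, and zones are placeholders):

```shell
# Zonal disk: lives in exactly one zone
gcloud compute disks create zonal-disk \
    --zone=us-central1-a --type=pd-ssd --size=100GB

# Regional disk: synchronously replicated across two zones in the region
gcloud compute disks create regional-disk \
    --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b \
    --type=pd-balanced --size=200GB
```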

249
Q

Question: What is the core function of Google Cloud Identity and Access Management (IAM)?

A

IAM Basics (00:00 - 00:19)

Content: “So what is Identity and Access Management? It is a way of identifying who can do what on which resource. The who can be a person, group, or application. The what refers to specific privileges or actions, and the resource could be any Google Cloud service. For example, I could give you the privilege or role of Compute Viewer. This provides you with read-only access to get and list Compute Engine resources, without being able to read the data stored on them.”

Answer: IAM controls who (persons, groups, applications) has what (privileges, actions) access to which (Google Cloud) resources.
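
Granting the Compute Viewer role from the example can be sketched as follows (the project ID and member are placeholders):

```shell
# Who (a user) + what (roles/compute.viewer) + which resource (the project)
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/compute.viewer"
```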

250
Q

Question: What are the next concepts that will be covered to gain a better understanding of IAM?

A

IAM Objects and Overview (00:19 - 00:37)

Content: “Cloud IAM is composed of different objects as shown on the slide. We are going to cover each of these in this module. To get a better understanding of where these fit in, let’s look at Cloud IAM policies and the Cloud IAM resource hierarchy.”

Answer: Cloud IAM policies and the Cloud IAM resource hierarchy.

251
Q

Question: Describe the structure of the Google Cloud IAM resource hierarchy and the concept of role inheritance.

A

IAM Resource Hierarchy (00:37 - 01:31)

Content: “Google Cloud resources are organized hierarchically as shown in this tree structure. The organization node is the root node in this hierarchy. Folders are the children of the organization. Projects are the children of folders, and individual resources are the children of projects. Each resource has exactly one parent. The organization resource represents your company. Cloud IAM roles granted by this level are inherited by all resources under the organization. The folder resource could represent your department. Cloud IAM roles granted at this level are inherited by all resources that the folder contains. Projects represent a trust boundary within your company. Services within the same project have the same default level of trust.”

Answer:
The hierarchy is: Organization (root) -> Folders -> Projects -> Resources.
Roles granted at higher levels (Organization, Folders) are inherited by all resources below them.
Projects create trust boundaries.
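
The hierarchy levels can be inspected with gcloud, sketched here (the organization and folder IDs are placeholders):

```shell
# Root: organizations visible to you
gcloud organizations list

# Folders under the organization, then projects under a folder
gcloud resource-manager folders list --organization=123456789012
gcloud projects list --filter="parent.id=987654321 parent.type=folder"
```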

252
Q

Question: What is the organization node, and what are the key roles associated with it?

A

Organization Node Basics (00:00 - 01:16)

Content: “Let’s learn more about the organization node. As I mentioned earlier, the organization resource is the root node in the GCP resource hierarchy. This node has many roles, like the organization admin. The organization admin provides a user like Bob, with access to administer all resources belonging to his organization, which is useful for auditing. There is also a project creator role, which allows a user like Alice, to create projects within her organization. I am showing the project creator role here because it can also be applied at the organization level, which would then be inherited by all the projects within the organization. The organization resource is closely associated with a G Suite or Cloud Identity Account. When a user with a G Suite or Cloud Identity Account creates a GCP project an organization resource is automatically provisioned for them. Then Google Cloud communicates its availability to the G Suite or Cloud Identity super admins. These super admin accounts, should be used very carefully because they have a lot of control over your organization and all the resources underneath it. The G Suite or Cloud Identity super administrators and the GCP organization admin are key roles during the setup process and for lifecycle control, for the organization resource.”

Answer:
The organization node is the root of the Google Cloud resource hierarchy.
Key roles include:
Organization Admin: Manages all organization resources.
Project Creator: Creates projects within the organization.
G Suite or Cloud Identity super admins: Manage the root account.

253
Q

Question: What are the distinct responsibilities of G Suite/Cloud Identity super administrators and organization administrators?

A

Responsibilities of Super Admins and Organization Admins (01:16 - 02:12)

Content: “The two roles are generally assigned to different users or groups, although this depends on the organization structure and needs. In the context of GCP organization setup, the G Suite or Cloud Identity super administrator responsibilities are: assign the organization admin role to some users, be a point of contact in case of recovery issues, control the lifecycle of the G Suite or Cloud Identity account and organization resource. The responsibilities of the organization admin role are: define IAM policies, determine the structure of the resource hierarchy, delegate responsibility over critical components such as networking, billing, and resource hierarchy, through IAM roles. Following the principle of least privilege, this role does not include the permission to perform other actions, such as creating folders. To get these permissions, an organization admin must assign additional roles to their account.”

Answer:
Super administrators: Manage the root account and assign organization admin roles.
Organization administrators: Define IAM policies and structure the resource hierarchy.

254
Q

Question: What are folders, and how can they be used to organize resources within an organization?

A

Folders and Their Structure (02:12 - 02:56)

Content: “Let’s talk more about folders, because they can be viewed as sub-organizations within the organization. Folders provide an additional grouping mechanism and isolation boundary between projects. Folders can be used to model different legal entities, departments, and teams within a company. For example, a first level of folders can be used to represent the main departments in your organization, like departments X and Y. Because folders can contain projects and other folders, each folder could then include other subfolders to represent different teams, like teams A and B. Each team folder could contain additional subfolders to represent different applications, like products 1 and 2. Folders allow delegation of administration rights; for example, each head of a department can be granted full ownership of all GCP resources that belong to their department. Similarly, access to resources can be limited by folder, so users in one department can only access and create GCP resources within that folder.”

Answer:
Folders are sub-organizations that group projects and other folders.
They model organizational structures like departments and teams, and allow for delegated administration and access control.

255
Q

Question: What other resource manager roles exist at the organization, folder, and project levels?

A

Resource Manager Roles (02:56 - 03:56)

Content: “Let’s look at some other resource manager roles, while remembering that policies are inherited from top to bottom. The organization node also has a viewer role. It grants view access to all resources within an organization. The folder node has multiple roles that mimic the organizational roles, but are applied to resources within a folder. There is an admin role that provides full control over folders. A creator role, to browse the hierarchy and create folders, and a viewer role, to view folders and projects below a resource. Similarly for projects, there is a creator role that allows a user to create new projects, making that user automatically the owner. There is also a project deleter role that grants deletion privileges for projects.”

Answer:
Organization: Viewer (view access).
Folder: Admin (full control), Creator (create folders), Viewer (view folders/projects).
Project: Creator (create projects, becomes owner), Deleter (delete projects).

256
Q

Question: What are the three main types of roles within Google Cloud IAM?

A

Types of IAM Roles (00:00 - 00:14)

Content: “Let’s talk more about roles, which define the “can do what” part of “who can do what on which resource” in Cloud IAM. There are three types of roles in Cloud IAM: basic roles, predefined roles, and custom roles.”

Answer: Basic roles, predefined roles, and custom roles.

257
Q

Question: What are basic roles, and what are the permissions associated with the owner, editor, and viewer roles?

A

Basic Roles (00:14 - 01:21)

Content: “Basic roles are the original roles that were available in the Cloud console, but they are broad. You apply them to a Google Cloud project, and they affect all resources in that project. In other words, IAM basic roles offer fixed, coarse-grained levels of access. The basic roles are the owner, editor, and viewer roles. The owner has full administrative access. This includes the ability to add and remove members and delete projects. The editor role has modify and delete access. This allows a developer to deploy applications and modify or configure their resources. The viewer role has read-only access. Now, all of these roles are concentric. That is, the owner role includes the permissions of the editor role, and the editor role includes the permissions of the viewer role. There is also a billing administrator role to manage billing and add or remove administrators without the right to change the resources in the project. Each project can have multiple owners, editors, viewers, and billing administrators.”

Answer:
Basic roles are broad, project-level roles.
Owner: Full administrative access.
Editor: Modify and delete access.
Viewer: Read-only access.
Billing Administrator: Manages billing.
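The concentric relationship among the basic roles can be sketched with Python sets; the permission names below are illustrative placeholders, not real IAM permissions:

```python
# Each broader basic role includes all permissions of the narrower one.
viewer = {"get", "list"}                       # read-only access
editor = viewer | {"create", "update", "delete"}  # adds modify/delete
owner = editor | {"setIamPolicy", "deleteProject"}  # adds admin actions

# The roles form proper subsets: viewer < editor < owner.
print(sorted(owner))
```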

258
Q

Question: What are predefined roles, and how do they differ from basic roles?

A

Predefined Roles and Permissions (01:21 - 02:26)

Content: “GCP services offer their own set of predefined roles, and they define where the roles can be applied. This provides members with granular access to specific GCP resources and prevents unwanted access to other resources. These roles are a collection of permissions, because to do any meaningful operations, you usually need more than one permission. For example, as shown here, a group of users is granted the Instance Admin role on project A. This provides the users of that group with all the Compute Engine permissions listed on the right, and even more. Grouping these permissions into a role makes them easier to manage. The permissions themselves are classes and methods in the APIs. For example, compute.instances.start can be broken down into the service, resource, and verb. This means that the permission is used to start a stopped Compute Engine instance. These permissions usually align with the actions of the corresponding REST API.”

Answer:
Predefined roles offer granular access to specific GCP services and resources.
They are collections of permissions, unlike the broad basic roles.
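The service.resource.verb structure of a permission string can be illustrated with a simple split; this is a sketch, not a Google Cloud API call:

```python
def parse_permission(permission):
    """Split an IAM permission string into (service, resource, verb)."""
    service, resource, verb = permission.split(".")
    return service, resource, verb

# compute.instances.start -> Compute Engine service, instances resource,
# and the "start" action (start a stopped VM instance).
print(parse_permission("compute.instances.start"))
# prints ('compute', 'instances', 'start')
```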

259
Q

Question: What are the Compute Admin, Network Admin, and Storage Admin predefined roles in Compute Engine?

A

Content: “Compute Engine has several predefined IAM roles. Let’s look at three of those. The Compute Admin role provides full control of all Compute Engine resources. This includes all permissions that start with compute, which means that every action for any type of Compute Engine resource is permitted. The Network Admin role contains permissions to create, modify, and delete network resources, except for firewall rules and SSL certificates. In other words, the Network Admin role allows read-only access to firewall rules, SSL certificates, and instances, to view their ephemeral IP addresses. The Storage Admin role contains permissions to create, modify, and delete disks, images, and snapshots. For example, if your company has someone who manages project images and you don’t want them to have the editor role in the project, grant their account the Storage Admin role on that project. For the full list of predefined roles for Compute Engine, see the links section in the slides.”

Answer:
Compute Admin: Full control of Compute Engine resources.
Network Admin: Manages network resources (excluding firewalls and SSL).
Storage Admin: Manages disks, images, and snapshots.

260
Q

Question: When and why would you use custom roles in IAM?

A

Custom Roles (03:29 - 04:03)

Content: “Now, roles are meant to represent abstract functions and are customized to align with real jobs. But what if one of those roles does not have enough permissions? Or you need something even finer grained? That’s what custom roles permit. A lot of companies use the least-privilege model, in which each person in your organization is given the minimal amount of privilege needed to do their job. Let’s say you want to define an instance operator role to allow some users to start and stop Compute Engine virtual machines, but not reconfigure them. Custom roles allow you to do that.”

Answer:
Custom roles are used when predefined roles don’t provide the necessary granularity or specific permissions.
They are very useful when following the principle of least privilege.
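The instance-operator example can be sketched as a custom permission set. This is a simplified model, assuming the permission names follow Compute Engine's `compute.instances.*` naming:

```python
# Hypothetical custom "instance operator" role: start/stop/list VMs
# only, following the principle of least privilege.
instance_operator = {
    "compute.instances.start",
    "compute.instances.stop",
    "compute.instances.list",
}

def allowed(role_permissions, permission):
    """A role permits an action only if it contains that permission."""
    return permission in role_permissions

# Operators can start VMs but cannot reconfigure them.
print(allowed(instance_operator, "compute.instances.start"))
print(allowed(instance_operator, "compute.instances.setMachineType"))
```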

261
Q

Question: What are the five different types of members in Google Cloud IAM?

A

Types of IAM Members (00:00 - 01:21)

Content: “Let’s talk more about members, which define the “who” part of “who can do what on which resource.” There are five different types of members: Google Accounts, Service Accounts, Google Groups, Google Workspace domains, and Cloud Identity domains. A Google account represents a developer, an administrator, or any other person who interacts with Google Cloud. Any email address that is associated with a Google account can be an identity, including gmail.com or other domains. New users can sign up for a Google account by going to the Google account signup page, without receiving mail through Gmail. A service account is an account that belongs to your application instead of to an individual end user. When you run code that is hosted on Google Cloud, you specify the account that the code should run as. You can create as many service accounts as needed to represent the different logical components of your application. A Google group is a named collection of Google accounts and service accounts. Every group has a unique email address that is associated with the group. Google groups are a convenient way to apply an access policy to a collection of users. You can grant and change access controls for a whole group at once instead of granting or changing access controls one-at-a-time for individual users or service accounts. A Workspace domain represents a virtual group of all the Google accounts that have been created in an organization’s Workspace account. Workspace domains represent your organization’s internet domain name, such as example.com, and when you add a user to your Workspace domain, a new Google account is created for the user inside this virtual group, such as username@example.com. Google Cloud customers who are not Workspace customers can get these same capabilities through Cloud Identity. 
Cloud Identity lets you manage users and groups using the Google Admin Console, but you do not pay for or receive Workspace’s collaboration products such as Gmail, Docs, Drive, and Calendar. Now it’s important to note that you cannot use IAM to create or manage your users or groups. Instead, you can use Cloud Identity or Workspace to create and manage users.”

Answer: Google Accounts, Service Accounts, Google Groups, Google Workspace domains, and Cloud Identity domains.

262
Q

Question: What are organization policies, and how do Google Cloud Directory Sync and Single Sign-On (SSO) integrate with existing identity systems?

A

Organization Policies, Directory Sync, and Single Sign-On (07:26 - 09:24)

Content: “An organization policy is a configuration of restrictions, defined by configuring a constraint with the desired restrictions for that organization. An organization policy can be applied to the organization node, and all of its folders or projects within that node. Descendants of the targeted resource hierarchy inherit the organization policy that has been applied to their parents. Exceptions to these policies can be made, but only by a user who has the organization policy admin role. What if you already have a different corporate directory? How can you get your users and groups into Google Cloud? Using Google Cloud Directory Sync, your administrators can log in and manage Google Cloud resources using the same usernames and passwords they already use. This tool synchronizes users and groups from your existing Active Directory or LDAP system with the users and groups in your Cloud Identity domain. The synchronization is one-way only; which means that no information in your Active Directory or LDAP map is modified. Google Cloud Directory Sync is designed to run scheduled synchronizations without supervision, after its synchronization rules are set up. Google Cloud also provides single sign-on authentication. If you have your identity system, you can continue using your own system and processes with SSO configured. When user authentication is required, Google will redirect to your system. If the user is authenticated in your system, access to Google Cloud is given; otherwise, the user is prompted to sign in. This allows you to also revoke access to Google Cloud. If your existing authentication system supports SAML2, SSO configuration is as simple as 3 links and a certificate, as shown on this slide. Otherwise, you can use a third-party solution, like ADFS, Ping, or Okta. Also, if you want to use a Google account but are not interested in receiving mail through Gmail, you can still create an account without Gmail.”
Answer:
Organization policies define restrictions that apply to the organization and its descendants.
Google Cloud Directory Sync synchronizes users/groups from existing directories (like Active Directory) to Cloud Identity.
SSO allows users to authenticate with their existing identity system to access Google Cloud.

263
Q

Question: What are IAM deny policies and IAM Conditions, and how do they enhance access control?

A

Deny Policies and IAM Conditions (05:43 - 07:26)

Content: “IAM deny policies let you set guardrails on access to Google Cloud resources. With deny policies, you can define deny rules that prevent certain principals from using certain permissions, regardless of the roles they’re granted. Deny policies are made up of deny rules. Each deny rule specifies a set of principals that are denied permissions, and the permissions that the principals are denied, or unable to use. Optionally, you can define the condition that must be true for the permission to be denied. When a principal is denied a permission, they can’t do anything that requires that permission, regardless of the IAM roles they’ve been granted. This is because IAM always checks relevant deny policies before checking relevant allow policies. IAM Conditions allow you to define and enforce conditional, attribute-based access control for Google Cloud resources. With IAM Conditions, you can choose to grant resource access to identities (members) only if configured conditions are met. For example, this could be done to configure temporary access for users in the event of a production issue or to limit access to resources only for employees making requests from your corporate office. Conditions are specified in the role bindings of a resource’s IAM policy. When a condition exists, the access request is only granted if the condition expression evaluates to true. Each condition expression is defined as a set of logic statements allowing you to specify one or more attributes to check. An organization policy is a configuration of restrictions, defined by configuring a constraint with the desired restrictions for that organization.”
Answer:
Deny policies prevent principals from using specific permissions, overriding allow policies.
IAM Conditions grant access only if specified conditions are met.
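The deny-before-allow evaluation order can be sketched as follows; this is a simplified model of the check, not the real IAM engine:

```python
def check_access(principal, permission, deny_rules, allow_grants):
    """IAM always checks relevant deny policies before allow policies."""
    for denied_principals, denied_permissions in deny_rules:
        if principal in denied_principals and permission in denied_permissions:
            return False  # denied, regardless of roles granted
    return (principal, permission) in allow_grants

# A deny rule strips one permission even though a role grants it.
deny_rules = [({"eve@example.com"}, {"storage.buckets.delete"})]
allow_grants = {
    ("eve@example.com", "storage.buckets.delete"),
    ("eve@example.com", "storage.objects.get"),
}

print(check_access("eve@example.com", "storage.buckets.delete",
                   deny_rules, allow_grants))  # denied by the deny rule
print(check_access("eve@example.com", "storage.objects.get",
                   deny_rules, allow_grants))  # still allowed
```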

265
Q

Question: What is an IAM policy, how does it relate to the resource hierarchy, and how does inheritance work?

A

IAM Policies and Resource Hierarchy (02:14 - 03:08)

Content: “A policy consists of a list of bindings. A binding binds a list of members to a role, where the members can be user accounts, Google groups, Google domains, and service accounts. A role is a named list of permissions defined by IAM. Let’s revisit the IAM resource hierarchy. A policy is a collection of access statements attached to a resource. Each policy contains a set of roles and role members, with resources inheriting policies from their parent. Think of it like this: resource policies are a union of parent and resource, where a less restrictive parent policy will always override a more restrictive resource policy. The IAM policy hierarchy always follows the same path as the Google Cloud resource hierarchy, which means that if you change the resource hierarchy, the policy hierarchy also changes. For example, moving a project into a different organization will update the project’s IAM policy to inherit from the new organization’s IAM policy. Also, child policies cannot restrict access granted at the parent level. For example, if we grant you the Editor role for Department X, and we grant you the Viewer role at the bookshelf project level, you still have the Editor role for that project.”

Answer:
An IAM policy is a collection of bindings that link members to roles.
Policies are attached to resources and follow the resource hierarchy.
Policies are inherited from parent resources to child resources, and the effective policy is the union of both; a less restrictive parent policy overrides a more restrictive child policy, so child policies cannot restrict access granted at the parent level.
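The binding structure and union-style inheritance can be sketched as below; the member name is hypothetical and the policy shape is simplified:

```python
# A policy is a list of bindings; each binding binds members to a role.
parent_policy = [
    {"role": "roles/editor", "members": ["user:you@example.com"]},
]
child_policy = [
    {"role": "roles/viewer", "members": ["user:you@example.com"]},
]

def roles_for(member, *policies):
    """Effective roles are the union across the resource and its ancestors."""
    return {b["role"] for policy in policies for b in policy
            if member in b["members"]}

# The broader editor grant from the parent survives: a child policy
# cannot restrict access granted at the parent level.
print(roles_for("user:you@example.com", parent_policy, child_policy))
```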

266
Q

Question: What is a service account, and what is its primary purpose?

A

Section 1: Service Account Basics (00:00 - 00:58)

Content: “As mentioned earlier, another type of member is a service account. A service account is an account that belongs to your application instead of to an individual end user. This provides an identity for carrying out service-to-service interactions in a project without supplying user credentials. For example, if you write an application that interacts with Google Cloud Storage, it must first authenticate to either the Google Cloud Storage XML API or JSON API. You can enable service accounts and grant read-write access to the account on the instance where you plan to run your application. Then, program the application to obtain credentials from the service account. Your application authenticates seamlessly to the API without embedding any secret keys or credentials in your instance, image, or application code. Service accounts are identified by an email address, like the example shown here.”

Answer: A service account is an account that belongs to an application rather than an individual end user. Its primary purpose is to provide an identity for service-to-service interactions within a project without supplying user credentials or embedding secret keys in the instance, image, or application code.

267
Q

Question: What are the three types of service accounts, and what are their key characteristics?

A

Types of Service Accounts (00:58 - 01:57)

Content: “There are three types of service accounts: user-created or custom, built-in, and Google APIs service accounts. By default, all projects come with the built-in Compute Engine default service account. Apart from the default service account, all projects come with a Google Cloud APIs service account, identifiable by the email: project-number@cloudservices.gserviceaccount.com. This is a service account designed specifically to run internal Google processes on your behalf, and it is automatically granted the Editor role on the project. Alternatively, you can also start an instance with a custom service account. Custom service accounts provide more flexibility than the default service account, but they require more management from you. You can create as many custom service accounts as you need, assign any arbitrary access scopes or IAM roles to them, and assign the service accounts to any virtual machine instance. Let’s talk more about the default Compute Engine service account. As I mentioned, this account is automatically created per project. This account is identifiable by the email project-number-compute@developer.gserviceaccount.com, and it is automatically granted the Editor role on the project. When you start a new instance using gcloud, the default service account is enabled on that instance. You can override this behavior by specifying another service account or by disabling service accounts for the instance.”

Answer:
User-created (custom): Flexible, requires more management.
Built-in (Compute Engine default): Automatically created, granted Editor role.
Google APIs service account: Runs internal Google processes, granted Editor role.

268
Q

Question: What are access scopes, and how do they relate to authorization?

A

Authorization and Access Scopes (02:33 - 03:51)

Content: “Now, authorization is the process of determining what permissions an authenticated identity has on a set of specified resources. Scopes are used to determine whether an authenticated identity is authorized. In the example shown here, Applications A and B contain Authenticated Identities (or service accounts). Let’s assume that both applications want to use a Cloud Storage bucket. They each request access from the Google Authorization server, and in return they receive an access token. Application A receives an access token with read-only scope, so it can only read from the Cloud Storage bucket. Application B, in contrast, receives an access token with read-write scope, so it can read and modify data in the Cloud Storage bucket. Scopes can be customized when you create an instance using the default service account, as shown in this screenshot. These scopes can be changed after an instance is created by stopping it. Access scopes are actually the legacy method of specifying permissions for your VM. Before the existence of IAM roles, access scopes were the only mechanism for granting permissions to service accounts. For user-created service accounts, use IAM roles instead to specify permissions.”

Answer: Access scopes define the level of access an authenticated identity has to resources. They are a legacy way of specifying permissions, and IAM roles are now preferred for user-created service accounts.
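The scope check for Applications A and B can be sketched as below. The two scope URLs are the documented Cloud Storage OAuth scopes; the authorization logic itself is a simplification:

```python
# OAuth access scopes carried by a token bound what an authenticated
# identity may do, independent of the IAM roles it holds.
READ_ONLY = "https://www.googleapis.com/auth/devstorage.read_only"
READ_WRITE = "https://www.googleapis.com/auth/devstorage.read_write"

def can_write(token_scopes):
    """Writing to a bucket requires the read-write scope on the token."""
    return READ_WRITE in token_scopes

print(can_write({READ_ONLY}))    # Application A: read-only token
print(can_write({READ_WRITE}))   # Application B: read-write token
```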

269
Q

Question: What is the Service Account User role, and why is it important to grant it cautiously?

A

Service Account User Role and Examples (03:51 - 05:07)

Content: “Now, roles for service accounts can also be assigned to groups or users. Let’s look at the example shown on this slide. First, you create a service account that has the InstanceAdmin role, which has permissions to create, modify, and delete virtual machine instances and disks. Then you treat this service account as the resource, and decide who can use it by providing users or a group with the Service Account User role. This allows those users to act as that service account to create, modify, and delete virtual machine instances and disks. Users who are Service Account Users for a service account can access all the resources that the service account has access to. Therefore, be cautious when granting the Service Account User role to a user or group. Here is another example. The VMs running component_1 are granted Editor access to project_b using Service Account 1. VMs running component_2 are granted objectViewer access to bucket_1 using an isolated Service Account 2. This way you can scope permissions for VMs without re-creating VMs.”

Answer: The Service Account User role allows users or groups to act as a service account, granting them access to all resources the service account can access. It’s important to grant it cautiously because it grants all of the service accounts permissions to the user or group.

270
Q

Question: What are the two types of service account keys, and what are the responsibilities associated with each?

A

Service Account Authentication and Keys (05:29 - 07:27)

Content: “Now, you might ask, how are service accounts authenticated? There are two types of service account keys. By default, when using service accounts within Google Cloud (for example, from Compute Engine or App Engine) Google automatically manages the keys for service accounts. However, if you want to be able to use service accounts outside of Google Cloud, or want a different rotation period, it is possible to also manually create and manage your own service account keys. All service accounts have Google-managed key-pairs. With Google-managed service account keys, Google stores both the public and private portion of the key, and rotates them regularly. Each public key can be used for signing for a maximum of two weeks. Your private key is always held securely in escrow and is never directly accessible. You may optionally create one or more user-managed key pairs (also known as “external” keys) that can be used from outside of Google Cloud. Google only stores the public portion of a user-managed key. The User is responsible for security of the private key and performing other management operations such as key rotation, whether manually or programmatically. Users can create up to 10 service account keys per service account to facilitate key rotation. User-managed keys can be managed by using the IAM API, the gcloud command-line tool, or the Service Accounts page in the Google Cloud console. Google does not save your user-managed private keys, so if you lose them, Google cannot help you recover them. You are responsible for keeping these keys safe and also responsible for performing key rotation. User-managed keys should be used as a last resort. Consider the other alternatives, such as short-lived service account credentials (tokens), or service account impersonation. The gcloud command line shown on this slide is a fast and easy way to list all of the keys associated with a particular service account.”

Answer:
Google-managed keys: Google handles key management and rotation.
User-managed keys: User is responsible for key security and rotation. User-managed keys should be used as a last resort.

271
Q

Question: What is the primary purpose of the Organization Restrictions feature in Google Cloud?

A

Purpose of Organization Restrictions (00:00 - 00:10)

Content: “Let’s talk now about Organization Restrictions. The Organization Restrictions feature lets you prevent data exfiltration through phishing or insider attacks.”

Answer: To prevent data exfiltration through phishing or insider attacks.

272
Q

Question: How do Organization Restrictions limit access to Google Cloud resources?

A

How Organization Restrictions Work (00:10 - 01:19)

Content: “For managed devices in an organization, the Organization Restrictions feature restricts access only to resources in authorized Google Cloud organizations. There is a need in organizations to restrict access of their employees only to resources in authorized Google Cloud organizations. Google Cloud administrators, who administer Google Cloud, and egress proxy administrators, who configure the egress proxy, work together to set up organization restrictions. The managed device is governed by the organizational policies of a company. Employees of an organization use a managed device to access the organization’s resources. An egress proxy administrator configures the proxy to add organization restrictions headers to any requests originating from a managed device. This proxy configuration prevents users from accessing any Google Cloud resources in non-authorized Google Cloud organizations. The Organization Restrictions feature in Google Cloud inspects all requests for the organization restrictions header, and allows or denies the requests based on the organization being accessed.”

Answer: Organization Restrictions use an egress proxy to add headers to requests from managed devices; Google Cloud inspects these headers and allows or denies each request based on whether the organization being accessed is authorized.
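Building the restriction header an egress proxy would attach can be sketched as below. The header name and payload shape follow Google Cloud's documented format; the organization ID is a placeholder:

```python
import base64
import json

# Payload listing the authorized organizations; "strict" blocks access
# to any organization not in the list (placeholder org ID).
payload = {"resources": ["organizations/123456789"], "options": "strict"}

# The proxy base64url-encodes the JSON payload into the header value.
encoded = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
headers = {"X-Goog-Allowed-Resources": encoded}

print(headers["X-Goog-Allowed-Resources"][:20])
```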

273
Q

Question: What are some examples of how Organization Restrictions can be used in a Google Cloud environment?

A

Use Cases of Organization Restrictions (01:19 - 01:40)

Content: “Organization Restrictions can be used to restrict access to employees in your organization so that employees can access resources only in your Google Cloud organization and not other organizations. They can also be used to allow your employees to read from Cloud Storage resources but restrict employee access only to resources in your Google Cloud organization. Or, allow your employees to access a vendor Google Cloud organization in addition to your Google Cloud organization.”

Answer:
Restrict employee access to resources only within the company’s Google Cloud organization.
Allow employees to read from Cloud Storage but limit access to resources within the company’s organization.
Allow employees to access a vendor’s Google Cloud organization in addition to the company’s own.

274
Q

Question: What are the key best practices related to the resource hierarchy and privilege management?

A

Resource Hierarchy and Least Privilege (00:00 - 00:35)

Content: “Let’s talk about some Cloud IAM best practices to help you apply the concepts you just learned in your day-to-day work. First, leverage and understand the resource hierarchy. Specifically, use projects to group resources that share the same trust boundary. Check the policy granted on each resource and make sure you recognize the inheritance. Because of inheritance, use the principle of least privilege when granting roles. Finally, audit policies using Cloud audit logs and audit memberships of groups using policies.”

Answer:
Understand and leverage the resource hierarchy.
Use projects to group resources with shared trust boundaries.
Apply the principle of least privilege when granting roles.
Regularly audit policies and group memberships.

275
Q

Question: What is another recommended IAM security best practice?

A

Service Account Best Practices (01:18 - 01:50)

Content: “Here are some best practices for using service accounts. As mentioned before, be very careful when granting the Service Account User role because it provides access to all the resources the service account has access to. Also, when you create a service account, give it a display name that clearly identifies its purpose, ideally using an established naming convention. As for keys, establish key rotation policies and methods, and audit keys with the serviceAccount.keys.list method.”

Answer:
Exercise caution when granting the Service Account User role.
Use descriptive display names for service accounts.
Implement key rotation policies and audit keys regularly.

Cloud Identity-Aware Proxy (IAP) (01:50)

Content: “Finally, I recommend using Cloud Identity Aware Proxy or Cloud IAP.”

Answer: Use Cloud Identity Aware Proxy (IAP).
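The key audit mentioned above can be sketched offline. This is a minimal sketch, assuming entries shaped like the `serviceAccounts.keys.list` response (each with a `name` and an RFC 3339 `validAfterTime`) and an assumed 90-day rotation policy, which is a choice, not a Google default:

```python
from datetime import datetime, timedelta, timezone

def keys_needing_rotation(keys, max_age_days=90, now=None):
    """Return the names of keys older than the rotation window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for key in keys:
        # validAfterTime marks when the key became valid (its creation time).
        created = datetime.fromisoformat(key["validAfterTime"].replace("Z", "+00:00"))
        if created < cutoff:
            stale.append(key["name"])
    return stale

sample = [
    {"name": "projects/p/serviceAccounts/sa/keys/old", "validAfterTime": "2020-01-01T00:00:00Z"},
    {"name": "projects/p/serviceAccounts/sa/keys/new", "validAfterTime": "2099-01-01T00:00:00Z"},
]
print(keys_needing_rotation(sample))
```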

275
Q

Question: Why is it recommended to grant roles to groups instead of individuals, and what are some best practices for managing groups?

A

Group Management (00:35 - 01:18)

Content: “Next, I recommend granting roles to groups instead of individuals. This allows you to update group membership instead of changing a Cloud IAM policy. If you do this, make sure to audit membership of groups used in policies and control the ownership of the Google group used in Cloud IAM policies. You can also use multiple groups to get better control. In the example on this slide, there is a network admin group. Some of those members also need a read write role to a Cloud Storage bucket, but others need the read only role. Adding and removing individuals from all three groups controls their total access. Therefore, groups are not only associated with job roles but can exist for the purpose of role assignment.”

Answer:
Granting roles to groups simplifies management by updating group memberships instead of policies.
Best practices: Audit group memberships, control group ownership, and use multiple groups for granular control.
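The multiple-groups pattern from the slide can be sketched as set arithmetic; all group names, members, and role bindings here are illustrative only:

```python
# Role bindings reference groups, and a user's total access is the union of
# roles from every group they belong to (the slide's three-group example).
GROUP_MEMBERS = {
    "network-admins": {"alice", "bob", "carol"},
    "storage-rw": {"alice"},
    "storage-ro": {"bob"},
}
ROLE_BINDINGS = {
    "roles/compute.networkAdmin": ["network-admins"],
    "roles/storage.objectAdmin": ["storage-rw"],
    "roles/storage.objectViewer": ["storage-ro"],
}

def roles_for(user):
    """Adding or removing the user from groups changes this result;
    no IAM policy edit is needed."""
    return {
        role
        for role, groups in ROLE_BINDINGS.items()
        for g in groups
        if user in GROUP_MEMBERS[g]
    }

print(sorted(roles_for("alice")))
```

This is why groups can exist purely for role assignment, not just to mirror job roles.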

276
Q

Question: What is Google Cloud Storage, and what are its key features?

A

Cloud Storage Basics (00:00 - 00:37)

Content: “Cloud Storage is Google Cloud’s object storage service, and it allows world-wide storage and retrieval of any amount of data at any time. You can use Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download. Cloud Storage has a couple of key features: It’s scalable to exabytes of data; The time to first byte is in milliseconds; It has very high availability across all storage classes; And It has a single API across those storage classes.”

Answer: Cloud Storage is Google Cloud’s object storage service for storing and retrieving any amount of data. Key features include scalability, low latency, high availability, and a single API.

276
Q

Question: How is Cloud Storage structured, and what are the different storage classes and location types?

A

Buckets, Objects, and Storage Classes (00:37 - 01:33)

Content: “Some like to think of Cloud Storage as files in a file system but it’s not really a file system. Instead, Cloud Storage is a collection of buckets that you place objects into. You can create directories, so to speak, but really a directory is just another object that points to different objects in the bucket. You’re not going to easily be able to index all of these files like you would in a file system. You just have a specific URL to access objects. Cloud Storage has four storage classes: Standard, Nearline, Coldline and Archive, and each of those storage classes provides 3 location types: A multi-region is a large geographic area, such as the United States, that contains two or more geographic places. Dual-region is a specific pair of regions, such as Finland and the Netherlands. A region is a specific geographic place, such as London. Objects stored in a multi-region or dual-region are geo-redundant.”

Answer: Cloud Storage uses buckets to store objects, not a traditional file system. Storage classes include Standard, Nearline, Coldline, and Archive. Location types are multi-region, dual-region, and region.

277
Q

Question: What are the characteristics and use cases for each of the four Cloud Storage classes?

A

Storage Class Details (01:33 - 04:06)

Content: “Now, let’s go over each of the storage classes: Standard Storage is best for data that is frequently accessed (think of “hot” data) and/or stored for only brief periods of time. This is the most expensive storage class but it has no minimum storage duration and no retrieval cost. When used in a region, Standard Storage is appropriate for storing data in the same location as Google Kubernetes Engine clusters or Compute Engine instances that use the data. Co-locating your resources maximizes the performance for data-intensive computations and can reduce network charges. When used in a dual-region, you still get optimized performance when accessing Google Cloud products that are located in one of the associated regions, but you also get improved availability that comes from storing data in geographically separate locations. When used in multi-region, Standard Storage is appropriate for storing data that is accessed around the world, such as serving website content, streaming videos, executing interactive workloads, or serving data supporting mobile and gaming applications. Nearline Storage is a low-cost, highly durable storage service for storing infrequently accessed data like data backup, long-tail multimedia content, and data archiving. Nearline Storage is a better choice than Standard Storage in scenarios where slightly lower availability, a 30-day minimum storage duration, and costs for data access are acceptable trade-offs for lowered at-rest storage costs. Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs. Archive Storage is the lowest-cost, highly durable storage service for data archiving, online backup, and disaster recovery. 
Unlike the so-to-speak “coldest” storage services offered by other Cloud providers, your data is available within milliseconds, not hours or days. Archive Storage also has higher costs for data access and operations, as well as a 365-day minimum storage duration. Archive Storage is the best choice for data that you plan to access less than once a year.”

Answer:
Standard: Frequently accessed data, no minimum duration, high cost.
Nearline: Infrequently accessed data, 30-day minimum, lower cost.
Coldline: Very infrequently accessed data, 90-day minimum, very low cost.
Archive: Rarely accessed data, 365-day minimum, lowest cost.
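The four classes above can be condensed into a toy chooser. The thresholds mirror the access-frequency rules of thumb and minimum storage durations quoted in the card; this is a study aid, not pricing advice:

```python
def pick_storage_class(accesses_per_year):
    """Map expected access frequency to a Cloud Storage class."""
    if accesses_per_year > 12:    # more than about once a month: "hot" data
        return "STANDARD"
    if accesses_per_year > 4:     # up to about once a month (30-day minimum)
        return "NEARLINE"
    if accesses_per_year >= 1:    # up to about once a quarter (90-day minimum)
        return "COLDLINE"
    return "ARCHIVE"              # less than once a year (365-day minimum)

print(pick_storage_class(52), pick_storage_class(0.5))
```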

278
Q

Question: What is the difference between durability and availability in Cloud Storage, and how can data be accessed?

A

Durability, Availability, and Access Methods (04:06 - 05:18)

Content: “Let’s focus on durability and availability. All of these storage classes have 11 nines of durability, but what does that mean? Does that mean you have access to your files at all times? No, what that means is you won’t lose data. You may not be able to access the data, which is like going to your bank and saying well my money is in there, it’s 11 nines durable. But when the bank is closed we don’t have access to it, which is the availability that differs between storage classes and the location type. Cloud Storage is broken down into a couple of different items here. First of all, there are buckets which are required to have a globally unique name and cannot be nested. The data that you put into those buckets are objects that inherit the storage class of the bucket and those objects could be text files, doc files, video files, etc. There is no minimum size to those objects and you can scale this as much as you want as long as your quota allows it. To access the data, you can use the gcloud storage command, or either the JSON or XML APIs.”

Answer: Durability means data won’t be lost; availability means data can be accessed. Data can be accessed via gcloud storage commands or JSON/XML APIs.

279
Q

Question: How can storage classes be managed, and what access control options are available?

A

Object Storage Class Management and Access Control (05:18 - 06:57)

Content: “When you upload an object to a bucket, the object is assigned the bucket’s storage class, unless you specify a storage class for the object. You can change the default storage class of a bucket but you can’t change the location type from regional to multi-region/dual-region or vice versa. You can also change the storage class of an object that already exists in your bucket without moving the object to a different bucket or changing the URL to the object. Setting a per-object storage class is useful, for example, if you have objects in your bucket that you want to keep, but that you don’t expect to access frequently. In this case, you can minimize costs by changing the storage class of those specific objects to Nearline, Coldline or Archive Storage. In order to help manage the classes of objects in your bucket, Cloud Storage offers Object Lifecycle Management. More on that later. Let’s look at access control for your objects and buckets that are part of a project. We can use IAM for the project to control which individual user or service account can see the bucket, list the objects in the bucket, view the names of the objects in the bucket, or create new buckets. For most purposes, IAM is sufficient, and roles are inherited from project to bucket to object. Access control lists or ACLs offer finer control. For even more detailed control, signed URLs provide a cryptographic key that gives time-limited access to a bucket or object. Finally, a signed policy document further refines the control by determining what kind of file can be uploaded by someone with a signed URL.”

Answer: Storage classes can be set per object or at the bucket level. Access control options include IAM, ACLs, signed URLs, and signed policy documents.

280
Q

Question: What is Object Versioning, and how does it work?

A

Object Versioning (00:23 - 02:04)

Content: “Cloud Storage also provides Object Lifecycle Management which lets you automatically delete or archive objects. Another feature is object versioning which allows you to maintain multiple versions of objects in your bucket. You are charged for the versions as if they were multiple files, which is something to keep in mind. Cloud Storage also offers directory synchronization so that you can sync a VM directory with a bucket. Object change notifications can be configured for Cloud Storage using Pub/Sub. When enabled, Autoclass manages all aspects of storage classes for a bucket. We will discuss this later. In Cloud Storage, objects are immutable, which means that an uploaded object cannot change throughout its storage lifetime. To support the retrieval of objects that are deleted or overwritten, Cloud Storage offers the Object Versioning feature. Object Versioning can be enabled for a bucket. Once enabled, Cloud Storage creates an archived version of an object each time the live version of the object is overwritten or deleted. The archived version retains the name of the object but is uniquely identified by a generation number as illustrated on this slide by g1. When Object Versioning is enabled, you can list archived versions of an object, restore the live version of an object to an older state, or permanently delete an archived version, as needed. You can turn versioning on or off for a bucket at any time. Turning versioning off leaves existing object versions in place and causes the bucket to stop accumulating new archived object versions. Google recommends that you use Soft Delete instead of Object Versioning to protect against permanent data loss from accidental or malicious deletions. A link to the Object Versioning documentation can be found in the Course Resources for this module.”

Answer: Object Versioning allows storing multiple versions of an object. When enabled, each overwrite or deletion creates an archived version, identified by a generation number. Google recommends Soft Delete instead.
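The generation-number behavior described above can be modeled with a toy in-memory bucket (a sketch of the semantics, not of the real API):

```python
class VersionedBucket:
    """Toy Object Versioning: overwrites and deletes archive the old live
    version under a generation number, mirroring the slide's "g1" example."""

    def __init__(self):
        self.live = {}       # name -> (generation, data)
        self.archived = {}   # name -> list of (generation, data)
        self._gen = 0

    def upload(self, name, data):
        self._gen += 1
        if name in self.live:
            # Overwrite: the old live version becomes an archived version.
            self.archived.setdefault(name, []).append(self.live[name])
        self.live[name] = (self._gen, data)

    def delete(self, name):
        # Delete: the live version is archived, not lost.
        self.archived.setdefault(name, []).append(self.live.pop(name))

b = VersionedBucket()
b.upload("report.txt", "v1")
b.upload("report.txt", "v2")
b.delete("report.txt")
print(b.archived["report.txt"])  # → [(1, 'v1'), (2, 'v2')]
```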

281
Q

Question: What is Soft Delete, and how does it protect data?

A

Soft Delete (02:04 - 03:02)

Content: “Soft Delete provides default bucket-level protection for your data from accidental or malicious deletion by preserving all recently deleted objects for a specified period of time. The objects stored in Cloud Storage buckets are immutable. If you overwrite or change the data of an object, Cloud Storage deletes its earlier version and replaces it with a new one. Soft Delete retains all these deleted objects, whether from a delete command or because of an overwrite, essentially capturing all changes made to bucket data for the configured retention duration. When you create a Cloud Storage bucket, the Soft Delete feature is enabled by default with a retention duration of seven days. During the retention duration, you can restore deleted objects, but after the duration ends, Cloud Storage permanently deletes the objects. By updating the bucket’s configuration, you can increase the retention duration to 90 days or disable it by setting the retention duration to 0. A link to the Soft Delete documentation can be found in the Course Resources for this module.”

Answer: Soft Delete retains deleted objects for a specified period (default 7 days, up to 90), allowing for restoration. It protects against accidental or malicious deletions.
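The retention window works like this sketch (default of 7 days, configurable up to 90, per the card):

```python
from datetime import datetime, timedelta

# A deleted object can be restored only while the soft-delete retention
# duration has not elapsed; after that, deletion is permanent.
RETENTION = timedelta(days=7)

def can_restore(deleted_at, now, retention=RETENTION):
    return now < deleted_at + retention

deleted = datetime(2024, 6, 1)
print(can_restore(deleted, datetime(2024, 6, 5)))   # within the window
print(can_restore(deleted, datetime(2024, 6, 20)))  # window elapsed
```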

282
Q

Question: What is Object Lifecycle Management, and what are some of its use cases?

A

Object Lifecycle Management (03:02 - 04:10)

Content: “To support common use cases like setting a Time to Live for objects, archiving older versions of objects, or “downgrading” storage classes of objects to help manage costs, Cloud Storage offers Object Lifecycle Management. You can assign a lifecycle management configuration to a bucket. The configuration is a set of rules that apply to all the objects in the bucket. So when an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. Here are some example use cases: First, downgrade the storage class of objects older than a year to Coldline Storage. Second, delete objects created before a specific date. For example, January 1, 2017. Or third, keep only the 3 most recent versions of each object in a bucket with versioning enabled. Object inspection occurs in asynchronous batches, so rules may not be applied immediately. Also, updates to your lifecycle configuration may take up to 24 hours to go into effect. This means that when you change your lifecycle configuration, Object Lifecycle Management may still perform actions based on the old configuration for up to 24 hours. So keep that in mind. A link to the Object Lifecycle Management documentation can be found in the Course Resources for this module.”

Answer: Object Lifecycle Management automates actions on objects based on defined rules, such as downgrading storage classes, deleting old objects, or keeping only recent versions.
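The three example use cases above map to a lifecycle configuration like the following. To the best of my knowledge this matches the JSON shape accepted by `gcloud storage buckets update --lifecycle-file=FILE`, but verify the action and condition names against the current documentation:

```python
import json

lifecycle = {
    "rule": [
        {   # 1. Downgrade objects older than a year to Coldline Storage.
            "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
            "condition": {"age": 365},
        },
        {   # 2. Delete objects created before January 1, 2017.
            "action": {"type": "Delete"},
            "condition": {"createdBefore": "2017-01-01"},
        },
        {   # 3. Keep only the 3 most recent versions of each object
            #    (delete archived versions with 3 or more newer versions).
            "action": {"type": "Delete"},
            "condition": {"isLive": False, "numNewerVersions": 3},
        },
    ]
}
print(json.dumps(lifecycle, indent=2))
```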

283
Q

Question: What is Object Retention Lock, and how does it help with compliance?

A

Object Retention Lock (04:10 - 04:56)

Content: “The Object Retention Lock feature lets you set retention configuration on objects within Cloud Storage buckets that have enabled the feature. A retention configuration governs how long the object must be retained and has the option to permanently prevent the retention time from being reduced or removed. This helps you meet data retention regulatory and compliance requirements, such as those associated with FINRA, SEC, and CFTC. This also helps provide Google Cloud immutable storage solutions with leading enterprise backup software vendor partners. A link to the Object Retention Lock documentation can be found in the Course Resources for this module.”

Answer: Object Retention Lock sets a retention period on objects, with an option for permanent lock, to meet regulatory and compliance requirements.

284
Q

Question: What services are available for transferring large amounts of data to Cloud Storage?

A

Data Transfer Services (04:56 - 05:56)

Content: “The Cloud Console allows you to upload individual files to your bucket. But what if you have to upload terabytes or even petabytes of data? There are three services that address this: Transfer Appliance, Storage Transfer Service, and Offline Media Import. Transfer Appliance is a hardware appliance you can use to securely migrate large volumes of data (from hundreds of terabytes up to 1 petabyte) to Google Cloud without disrupting business operations. The images on this slide are transfer appliances. The Storage Transfer Service enables high-performance imports of online data. That data source can be another Cloud Storage bucket, an Amazon S3 bucket, or an HTTP/HTTPS location. Finally, Offline Media Import is a third party service where physical media (such as storage arrays, hard disk drives, tapes, and USB flash drives) is sent to a provider who uploads the data. For more information on these three services, refer to the Course Resources.”

Answer: Transfer Appliance (hardware appliance), Storage Transfer Service (online data transfer), and Offline Media Import (transfer via physical media)

285
Q

Question: When should Cloud Spanner be considered over Cloud SQL?

A

Introduction to Cloud Spanner (00:00-00:16)

Content: “If Cloud SQL does not fit your requirements because you need horizontal scalability, consider using Cloud Spanner. Cloud Spanner is a service built for the cloud specifically to combine the benefits of relational database structure with non-relational horizontal scale.”

Answer: Cloud Spanner should be considered when horizontal scalability is a primary requirement that Cloud SQL cannot fulfill. It is designed to merge the advantages of relational database structure with non-relational horizontal scaling.

286
Q

Question: What are the key capabilities and features of Cloud Spanner?

A

Cloud Spanner’s Capabilities (00:16-00:29)

Content: “This service can provide petabytes of capacity and offers transactional consistency at global scale, schemas, SQL, and automatic synchronous replication for high availability.”

Answer: Cloud Spanner offers petabyte-scale capacity, global transactional consistency, schema support, SQL compatibility, and automatic synchronous replication for high availability.

287
Q

Question: How does Cloud Spanner combine features of relational and non-relational databases?

A

Comparison with Relational and Non-Relational Databases (00:52-01:19)

Content: “Let’s compare Cloud Spanner with both relational and non-relational databases. Like a relational database, Cloud Spanner has schema, SQL, and strong consistency. Also, like a non-relational database, Cloud Spanner offers high availability, horizontal scalability, and configurable replication. As mentioned, Cloud Spanner offers the best of the relational and non-relational worlds.”

Answer: Cloud Spanner provides relational database features like schema, SQL, and strong consistency, while also offering non-relational database benefits such as high availability, horizontal scalability, and configurable replication.

288
Q

Question: What is the architecture of Cloud Spanner, and how does it ensure high availability and consistency?

A

Cloud Spanner Architecture (01:30-02:03)

Content: “To better understand how all of it works, let’s look at the architecture of Cloud Spanner. A Cloud Spanner instance replicates data in N cloud zones, which can be within one region or across several regions. The database placement is configurable, meaning you can choose which region to put your database in. This architecture allows for high availability and global placement. The replication of data will be synchronized across zones using Google’s global fiber network. Using atomic clocks ensures atomicity whenever you are updating your data.”

Answer: Cloud Spanner replicates data across Google Cloud zones, either within a region or across multiple regions. Database placement is configurable. High availability and global placement are enabled through this architecture. Data replication is synchronized across zones using Google’s global fiber network, and atomic clocks ensure strong consistency.

289
Q

Question: When is Cloud Spanner the appropriate choice in a database selection decision tree?

A

Decision Tree for Cloud Spanner (02:13-02:37)

Content: “If you have outgrown any relational database, are sharding your databases for throughput, need transactional consistency, global data and strong consistency, or just want to consolidate your database, consider using Cloud Spanner. If you don’t need any of these nor full relational capabilities, consider a NoSQL service, such as Cloud Firestore, which we will cover next.”

Answer: Cloud Spanner is the appropriate choice when you have outgrown traditional relational databases, require high throughput, performance, global transactional consistency, or need to consolidate databases. If these needs are absent, and full relational capabilities are not required, a NoSQL service like Cloud Firestore might be more suitable.
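The decision tree can be summarized as a function. The inputs and their ordering are a simplification of the transcript, not an official flowchart:

```python
def pick_database(needs_horizontal_scale, needs_relational, needs_transactions):
    """Condensed version of the card's decision tree."""
    if needs_relational and needs_horizontal_scale:
        return "Cloud Spanner"
    if needs_relational:
        return "Cloud SQL"
    if needs_transactions:
        return "Cloud Firestore"   # NoSQL, but supports ACID transactions
    return "Cloud Bigtable"        # NoSQL, no transactional consistency needed

print(pick_database(True, True, True))
```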

290
Q

Question: Where can information about migrating from MySQL to Cloud Spanner be found?

A

Migration from MySQL to Cloud Spanner (02:37-02:44)

Content: “If you’re now convinced that using Cloud Spanner as a managed service is better than using or re-implementing your existing MySQL solution, see the link section for a solution on how to migrate from MySQL to Cloud Spanner.”

Answer: Information on migrating from MySQL to Cloud Spanner is available in the link section of the video.

291
Q

Question: What are the key architectural features of AlloyDB, and what administrative tasks does it automate?

A

AlloyDB’s Architecture and Automation (00:16-00:35)

Content: “AlloyDB pairs a Google-built database engine with a cloud-based, multi-node architecture to deliver enterprise-grade performance, reliability, and availability. AlloyDB automates administrative tasks, such as backups, replication, patching, and capacity management.”

Answer: AlloyDB combines a Google-built database engine with a cloud-based, multi-node architecture for enterprise-grade performance. It automates tasks like backups, replication, patching, and capacity management.

292
Q

Question: How does AlloyDB utilize machine learning and adaptive algorithms?

A

Machine Learning and Adaptive Algorithms (00:35-00:47)

Content: “AlloyDB also uses adaptive algorithms and machine learning for PostgreSQL vacuum management, storage and memory management, data tiering, and analytics acceleration.”

Answer: AlloyDB uses machine learning and adaptive algorithms for PostgreSQL vacuum management, storage and memory management, data tiering, and analytics acceleration.

293
Q

Question: What is the transactional performance of AlloyDB, and what types of enterprise workloads is it suitable for?

A

Transactional Performance and Suitability (00:47-01:06)

Content: “AlloyDB provides fast transactional processing, more than 4 times faster than standard PostgreSQL for transactional workloads. It’s suitable for demanding enterprise workloads, including workloads that require high transaction throughput, large data sizes, or multiple read replicas.”

Answer: AlloyDB offers transactional processing more than 4 times faster than standard PostgreSQL. It’s suitable for demanding enterprise workloads with high transaction throughput, large data sizes, and multiple read replicas.

294
Q

Question: What are the high availability and analytical performance characteristics of AlloyDB?

A

High Availability and Analytical Performance (01:06-01:24)

Content: “AlloyDB provides high availability and a 99.99% uptime SLA, inclusive of maintenance. AlloyDB also provides real-time business insights and is up to 100 times faster than standard PostgreSQL for analytical queries.”

Answer: AlloyDB provides high availability with a 99.99% uptime SLA and offers analytical query performance up to 100 times faster than standard PostgreSQL.

295
Q

Question: How does AlloyDB integrate with Google’s AI platform?

A

Integration with Vertex AI (01:24-01:30)

Content: “Built-in integration with Vertex AI, Google’s artificial intelligence platform, lets you call machine learning models.”

Answer: AlloyDB integrates with Vertex AI, allowing users to call machine learning models directly from the database.

296
Q

Question: What is Cloud Firestore, and what are its primary use cases?

A

Section 1: Introduction to Cloud Firestore (00:00-00:22)

Content: “If you are looking for a highly scalable NoSQL database for your applications, consider using Cloud Firestore. Cloud Firestore is a fast, fully managed, serverless, cloud-native, NoSQL document database that simplifies storing, syncing, and querying data for your mobile, web, and IoT apps at global scale.”

Answer: Cloud Firestore is a fast, fully managed, serverless, cloud-native NoSQL document database designed to simplify storing, syncing, and querying data for mobile, web, and IoT applications at a global scale.

297
Q

Question: What are the key features that make Cloud Firestore a robust NoSQL database?

A

Key Features of Cloud Firestore (00:22-00:53)

Content: “Its client libraries provide live synchronization and offline support, and its security features and integrations with Firebase and GCP accelerate building truly serverless apps. Cloud Firestore also supports ACID transactions, so if any of the operations in the transaction fail and cannot be retried, the whole transaction will fail. Also, with automatic multi-region replication and strong consistency, your data is safe and available even when disasters strike.”

Answer: Key features include:
Live synchronization and offline support via client libraries.
Security features and integrations with Firebase and GCP.
ACID transaction support.
Automatic multi-region replication and strong consistency.

298
Q

Question: How does Cloud Firestore improve upon Cloud Datastore, and what are its query capabilities?

A

Query Capabilities and Evolution from Cloud Datastore (00:53-01:23)

Content: “Cloud Firestore even allows you to run sophisticated queries against your NoSQL data without any degradation in performance. This gives you more flexibility in the way you structure your data. Cloud Firestore is actually the next generation of Cloud Datastore. Cloud Firestore can operate in Datastore mode, making it backwards compatible with Cloud Datastore. By creating a Cloud Firestore database in Datastore mode, you can access Cloud Firestore’s improved storage layer while keeping Cloud Datastore system behavior. This removes the following Cloud Datastore limitations.”

Answer: Cloud Firestore improves upon Cloud Datastore by providing:
Sophisticated query capabilities without performance degradation.
A new, improved storage layer.
Backwards compatibility through Datastore mode.
Removal of Cloud Datastore limitations, such as eventual consistency.

299
Q

Question: What are the differences between Cloud Firestore’s Datastore mode and native mode?

A

Cloud Firestore Modes (01:23-02:12)

Content: “This removes the following Cloud Datastore limitations. Queries are no longer eventually consistent; instead, they are all strongly consistent. Transactions are no longer limited to 25 entity groups. Writes to an entity group are no longer limited to 1 per second. Cloud Firestore in native mode introduces new features such as a new, strongly consistent storage layer, a collection and document data model, real-time updates, and mobile and web client libraries. Cloud Firestore is backward compatible with Cloud Datastore, but the new data model, real-time updates, and mobile and web client library features are not. To access all of the new Cloud Firestore features, you must use Cloud Firestore in native mode.”

Answer:
Datastore mode: Provides backward compatibility with Cloud Datastore, using the improved storage layer while maintaining Cloud Datastore behavior.

Native mode: Introduces new features like a collection and document data model, real-time updates, and mobile/web client libraries.
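The native-mode collection/document model can be sketched with plain dictionaries: collections hold documents, and a document holds fields plus optional subcollections. All names here are invented:

```python
db = {
    "users": {                      # collection
        "alice": {                  # document
            "fields": {"plan": "pro", "region": "us"},
            "subcollections": {
                "orders": {         # subcollection under the document
                    "order-1": {"fields": {"total": 42}, "subcollections": {}},
                },
            },
        },
    },
}

def get_field(root, path, field):
    """Walk a path like ['users', 'alice', 'orders', 'order-1'] that
    alternates collection / document, then read one field."""
    node = {"subcollections": root}
    for coll, doc in zip(path[0::2], path[1::2]):
        node = node["subcollections"][coll][doc]
    return node["fields"][field]

print(get_field(db, ["users", "alice", "orders", "order-1"], "total"))  # → 42
```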

300
Q

Question: What are the recommended usage guidelines for Cloud Firestore modes, and what is its compatibility with Cloud Datastore?

A

Usage Guidelines and Compatibility (02:12-02:36)

Content: “A general guideline is to use Cloud Firestore in Datastore mode for new server projects and native mode for new mobile and web apps. As the next generation of Cloud Datastore, Cloud Firestore is compatible with all Cloud Datastore, APIs and client libraries. Existing Cloud Datastore users will be live upgraded to Cloud Firestore automatically at a future date. For more information, see the link section of this video.”

Answer:
Datastore mode is recommended for new server projects.
Native mode is recommended for new mobile and web applications.
Cloud Firestore is compatible with all Cloud Datastore APIs and client libraries.

301
Q

Question: When is Cloud Firestore the right choice, and when should Cloud Bigtable be considered?

A

Decision Tree and Cloud Bigtable (02:36-03:04)

Content: “To summarize, let’s explore this decision tree to help you determine whether Cloud Firestore is the right storage service for your data. If your schema might change and you need an adaptable database, you need to scale to zero, or you want low maintenance overhead scaling up to terabytes, consider using Cloud Firestore. Also, if you don’t require transactional consistency, you might want to consider Cloud Bigtable, depending on the cost or size. I will cover Cloud Bigtable next.”

Answer:
Cloud Firestore is suitable when you need an adaptable database with a flexible schema, scale-to-zero capabilities, and low maintenance overhead for terabyte-scale data.
Cloud Bigtable should be considered when transactional consistency is not a requirement.

302
Q

Question: What is Cloud Bigtable, and what are its primary characteristics?

A

Introduction to Cloud Bigtable (00:00-00:17)

Content: “If you don’t require transactional consistency, you might want to consider Cloud Bigtable. Cloud Bigtable is a fully managed NoSQL database with petabyte-scale and very low latency. It seamlessly scales for throughput and it learns to adjust to specific access patterns.”

Answer: Cloud Bigtable is a fully managed NoSQL database designed for petabyte-scale data with very low latency. It offers seamless scaling for throughput and adapts to access patterns.

303
Q

Question: What Google services utilize Cloud Bigtable, and what are its common use cases?

A

Cloud Bigtable’s Origins and Use Cases (00:17-00:39)

Content: “Cloud Bigtable is actually the same database that powers many of Google’s core services, including Search, Analytics, Maps, and Gmail. Cloud Bigtable is a great choice for both operational and analytical applications, including IoT, user analytics, and financial data analysis, because it supports high read and write throughput at low latency.”

Answer: Google’s core services like Search, Analytics, Maps, and Gmail use Cloud Bigtable. Common use cases include IoT, user analytics, and financial data analysis due to its high read/write throughput and low latency.

304
Q

Question: How does Cloud Bigtable integrate with other big data tools, and what API does it support?

A

Integration and API Compatibility (00:39-01:04)

Content: “It’s also a great storage engine for machine learning applications. Cloud Bigtable integrates easily with popular big data tools like Hadoop, Cloud Dataflow, and Cloud Dataproc. Plus, Cloud Bigtable supports the open source industry standard HBase API, which makes it easy for your development teams to get started. Cloud Dataflow and Cloud Dataproc are covered later in the course series. For more information on the HBase API, see the links section of this video.”

Answer: Cloud Bigtable integrates with Hadoop, Cloud Dataflow, and Cloud Dataproc. It supports the open-source HBase API.

305
Q

Question: How is data structured in Cloud Bigtable?

A

Data Model (01:10-01:56)

Content: “Cloud Bigtable stores data in massively scalable tables, each of which is a sorted key/value map. The table is composed of rows, each of which typically describes a single entity, and columns, which contain individual values for each row. Each row is indexed by a single row key, and columns that are related to one another are typically grouped together into a column family. Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family. Each row/column intersection can contain multiple cells, or versions, at different timestamps, providing a record of how the stored data has been altered over time.”
Answer: Data is stored in massively scalable tables as sorted key/value maps, composed of rows (indexed by row keys) and columns (grouped into column families). Each column is defined by a column family and qualifier. Row/column intersections can have multiple cells/versions with timestamps.
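
The data model described above can be sketched as a small in-memory structure (a toy illustration only, not the Cloud Bigtable client library): rows indexed by a row key, columns addressed as family:qualifier, and multiple timestamped cell versions per row/column intersection.

```python
import time
from collections import defaultdict

class SparseTable:
    """Toy illustration of Bigtable's data model: a sorted key/value map
    where each row/column intersection holds timestamped cell versions."""

    def __init__(self):
        # row_key -> "family:qualifier" -> list of (timestamp, value) cells
        self._rows = defaultdict(lambda: defaultdict(list))

    def write(self, row_key, family, qualifier, value, ts=None):
        column = f"{family}:{qualifier}"
        self._rows[row_key][column].append((ts or time.time(), value))

    def read_latest(self, row_key, family, qualifier):
        cells = self._rows[row_key][f"{family}:{qualifier}"]
        return max(cells)[1] if cells else None  # newest timestamp wins

    def scan(self):
        # Rows come back in sorted row-key order, as in Bigtable.
        return sorted(self._rows)

table = SparseTable()
table.write("gwashington", "follows", "jadams", 1, ts=1)
table.write("gwashington", "follows", "jadams", 0, ts=2)  # newer version
print(table.read_latest("gwashington", "follows", "jadams"))  # latest cell: 0
print(table.scan())
```

Note how writing the same row/column twice keeps both cells; reads pick the newest version, which mirrors the "multiple cells at different timestamps" behavior in the quote.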

306
Q

Question: What is data sparsity in Cloud Bigtable, and how is it utilized in the given example?

A

Data Sparsity and Example (01:56-02:36)

Content: “Cloud Bigtable tables are sparse; if a cell does not contain any data, it does not take up any space. The example shown here is for a hypothetical social network for United States presidents, where each president can follow posts from other presidents. Let me highlight some things: * The table contains one column family, the follows family. This family contains multiple column qualifiers. * Column qualifiers are used as data. This design choice takes advantage of the sparseness of Cloud Bigtable tables, and the fact that new column qualifiers can be added as your data changes. * The username is used as the row key. Assuming usernames are evenly spread across the alphabet, data access will be reasonably uniform across the entire table.”

Answer: Cloud Bigtable tables are sparse, meaning empty cells don’t consume storage. In the example, column qualifiers are used as data, leveraging sparsity and the ability to add new qualifiers dynamically. Usernames are used as row keys for uniform data access.

307
Q

Question: How is Cloud Bigtable’s architecture designed for performance and scalability?

A

Architecture and Performance (02:45-03:53)

Content: “This diagram shows a simplified version of Cloud Bigtable’s overall architecture. It illustrates that processing, which is done through a front-end server pool and nodes, is handled separately from the storage. A Cloud Bigtable table is sharded into blocks of contiguous rows, called tablets, to help balance the workload of queries. Tablets are similar to HBase regions, for those of you who have used the HBase API. Tablets are stored on Colossus, which is Google’s file system, in SSTable format. An SSTable provides a persistent, ordered immutable map from keys to values, where both keys and values are arbitrary byte strings. As I mentioned earlier, Cloud Bigtable learns to adjust to specific access patterns. If a certain Bigtable node is frequently accessing a certain subset of data… … Cloud Bigtable will update the indexes so that other nodes can distribute that workload evenly, as shown here. That throughput scales linearly, so for every single node that you do add, you’re going to see a linear scale of throughput performance, up to hundreds of nodes.”
Answer: Cloud Bigtable separates processing from storage. Tables are sharded into tablets, stored on Colossus in SSTable format. It adjusts to access patterns by updating indexes. Throughput scales linearly with added nodes.

308
Q

Question: When is Cloud Bigtable the appropriate choice, and how does it compare to Firestore?

A

Summary and Comparison with Firestore (03:53-04:28)

Content: “In summary, if you need to store more than 1 TB of structured data, have very high volume of writes, need read/write latency of less than 10 milliseconds along with strong consistency, or need a storage service that is compatible with the HBase API, consider using Cloud Bigtable. If you don’t need any of these and are looking for a storage service that scales down well, consider using Firestore. Speaking of scaling, the smallest Cloud Bigtable cluster you can create has three nodes and can handle 30,000 operations per second. Remember that you pay for those nodes while they are operational, whether your application is using them or not.”

Answer: Cloud Bigtable is suitable for storing >1TB of structured data, high write volumes, low latency, and HBase API compatibility. Firestore is better for services that need to scale down well. Cloud Bigtable has a minimum of 3 nodes, and those nodes are billed whether they are being used or not.

309
Q

Question: How does the Resource Manager relate to Cloud IAM, and how are IAM policies inherited?

A

Resource Manager and IAM (00:00-00:42)

Content: “The resource manager lets you hierarchically manage resources by project, folder, and organization. This should sound familiar because we covered it in the Cloud IAM module. Let me refresh your memory: Policies contain a set of roles and members, and policies are set on resources. These resources inherit policies from their parent, as we can see on the left. Therefore, resource policies are a union of parent and resource if an IAM allow policy is associated. However, if an IAM deny policy is associated with the resource, then the policy can prevent certain principals from using certain permissions, regardless of the roles they’re granted.”

Answer: The Resource Manager provides a hierarchical structure for managing resources, and Cloud IAM policies are applied at each level (organization, folder, project). IAM policies are inherited from parent resources to child resources. Allow policies are a union of the parent and resource policies. Deny policies override any allow policy.

310
Q

Question: How does billing work within the Resource Manager hierarchy, and what is the relationship between projects and billing accounts?

A

Billing and Resource Consumption (00:42-01:10)

Content: “Although IAM policies are inherited top-to-bottom, billing is accumulated from the bottom up, as we can see on the right. Resource consumption is measured in quantities, like rate of use or time, number of items, or feature use. Because a resource belongs to only one project, a project accumulates the consumption of all its resources. Each project is associated with one billing account, which means that an organization contains all billing accounts.”

Answer: While IAM policies are inherited top-down, billing accumulates bottom-up. Resource consumption is aggregated at the project level, and each project is linked to a single billing account. An organization can contain multiple billing accounts.

311
Q

Question: What are the roles of organizations and projects in resource management, and what identifying information is required for projects?

A

Organizations, Projects, and Resource Management (01:10-01:58)

Content: “Let’s explore organizations, projects, and resources more. Just to reiterate, an organization node is the root node for all Google Cloud Platform resources. This diagram shows an example where we have an individual, Bob, who is in control of the organizational domain through the organization admin role. Bob has delegated privileges and access to the individual projects to Alice by making her a project creator. Because a project accumulates the consumption of all its resources, it can be used to track resources and quota usage. Specifically, projects let you enable billing, manage permissions and credentials, and enable service and APIs. To interact with Cloud Platform resources, you must provide the identifying project information for every request.”

Answer: Organizations are the root node for all GCP resources, providing centralized management. Projects are used to track resource and quota usage, enable billing, manage permissions, and enable services. Projects are identified by project name, project number, and project ID.

312
Q

Question: What are the different types of resources in the Google Cloud hierarchy, and how are they organized within projects?

A

Resource Hierarchy (01:58-02:55)

Content: “A project can be identified by: The project name, which is a human-readable way to identify your projects, but it isn’t used by any Google APIs. There is also the project number, which is automatically generated by the server and assigned to your project. And there is the project ID, which is a unique ID that is generated from your project name. You can find these three identifying attributes on the dashboard of your GCP Console or by querying the Resource Manager API. Finally, let’s talk about the resource hierarchy. From a physical organization standpoint, resources are categorized as global, regional, or zonal. Let’s look at some examples: Images, snapshots, and networks are global resources; External IP addresses are regional resources; and instances and disks are zonal resources. However, regardless of the type, each resource is organized into a project. This enables each project to have its own billing and reporting.”

Answer: Resources are categorized as global (e.g., images, networks), regional (e.g., external IP addresses), or zonal (e.g., instances, disks). All resources are organized within projects, allowing for project-specific billing and reporting.

313
Q

Question: What are quotas in Google Cloud, and why are they important?

A

Introduction to Quotas (00:00-00:17)

Content: “Now that we know that a project accumulates the consumption of all its resources, let’s talk about quotas. All resources in Google Cloud are subject to project quotas or limits. These typically fall into one of the three categories shown here.”

Answer: Quotas are limits on the resources a Google Cloud project can use. They are important for managing resource consumption and preventing abuse.

314
Q

Question: What are the three main categories of quotas in Google Cloud?

A

Types of Quotas (00:17-00:48)

Content: “One - how many resources a project can create. For example, you can only have fifteen VPC networks per project. Two - how quickly you can make API requests in a project, or rate limits. For example, by default, you can only make five administrative actions per second per project when using the Cloud Spanner API. And three - regional quotas. For example, by default, you can only have 24 CPUs per region.”

Answer:
Project quotas (e.g., number of VPC networks).
Rate limits (e.g., API request frequency).
Regional quotas (e.g., number of CPUs per region).

315
Q

Question: How can quotas be adjusted in Google Cloud?

A

Quota Adjustments (00:48-01:11)

Content: “Given these quotas, you may be wondering, how do I spin up one of those 96-core VMs? As your use of Google Cloud expands over time, your quotas may increase accordingly. If you expect a notable upcoming increase in usage, you can proactively request quota adjustments from the quotas page in the Cloud console. This page will also display your current quotas.”

Answer: Quotas can increase with usage, and users can proactively request adjustments via the quotas page in the Cloud Console, where current quotas are also displayed.

316
Q

Question: What are the primary purposes of having quotas?

A

Purpose of Quotas (01:11-01:39)

Content: “If quotas can be changed, why do they exist? Project quotas prevent runaway consumption in case of error or malicious attack. For example, imagine you accidentally create 100 instead of 10 Compute Engine instances using the gcloud command line. Quotas also prevent billing spikes or surprises. Quotas are related to billing, but we will go through how to set up budgets and alerts later, which will really help you manage billing. Finally, quotas force sizing consideration and periodic review. For example, do you really need a 96-core instance? Or can you go with a smaller and cheaper alternative?”

Answer:
Prevent runaway resource consumption.
Prevent billing spikes.
Encourage sizing consideration and periodic review.
317
Q

Question: What is the relationship between quotas and resource availability?

A

Quotas and Resource Availability (01:39-02:06)

Content: “It is also important to mention that quotas are the maximum amount of resources you can create for that resource type, as long as those resources are available. Quotas do not guarantee that resources will be available at all times. For example, if a region is out of local SSDs, you cannot create local SSDs in that region, even if you still have quota for local SSDs.”

Answer: Quotas define the maximum amount of resources you can create, but they do not guarantee resource availability. Even if you have quota, resources may not be available in a specific region or zone.

318
Q

Question: What are labels in Google Cloud, and what is their purpose?

A

Introduction to Labels (00:00-00:26)

Content: “Projects and folders provide levels of segregation for resources, but what if you want more granularity? That’s where labels come in. Labels are a utility for organizing GCP resources. Labels are key-value pairs that you can attach to your resources, like VMs, disks, snapshots and images. You can create and manage labels using the GCP console, gcloud, or the Resource Manager API, and each resource can have up to 64 labels.”

Answer: Labels are key-value pairs used to organize GCP resources, providing more granular management than projects and folders. They help in categorizing and managing resources like VMs, disks, snapshots, and images.

319
Q

Question: How can labels be used to manage and organize resources?

A

Use Cases and Examples (00:26-00:54)

Content: “For example, you could create a label to define the environment of your virtual machines. Then you define the label for each of your instances as either production or test. Using this label, you could search and list all your production resources for inventory purposes. Labels can also be used in scripts to help analyze costs or to run bulk operations on multiple resources. The screenshot on the right shows an example of 4 labels that are created on an instance.”

Answer: Labels can be used to categorize resources based on environment (e.g., production, test), allowing for easy searching, listing, and inventory management. They can also be used in scripts for cost analysis and bulk operations.
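
The environment-label use case above can be sketched in a few lines of Python. The instance names and labels here are hypothetical; in practice the inventory would come from the Compute Engine API or `gcloud compute instances list`.

```python
# Minimal sketch of using labels (key-value pairs) to filter an inventory.
# All instance names and label values below are illustrative.
instances = [
    {"name": "web-1", "labels": {"environment": "production", "team": "marketing"}},
    {"name": "web-2", "labels": {"environment": "test", "team": "marketing"}},
    {"name": "db-1",  "labels": {"environment": "production", "team": "research"}},
]

def by_label(resources, key, value):
    """Return the names of resources whose labels match key=value."""
    return [r["name"] for r in resources if r["labels"].get(key) == value]

# List all production resources for inventory purposes.
print(by_label(instances, "environment", "production"))  # ['web-1', 'db-1']
```

The same pattern supports the other conventions on this card: swap in `team`, `component`, `owner`, or `state` as the filter key.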

320
Q

Question: What are some recommended conventions for using labels?

A

Recommended Labeling Conventions (00:54-01:37)

Content: “Let’s go over some examples of what to use labels for: I recommend adding labels based on team or cost center to distinguish instances owned by different teams. You can use this type of label for cost accounting or budgeting. For example, team:marketing and team:research. You can also use labels to distinguish components. For example, component:redis, component:frontend. Again, you can label based on environment or stage. You should also consider using labels to define an owner or a primary contact for a resource. For example, owner:gaurav, contact:opm. Or add labels to your resources to define their state. For example, state:inuse, state:readyfordeletion.”
Answer:
Labels based on team or cost center (e.g., team:marketing).
Labels to distinguish components (e.g., component:redis).
Labels based on environment or stage.
Labels to define an owner or contact (e.g., owner:gaurav).
Labels to define state of a resource (e.g. state:inuse).

321
Q

Question: What is the key difference between labels and network tags?

A

Labels vs. Network Tags (01:37-02:01)

Content: “Now, it’s important to not confuse labels with network tags. Labels, we just learned, are user-defined strings in key-value format that are used to organize resources, and they can propagate through billing. Network tags, on the other hand, are user-defined strings that are applied to instances only and are mainly used for networking, such as applying firewall rules and custom static routes. For more information about using labels, see the links in the Course Resources.”

Answer: Labels are key-value pairs for organizing resources and can propagate through billing. Network tags are strings applied only to instances and are used for networking purposes, like firewall rules and static routes.

322
Q

Question: How can you set up budgets and alerts in Google Cloud to control costs?

A

Setting Budgets and Alerts (00:06-00:59)

Content: “To help with project planning and controlling costs, you can set a budget. Setting a budget lets you track how your spend is growing toward that amount. This screenshot shows the budget creation interface: First, you set a budget name and specify which project this budget applies to. Then, you can set the budget at a specific amount or match it to the previous month’s spend. After you determine your budget amount, you can set the budget alerts. These alerts send emails to billing admins after spend exceeds a percent of the budget or a specified amount. In our case, it would send an email when spending reaches 50%, 90%, and 100% of the budget amount. You can even choose to send an alert when the spend is forecasted to exceed the percent of the budget amount by the end of the budget period. In addition to receiving an email, you can use Pub/Sub notifications to programmatically receive spend updates about this budget. You could even create a Cloud Function that listens to the Pub/Sub topic to automate cost management.”

Answer: You can set a budget with a specific amount or match it to previous spending. You can configure alerts to send emails to billing admins when spending exceeds a percentage of the budget, a specified amount, or when forecasted spending will exceed the budget. Pub/Sub notifications can be used for programmatic spend updates.
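
The Pub/Sub-driven automation mentioned above might look roughly like this handler sketch. The `costAmount`/`budgetAmount` field names follow the budget notification payload, but treat the exact schema as an assumption to verify against the current documentation.

```python
import base64
import json

def handle_budget_alert(event):
    """Sketch of a Cloud Function-style handler for budget Pub/Sub messages.
    The payload field names (costAmount, budgetAmount) are assumptions based
    on the budget notification format; verify against the docs."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    spend, budget = payload["costAmount"], payload["budgetAmount"]
    if spend >= budget:
        return "over budget: consider automated cost controls"
    if spend >= 0.9 * budget:
        return "90% alert: notify billing admins"
    return "within budget"

# Simulated message, matching how Pub/Sub base64-encodes the data field.
msg = {"data": base64.b64encode(
    json.dumps({"costAmount": 95.0, "budgetAmount": 100.0}).encode())}
print(handle_budget_alert(msg))  # '90% alert: notify billing admins'
```

In a real deployment the function would be subscribed to the budget's Pub/Sub topic and might disable billing or cap resources instead of returning a string.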

323
Q

Question: How can labels be used to optimize Google Cloud spending?

A

Using Labels for Cost Optimization (01:13-01:38)

Content: “Another way to help optimize your Google Cloud spend is to use labels. For example, you could label VM instances that are spread across different regions. Maybe these instances are sending most of their traffic to a different continent, which could incur higher costs. In that case, you might consider relocating some of those instances or using a caching service like Cloud CDN to cache content closer to your users, which reduces your networking spend. I recommend labeling all your resources and exporting your billing data to BigQuery to analyze your spend.”

Answer: Labels can be used to identify cost drivers, such as network traffic patterns between regions. This information can then be used to optimize resource placement or leverage services like Cloud CDN to reduce costs.

324
Q

Question: What services are recommended for analyzing and visualizing billing data?

A

Analyzing Billing Data (01:38-02:09)

Content: “I recommend labeling all your resources and exporting your billing data to BigQuery to analyze your spend. BigQuery is Google’s scalable, fully managed Enterprise Data Warehouse with SQL and fast response times. Creating a query is as simple as shown in this screenshot, which you will explore in the upcoming lab. You can even visualize spend over time with Looker Studio. Looker Studio turns your data into informative dashboards and reports that are easy to read, easy to share, and fully customizable. For example, you can slice and dice your billing reports using your labels.”

Answer: BigQuery (for analyzing spend with SQL) and Looker Studio (for creating dashboards and reports) are recommended. Labels are very useful for slicing and dicing billing data.
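
Slicing billing data by label, as described above, might look like the following sketch. The table name is a placeholder, and the labels-as-array-of-structs shape of the billing export schema is an assumption to check against your own export.

```python
# Sketch of a BigQuery query over a billing export table, grouping spend by
# a "team" label. The table name is hypothetical, and the export schema
# (a `cost` column plus `labels` as an array of key/value structs) is an
# assumption to verify against your actual billing export.
BILLING_TABLE = "my-project.billing_dataset.gcp_billing_export"  # placeholder

query = f"""
SELECT l.value AS team, ROUND(SUM(cost), 2) AS total_cost
FROM `{BILLING_TABLE}`, UNNEST(labels) AS l
WHERE l.key = 'team'
GROUP BY team
ORDER BY total_cost DESC
"""

# Running it requires credentials and the google-cloud-bigquery package:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
print(query)
```

The resulting per-team totals are exactly the kind of table you would then connect to Looker Studio for dashboards.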

325
Q

Question: What is the importance of Cloud Monitoring, and how does it relate to site reliability engineering (SRE)?

A

Introduction to Cloud Monitoring and Site Reliability Engineering (00:00-00:26)

Content: “Now that you understand Google Cloud’s operations suite from a high-level perspective, let’s look at Cloud Monitoring. Monitoring is important to Google because it is at the base of site reliability engineering, a discipline that applies aspects of software engineering to operations whose goals are to create ultra-scalable and highly reliable software systems. This discipline has enabled Google to build, deploy, monitor, and maintain some of the largest software systems in the world.”

Answer: Cloud Monitoring is crucial for SRE, which focuses on creating reliable and scalable software systems. It allows for the monitoring and maintenance of large-scale systems.

326
Q

Question: What are the key features of Cloud Monitoring, and what is a metrics scope?

A

Cloud Monitoring Features and Metrics Scopes (00:26-01:41)

Content: “Cloud Monitoring dynamically configures monitoring after resources are deployed and has intelligent defaults that allow you to easily create charts for basic monitoring activities. This allows you to monitor your platform, system, and application metrics by ingesting data, such as metrics, events, and metadata. You can then generate insights from this data through dashboards, charts, and alerts. For example, you can configure and measure uptime and health checks that send alerts via email. A metrics scope is the root entity that holds monitoring and configuration information in Cloud Monitoring. Each metrics scope can have between 1 and 375 monitored projects. Now, monitoring data for all projects in that scope will be visible. A metrics scope contains the custom dashboards, alerting policies, uptime checks, notification channels, and group definitions that you use with your monitored projects. A metrics scope can access metric data from its monitored projects, but the metrics data and log entries remain in the individual projects. The first monitored Google Cloud project in a metrics scope is called the scoping project, and it must be specified when you create the metrics scope. The name of that project becomes the name of your metrics scope.”

Answer: Key features include dynamic configuration, intelligent defaults, and the ability to ingest metrics, events, and metadata. A metrics scope is the root entity that holds monitoring and configuration information, allowing visibility across multiple projects.

327
Q

Question: How does Cloud Monitoring handle monitoring AWS accounts and access control?

A

Monitoring AWS and Access Control (01:41-02:23)

Content: “To access an AWS account, you must configure a project in Google Cloud to hold the AWS Connector. Because metrics scopes can monitor all your Google Cloud projects in a single place, a metrics scope is a “single pane of glass” through which you can view resources from multiple Google Cloud projects and AWS accounts. All users of Google Cloud’s operations suite with access to that metrics scope have access to all data by default. This means that a role assigned to one person on one project applies equally to all projects monitored by that metrics scope. In order to give people different roles per-project and to control visibility to data, consider placing the monitoring of those projects in separate metrics scopes.”

Answer: AWS accounts can be monitored by configuring a project with the AWS Connector. Access to data within a metrics scope is shared by default, so separate metrics scopes are needed for granular access control.

328
Q

Question: What are custom dashboards and charts used for in Cloud Monitoring?

A

Custom Dashboards and Charts (02:23-02:56)

Content: “Cloud Monitoring allows you to create custom dashboards that contain charts of the metrics that you want to monitor. For example, you can create charts that display your instances’ CPU utilization, the packets or bytes sent and received by those instances, and the packets or bytes dropped by the firewall of those instances. In other words, charts provide visibility into the utilization and network traffic of your VM instances, as shown on this slide. These charts can be customized with filters to remove noise, groups to reduce the number of time series, and aggregates to group multiple time series together. For a full list of supported metrics, please refer to the documentation.”

Answer: Custom dashboards and charts provide visual representations of metrics, such as CPU utilization and network traffic. They can be customized with filters, groups, and aggregates.

329
Q

Question: What are alerting policies, and how are they used?

A

Alerting Policies (02:56-03:52)

Content: “Now, although charts are extremely useful, they can only provide insight while someone is looking at them. But what if your server goes down in the middle of the night or over the weekend? Do you expect someone to always look at dashboards to determine whether your servers are available or have enough capacity or bandwidth? If not, you want to create alerting policies that notify you when specific conditions are met. For example, as shown on this slide, you can create an alerting policy when the network egress of your VM instance goes above a certain threshold for a specific timeframe. When this condition is met, you or someone else can be automatically notified through email, SMS, or other channels in order to troubleshoot this issue. You can also create an alerting policy that monitors your usage of Google Cloud’s operations suite and alerts you when you approach the threshold for billing. For more information about this, please refer to the documentation. Here is an example of what creating an alerting policy looks like. On the left, you can see an HTTP check condition on the summer01 instance. This will send an email that is customized with the content of the documentation section on the right.”

Answer: Alerting policies notify users when specific conditions are met, such as high network egress or billing thresholds. Notifications can be sent via email, SMS, and other channels.

330
Q

Question: What are some best practices for creating alerts?

A

Alerting Best Practices (04:13-04:41)

Content: “Let’s discuss some best practices when creating alerts: We recommend alerting on symptoms, and not necessarily causes. For example, you want to monitor failing queries of a database and then identify whether the database is down. Next, make sure that you are using multiple notification channels, like email and SMS. This helps avoid a single point of failure in your alerting strategy. We also recommend customizing your alerts to the audience’s needs by describing what actions need to be taken or what resources need to be examined. Finally, avoid noise, because this will cause alerts to be dismissed over time. Specifically, adjust monitoring alerts so that they are actionable and don’t just set up alerts on everything possible.”

Answer:
Alert on symptoms, not causes.
Use multiple notification channels.
Customize alerts with actionable information.
Avoid alert noise.

331
Q

Question: What are uptime checks, and how are they configured?

A

Uptime Checks (04:48-05:29)

Content: “Uptime checks can be configured to test the availability of your public services from locations around the world, as you can see on this slide. The type of uptime check can be set to HTTP, HTTPS, or TCP. The resource to be checked can be an App Engine application, a Compute Engine instance, a URL of a host, or an AWS instance or load balancer. For each uptime check, you can create an alerting policy and view the latency of each global location. Here is an example of an HTTP uptime check. The resource is checked every minute with a 10-second timeout. Uptime checks that do not get a response within this timeout period are considered failures. So far there is a 100% uptime with no outages.”

Answer: Uptime checks test the availability of public services from global locations. They can be HTTP, HTTPS, or TCP checks, and they monitor resources like App Engine applications and Compute Engine instances.
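
The failure rule above (no response within the timeout counts as a failure) can be sketched locally, with the actual HTTP call injected so the logic runs without network access:

```python
# Local sketch of the uptime-check rule: a check that gets no response
# within its timeout is a failure. The fetch function is injected so the
# classification logic can run (and be tested) without network access.
TIMEOUT_SECONDS = 10  # matches the example's 10-second timeout

def classify_check(fetch, timeout=TIMEOUT_SECONDS):
    """fetch() returns (status_code, latency_seconds) or raises on error."""
    try:
        status, latency = fetch()
    except Exception:
        return "failure"            # connection error: no response at all
    if latency > timeout:
        return "failure"            # responded, but after the timeout
    return "success" if 200 <= status < 300 else "failure"

print(classify_check(lambda: (200, 0.3)))   # 'success'
print(classify_check(lambda: (200, 12.0)))  # 'failure' (timed out)
print(classify_check(lambda: (503, 0.2)))   # 'failure' (bad status)
```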

332
Q

Question: What is the Ops Agent, and when might you need custom metrics?

A

Ops Agent and Custom Metrics (05:34-07:42)

Content: “Monitoring data can originate at a number of different sources. With Google Compute Engine instances, because the VMs are running on Google hardware, the hypervisor cannot access some of the internal metrics inside a VM, for example, memory usage. The Ops Agent collects metrics inside the VM, not at the hypervisor level. The Ops Agent is the primary agent for collecting telemetry data from your Compute Engine instances. This diagram shows how data is collected to monitor workloads running on a Compute Engine instance. The Ops Agent installed on the Compute Engine instance collects data beyond the system metrics. The collected metrics are then used by Cloud Monitoring to create dashboards, alerts, uptime checks, and notifications to drive observability for workloads running in your application. You can configure the Ops Agent to monitor many third-party applications. For a detailed list, refer to the documentation. The Ops Agent supports most major operating systems such as CentOS, Ubuntu, and Windows. If the standard metrics provided by Cloud Monitoring do not fit your needs, you can create custom metrics. For example, imagine a game server that has a capacity of 50 users. What metric indicator might you use to trigger scaling events? From an infrastructure perspective, you might consider using CPU load or perhaps network traffic load…”

Answer: The Ops Agent is the primary agent for collecting telemetry data from Compute Engine instances. Because the hypervisor cannot see some in-VM metrics (such as memory usage), the agent collects them inside the VM; it supports major operating systems and many third-party applications, and its data feeds Cloud Monitoring dashboards, alerts, uptime checks, and notifications. If the standard metrics don’t fit your needs (for example, a game server with a capacity of 50 users), you can create custom metrics.
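
The game-server example can be sketched as a simple utilization signal. The 0.8 threshold and the numbers here are illustrative; in production this value would be written to Cloud Monitoring as a custom metric and consumed by an alerting policy or autoscaler.

```python
# Sketch of the game-server example: a custom metric (active users against
# a 50-user-per-server capacity) can be a better scaling signal than CPU or
# network load. The threshold and counts below are illustrative only.
CAPACITY_PER_SERVER = 50

def utilization(active_users, servers):
    """Fraction of total capacity currently in use."""
    return active_users / (servers * CAPACITY_PER_SERVER)

def should_scale_out(active_users, servers, threshold=0.8):
    """Trigger a scaling event once utilization crosses the threshold."""
    return utilization(active_users, servers) >= threshold

print(should_scale_out(130, 3))  # True: 130/150 is about 0.87
print(should_scale_out(90, 3))   # False: 90/150 = 0.60
```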

333
Q

Question: What is Cloud Logging, and what are its primary functions?

A

Introduction to Cloud Logging (00:00-00:34)

Content: “Monitoring is the basis of Google Cloud’s operation suite, but the service also provides logging, error reporting, and tracing. Let’s learn about logging. Cloud Logging allows you to store, search, analyze, monitor, and alert on log data and events from Google Cloud and AWS. It is a fully managed service that performs at scale and can ingest application and system log data from thousands of VMs.”

Answer: Cloud Logging is a fully managed service that allows you to store, search, analyze, monitor, and alert on log data from Google Cloud and AWS. It can ingest application and system log data at scale.

334
Q

Question: What are the key features of Cloud Logging, and what is its default data retention period?

A

Cloud Logging Features and Data Retention (00:34-01:00)

Content: “Logging includes storage for logs, a user interface called Logs Explorer, and an API to manage logs programmatically. The service lets you read and write log entries, search and filter your logs, and create log-based metrics. Logs are only retained for 30 days, but you can export your logs to Cloud Storage buckets, BigQuery datasets, and Pub/Sub topics.”

Answer: Key features include log storage, Logs Explorer UI, an API, read/write log entries, log search/filtering, and log-based metrics. Logs are retained for 30 days by default.

335
Q

Question: Why would you export logs to Cloud Storage, BigQuery, or Pub/Sub?

A

Exporting Logs to Cloud Storage, BigQuery, and Pub/Sub (01:00-01:38)

Content: “Exporting logs to Cloud Storage makes sense for storing logs for more than 30 days, but why should you export to BigQuery or Pub/Sub? Exporting logs to BigQuery allows you to analyze logs and even visualize them in Looker Studio. BigQuery runs extremely fast SQL queries on gigabytes to petabytes of data. This allows you to analyze logs, such as your network traffic, so that you can better understand traffic growth to forecast capacity, network usage to optimize network traffic expenses, or network forensics to analyze incidents.”

Answer:
Cloud Storage: For long-term log storage beyond 30 days.
BigQuery: For log analysis using SQL and visualization with Looker Studio.
Pub/Sub: For streaming logs to applications or endpoints.

336
Q

Question: How can BigQuery and Looker Studio be used to analyze and visualize logs?

A

Log Analysis with BigQuery and Looker Studio (01:38-02:09)

Content: “For example, in this screenshot I queried my logs to identify the top IP addresses that have exchanged traffic with my web server. Depending on where these IP addresses are, and who they belong to, I could relocate part of my infrastructure to save on networking costs or deny some of these IP addresses if I don’t want them to access my web server. If you want to visualize your logs, I recommend connecting your BigQuery tables to Looker Studio. Looker Studio transforms your raw data into the metrics and dimensions that you can use to create easy-to-understand reports and dashboards.”

Answer: BigQuery allows you to query logs using SQL to identify patterns and insights, such as top IP addresses. Looker Studio can then visualize this data in reports and dashboards.
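The aggregation described in the answer can be sketched in plain Python over hypothetical parsed log records; in practice you would run the equivalent GROUP BY query in BigQuery SQL:

```python
from collections import Counter

# Hypothetical parsed log records, standing in for rows exported to BigQuery.
records = [
    {"src_ip": "203.0.113.7", "bytes": 1200},
    {"src_ip": "198.51.100.4", "bytes": 300},
    {"src_ip": "203.0.113.7", "bytes": 800},
    {"src_ip": "192.0.2.9", "bytes": 50},
]

# Sum traffic per source IP -- the same shape as
# SELECT src_ip, SUM(bytes) ... GROUP BY src_ip ORDER BY 2 DESC in BigQuery.
traffic = Counter()
for rec in records:
    traffic[rec["src_ip"]] += rec["bytes"]

top_ips = traffic.most_common(3)
print(top_ips)  # [('203.0.113.7', 2000), ('198.51.100.4', 300), ('192.0.2.9', 50)]
```

The resulting table is exactly what you would connect to Looker Studio for visualization.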

337
Q

Question: What is the benefit of exporting logs to Pub/Sub?

A

Exporting Logs to Pub/Sub (02:09-02:23)

Content: “I mentioned that you can also export logs to Pub/Sub. This enables you to stream logs to applications or endpoints.”

Answer: Exporting logs to Pub/Sub allows for real-time streaming of log data to applications or endpoints for immediate processing or analysis.

343
Q

Question: What is Cloud Profiler, and what problem does it address?

A

Introduction to Cloud Profiler (00:00-00:16)

Content: “Finally, let’s cover the last feature of Google Cloud’s operations suite in this module, which is the profiler. Poorly performing code increases the latency and cost of applications and web services every day.”

Answer: Cloud Profiler is a tool within Google Cloud’s operations suite that continuously analyzes the performance of applications, focusing on CPU and memory-intensive functions. It addresses the problem of poorly performing code that increases latency and costs.

344
Q

Question: What are the advantages of using Cloud Profiler over traditional profiling methods?

A

Cloud Profiler’s Capabilities (00:16-00:44)

Content: “Cloud Profiler continuously analyzes the performance of CPU or memory-intensive functions executed across an application. While it’s possible to measure code performance in development environments, the results generally don’t map well to what’s happening in production. Many production profiling techniques either slow down code execution or can only inspect a small subset of a codebase.”

Answer: Cloud Profiler provides continuous analysis of production environments, which is more accurate than development environment testing. It also avoids the slowdowns and limited scope associated with traditional production profiling techniques.

345
Q

Question: How does Cloud Profiler operate, and what environments and languages does it support?

A

Cloud Profiler’s Methodology and Scope (00:44-00:58)

Content: “Profiler uses statistical techniques and extremely low-impact instrumentation that runs across all production application instances to provide a complete picture of an application’s performance without slowing it down. Profiler allows developers to analyze applications running anywhere, including Google Cloud, other cloud platforms, or on-premises, with support for Java, Go, Node.js, and Python.”

Answer: Cloud Profiler uses statistical techniques and low-impact instrumentation across all production instances. It supports applications running in Google Cloud, other cloud platforms, and on-premises, and it’s compatible with Java, Go, Node.js, and Python.

346
Q

Question: What is a Managed Instance Group (MIG), and what are its key features?

A

Introduction to Managed Instance Groups (00:08-00:44)

Content: “A managed instance group is a collection of identical VM instances that you control as a single entity using an instance template. You can easily update all the instances in a group by specifying a new template in a rolling update. Also, when your applications require additional compute resources, managed instance groups can automatically scale the number of instances in the group. Managed instance groups can work with load balancing services to distribute network traffic to all of the instances in the group. If an instance in the group stops, crashes, or is deleted by an action other than the instance group commands, the managed instance group automatically recreates the instance so it can resume its processing tasks.”

Answer: A MIG is a collection of identical VMs controlled as a single entity using an instance template. Key features include:
Rolling updates.
Automatic scaling.
Load balancing integration.
Automatic instance recreation.

347
Q

Question: How do MIGs handle instance health, and why are regional MIGs preferred?

A

Health Management and Regional MIGs (00:44-01:29)

Content: “The recreated instance uses the same name and the same instance template as the previous instance. Managed instance groups can automatically identify and recreate unhealthy instances in a group to ensure that all instances are running optimally. Regional managed instance groups are generally recommended over zonal managed instance groups because they allow you to spread the application load across multiple zones instead of confining your application to a single zone or having you manage multiple instance groups across different zones. This replication protects against zonal failures and unforeseen scenarios where an entire group of instances in a single zone malfunctions. If that happens, your application can continue serving traffic from instances running in another zone in the same region.”

Answer: MIGs automatically recreate unhealthy instances. Regional MIGs are preferred for distributing load across multiple zones, providing redundancy against zonal failures.

348
Q

Question: What are the initial steps in creating a MIG?

A

MIG Creation Process (01:29-01:54)

Content: “In order to create a managed instance group, you first need to create an instance template. Next, you’re going to create a managed instance group of N specified instances. The instance group manager then automatically populates the instance group based on the instance template.”

Answer: The first step is to create an instance template, and then create the MIG with a specified number of instances.

349
Q

Question: What are instance templates, and what configurations are required when creating a MIG?

A

Instance Templates and MIG Configuration (01:54-03:02)

Content: “You can easily create instance templates using the cloud console. The instance template dialog looks and works exactly like creating an instance, except that the choices are recorded so they can be repeated. When you create an instance group, you define the specific rules for that instance group. First, you decide what type of managed instance group you want to create. You can use managed instance groups for stateless serving or batch workloads, such as a website front end or image processing from a queue, or for stateful applications, such as databases or legacy applications. Second, provide a name for the instance group. Third, decide whether the instance group is going to be single-zone or multi-zone and where those locations will be. You can optionally provide port name mapping details. Fourth, select the instance template that you want to use. Fifth, decide whether you want to autoscale and under what circumstances. Finally, consider creating a health check to determine which instances are healthy and should receive traffic. Essentially, you’re creating virtual machines, but you’re applying more rules to that instance group.”

Answer:
Instance templates are like instance creation blueprints.
MIG configurations include:
Workload type (stateless/stateful).
Group name.
Zonal/regional deployment.
Port mappings.
Instance template selection.
Autoscaling policies.
Health checks.

350
Q

Question: What is autoscaling in MIGs, and what are its benefits?

A

Autoscaling Basics (00:05-00:30)

Content: “Let me provide more details on the autoscaling and health checks of a managed instance group. As I mentioned earlier, managed instance groups offer autoscaling capabilities that allow you to automatically add or remove instances from a managed instance group based on increases or decreases in load. Autoscaling helps your applications gracefully handle increases in traffic and reduces cost when the need for resources is lower. You just define the autoscaling policy, and the autoscaler performs automatic scaling based on the measured load.”

Answer: Autoscaling automatically adjusts the number of instances based on load. Benefits include handling traffic increases gracefully and cost reduction during low-demand periods.

351
Q

Question: What are the different types of autoscaling policies?

A

Autoscaling Policies (00:30-01:12)

Content: “Applicable autoscaling policies include scaling based on CPU utilization, load balancing capacity, or monitoring metrics, or by a queue-based workload like Pub/Sub, or on a schedule defined by start time, duration, and recurrence. For example, let’s assume you have 2 instances that are at 100% and 85% CPU utilization as shown on this slide. If your target CPU utilization is 75%, the autoscaler will add another instance to spread out the CPU load and stay below the 75% target CPU utilization. Similarly, if the overall load is much lower than the target, the autoscaler will remove instances as long as that keeps the overall utilization below the target.”

Answer: Policies include scaling based on:
CPU utilization.
Load balancing capacity.
Monitoring metrics.
Queue-based workloads (Pub/Sub).
Scheduled times.
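The slide's scale-out example can be modeled with a little arithmetic. This is a simplified model of target-based autoscaling, not the autoscaler's exact algorithm:

```python
import math

def recommended_size(utilizations, target):
    """Simplified model of target-based autoscaling: provision enough
    instances so that the total measured load, spread evenly, stays at or
    below the target utilization."""
    return math.ceil(sum(utilizations) / target)

# Two instances at 100% and 85% CPU with a 75% target: total load 1.85
# "instances' worth" of CPU needs 3 instances to stay below the target,
# so the autoscaler adds one instance.
size = recommended_size([1.00, 0.85], 0.75)
print(size)  # 3
```

With three instances, the same load averages about 62% per instance, comfortably under the 75% target.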

352
Q

Question: How can instance utilization be monitored, and what tools are available?

A

Monitoring Instance Utilization (01:12-01:56)

Content: “Now, you might ask yourself how do I monitor the utilization of my instance group. When you click on an instance group (or even an individual VM), you can choose to view different metrics. By default you’ll see the CPU utilization over the past hour, but you can change the time frame and visualize other metrics like disk and network usage. These graphs are very useful for monitoring your instances’ utilization and for determining how best to configure your Autoscaling policy to meet changing demand. If you monitor the utilization of your VM instances in Cloud Monitoring, you can even set up alerts through several notification channels. A link to more information on autoscaling can be found in the Course Resources for this module.”

Answer: Instance utilization can be monitored via the Google Cloud Console, which displays metrics like CPU, disk, and network usage. Cloud Monitoring can also be used to set up alerts.

353
Q

Question: What are health checks in MIGs, and how are they configured?

A

Health Checks (01:56-02:40)

Content: “Another important configuration for a managed instance group and load balancer is a health check. A health check is very similar to an uptime check in Cloud Monitoring. You just define a protocol, port, and health criteria, as shown in this screenshot. Based on this configuration, Google Cloud computes a health state for each instance. The health criteria define how often to check whether an instance is healthy (that’s the check interval); how long to wait for a response (that’s the timeout); how many successful attempts are decisive (that’s the healthy threshold); and how many failed attempts are decisive (that’s the unhealthy threshold). In the example on this slide, the health check would have to fail twice over a total of 15 seconds before an instance is considered unhealthy.”

Answer: Health checks determine instance health for load balancing. They are configured with:
Protocol.
Port.
Health criteria (check interval, timeout, healthy/unhealthy thresholds).
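The "fail twice over a total of 15 seconds" figure can be reproduced with a small timing model, assuming a 10-second check interval, a 5-second timeout, and an unhealthy threshold of 2 (hypothetical values consistent with the slide's example, not the exact health-check state machine):

```python
def time_to_unhealthy(check_interval, timeout, unhealthy_threshold):
    """Sketch of how long consecutive probe failures take before an
    instance is marked unhealthy: the first probe fails after `timeout`,
    and each subsequent probe starts `check_interval` after the previous
    one began."""
    return (unhealthy_threshold - 1) * check_interval + timeout

# Probe at t=0 times out at t=5 (failure 1); next probe at t=10 times out
# at t=15 (failure 2) -- the instance is considered unhealthy after 15 s.
print(time_to_unhealthy(check_interval=10, timeout=5, unhealthy_threshold=2))  # 15
```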

354
Q

Question: What is the benefit of configuring stateful IP addresses in a MIG?

A

Stateful IP Addresses (02:40-end)

Content: “Configuring stateful IP addresses in a managed instance group ensures that applications continue to function seamlessly during autohealing, update, and recreation events.”

Answer: Stateful IP addresses ensure seamless application functionality during autohealing, updates, and recreation events.

355
Q

Question: What is HTTP(S) load balancing, and what are its key advantages?

A

Introduction to HTTP(S) Load Balancing (00:06-00:31)

Content: “Now let’s talk about HTTP(S) load balancing, which acts at Layer 7 of the OSI model. This is the application layer, which deals with the actual content of each message, allowing for routing decisions based on the URL. GCP’s HTTP(S) load balancing provides global load balancing for HTTP(S) requests destined for your instances. This means that your applications are available to your customers at a single anycast IP address, which simplifies your DNS setup.”

Answer: HTTP(S) load balancing operates at Layer 7, routing requests based on URL content. It offers global load balancing and simplifies DNS setup with a single anycast IP.

356
Q

Question: What are the main features of Google Cloud’s HTTP(S) load balancing?

A

Features of HTTP(S) Load Balancing (00:31-01:27)

Content: “HTTP(S) load balancing balances HTTP and HTTPS traffic across multiple back-end instances and across multiple regions. HTTP requests are load balanced on port 80 or 8080, and HTTPS requests are load balanced on port 443. This load balancer supports both IPv4 and IPv6 clients, is scalable, requires no pre-warming, and enables content-based and cross-regional load balancing. You can configure URL maps that route some URLs to one set of instances and route other URLs to other instances. Requests are generally routed to the instance group that is closest to the user. If the closest instance group does not have sufficient capacity, the request is sent to the next closest instance group that does have the capacity. You will get to explore most of these benefits in the first lab of the module.”

Answer: Features include:
Global load balancing across regions.
Support for HTTP(S) traffic (ports 80/8080, 443).
IPv4 and IPv6 support.
Scalability without pre-warming.
Content-based routing (URL maps).
Cross-regional load balancing.
Proximity-based routing.

357
Q

Question: What are the key components of an HTTPS load balancer’s architecture?

A

Architecture of HTTPS Load Balancer (01:27-02:28)

Content: “Let me walk through the complete architecture of an HTTPS load balancer by using this diagram. A global forwarding rule directs incoming requests from the Internet to a target HTTP proxy. The target HTTP proxy checks each request against a URL map to determine the appropriate back-end service for the request. For example, you can send requests for www.example.com/audio to one back-end service, which contains instances configured to deliver audio files, and the requests for www.example.com/video to another back-end service which contains instances configured to deliver video files. The back-end service directs each request to an appropriate back-end based on serving capacity, zone, and instance health of its attached backends. The back-end services contain a health check, session affinity, a timeout setting and one or more backends.”

Answer: Components include:
Global forwarding rule.
Target HTTP proxy.
URL map.
Back-end services.
Backends (instance groups).

358
Q

Question: What configurations are available for back-end services?

A

Back-end Service Configuration (02:28-03:27)

Content: “A health check polls instances attached to the back-end service at configured intervals. Instances that pass the health check are allowed to receive new requests. Unhealthy instances are not sent requests until they are healthy again. Normally, HTTP(S) load balancing uses a round-robin algorithm to distribute requests among available instances. This can be overridden with session affinity. Session affinity attempts to send all requests from the same client to the same virtual machine instance. Back-end services also have a timeout setting, which is set to 30 seconds by default. This is the amount of time the back-end service will wait on the backend before considering the request a failure. This is a fixed timeout, not an idle timeout. If you require longer-lived connections, set this value appropriately. The backends themselves contain an instance group, a balancing mode, and a capacity scaler.”

Answer: Configurations include:
Health checks.
Session affinity.
Timeout settings.
Balancing mode (CPU/RPS).
Capacity scaler.

359
Q

Question: What are backends, and how do balancing modes and capacity scalers work?

A

Backends and Balancing Modes (03:27-04:22)

Content: “An instance group contains virtual machine instances. The instance group may be a managed instance group with or without autoscaling, or an unmanaged instance group. A balancing mode tells the load balancing system how to determine when the back-end is at full usage. If all the backends for the back-end service in a region are at full usage, new requests are automatically routed to the nearest region that can still handle requests. The balancing mode can be based on CPU utilization or requests per second (RPS). A capacity setting is an additional control that interacts with the balancing mode setting. For example, if you normally want your instances to operate at a maximum of 80 percent CPU utilization, you would set your balancing mode to 80 percent CPU utilization and your capacity to 100 percent. If you want to cut instance utilization in half, you could leave the balancing mode at 80 percent CPU utilization and set capacity to 50 percent. Now any changes to your back-end services are not instantaneous, so don’t be surprised if it takes several minutes for your changes to propagate throughout the network.”

Answer:
Backends are instance groups (managed or unmanaged).
Balancing modes (CPU/RPS) determine full usage.
Capacity scalers adjust the percentage of balancing mode usage.
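The 80%/50% example reduces to a single multiplication; this sketch just shows the arithmetic behind the capacity scaler:

```python
def effective_target(balancing_mode_target, capacity_scaler):
    """The capacity scaler multiplies the balancing-mode target: it caps
    the fraction of the configured target that the backend is allowed to
    absorb. A sketch of the example above, not the load balancer API."""
    return balancing_mode_target * capacity_scaler

# 80% CPU balancing mode with capacity at 100%: instances run up to 80% CPU.
print(effective_target(0.80, 1.00))  # 0.8
# Leave balancing mode at 80% but set capacity to 50%: utilization is cut
# in half, to an effective 40% target.
print(effective_target(0.80, 0.50))  # 0.4
```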

360
Q

Question: How does the HTTP load balancer handle traffic from users in different locations?

A

Basic HTTP Load Balancer Scenario (00:00-01:13)

Content: “Let me walk through an HTTP load balancer in action. The project on this slide has a single global IP address, but users enter the Google Cloud network from two different locations: one in North America and one in EMEA. First, the global forwarding rule directs incoming requests to the target HTTP proxy. The proxy checks the URL map to determine the appropriate backend service for the request. In this case, we are serving a guestbook application with only one backend service. The backend service has two backends: one in us-central1-a and one in europe-west1-d. Each of those backends consists of a managed instance group. Now, when a user request comes in, the load balancing service determines the approximate origin of the request from the source IP address. The load balancing service also knows the locations of the instances owned by the backend service, their overall capacity, and their overall current usage.”

Answer: The global forwarding rule directs requests to the target HTTP proxy, which uses the URL map to select the backend service. The load balancer then routes traffic based on the user’s location and the capacity of the backend instance groups.

361
Q

Question: How are requests distributed when multiple users are in the same region?

A

Location-Based Traffic Routing (01:13-01:41)

Content: “Therefore, if the instances closest to the user have available capacity, the request is forwarded to that closest set of instances. In our example, traffic from the user in North America would be forwarded to the managed instance group in us-central1-a, and the traffic from the user in EMEA would be forwarded to the managed instance group in europe-west1-d. If there are several users in each region, the incoming requests to the given region are distributed evenly across all available backend services and instances in that region.”

Answer: Requests are distributed evenly across all available backend services and instances within that region.

362
Q

Question: What happens when a region’s backend instances are unavailable or at capacity?

A

Cross-Region Load Balancing (01:41-02:08)

Content: “If there are no healthy instances with available capacity in a given region, the load balancer instead sends the request to the next closest region with available capacity. Therefore, traffic from the EMEA user could be forwarded to the us-central1-a backend if the europe-west1-d backend does not have capacity or has no healthy instances as determined by the health checker. This is referred to as cross-region load balancing.”

Answer: The load balancer uses cross-region load balancing, sending requests to the next closest region with available capacity.
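The fallback behavior can be sketched as a nearest-first scan over candidate regions (a hypothetical data model, not the load balancer's real selection algorithm):

```python
def pick_backend(backends_nearest_first):
    """Prefer the region closest to the user; fall back to the next
    closest region that is healthy and still has spare capacity."""
    for region in backends_nearest_first:
        if region["healthy"] and region["load"] < region["capacity"]:
            return region["name"]
    return None  # no healthy capacity anywhere

# For the EMEA user, europe-west1-d is nearest but at full capacity, so
# the request crosses to us-central1-a: cross-region load balancing.
nearest_first = [
    {"name": "europe-west1-d", "healthy": True, "load": 100, "capacity": 100},
    {"name": "us-central1-a", "healthy": True, "load": 40, "capacity": 100},
]
print(pick_backend(nearest_first))  # us-central1-a
```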

363
Q

Question: How does content-based load balancing work?

A

Content-Based Load Balancing (02:12-02:41)

Content: “Another example of an HTTP load balancer is a content-based load balancer. In this case, there are two separate backend services that handle either web or video traffic. The traffic is split by the load balancer based on the URL header as specified in the URL map. If the user is navigating to /video, the traffic is sent to the backend video service, and if the user is navigating anywhere else, the traffic is sent to the web-service backend. All of that is achieved with a single global IP address.”

Answer: The load balancer uses the URL map to route traffic based on the URL header. For example, requests to /video are sent to a video backend service, while other requests are sent to a web service backend, all through a single global IP address.
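Content-based routing can be sketched as longest-prefix matching against a URL map (hypothetical backend service names; the real URL-map matching rules are richer):

```python
def route(path, url_map, default_service):
    """Minimal sketch of content-based routing: the longest matching path
    prefix in the URL map wins; anything unmatched goes to the default
    backend service."""
    best, best_len = default_service, -1
    for prefix, service in url_map.items():
        if path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = service, len(prefix)
    return best

url_map = {"/video": "video-backend-service"}
print(route("/video/cats.mp4", url_map, "web-backend-service"))  # video-backend-service
print(route("/index.html", url_map, "web-backend-service"))      # web-backend-service
```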

364
Q

Question: What are the key differences between HTTP and HTTPS load balancers?

A

Differences Between HTTP and HTTPS Load Balancers (00:05-00:47)

Content: “An HTTP(S) load balancer has the same basic structure as an HTTP load balancer, but it differs in the following ways: An HTTP(S) load balancer uses a target HTTPS proxy instead of a target HTTP proxy. An HTTP(S) load balancer requires at least one signed SSL certificate installed on the target HTTPS proxy for the load balancer. The client SSL sessions terminate at the load balancer. HTTP(S) load balancers support the QUIC transport layer protocol. QUIC is a transport layer protocol that allows for faster client connection initiation, eliminates head-of-line blocking in multiplexed streams, and supports connection migration when a client’s IP address changes.”

Answer:
HTTPS uses a target HTTPS proxy.
HTTPS requires SSL certificates.
HTTPS terminates SSL sessions at the load balancer.
HTTPS supports the QUIC transport protocol.

365
Q

Question: How are SSL certificates used with HTTPS load balancers?

A

SSL Certificates and Target Proxies (00:47-01:27)

Content: “For more information on the QUIC protocol, see the link in the course resources. To use HTTPS, you must create at least one SSL certificate that can be used by the target proxy for the load balancer. You can configure the target proxy with up to 15 SSL certificates. For each SSL certificate, you first create an SSL certificate resource, which contains the SSL certificate information. SSL certificate resources are used only with the load balancing proxies such as a target HTTPS proxy or target SSL proxy, which we’ll discuss later in this module.”

Answer: SSL certificates are required for HTTPS load balancers and are installed on the target HTTPS proxy. Up to 15 certificates can be used. SSL certificate resources are created to hold the certificate information.

366
Q

Question: What are backend buckets, and how are they used with HTTP(S) load balancing?

A

Backend Buckets (01:27-02:24)

Content: “Backend buckets allow you to use Google Cloud Storage buckets with HTTP(S) Load Balancing. An external HTTP(S) load balancer uses a URL map to direct traffic from specified URLs to either a backend service or a backend bucket. One common use case is: send requests for dynamic content, such as data, to a backend service; and send requests for static content, such as images, to a backend bucket. In this diagram, the load balancer sends traffic with a path of /love-to-fetch/ to a Cloud Storage bucket in the europe-north region. All the other requests go to a Cloud Storage bucket in the us-east region. After you configure a load balancer with the backend buckets, requests with URL paths that begin with /love-to-fetch/ are sent to the europe-north Cloud Storage bucket, and all other requests are sent to the us-east Cloud Storage bucket.”

Answer: Backend buckets allow Cloud Storage buckets to be used as backends for HTTP(S) load balancing.

URL maps direct traffic to backend services or buckets, often routing static content requests to buckets.

367
Q

Question: What are Network Endpoint Groups (NEGs), and what are the different types?

A

Network Endpoint Groups (NEGs) (02:24-03:43)

Content: “A network endpoint group, or NEG, is a configuration object that specifies a group of backend endpoints or services. A common use case for this configuration is deploying services in containers. You can also distribute traffic in a granular fashion to applications running on your backend instances. You can use NEGs as backends for some load balancers and with Traffic Director. Zonal and internet NEGs define how endpoints should be reached, whether they are reachable, and where they are located. Unlike these NEG types, serverless NEGs don’t contain endpoints. A zonal NEG contains one or more endpoints that can be Compute Engine VMs or services running on the VMs. Each endpoint is specified by either an IP address or an IP:port combination. An Internet NEG contains a single endpoint that is hosted outside of Google Cloud. This endpoint is specified by an FQDN:port or IP:port combination. A hybrid connectivity NEG points to Traffic Director services running outside of Google Cloud. A serverless NEG points to Cloud Run, App Engine, or Cloud Functions services residing in the same region as the NEG. For more information on using NEGs, please see the link in course resources.”

Answer: NEGs are configuration objects specifying backend endpoints or services. Types include:
Zonal NEGs (VMs or VM services).
Internet NEGs (external endpoints).
Hybrid connectivity NEGs (Traffic Director services outside Google Cloud).
Serverless NEGs (Cloud Run, App Engine, Cloud Functions).


368
Q

Question: What is Cloud CDN, and what is its purpose?

A

Introduction to Cloud CDN (00:00-00:26)

Content: “Cloud CDN, or Content Delivery Network, uses Google’s globally distributed edge points of presence to cache HTTP(S) load-balanced content close to your users. Specifically, content can be cached at CDN nodes as shown on this map. There are over 90 of these cache sites spread across metropolitan areas in Asia Pacific, Americas, and EMEA. For an up-to-date list, please refer to the Cloud CDN documentation.”

Answer: Cloud CDN is a Content Delivery Network that uses Google’s global edge points of presence to cache HTTP(S) load-balanced content closer to users, improving delivery speed.

369
Q

Question: What are the benefits of using Cloud CDN?

A

Benefits of Cloud CDN (00:34-00:52)

Content: “Now, why should you consider using Cloud CDN? Well, Cloud CDN caches content at the edge of Google’s network providing faster delivery of content to your users while reducing serving costs. You can enable Cloud CDN with a simple checkbox when setting up the backend service of your HTTP(S) load balancer.”

Answer: Benefits include faster content delivery to users and reduced serving costs by caching content at Google’s network edge. It’s also easy to enable.

370
Q

Question: How does Cloud CDN handle cache misses and cache hits?

A

Cloud CDN Response Flow (01:00-02:39)

Content: “Let’s walk through the Cloud CDN response flow with this diagram. In this example, the HTTP(S) load balancer has two types of backends. There are managed VM instance groups in the us-central1 and asia-east1 regions, and there is a Cloud Storage bucket in us-east1. A URL map decides which backend each request is sent to: the Cloud Storage bucket could be used to serve static content and the instance groups could handle PHP traffic. Now, when a user in San Francisco is the first to access a piece of content, the cache site in San Francisco sees that it can’t fulfill the request. This is called a cache miss. The cache might attempt to get the content from a nearby cache, for example if a user in Los Angeles has already accessed the content. Otherwise, the request is forwarded to the HTTP(S) load balancer, which in turn forwards the request to one of your backends. Depending on what content is being served, the request will be forwarded to the us-central1 instance group or the us-east1 storage bucket. If the content from the backend is cacheable, the cache site in San Francisco can store it for future requests. In other words, if another user requests the same content in San Francisco, the cache site might now be able to serve that content. This shortens the round trip time and saves the origin server from having to process the request. This is called a cache hit. For more information on what content can be cached, please refer to the documentation.”

Answer:
Cache miss: The CDN requests the content from the HTTP(S) load balancer, which retrieves it from the backend. The content is then cached.
Cache hit: The CDN serves the cached content directly to the user, reducing latency and origin server load.
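The miss-then-hit flow above can be sketched as a toy in-memory edge cache. Everything here (the `EdgeCache` class, the `origin` callable) is invented for illustration; it is not a Cloud CDN API, just the caching logic the transcript describes.

```python
# Toy model of the Cloud CDN response flow: a cache miss forwards the
# request to the origin and stores a cacheable response; a later request
# for the same URL is a cache hit served directly from the edge.

class EdgeCache:
    def __init__(self):
        self._store = {}

    def get(self, url, fetch_from_origin):
        if url in self._store:            # cache hit: serve from the edge
            return self._store[url], "HIT"
        body = fetch_from_origin(url)     # cache miss: go to the load balancer/origin
        self._store[url] = body           # keep cacheable content for future requests
        return body, "MISS"

cache = EdgeCache()
origin = lambda url: f"content for {url}"

_, status1 = cache.get("/logo.png", origin)   # first request from this region
_, status2 = cache.get("/logo.png", origin)   # same content requested again
```

The second request skips the origin entirely, which is exactly the shortened round trip the transcript calls a cache hit.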

371
Q

Question: What are Cloud CDN cache modes, and how do they affect caching behavior?

A

Cloud CDN Logging and Cache Modes (02:39-04:15)

Content: “Now, each Cloud CDN request is automatically logged within Google Cloud. These logs will indicate a “Cache Hit” or “Cache Miss” status for each HTTP request of the load balancer. You will explore such logs in the next lab. But how do you know how Cloud CDN will cache your content? How do you control this? This is where cache modes are useful. Cache modes let you control the factors that determine whether or not Cloud CDN caches your content. Cloud CDN offers three cache modes, which define how responses are cached, whether or not Cloud CDN respects cache directives sent by the origin, and how cache TTLs are applied. The available cache modes are USE_ORIGIN_HEADERS, CACHE_ALL_STATIC, and FORCE_CACHE_ALL. USE_ORIGIN_HEADERS mode requires origin responses to set valid cache directives and valid caching headers. CACHE_ALL_STATIC mode automatically caches static content that doesn’t have the no-store, private, or no-cache directive. Origin responses that set valid caching directives are also cached. FORCE_CACHE_ALL mode unconditionally caches responses, overriding any cache directives set by the origin. You should make sure not to cache private, per-user content (such as dynamic HTML or API responses) if using a shared backend with this mode configured.”

Answer:
Cloud CDN logs cache hit/miss status.
Cache modes control caching behavior:
USE_ORIGIN_HEADERS: Respects origin’s cache directives.
CACHE_ALL_STATIC: Caches static content without no-store, private, or no-cache directives, and respects origin cache directives.
FORCE_CACHE_ALL: Unconditionally caches responses, overriding origin directives.
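The three modes can be modeled as a small decision function. This is a simplified sketch: real Cloud CDN also considers content type, response size, and TTL headers, and here `max-age` merely stands in for "valid caching directives".

```python
# Simplified model of the three Cloud CDN cache modes described above.

def is_cached(cache_mode, headers, is_static=False):
    cc = headers.get("Cache-Control", "")
    forbidden = any(d in cc for d in ("no-store", "private", "no-cache"))
    if cache_mode == "FORCE_CACHE_ALL":
        return True                    # unconditional; overrides origin directives
    if cache_mode == "CACHE_ALL_STATIC":
        # static content without forbidding directives, plus any response
        # that sets valid caching directives of its own
        return (is_static and not forbidden) or ("max-age" in cc and not forbidden)
    if cache_mode == "USE_ORIGIN_HEADERS":
        return "max-age" in cc and not forbidden   # origin must opt in explicitly
    return False

# FORCE_CACHE_ALL even caches responses marked private -- the risk the
# transcript warns about for shared backends serving per-user content.
risky = is_cached("FORCE_CACHE_ALL", {"Cache-Control": "private"})
```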

372
Q

Question: Which Google Cloud load balancers support IPv6 clients, and how does this support work?

A

IPv6 Client Support (00:00-01:32)

Content: “Now that we’ve discussed all the different load balancing services within Google Cloud, let me help you determine which load balancer best meets your needs. One differentiator between the different Google Cloud load balancers is the support for IPv6 clients. Only the HTTP(S), SSL proxy, and TCP proxy load balancing services support IPv6 clients. IPv6 termination for these load balancers enables you to handle IPv6 requests from your users and proxy them over IPv4 to your backends. For example, in this diagram there is a website www.example.com that is translated by Cloud DNS to both an IPv4 and IPv6 address. This allows a desktop user in New York and a mobile user in Iowa to access the load balancer through the IPv4 and IPv6 addresses, respectively. But how does the traffic get to the backends and their IPv4 addresses? Well, the load balancer acts as a reverse proxy, terminates the IPv6 client connection, and places the request into an IPv4 connection to a backend. On the reverse path, the load balancer receives the IPv4 response from the backend and places it into the IPv6 connection back to the original client. In other words, configuring IPv6 termination for your load balancers lets your backend instances appear as IPv6 applications to your IPv6 clients.”

Answer: HTTP(S), SSL proxy, and TCP proxy load balancing support IPv6 clients. The load balancer acts as a reverse proxy, terminating the IPv6 connection and proxying it over IPv4 to the backends.
Section 2: General Load Balancer Selection Guidelines (01:32-02:17)

Content: “To determine which Cloud Load Balancing product to use, you must first determine what traffic type your load balancers must handle. As a general rule, you’d choose an Application Load Balancer when you need a flexible feature set for your applications with HTTP(S) traffic. You’d choose a proxy Network Load Balancer to implement TLS offload, TCP proxy, or support for external load balancing to backends in multiple regions. You’d choose a passthrough Network Load Balancer to preserve client source IP addresses, avoid the overhead of proxies, and to support additional protocols like UDP, ESP, and ICMP, or if you need to expose client IP addresses to your applications.”
Question: What are the general guidelines for choosing a Cloud Load Balancing product based on traffic type and application requirements?
Answer:
Application Load Balancer: Flexible features for HTTP(S) traffic.
Proxy Network Load Balancer: TLS offload, TCP proxy, cross-regional backends.
Passthrough Network Load Balancer: Preserve client IP addresses, avoid proxies, support UDP, ESP, and ICMP.
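These guidelines reduce to a small decision helper. The traffic-type strings and the `needs_client_ip` flag are invented for this sketch; the real choice also weighs internal vs. external exposure and global vs. regional backends, covered in the next card.

```python
# Toy encoding of the load balancer selection rules above.

def choose_load_balancer(traffic, needs_client_ip=False):
    if traffic in ("HTTP", "HTTPS"):
        return "Application Load Balancer"        # flexible HTTP(S) feature set
    if traffic in ("UDP", "ESP", "ICMP") or needs_client_ip:
        return "Passthrough Network Load Balancer"  # preserves client source IPs
    if traffic in ("TCP", "SSL"):
        return "Proxy Network Load Balancer"      # TLS offload / TCP proxy
    raise ValueError(f"unsupported traffic type: {traffic}")
```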

373
Q

Question: What other factors should be considered when selecting a load balancer, and what does the “MANAGED” load-balancing scheme indicate?

A

Additional Factors for Load Balancer Selection (02:17-03:10)

Content: “You can further narrow down your choices depending on your application’s requirements: whether your application is external (internet-facing) or internal, and whether you need backends deployed globally or regionally. If you prefer a table over a flow chart, we recommend this summary table. The load-balancing scheme is an attribute on the forwarding rule and the backend service of a load balancer and indicates whether the load balancer can be used for internal or external traffic. The term MANAGED in the load-balancing scheme indicates that the load balancer is implemented as a managed service either on Google Front Ends or on the open source Envoy proxy. In a load-balancing scheme that is MANAGED, requests are routed either to the Google Front End or to the Envoy proxy. For more information on Network Service Tiers, refer to the documentation.”

Answer: Other factors include:
Whether the application is external (internet-facing) or internal.
Whether backends are deployed globally or regionally.
“MANAGED” scheme indicates the load balancer is implemented as a managed service on Google Front Ends or Envoy proxy.

374
Q

Question: What is Google Cloud Dataflow and what fundamental data processing tasks does it handle?

A

Answer: Cloud Dataflow is a fully managed service for executing a wide range of data processing patterns. It’s designed to transform and enrich data in both streaming and batch modes, providing reliability and expressive data manipulation.

375
Q

How does Cloud Dataflow simplify infrastructure management for data pipelines?

A

Answer: Cloud Dataflow handles the complexities of infrastructure setup and maintenance, allowing users to focus on data processing logic rather than server management or scaling.

376
Q

Question: What are Cloud Dataflow’s scalability and performance capabilities?

A

Answer: Built on Google Cloud infrastructure, Dataflow auto-scales to meet data pipeline demands, intelligently scaling to process millions of queries per second.

377
Q

Development Tools and SDK

Question: What development tools and SDKs does Cloud Dataflow support for building data pipelines?

A

Answer: Cloud Dataflow supports fast and simplified pipeline development via expressive SQL, Java, and Python APIs within the Apache Beam SDK. This SDK offers windowing and session analysis primitives, along with an ecosystem of source and sink connectors.
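The windowing primitive the Beam SDK offers can be illustrated in plain Python. This models fixed (tumbling) windows only; a real pipeline would use Apache Beam's `WindowInto` transform rather than this hand-rolled sketch.

```python
# Conceptual sketch of fixed windowing: each timestamped event is assigned
# to a window of `size` seconds, and values are grouped per window.
from collections import defaultdict

def fixed_windows(events, size):
    """events: iterable of (timestamp_seconds, value) pairs."""
    windows = defaultdict(list)
    for ts, value in events:
        start = (ts // size) * size                 # window this event falls into
        windows[(start, start + size)].append(value)
    return dict(windows)

# Two events in the first minute, one in the second minute.
counts = fixed_windows([(1, "a"), (2, "b"), (61, "c")], size=60)
```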

378
Q

Question: What GCP services are commonly used as data output destinations for analytical purposes after processing with Cloud Dataflow?

A

Answer: BigQuery, AI Platform, and Cloud Bigtable are commonly used for analyzing processed data.

379
Q

Question: Beyond basic cleaning, how does Cloud Dataprep intelligently assist users in the data preparation process, and what specific features enable this intelligence?

A

Answer: Cloud Dataprep intelligently assists users by predicting and suggesting ideal data transformations with each UI input, eliminating the need for manual coding. This is enabled by features like automatic schema detection, data type inference, possible join suggestions, and anomaly detection, which streamline data profiling and preparation.

380
Q

Question: How does the automatic anomaly detection within Cloud Dataprep benefit data analysts and machine learning engineers during the data preparation phase?

A

Answer: Automatic anomaly detection in Cloud Dataprep helps analysts and ML engineers quickly identify and address data inconsistencies or errors that could skew analysis or model training. By highlighting anomalies, it improves data quality, reduces the risk of biased outcomes, and saves significant time that would otherwise be spent on manual data inspection.

381
Q

Question: What is the significance of the Trifacta partnership for Cloud Dataprep users, and how does it impact the operational aspects of data preparation?

A

Answer: The Trifacta partnership is significant because Cloud Dataprep is built on Trifacta’s industry-leading data preparation solution, Trifacta Wrangler. This integration provides a seamless user experience, eliminating the need for up-front software installation, separate licenses, and ongoing operational overhead, making data preparation more accessible and efficient.

382
Q

Question: Describe the typical architectural flow of data when using Cloud Dataprep in conjunction with other Google Cloud services, focusing on data sources and destinations.

A

Answer: In a typical architecture, Cloud Dataprep prepares raw data from sources like BigQuery, Cloud Storage, or uploaded files. This prepared data can then be ingested into a transformation pipeline like Cloud Dataflow for further processing. Finally, the refined data is typically exported to destinations like BigQuery or Cloud Storage for analysis, reporting, and machine learning.

383
Q

Which Google Cloud service is best suited for hosting static websites and storing large files such as videos or photos?

A. Firestore
B. Cloud SQL
C. Cloud Storage
D. Memorystore

A

✅ Answer: C. Cloud Storage
Cloud Storage is ideal for storing unstructured data like images, videos, and hosting static websites.

384
Q

Which of the following Google Cloud services provides live synchronization and offline support, making it ideal for mobile and web apps?

A. BigQuery
B. Firestore
C. Bigtable
D. Spanner

A

✅ Answer: B. Firestore
Firestore is a serverless NoSQL document database with real-time syncing and offline capabilities—perfect for mobile and web apps.

385
Q

Which service is designed for high-throughput, low-latency access to very large amounts of single-keyed data?

A. Spanner
B. AlloyDB
C. Bigtable
D. Cloud SQL

A

✅ Answer: C. Bigtable
Bigtable is optimized for fast read/write access at scale—great for operational and analytical workloads.

386
Q

If you’re migrating an existing application that uses MySQL, PostgreSQL, or SQL Server with minimal refactoring, which Google Cloud service is most appropriate?

A. Cloud SQL
B. Spanner
C. BigQuery
D. Firestore

A

✅ Answer: A. Cloud SQL
Cloud SQL supports MySQL, PostgreSQL, and SQL Server, making it ideal for lift-and-shift migrations.

387
Q

Which Google Cloud database service is fully managed, PostgreSQL-compatible, and optimized for both transactional and analytical workloads?

A. Firestore
B. AlloyDB
C. Bigtable
D. Cloud Storage

A

✅ Answer: B. AlloyDB
AlloyDB is a high-performance PostgreSQL-compatible database, optimized for hybrid transactional and analytical processing.

388
Q

Which of the following best describes the principle of least privilege in Google Cloud IAM?

A. Give all users access to all resources for convenience
B. Only administrators should have access to Google Cloud resources
C. Grant users and services only the permissions necessary to perform their job
D. Use basic roles for all users to simplify access management

A

✅ Answer: C. Grant users and services only the permissions necessary to perform their job
The principle of least privilege ensures that users or service accounts are given only the permissions they truly need, minimizing risk.

389
Q

What type of IAM principal is used to represent an application or compute workload?

A. Google Account
B. Service Account
C. Google Group
D. Cloud Identity

A

✅ Answer: B. Service Account
Service accounts are used by applications or services—not individuals—to authenticate and access resources.

390
Q

Which of the following statements about IAM roles is true?

A. Permissions can be assigned directly to users.
B. Basic roles are recommended for fine-grained access control.
C. Roles are collections of permissions.
D. Custom roles are managed by Google.

A

✅ Answer: C. Roles are collections of permissions
In IAM, you assign roles to users, and each role contains specific permissions.
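This role model can be made concrete in a few lines. The role names below are real predefined Cloud Storage roles, but the permission sets shown are abbreviated and the checking function is a toy, not the IAM API.

```python
# A role is a named collection of permissions; principals are granted
# roles, never raw permissions directly.

ROLES = {
    "roles/storage.objectViewer": {"storage.objects.get", "storage.objects.list"},
    "roles/storage.objectCreator": {"storage.objects.create"},
}

def has_permission(granted_roles, permission):
    """True if any granted role contains the permission."""
    return any(permission in ROLES[r] for r in granted_roles)

ok = has_permission(["roles/storage.objectViewer"], "storage.objects.get")
denied = has_permission(["roles/storage.objectViewer"], "storage.objects.create")
```

Granting the narrowest role whose permission set covers the job is how this model connects back to the least-privilege principle from the previous card.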

391
Q

Why would you use a custom role in IAM?

A. To provide access to all services in a project
B. To replace predefined roles managed by Google
C. To create tightly scoped permissions tailored to a specific job
D. To allow users to create new projects

A

✅ Answer: C. To create tightly scoped permissions tailored to a specific job
Custom roles are used when predefined roles grant too many permissions, helping enforce the principle of least privilege.

392
Q

Which IAM principal cannot be used to establish identity for authentication requests?

A. Service Account
B. Google Account
C. Google Group
D. gcloud CLI

A

✅ Answer: C. Google Group
Google Groups are used for managing access to groups of users but cannot be used to authenticate requests on their own.

393
Q

What is the main difference between authentication and authorization in Google Cloud?

A. Authentication defines what actions a user can perform.
B. Authorization proves the identity of the user or application.
C. Authentication proves identity, while authorization defines access.
D. Authentication and authorization are interchangeable terms.

A

✅ Answer: C. Authentication proves identity, while authorization defines access.
Authentication confirms who you are; authorization defines what you’re allowed to do.

394
Q

Which of the following is true about API keys in Google Cloud?

A. They are recommended for securing all types of APIs.
B. They identify an application and are appropriate for low-security, read-only access.
C. They offer short-lived, role-based tokens.
D. They are tied to a user and expire after a fixed period.

A

✅ Answer: B. They identify an application and are appropriate for low-security, read-only access.
API keys are useful for non-sensitive access but can pose a risk if leaked.

395
Q

What type of identity should you use for a workload or application to authenticate in Google Cloud?

A. Google Group
B. User account
C. Service account
D. OAuth token

A

✅ Answer: C. Service account
Service accounts are specifically designed for workloads and applications to access Google Cloud services.

396
Q

Why is managing downloaded service account keys considered risky?

A. They can’t be used to authenticate to APIs.
B. They can only be used by admins.
C. If leaked, they provide full access to the associated resources.
D. They automatically expire after 24 hours.

A

✅ Answer: C. If leaked, they provide full access to the associated resources.
Access to a private key is like having a user’s password — a serious security risk.

397
Q

Which of the following is a recommended best practice for authenticating service accounts?

A. Use service account keys stored in source code.
B. Use the default service account for all workloads.
C. Avoid downloaded service account keys whenever possible.
D. Share service account keys with all team members.

A

✅ Answer: C. Avoid downloaded service account keys whenever possible.
To reduce the risk of leakage and misuse, it’s best to use other authentication methods, such as Workload Identity Federation or the built-in credentials on Google Cloud services.

398
Q

What is the recommended way to authenticate an application running in a local development environment?

A. Attach a service account to the instance.
B. Use Workload Identity Federation.
C. Use service account keys.
D. Run gcloud auth application-default login.

A

✅ Answer: D. Run gcloud auth application-default login.
This command allows ADC to use your user credentials during local development.

399
Q

If your application is running on Google Kubernetes Engine (GKE), which authentication method should you use?

A. Service account keys
B. Workload Identity
C. API keys
D. OAuth tokens

A

✅ Answer: B. Workload Identity
Workload Identity allows GKE workloads to impersonate service accounts securely.

400
Q

Which method is preferred for workloads running outside Google Cloud if the external environment can issue ID tokens?

A. Use service account keys
B. Use API keys
C. Use Workload Identity Federation
D. Attach default service account

A

✅ Answer: C. Use Workload Identity Federation
It securely allows external workloads to impersonate a service account without using keys.

401
Q

What is the last resort for authenticating to Google Cloud APIs from outside Google Cloud?

A. OAuth user login
B. Service account keys
C. Google Workspace identities
D. gcloud CLI login

A

✅ Answer: B. Service account keys
Service account keys are discouraged but may be used if federation is not possible.

402
Q

In which order does Application Default Credentials (ADC) search for credentials?

A. Attached service account → Environment variable → gcloud user credentials
B. gcloud user credentials → Environment variable → Attached service account
C. Environment variable → gcloud user credentials → Attached service account
D. Service account key → OAuth token → gcloud user credentials

A

✅ Answer: C. Environment variable → gcloud user credentials → Attached service account
ADC checks for credentials in this exact order.
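The lookup order can be sketched as a simple resolver. The environment is passed in as a plain dict so the precedence is easy to test; the real logic lives in the google-auth library, which reads the actual process environment and the metadata server.

```python
# Sketch of the Application Default Credentials (ADC) search order.

def resolve_adc(env, gcloud_user_creds=None, attached_service_account=None):
    # 1. GOOGLE_APPLICATION_CREDENTIALS environment variable (key file path)
    if env.get("GOOGLE_APPLICATION_CREDENTIALS"):
        return ("env_var", env["GOOGLE_APPLICATION_CREDENTIALS"])
    # 2. user credentials set up by `gcloud auth application-default login`
    if gcloud_user_creds:
        return ("gcloud_user", gcloud_user_creds)
    # 3. service account attached to the compute resource (metadata server)
    if attached_service_account:
        return ("attached_sa", attached_service_account)
    raise RuntimeError("could not automatically determine credentials")

# The env var wins even when the other sources are present.
first = resolve_adc({"GOOGLE_APPLICATION_CREDENTIALS": "/path/key.json"},
                    gcloud_user_creds="user-creds",
                    attached_service_account="sa@project.iam.gserviceaccount.com")
# With no env var and no gcloud login, the attached service account is used.
last = resolve_adc({}, attached_service_account="sa@project.iam.gserviceaccount.com")
```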

403
Q

Which of the following is the preferred way for a production application running on Cloud Run or Compute Engine to authenticate with Google Cloud APIs?

A. Use gcloud auth login
B. Use a downloaded service account key
C. Attach a user-managed service account to the resource
D. Use Workload Identity Federation

A

✅ Answer: C. Attach a user-managed service account to the resource

Attaching a user-managed service account ensures least privilege and avoids the need to manage service account keys.

404
Q

What is the first method that Application Default Credentials (ADC) checks when looking for credentials?

A. gcloud CLI user login credentials
B. Attached service account on the compute resource
C. The GOOGLE_APPLICATION_CREDENTIALS environment variable
D. IAM policy bindings for the default Compute Engine service account

A

✅ Answer: C. The GOOGLE_APPLICATION_CREDENTIALS environment variable

ADC first checks if this environment variable is set and points to a service account key file.

405
Q

Which of the following authentication approaches is recommended for GKE workloads needing access to Google Cloud APIs?

A. Cloud SQL Auth Proxy
B. Workload Identity
C. OAuth user login
D. Service account keys

A

✅ Answer: B. Workload Identity

Workload Identity allows GKE workloads to securely impersonate IAM service accounts without exposing long-lived keys.

406
Q

When should you consider using service account keys to authenticate to Google Cloud APIs?

A. Always, because they are faster
B. For GKE and Cloud Run workloads
C. Only as a last resort, if federation is not possible
D. When you need long-term access to Cloud APIs

A

✅ Answer: C. Only as a last resort, if federation is not possible

The transcript explicitly states that service account keys should only be used when no other option (e.g., federation) is viable.

407
Q

What are the three primary paths to building or modernizing applications in the cloud?

A

Answer:

Migrating an existing on-premises application to the cloud

Deploying to a hybrid or multicloud environment

Building a cloud-native application

408
Q

Why are microservices preferred over monolithic architectures in modern cloud apps?

A

Answer:
Because microservices are modular, allow independent deployment, reduce risk, enable reuse, and support diverse development teams and CI/CD practices.

409
Q

What are the three migration strategies for monolithic applications?

A

Answer:

Lift and Shift: Move the whole application to the cloud without changes

Move and Improve: Migrate incrementally, adopting containers and microservices

Refactor: Break the app into services, adopting serverless and modern architectures

410
Q

What is Anthos, and why is it useful for hybrid and multicloud?

A

Answer:
Anthos is a managed application platform that enables consistent application management across Google Cloud, on-premises, and other cloud providers, supporting modernization and observability.

411
Q

What are the deployment options for Anthos?

A

Answer:

Google Cloud

VMware vSphere

Bare-metal servers

Anthos Attached Clusters

AWS

Microsoft Azure (in preview)

412
Q

How does Cloud Build support CI/CD?

A

Answer:
Cloud Build is a managed CI/CD platform that builds, tests, and deploys code across environments, supports Docker and custom build steps, integrates with source repositories, and enables security scanning.

413
Q

What is the difference between Cloud Build and Cloud Deploy?

A

Answer:
Cloud Build focuses on building and packaging applications, while Cloud Deploy manages continuous delivery pipelines to GKE with built-in deployment strategies.

414
Q

What is service orchestration vs. service choreography?

A

Answer:

Orchestration: Centralized controller (e.g., Workflows) manages service interactions

Choreography: Decentralized, event-driven interaction (e.g., via Pub/Sub or Eventarc)

415
Q

What Google Cloud services support service choreography?

A

Answer:

Pub/Sub

Eventarc

Cloud Scheduler (for scheduled events)

Cloud Tasks (for background task queues)

416
Q

How does Cloud Operations Suite support application monitoring?

A

Answer:
It offers logging (Cloud Logging), monitoring (Cloud Monitoring), tracing (Cloud Trace), debugging (Cloud Debugger), and profiling (Cloud Profiler) to observe and troubleshoot applications.

417
Q

What is the difference between Apigee and API Gateway?

A

Answer:

API Gateway: Lightweight, secure access to services in Google Cloud

Apigee: Full API management platform including monetization, analytics, and developer portals

418
Q

What is Cloud Code and how does it support development?

A

Answer:
Cloud Code is an IDE plugin that helps developers write, run, debug, and deploy cloud-native apps directly from their development environment.

419
Q

What types of databases can be used in Google Cloud app architectures?

A

Answer:

Relational: Cloud SQL, Cloud Spanner

NoSQL: Firestore, Bigtable

In-memory: Memorystore (Redis/Memcached)

420
Q

How can Google Cloud support event-driven microservices?

A

Answer:
Using Pub/Sub for message delivery, Eventarc for routing events, Cloud Functions for lightweight triggers, and Cloud Run or GKE for microservice execution.

421
Q

What tools help secure CI/CD pipelines in Google Cloud?

A

Answer:

Binary Authorization: Only allows trusted images in production

Cloud Build Private Pools: For secure and isolated builds

IAM: Controls developer access to services and resources

422
Q

When should you use OAuth 2.0 in your application?

A

Answer:
Use OAuth 2.0 when your application needs to access resources on behalf of a user, such as querying a user’s BigQuery dataset or creating resources in their project.

423
Q

What is Identity-Aware Proxy (IAP) used for?

A

Answer:
IAP is used to control access to web applications running in Google Cloud by verifying user identity and checking access permissions—without the developer writing access control code.

424
Q

What type of access control model does IAP promote?

A

Answer:
IAP promotes an application-level access control model, rather than relying on VPNs, network firewalls, or embedded code-based authorization.

425
Q

What are the benefits of using IAP?

A

Answer:
IAP provides centralized authorization for HTTPS-based applications, simplifies access control, and improves security posture without requiring code changes.

426
Q

What is Firebase Authentication?

What features does Firebase Auth provide for developers?

A

What is Firebase Authentication?
Answer:
Firebase Auth is a mobile app authentication service that supports signing in with passwords, phone numbers, and federated identity providers like Google, Apple, and GitHub.

What features does Firebase Auth provide for developers?
Answer:
It provides drop-in UI components for sign-up/sign-in, SDKs, and libraries that simplify handling edge cases like account recovery and identity management.

427
Q

After successful login with Firebase Auth, how can the app use the identity?

A

Answer:
The app gets access to the user’s profile and an authentication token, which can be used in OAuth 2.0 and OpenID Connect flows to authenticate to backend services.

428
Q

How does Identity Platform differ from Firebase Authentication?

A

Answer:
Identity Platform offers enterprise-level features like OpenID Connect and SAML support, multi-factor authentication, and integration with Identity-Aware Proxy.

429
Q

What kind of identity providers are supported by Firebase Auth?

A

Answer:
Google, Apple, GitHub, phone numbers, and email/password authentication are supported.

430
Q

What is the main purpose of Secret Manager in Google Cloud?

A

Answer:
Secret Manager provides a secure and centralized way to store, manage, and access sensitive information such as API keys, passwords, and certificates.

431
Q

Why is storing credentials in flat files not recommended?

A

Answer:
Flat files can be insecure and hard to manage at scale. Secrets may be scattered, and access control is harder to enforce, increasing the risk of credential leaks.

432
Q

How does Secret Manager control access to secrets?

A

Answer:
Secret Manager uses IAM permissions to control access, ensuring that only authorized users and services can access secrets.

433
Q

What types of data can you store in Secret Manager?

A

Answer:
Secrets can be stored as binary blobs or text strings.

434
Q

How does versioning work in Secret Manager?

A

Answer:
Each secret can have multiple immutable versions. Versions cannot be modified but can be deleted, and there’s no limit on the number of versions.
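The immutable-version behavior can be modeled in a short sketch. The class and method names are illustrative only, loosely mirroring Secret Manager operations (add version, access, destroy), not the real client library.

```python
# Toy model of Secret Manager versioning: each added version is immutable,
# versions can be destroyed but never edited, and version numbers increase.

class Secret:
    def __init__(self):
        self._versions = {}   # version number -> payload (None once destroyed)
        self._next = 1

    def add_version(self, payload):
        version = self._next
        self._versions[version] = payload   # new immutable version
        self._next += 1
        return version

    def access(self, version):
        payload = self._versions[version]
        if payload is None:
            raise ValueError(f"version {version} is destroyed")
        return payload

    def destroy(self, version):
        self._versions[version] = None      # payload gone; the number remains

secret = Secret()
v1 = secret.add_version(b"old-password")
v2 = secret.add_version(b"new-password")   # rotation adds a version, never edits
secret.destroy(v1)
```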

435
Q

What does the principle of least privilege mean in the context of Secret Manager?

A

Answer:
It means only necessary access is granted — by default only project owners can access secrets, and all others must be explicitly granted IAM roles.

436
Q

What logging feature is supported by Secret Manager?

A

Answer:
Secret Manager supports Cloud Audit Logs, which record every interaction (read, write, delete) with secrets for auditing and compliance.

437
Q

What encryption methods does Secret Manager use?

A

Answer:
By default, it uses Google-managed encryption, but you can also use Cloud KMS for customer-managed encryption of secrets.

438
Q

How does Cloud Monitoring work with an organization’s projects?

A

A Cloud Monitoring workspace in Google Cloud is a central place where you can monitor and manage metrics, logs, uptime checks, alerts, and other observability data across one or more Google Cloud projects or AWS accounts.

Key Features:
Centralized monitoring: You can view and analyze metrics from multiple projects or AWS accounts in one place.

Dashboards: Create custom dashboards to visualize metrics.

Alerting policies: Define conditions to trigger alerts and route notifications.

Logs-based metrics: Create metrics from log data using Cloud Logging.

Uptime checks & SLOs: Monitor service availability and define service level objectives (SLOs).

Integration with Cloud Logging and Cloud Trace.