PCA Q's 100-180 Flashcards

1
Q
100. You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?

A
Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances

B
Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.

C
Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances

D
Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances

A

B is correct. See the reference architecture: https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine

2
Q
101. Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived CSV files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100 Mbps internet connection.

What actions will meet your company's needs?

A Compress and upload both archived files and files uploaded daily using the gsutil -m option.

B Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection, and use it to upload files daily.

C Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish one Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.

D Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily.

A

B Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection, and use it to upload files daily.

Agree with B. A 100 Mbps connection is far too slow: 10 TB is about 8×10^13 bits, which at 10^8 bits per second takes roughly 800,000 seconds (over 9 days) for a single day's upload, so the daily load alone cannot keep up, let alone the 900 TB archive.

https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp#close

3
Q
102. You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.

Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?

A Cloud Pub/Sub alone
B Cloud Pub/Sub to Cloud Dataflow
C Cloud Pub/Sub to Stackdriver
D Cloud Pub/Sub to Cloud SQL

A

B Cloud Pub/Sub to Cloud Dataflow

I believe the answer is B. “Pub/Sub doesn’t provide guarantees about the order of message delivery. Strict message ordering can be achieved with buffering, often using Dataflow.”

https://cloud.google.com/solutions/data-lifecycle-cloud-platform

4
Q
103. Your company is planning to perform a lift and shift migration of their Linux RHEL 6.5+ virtual machines. The virtual machines are running in an on-premises VMware environment. You want to migrate them to Compute Engine following Google-recommended practices. What should you do?

A 1. Define a migration plan based on the list of the applications and their dependencies.
2. Migrate all virtual machines into Compute Engine individually with Migrate for Compute Engine.

B 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Create images of all disks. Import the disks on Compute Engine.
3. Create standard virtual machines where the boot disks are the ones you have imported.

C 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.

D 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Install a third-party agent on all selected virtual machines.
3. Migrate all virtual machines into Compute Engine.

A

C 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.

Answer: C.
Migrate for Compute Engine organizes groups of VMs into waves. After understanding the dependencies of your applications, create runbooks that contain groups of VMs and begin your migration.

https://cloud.google.com/migrate/compute-engine/docs/4.5/how-to/migrate-on-premises-to-gcp/overview

5
Q
104. You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the filesystem. The application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent access causes corruption. The business is willing to accept downtime when an incident occurs, but the application must be available 24/7 to support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?

A Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances

B Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances

C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances

D Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.

A

D Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
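
Answer D fits because the traffic is TCP (so a network load balancer, not an HTTP load balancer) and the application cannot scale horizontally (so an active/standby pair rather than autoscaling). A sketch of creating the zone-redundant disk this design relies on, with hypothetical names, size, and zones:

    # Regional persistent disk, synchronously replicated across two zones;
    # the standby VM can force-attach it if the active zone fails
    gcloud compute disks create app-data-disk \
        --region=us-central1 \
        --replica-zones=us-central1-a,us-central1-b \
        --size=200GB \
        --type=pd-ssd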

6
Q
105. Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?

A Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud

B Configure a direct peering connection between the on-premises environment and Google Cloud

C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud

D Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud

A

D Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud

The correct answer is D.
Reason: “requires high throughput via internal IPs, while minimizing latency” - both are aspects you cannot guarantee with using VPN traversing the internet.

7
Q
106. You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?

A Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.

B Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.

C In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.

D In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.

A

A Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.

Correct Answer: A
- Each deployment to a service creates a revision. A revision consists of a specific container image, along with environment settings such as environment variables, memory limits, or concurrency value.
- Once the new revision is deployed to a service, you can manage the traffic using the MANAGE TRAFFIC option on the revision tab.
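
A hedged sketch of the equivalent gcloud step, with hypothetical service and revision names (for Cloud Run for Anthos, the --platform=gke and --cluster flags would also be needed):

    # Send 10% of traffic to the new revision, keep 90% on the current one
    gcloud run services update-traffic my-service \
        --to-revisions=my-service-00002-new=10,my-service-00001-old=90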

8
Q

107.
You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly.
What should you do?

A Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies

B Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting
software on a Compute Engine instance

C Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard

D Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.

A

A Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies

Answer: A.
If we created a custom dashboard for every incident, dashboards would proliferate and we would lose the central point of visibility, so option D is wrong.

9
Q
108. You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)
A Sharding
B Read replicas
C. Binary logging
D Automated backups
E Semisynchronous replication
A

C, D

If you use Cloud SQL, the fully managed Google Cloud MySQL database, you should enable automated backups and binary logging for your Cloud SQL instances. This allows you to perform point-in-time recovery, which restores your database from a backup and recovers it to a fresh Cloud SQL instance.
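
A hedged example of enabling both features on an existing instance (instance name and backup window are assumptions):

    # Daily automated backups plus binary logging for point-in-time recovery
    gcloud sql instances patch my-instance \
        --backup-start-time=03:00 \
        --enable-bin-log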

10
Q
109. You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?

A Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.

B When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.

C Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.

D Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.

A

B When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.

In my view, the question states "The association collects a large amount of health data, such as sustained injuries," and the nuance is on the word "such" in "Current legislation requires you to delete such information upon request of the subject." From that point of view, the requirement is not to delete the entire user record but the specific personal health data. With DLP you can use infoTypes and infoType detectors to scan specifically for those entries and decide how to act upon them (https://cloud.google.com/dlp/docs/concepts-infotypes).
I would say B.

11
Q

110.
Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?

A App Engine
B GKE On-Prem
C Compute Engine
D Google Kubernetes Engine

A

A App Engine

Answer should be A, as only with App Engine do we have a default service account that allows users to deploy changes per project. For GKE we may have to configure additional permissions for both the dev and operations teams to deploy changes.

12
Q

111.
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?

A Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.

B Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours

C Deploy the development and acceptance applications on a managed instance group and enable autoscaling

D Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.

A

B Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours

B is the answer, assuming the VMs don't need to be up after office hours.

https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
Schedule VMs to auto start and stop: The benefit of a platform like Compute Engine is that you only pay for the compute resources that you use. Production systems tend to run 24/7; however, VMs in development, test or personal environments tend to only be used during business hours, and turning them off can save you a lot of money!

https://cloud.google.com/blog/products/storage-data-transfer/save-money-by-stopping-and-starting-compute-engine-instances-on-schedule

Cloud Scheduler, GCP’s fully managed cron job scheduler, provides a straightforward solution for automatically stopping and starting VMs. By employing Cloud Scheduler with Cloud Pub/Sub to trigger Cloud Functions on schedule, you can stop and start groups of VMs identified with labels of your choice (created in Compute Engine). Here you can see an example schedule that stops all VMs labeled “dev” at 5pm and restarts them at 9am, while leaving VMs labeled “prod” untouched
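
A sketch of the scheduling half of that pattern; the topic, job name, schedule, and the Cloud Function that actually stops the labeled VMs are all assumptions:

    # Publish a message every weekday at 17:00; a Cloud Function subscribed
    # to the topic stops every VM carrying the matching label
    gcloud pubsub topics create stop-dev-instances
    gcloud scheduler jobs create pubsub stop-dev-vms \
        --schedule="0 17 * * 1-5" \
        --topic=stop-dev-instances \
        --message-body='{"label":"env=dev"}'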

13
Q

112.
You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?

A
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Create a mysqldump of the on-premises MySQL server.
4. Upload the dump to a Cloud Storage bucket.
5. Import the dump into Cloud SQL.
6. Modify the source code of the application to write queries to both databases and read from its local database.
7. Start the Compute Engine application.
8. Stop the on-premises application.

B
1. Set up Cloud SQL proxy and MySQL proxy.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Stop the on-premises application.
6. Start the Compute Engine application.

C
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server.
4. Create the replication configuration in Cloud SQL.
5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration.
7. When replication has been completed, stop the Compute Engine application.
8. Promote the Cloud SQL replica to a standalone instance.
9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.

D
1. Stop the on-premises application.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Start the application on Compute Engine.

A

C
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server.
4. Create the replication configuration in Cloud SQL.
5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration.
7. When replication has been completed, stop the Compute Engine application.
8. Promote the Cloud SQL replica to a standalone instance.
9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
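
The final cutover (step 8) maps to a single command, assuming a hypothetical replica name; the external-master replication in steps 4-6 is configured separately:

    # Promote the read replica to a standalone, writable instance
    gcloud sql instances promote-replica mysql-replica-1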

14
Q

113.
Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?

A Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.

B Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.

C Implement a Cloud NAT solution to remove the need for external IP addresses entirely.

D Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.

A

D Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
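
A hedged sketch of that policy, using the real constraint name but hypothetical project, zone, instance, and organization values:

    # policy.yaml: allow external IPs only on explicitly approved instances
    cat > policy.yaml <<'EOF'
    constraint: constraints/compute.vmExternalIpAccess
    listPolicy:
      allowedValues:
        - projects/my-project/zones/us-central1-a/instances/approved-vm-1
    EOF
    gcloud resource-manager org-policies set-policy policy.yaml \
        --organization=123456789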

15
Q

114.
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?

A Enable Virtual Private Cloud (VPC) flow logging

B Enable Firewall Rules Logging for the firewall rules you want to monitor.

C Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.

D Install the Google Cloud SDK, and verify that there are no firewall logs in the command-line output.

A

Answer is B.
When you create a firewall rule, there is an option to turn firewall rule logging on or off; it is off by default.
To get Firewall Insights or view the logs for a specific firewall rule, you need to enable logging when creating the rule, or enable it later by editing the rule.
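
Enabling logging on an existing rule is a one-liner (the rule name is an assumption):

    # Turn on Firewall Rules Logging for an existing rule
    gcloud compute firewall-rules update allow-ssh --enable-logging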

16
Q

115.
Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity and Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?

A
1. Create a VPC Service Controls perimeter that includes the projects with the buckets.
2. Create an access level with the CIDR of the office network.

B
1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range.
2. Use the Classless Inter-domain Routing (CIDR) of the office network.

C
1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets.
2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.

D
1. Create a Cloud VPN to the office network.
2. Configure Private Google Access for on-premises hosts.

A

A
1. Create a VPC Service Controls perimeter that includes the projects with the buckets.
2. Create an access level with the CIDR of the office network.

Should be A.
For all Google Cloud services secured with VPC Service Controls, you can ensure that resources within a perimeter are accessed only from clients within authorized VPC networks, using Private Google Access with either Google Cloud or on-premises hosts.
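
A sketch of the access-level half of A, assuming a hypothetical access policy ID, level name, and office CIDR:

    # conditions.yaml: the office egress range
    cat > conditions.yaml <<'EOF'
    - ipSubnetworks:
      - 203.0.113.0/24
    EOF
    gcloud access-context-manager levels create office_only \
        --title="Office network only" \
        --basic-level-spec=conditions.yaml \
        --policy=123456789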

17
Q

116.
You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don't want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?

A Start a new rolling restart operation.
B Start a new rolling replace operation.
C Start a new rolling update. Select the Proactive update mode.
D Start a new rolling update. Select the Opportunistic update mode.

A

IMHO the correct answer is D (Opportunistic mode), not C (Proactive mode): the requirement is not to update any running instances. See https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups: for automated rolling updates, you must set the mode to proactive. Alternatively, if an automated update is potentially too disruptive, you can choose to perform an opportunistic update. The MIG applies an opportunistic update only when you manually initiate the update on selected instances or when new instances are created. New instances can be created when you or another service, such as an autoscaler, resizes the MIG.
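
A hedged sketch of rolling out the new template opportunistically (MIG, template, and zone names are assumptions):

    # Running VMs are left untouched; only newly created instances
    # receive the updated template
    gcloud compute instance-groups managed rolling-action start-update my-mig \
        --version=template=my-template-v2 \
        --type=opportunistic \
        --zone=us-central1-a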

18
Q

117.
Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as
possible with the latest application data. You need to design the solution to meet this requirement. What should you do?

A Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.

B Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.

C Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.

D Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.

A

B Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.

Answer is B.
The requirement is only zonal resiliency. A regional persistent disk is a storage option that provides synchronous replication of data between two zones in a region. Regional persistent disks can be a good building block to use when you implement HA services in Compute Engine.

19
Q

118.
Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your data center IP space.

What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?

A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space

B Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space

C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP
space

D Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space

A

A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space

Correct Answer: A
- IP ranges must not overlap, so applying new IP addresses is the solution.

20
Q

119.
You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?

A Create a Dataproc cluster using standard worker instances.

B Create a Dataproc cluster using preemptible worker instances

C Manually deploy a Hadoop cluster on Compute Engine using standard instances

D Manually deploy a Hadoop cluster on Compute Engine using preemptible instances

A

B. Create a Dataproc cluster using preemptible worker instances

“Should be B, you want to minimize costs.”
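
A minimal sketch of such a cluster with hypothetical names; Dataproc secondary workers are preemptible by default:

    # Managed Hadoop/Spark with cheap preemptible capacity
    gcloud dataproc clusters create hadoop-migration \
        --region=us-central1 \
        --num-workers=2 \
        --num-secondary-workers=4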

21
Q

120.
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap
and must remain separated. The network configuration is shown below.

[Network diagram: VPC #1 contains subnet #1 with Instance #1, VPC #2 contains subnet #2 with Instance #2, and VPC #3 contains subnet #3 with Instance #3; each instance is a Compute Engine VM in its own VPC.]

Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?

A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.

B. Add two additional NICs to Instance #1 with the following configuration:
- NIC1: VPC #2, subnetwork subnet #2
- NIC2: VPC #3, subnetwork subnet #3
Update firewall rules to enable traffic between instances.

C. Create two VPN tunnels via Cloud VPN:
- 1 between VPC #1 and VPC #2
- 1 between VPC #2 and VPC #3

D. Peer all three VPCs:
- Peer VPC #1 with VPC #2.
- Peer VPC #2 with VPC #3.
Update firewall rules to enable traffic between the instances.
A
B. Add two additional NICs to Instance #1 with the following configuration:
- NIC1: VPC #2, subnetwork subnet #2
- NIC2: VPC #3, subnetwork subnet #3
Update firewall rules to enable traffic between instances.

According to my understanding, the requirement is that only VM1 shall be able to communicate with VM2 and VM3, but not VM2 with VM3.
We can exclude D, as it would enable VM2 to communicate with VM3 as well; my assumption is that if the question intended D to be correct, it would use just two peerings, one between VPC #1 and VPC #2 and one between VPC #1 and VPC #3.
We can exclude C as well: there is no connection between VPC #1 and VPC #3.
IMHO A will not work either.
So the only correct answer seems to be B. What I don't understand is why we have to update the firewall rules, as IMHO the default firewall rules enable such communication (maybe some restrictive rules are implemented; there are not enough details in the question to clarify that part). Please correct me if I am wrong.
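
A sketch of how Instance #1 could be created with one NIC per VPC (network, subnet, instance, and zone names are assumptions); note that multiple network interfaces have traditionally had to be configured at instance creation time:

    # One NIC per VPC; each NIC receives an internal IP from its subnet
    gcloud compute instances create instance-1 \
        --zone=us-central1-a \
        --network-interface=network=vpc-1,subnet=subnet-1 \
        --network-interface=network=vpc-2,subnet=subnet-2 \
        --network-interface=network=vpc-3,subnet=subnet-3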

22
Q

121.
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?

A Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.

B Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.

C Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.

D Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.

A

B Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.


23
Q
122. You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?

A 1. Update your GKE cluster to use Cloud Operations for GKE.
2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.

B 1. Create a new GKE cluster with Cloud Operations for GKE enabled.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.

C 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus.
2. Set an alert to trigger whenever the application returns an error.

D 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Set an alert to trigger whenever the application returns an error.

A

A
1. Update your GKE cluster to use Cloud Operations for GKE.
2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.

According to the reference, the answer should be A.

https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
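
A hedged example of enabling Cloud Operations on an existing cluster (cluster name and zone are assumptions; older gcloud releases used the --enable-stackdriver-kubernetes flag instead):

    # Turn on Cloud Logging and Cloud Monitoring for the running cluster
    gcloud container clusters update my-cluster \
        --zone=us-central1-a \
        --logging=SYSTEM,WORKLOAD \
        --monitoring=SYSTEM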

24
Q

123.
You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?

A Use a persistent disk for each instance.
B Use a regional persistent disk for each instance
C Create a Cloud Filestore instance and mount it in each instance.
D Create a Cloud Storage bucket and mount it in each instance using gcsfuse

A

C.
Create a Cloud Filestore instance and mount it in each instance.

Answer should be C.
https://cloud.google.com/storage/docs/gcs-fuse#notes
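
A sketch of the Filestore approach, assuming hypothetical instance, share, network, and mount values (the BASIC_SSD tier requires at least 2.5 TB):

    # Create the NFS share
    gcloud filestore instances create shared-fs \
        --zone=us-central1-a \
        --tier=BASIC_SSD \
        --file-share=name=vol1,capacity=2.5TB \
        --network=name=default
    # On each VM, mount the share as a regular POSIX filesystem
    sudo mount FILESTORE_IP:/vol1 /mnt/shared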

25
Q

124.
Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?

A Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices

B Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads.

C Use Anthos Config Management to create a NamespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads.

D Reinstall Istio using the default Istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.

A

A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices

The Anthos Service Mesh pages in the Google Cloud Console provide both summary and in-depth metrics, charts, and graphs that enable you to observe service behavior. You can monitor the overall health of your services, or drill down on a specific service to set a service level objective (SLO) or troubleshoot an issue.

26
Q

125.
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?

A Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.

B Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.

C Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.

D Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.

A

A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.

  • If a bucket has a retention policy, objects in the bucket can only be deleted or replaced once their age is greater than the retention period.
  • Once you lock a retention policy, you cannot remove it or reduce the retention period it has.
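
Both steps map directly to gsutil (the bucket name is an assumption); note that locking a retention policy is irreversible:

    # Set a 5-year retention period, then lock it permanently
    gsutil retention set 5y gs://mortgage-approvals
    gsutil retention lock gs://mortgage-approvals
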
27
Q

126.
Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?

A Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.

B Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.

C Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.

D Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.

A

C Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.

"The question says 'relevant microservice will be deployed automatically in the development environment,' so A and B are out. D says 'Rely on Vulnerability Scanning to ensure the code tests succeed,' but vulnerability scanning is not testing, so D is out. The correct answer is therefore C."
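
A hedged sketch of the trigger half of C (repository details and config file name are assumptions; the referenced cloudbuild.yaml would run the tests, build the image, and push it to Container Registry):

    # Run the build on every push to the develop branch
    gcloud builds triggers create github \
        --repo-owner=my-org \
        --repo-name=my-repo \
        --branch-pattern='^develop$' \
        --build-config=cloudbuild.yaml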

28
Q

127.
Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?

A Change the autoscaling metric to agent.googleapis.com/memory/percent_used.

B Restart the affected instances on a staggered schedule

C. SSH to each instance and restart the application process

D Increase the maximum number of instances in the autoscaling group

A

D Increase the maximum number of instances in the autoscaling group

"It cannot be A, since changing the metric used for autoscaling will not solve the issue; the CPU is already over-utilized. Hence the only 'workaround' while the application causing the issue is fixed (connection leaks, infinite loops, etc.) is to allow new nodes/workers/VMs to be introduced."

29
Q

128.
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?

A Cloud Run and BigQuery

B Cloud Run and Cloud Bigtable

C. A Compute Engine autoscaling managed instance group and BigQuery

D A Compute Engine autoscaling managed instance group and Cloud Bigtable

A

B Cloud Run and Cloud Bigtable

"Any correct answer must involve Cloud Bigtable over BigQuery, since Bigtable is optimized for heavy write loads. That leaves B and D. I would suggest B because it is lower cost ('The business wants to keep costs low')."

30
Q

129.
You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?

A Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.

B Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.

C Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.

D Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster.

A

A
Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
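
A minimal sketch with hypothetical names; the ClusterIP Service gives the microservice a stable in-cluster DNS name no matter how many replicas back it:

    # Deployment with a fixed replica count
    kubectl create deployment orders --image=gcr.io/my-project/orders:v1 --replicas=3
    # Internal-only Service; other microservices reach it at
    # http://orders.default.svc.cluster.local
    kubectl expose deployment orders --port=80 --target-port=8080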

31
Q

130.
Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?

A
1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.
2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.
3. Use Cloud VPN to join the two VPCs.

B
1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.

C
1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.
2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.

D
1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.
2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.
3. Use VPC Peering to join the two VPCs.
A

C
1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.
2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
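
A sketch of the Shared VPC wiring with hypothetical project IDs:

    # Make the networking team's project the Shared VPC host,
    # then attach the development team's project as a service project
    gcloud compute shared-vpc enable network-host-project
    gcloud compute shared-vpc associated-projects add dev-service-project \
        --host-project=network-host-project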

32
Q

131.
Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?

A Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.

B Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.

C Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.

D Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.

A

D Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.

"Answer should be D."

https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions
https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functions

33
Q

132.
Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly, such as an unwanted firewall change or server breach, is detected. You want to follow Google-recommended practices. What should you do?

A Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events.

B Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events.

C Export logs to a Pub/Sub topic, and trigger a Cloud Function with the relevant log events.

D Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.

A

C Export logs to a Pub/Sub topic, and trigger a Cloud Function with the relevant log events.

"I think C. Using BigQuery can get expensive if you have to somehow check the logs for anomalies."

https://cloud.google.com/blog/products/management-tools/automate-your-response-to-a-cloud-logging-event (there is a diagram there)
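
A hedged sketch of the export half of C (sink name, topic, and filter are assumptions); a Cloud Function deployed with a Pub/Sub trigger on the same topic would perform the automated response:

    # Route firewall-change audit logs to a Pub/Sub topic
    gcloud logging sinks create firewall-changes \
        pubsub.googleapis.com/projects/my-project/topics/security-events \
        --log-filter='resource.type="gce_firewall_rule"'
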

34
Q

133.
You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between
Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do?

A Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance.

B Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP.

C Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command-line tool to SSH into the instance.

D Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.

A

C Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command-line tool to SSH into the instance.

"Answer is C."
https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_with_ssh
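
With the IAP-secured Tunnel User role granted, the connection is a single command (instance and zone names are assumptions):

    # SSH over an IAP TCP tunnel; no external IP or VPN required
    gcloud compute ssh my-instance \
        --zone=us-central1-a \
        --tunnel-through-iap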

35
Q

134.
Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do?

A Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder

B Assign the development team group only the Project Viewer role on the Finance folder

C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group's Project Owner role from the Organization.

D Assign the development team group only the Project Owner role on the Shopping folder

A

C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group's Project Owner role from the Organization.

Answer C is correct.
Answers A and B are both overridden by the less-restrictive permission at the Organization level.
With answer D, the permission was already there at the Organization level, and it does not remove the Project Owner permission on the other folder.

36
Q

135.
You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice should suddenly crash. What should you do?

A Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value.

B Use Istio’s fault injection on the particular microservice whose faulty behavior you want to simulate

C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior.

D Configure Istio’s traffic management features to steer the traffic away from a crashing microservice

A

B Use Istio’s fault injection on the particular microservice whose faulty behavior you want to simulate

Answer is B.
The scenario concerns an application crash, not a node crash.

https://istio.io/latest/docs/tasks/traffic-management/fault-injection/
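
A hedged sketch of Istio fault injection; the service name and the 100% abort rate are assumptions chosen to simulate a full crash:

    # Abort all requests to the 'payments' service with HTTP 500
    kubectl apply -f - <<'EOF'
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: payments-fault
    spec:
      hosts:
      - payments
      http:
      - fault:
          abort:
            percentage:
              value: 100
            httpStatus: 500
        route:
        - destination:
            host: payments
    EOF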

37
Q

136.
Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?

A App Engine
B Cloud Endpoints
C. Compute Engine
D Google Kubernetes Engine

A

A App Engine

38
Q

137.
Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?

A Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API.

B Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API.

C Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.

D Use a versioning strategy for the APIs that adds the suffix 'DEPRECATED' to the current API version number on every backward-incompatible change. Use the current version number for the new API.

A

C Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change

https://cloud.google.com/apis/design/versioning

39
Q

138.
Your company has developed a monolithic, 3-tier application to allow external users to upload and share files. The solution cannot be easily enhanced and lacks reliability. The development team would like to re-architect the application to adopt microservices and a fully managed service approach, but they need to convince their leadership that the effort is worthwhile. Which advantage(s) should they highlight to leadership?

A The new approach will be significantly less costly, make it easier to manage the underlying infrastructure, and automatically manage the CI/CD pipelines.

B The monolithic solution can be converted to a container with Docker. The generated container can then be deployed into a Kubernetes cluster.

C The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the solution if necessary.

D The process can be automated with Migrate for Compute Engine.

A

C The new approach will make it easier to decouple infrastructure from application, develop and release new features, manage the underlying infrastructure, manage CI/CD pipelines and perform A/B testing, and scale the solution if necessary.

40
Q

139.
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?

A Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results

B Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the autoscaling works.

C Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address.

D Use Cloud Debugger in the development environment to understand the latency between the different microservices

A

A Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results

41
Q

140.
Your company has a Kubernetes application that pulls messages from Pub/Sub and stores them in Filestore. Because the application is simple, it was deployed as a single pod. The infrastructure team has analyzed Pub/Sub metrics and discovered that the application cannot process the messages in real time. Most of them wait for minutes before being processed. You need to scale the elaboration process, which is I/O-intensive. What should you do?

A Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent=50 to configure a Kubernetes autoscaling deployment

B Configure a Kubernetes autoscaling deployment based on the subscription/push_request_latencies metric

C Use the --enable-autoscaling flag when you create the Kubernetes cluster.

D Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric

A

D
Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.

"Answer is D. The num_undelivered_messages metric can indicate if subscribers are keeping up with message submissions."

https://cloud.google.com/pubsub/docs/monitoring#monitoring_the_backlog
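
A hedged sketch of such an autoscaler, assuming the Custom Metrics Stackdriver Adapter is installed in the cluster and using hypothetical deployment and subscription names:

    # Scale the worker Deployment on the Pub/Sub backlog instead of CPU
    kubectl apply -f - <<'EOF'
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: pubsub-worker
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: pubsub-worker
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: External
        external:
          metric:
            name: pubsub.googleapis.com|subscription|num_undelivered_messages
            selector:
              matchLabels:
                resource.labels.subscription_id: my-subscription
          target:
            type: AverageValue
            averageValue: "100"
    EOF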

42
Q

141.
Your company is developing a web-based application. You need to make sure that production deployments are linked to source code commits and are fully auditable. What should you do?

A Make sure a developer is tagging the code commit with the date and time of commit.

B Make sure a developer is adding a comment to the commit that links to the deployment

C Make the container tag match the source code commit hash.

D Make sure the developer is tagging the commits with latest

A

A Make sure a developer is tagging the code commit with the date and time of commit.

“A developer shouldn’t tag or comment every commit with specific data like timestamps. There might be an app version, but it’s not mentioned. I’d go with C, as it’s an automated, error-free approach that answers the question.”
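As an illustration of option C, the build step can tag the image with the commit hash so every deployment traces back to a commit; with Cloud Build, the built-in $COMMIT_SHA substitution achieves the same automatically (the image path below is a placeholder):

# Tag the container image with the current commit hash and push it
docker build -t gcr.io/my-project/my-app:$(git rev-parse --short HEAD) .
docker push gcr.io/my-project/my-app:$(git rev-parse --short HEAD)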

43
Q

142.
An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. Which approach should you recommend?

A Develop the application with containers, and deploy to Google Kubernetes Engine

B Develop the application for App Engine standard environment.

C. Use a Managed Instance Group when deploying to Compute Engine

D Develop the application for App Engine flexible environment, using a custom runtime.

A

B Develop the application for App Engine standard environment.

“B is ok.”

https://cloud.google.com/appengine/docs/the-appengine-environments
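A minimal sketch of such a deployment, assuming a Go 1.12 app laid out for the App Engine standard environment (the file contents and service layout are placeholders):

# app.yaml only needs the runtime for a basic service
cat > app.yaml <<EOF
runtime: go112
EOF
gcloud app deploy app.yaml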

44
Q

143.
Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to collect unstructured data from different sources. After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation engine for end users on the website. The structure of the data retrieved from the source systems can change at any time. The data must be stored exactly as it was retrieved for reprocessing purposes in case the data structure is incompatible with the current processing pipelines. You need to design an architecture to support the use case after you retrieve the data. What should you do?

A Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing

B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table.

C Send the data through the processing pipeline and then store the processed data in a Cloud Storage bucket for reprocessing

D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.

A

D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.

45
Q

144.
You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department’s projects start and end. You want to follow Google-recommended practices. What should you do?

A Grant all department members the required IAM permissions for their respective projects

B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.

C. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders.

D. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.

A

B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.
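Illustrative commands for option B (the organization ID, folder ID, group address, and role are placeholders):

# Create a folder per department under the organization
gcloud resource-manager folders create \
    --display-name="Engineering" --organization=123456789012
# Grant the department's Google Group its role once, at the folder level
gcloud resource-manager folders add-iam-policy-binding 345678901234 \
    --member="group:engineering@example.com" --role="roles/editor"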

46
Q

145.
Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. You have separate clusters for development, staging, and production. You have discovered that the team is able to deploy a Docker image to the production cluster without first testing the deployment in development and then staging. You want to allow the team to have autonomy but want to prevent this from happening. You want a Google Cloud solution that can be implemented quickly with minimal effort. What should you do?

A Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment

B Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the Docker image was tested in an earlier environment

C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.

D Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for usage in the given environment

A

C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.

“C is the correct answer

The most common Binary Authorization use cases involve attestations. An attestation certifies that a specific image has completed a previous stage, as described previously. You configure the Binary Authorization policy to verify the attestation before allowing the image to be deployed. At deploy time, instead of redoing activities that were completed in earlier stages, Binary Authorization only needs to verify the attestation.”
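A rough sketch of option C; the exact flags vary by gcloud version, and all names, digests, and key paths here are placeholders:

# Enforce Binary Authorization on the production cluster
gcloud container clusters update prod-cluster --zone=us-central1-a --enable-binauthz
# In the CI pipeline, attest the tested image digest
gcloud container binauthz attestations sign-and-create \
    --artifact-url="gcr.io/my-project/app@sha256:abc123" \
    --attestor="projects/my-project/attestors/ci-attestor" \
    --keyversion="projects/my-project/locations/global/keyRings/ring/cryptoKeys/key/cryptoKeyVersions/1"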

47
Q

146.
Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?

A Develop a Dataflow job to read data directly from the database and write it into Cloud Storage

B Use the Data Transfer appliance to perform an offline migration

C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage

D Compress the data and upload it with gsutil -m to enable multi-threaded copy

A

B. Google recommends gsutil for transfers under 1 TB; for larger data sets, Storage Transfer Service is recommended. Since STS is not among the answers, the next best option for the large export is the offline Transfer Appliance. (Even at a sustained 1 Gbps, a 10-TB export takes roughly 22 hours to upload, and the transfer would load the database for that entire window.)

48
Q

147.
Your company has an enterprise application running on Compute Engine that requires high availability and high performance. The application has been deployed on two instances in two zones in the same region in active-passive mode. The application writes data to a persistent disk. In the case of a single zone outage, that data should be immediately made available to the other instance in the other zone. You want to maximize performance while minimizing downtime and data loss. What should you do?

A.
1. Attach a persistent SSD disk to the first instance.
2. Create a snapshot every hour.
3. In case of a zone outage, recreate a persistent SSD disk for the second instance from the created snapshot.

B

  1. Create a Cloud Storage bucket
  2. Mount the bucket into the first instance with gcs-fuse.
  3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcs-fuse.

C

  1. Attach a regional SSD persistent disk to the first instance
  2. In case of a zone outage, force-attach the disk to the other instance

D.
1. Attach a local SSD to the first instance.
2. Execute an rsync command every hour where the target is a persistent SSD disk attached to the second instance.
3. In case of a zone outage, use the second instance.

A

C

  1. Attach a regional SSD persistent disk to the first instance
  2. In case of a zone outage, force-attach the disk to the other instance

https://cloud.google.com/compute/docs/disks/repd-failover
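Illustrative commands for option C (names, zones, and sizes are placeholders):

# Regional SSD persistent disk replicated across two zones
gcloud compute disks create app-data --region=us-central1 \
    --replica-zones=us-central1-a,us-central1-b --type=pd-ssd --size=500GB
gcloud compute instances attach-disk instance-a --disk=app-data \
    --disk-scope=regional --zone=us-central1-a
# During a zone outage, force-attach the disk to the standby instance
gcloud compute instances attach-disk instance-b --disk=app-data \
    --disk-scope=regional --zone=us-central1-b --force-attach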

49
Q

148.
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate the encryption keys outside of Google Cloud. You need to implement a solution. What should you do?

A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.

B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.

C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.

D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.

A

D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.

https://cloud.google.com/bigquery/docs/customer-managed-encryption
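A hedged sketch of the import-plus-CMEK flow (the key material must first be wrapped as described in the Cloud KMS import docs; all names are placeholders):

# Import externally generated key material into Cloud KMS
gcloud kms import-jobs create my-import-job --location=us --keyring=my-ring \
    --import-method=rsa-oaep-3072-sha1-aes-256 --protection-level=software
gcloud kms keys versions import --import-job=my-import-job --location=us \
    --keyring=my-ring --key=bq-key --algorithm=google-symmetric-encryption \
    --target-key-file=./wrapped-key-material
# Create a BigQuery dataset protected by the imported key
bq mk --dataset \
    --default_kms_key=projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/bq-key \
    my_dataset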

50
Q

149.
Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?

A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS.

B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.

C. Generate a GPG key pair Encrypt the data using the GPG key. Upload the encrypted data to the bucket

D Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.

A

B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.

“B is OK”
https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-object-key
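For illustration (project, key, and bucket names are placeholders):

# Create a key and set it as the bucket's default CMEK
gcloud kms keyrings create my-ring --location=us
gcloud kms keys create bucket-key --location=us --keyring=my-ring --purpose=encryption
gsutil kms encryption \
    -k projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/bucket-key \
    gs://my-sensitive-bucket
# Rotation then happens in Cloud KMS, not in the bucket
gcloud kms keys update bucket-key --location=us --keyring=my-ring \
    --rotation-period=90d --next-rotation-time=2030-01-01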

51
Q

150.
Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet. Your company does not allow any Compute Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy that adheres to these guidelines. What should you do?

A Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet

B Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).

C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC).

D. Create a Compute Engine instance, and install a NAT proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the internet.

A

A Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet
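Illustrative commands for option A (CIDRs, names, and region are placeholders):

# Private cluster: nodes get no public IP addresses
gcloud container clusters create private-cluster --zone=us-central1-a \
    --enable-ip-alias --enable-private-nodes --master-ipv4-cidr=172.16.0.0/28
# Cloud NAT gives the private nodes outbound-only internet access
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router \
    --region=us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges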

52
Q

151.
Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private Cloud (VPC) network fully connected to the company’s on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company’s on-premises environment. What should you do?

A Configure private Google access for on-premises hosts only.
B Configure private Google access.
C. Configure private services access.
D Configure serverless VPC access.

A

D Configure serverless VPC access.

https://cloud.google.com/vpc/docs/serverless-vpc-access#use_cases
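Illustrative setup for option D (connector name, region, and range are placeholders):

# Create a Serverless VPC Access connector in the app's region
gcloud compute networks vpc-access connectors create my-connector \
    --network=default --region=us-central1 --range=10.8.0.0/28
# Then reference it from the App Engine app's app.yaml:
#   vpc_access_connector:
#     name: projects/my-project/locations/us-central1/connectors/my-connector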

53
Q

152.
Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?

A.
1. Use Linux shasum to compute a digest of the files you want to upload.
2. Use gsutil -m to upload all the files to Cloud Storage.
3. Use gsutil cp to download the uploaded files.
4. Use Linux shasum to compute a digest of the downloaded files.
5. Compare the hashes.

B

  1. Use gsutil -m to upload the files to Cloud Storage.
  2. Develop a custom Java application that computes CRC32C hashes.
  3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files.
  4. Compare the hashes.

C.
1. Use gsutil -m to upload all the files to Cloud Storage.
2. Use gsutil cp to download the uploaded files.
3. Use Linux diff to compare the content of the files.

D.

  1. Use gsutil -m to upload the files to Cloud Storage.
  2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files.
  3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files.
  4. Compare the hashes.
A

D.

  1. Use gsutil -m to upload the files to Cloud Storage.
  2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files.
  3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files.
  4. Compare the hashes.

“D is ok.”
https://cloud.google.com/storage/docs/gsutil/commands/hash

54
Q

153.
You have deployed an application on Anthos clusters (formerly Anthos GKE). According to the SRE practices at your company, you need to be alerted if request latency is above a certain threshold for a specified amount of time. What should you do?

A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO), and create an alerting policy based on this SLO.

B. Enable the Cloud Trace API on your project, and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics.

C. Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an alerting policy in case this metric exceeds the threshold.

D. Configure Anthos Config Management on your cluster, and create a YAML file that defines the SLO and alerting policy you want to deploy in your cluster.

A

A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO), and create an alerting policy based on this SLO.

“A is ok.”
https://cloud.google.com/service-mesh/docs/observability/slo-overview

55
Q

154.
Your company has a stateless web API that performs scientific calculations. The web API runs on a single Google Kubernetes Engine (GKE) cluster. The cluster is currently deployed in us-central1. Your company has expanded to offer your API to customers in Asia. You want to reduce the latency for users in Asia. What should you do?
A. Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer. Add the public IPs to the Cloud DNS zone.

B. Use a global HTTP(S) load balancer with Cloud CDN enabled.

C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(S) load balancer.

D. Increase the memory and CPU allocated to the application in the cluster.

A

C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(S) load balancer.

“C is ok.”
https://cloud.google.com/blog/products/gcp/how-to-deploy-geographically-distributed-services-on-kubernetes-engine-with-kubemci

56
Q

155.
You are migrating third-party applications from optimized on-premises virtual machines to Google Cloud. You are unsure about the optimum CPU and memory options. The applications have a consistent usage pattern across multiple weeks. You want to optimize resource usage for the lowest cost. What should you do?

A. Create an instance template with the smallest available machine type, and use an image of the third-party application taken from a current on-premises virtual machine. Create a managed instance group that uses average CPU utilization to autoscale the number of instances in the group. Modify the average CPU utilization threshold to optimize the number of instances running.

B. Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a custom runtime. Set CPU and memory options similar to your application’s current on-premises virtual machine in the app.yaml file.

C. Create multiple Compute Engine instances with varying CPU and memory options. Install the Cloud Monitoring agent, and deploy the third-party application on each of them. Run a load test with high traffic levels on the application, and use the results to determine the optimal settings.

D. Create a Compute Engine instance with CPU and memory options similar to your application’s current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud Console.

A

D. Create a Compute Engine instance with CPU and memory options similar to your application’s current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud Console.

57
Q

156.
Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the on-premises environment and Google Cloud that is configured with Cloud VPN. The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should they do?

A Configure Private Google Access for on-premises only.

B. Perform the following tasks:
1. Create a service account.
2. Give the BigQuery JobUser role and Storage Reader role to the service account.
3. Remove all other IAM access from the project.

C Configure VPC Service Controls and configure Private Google Access.

D Configure Private Google Access

A

C Configure VPC Service Controls and configure Private Google Access.
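A hedged sketch of a service perimeter for option C (the access policy ID, project number, and title are placeholders):

# Perimeter that keeps BigQuery data from leaving the project
gcloud access-context-manager perimeters create bq_perimeter \
    --title="bq-perimeter" --resources=projects/123456789012 \
    --restricted-services=bigquery.googleapis.com --policy=987654321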

58
Q

157.
You are working at an institution that processes medical data. You are migrating several workloads onto Google Cloud. Company policies require all workloads to run on physically separated hardware, and workloads from different clients must also be separated. You created a sole-tenant node group and added a node for each client. You need to deploy the workloads on these dedicated hosts. What should you do?

A Add the node group name as a network tag when creating Compute Engine instances in order to host each workload on the correct node group

B Add the node name as a network tag when creating Compute Engine instances in order to host each workload on the correct node

C. Use node affinity labels based on the node group name when creating Compute Engine instances in order to host each workload on the correct node group

D Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node

A

D Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node

“When you create a node template, specify a node type, and optionally specify node affinity labels. You can only specify node affinity labels on a node template; you can’t specify node affinity labels on a node group.”

https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes#node_affinity_and_anti-affinity
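For illustration, gcloud accepts node affinity directly at instance creation (names and zones are placeholders):

# Pin a client's workload to its dedicated sole-tenant node
gcloud compute instances create client-a-vm --zone=us-central1-a --node=client-a-node
# Or pin to the node group and let Compute Engine choose a node within it
gcloud compute instances create client-b-vm --zone=us-central1-a --node-group=client-nodes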

59
Q

158.
Your company’s test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud, to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible.
Which cloud infrastructure should you recommend?

A Google Compute Engine unmanaged instance groups and Network Load Balancer

B Google Compute Engine managed instance groups with auto-scaling

C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test

D Google App Engine with Google StackDriver for logging

A

B Google Compute Engine managed instance groups with auto-scaling
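Illustrative commands for option B (template, names, and thresholds are placeholders):

# Managed instance group built from a template of the test-runner image
gcloud compute instance-groups managed create test-runners \
    --zone=us-central1-a --template=test-runner-template --size=2
# Autoscale on CPU so more runners spin up while the suite executes
gcloud compute instance-groups managed set-autoscaling test-runners \
    --zone=us-central1-a --max-num-replicas=20 --target-cpu-utilization=0.75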

60
Q

159.
A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform.
What should you do?

A Help the engineer to convert his websocket code to use HTTP streaming

B Review the encryption requirements for websocket connections with the security team

C. Meet with the cloud operations team and the engineer to discuss load balancer options

D Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions

A

C. Meet with the cloud operations team and the engineer to discuss load balancer options

https://cloud.google.com/load-balancing/docs/https#websocket_support

61
Q

160.
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?

A.
• Append metadata to file body.
• Compress individual files.
• Name files with serverName-Timestamp.
• Create a new bucket if the bucket is older than 1 hour, and save individual files to the new bucket. Otherwise, save files to the existing bucket.

B.
• Batch every 10,000 events with a single manifest file for metadata.
• Compress event files and manifest file into a single archive file.
• Name files using serverName-EventSequence.
• Create a new bucket if the bucket is older than 1 day, and save the single archive file to the new bucket. Otherwise, save the single archive file to the existing bucket.

C.
• Compress individual files.
• Name files with serverName-EventSequence.
• Save files to one bucket.
• Set custom metadata headers for each object after saving.

D.
• Append metadata to file body.
• Compress individual files.
• Name files with a random prefix pattern.
• Save files to one bucket.
A
D.
• Append metadata to file body.
• Compress individual files.
• Name files with a random prefix pattern.
• Save files to one bucket.

“How about D?”
“In order to maintain a high request rate, avoid using sequential names. Using completely random object names will give you the best load distribution. If you want to use sequential numbers or timestamps as part of your object names, introduce randomness to the object names by adding a hash value before the sequence number or timestamp.”

62
Q

161.
A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network’s origin.
What should you do?

A. Search for the Create VM entry in the Stackdriver alerting console.

B. Navigate to the Activity page in the Home section. Set category to Data Access and search for the Create VM entry.

C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.

D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list.

A

C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry.

“I am going to go with C. Answer A doesn’t seem to fit because of the question of when the VM was created.
Answer B focuses on Data Access logs, which doesn’t fit, since creating a network or firewall rule is an Admin Activity, not a data access activity.
D focuses on who logged in, which is good to know but doesn’t answer the question of how the network was created.
C focuses on logging, the selection of network events, and the Create/Insert entry.”
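As a rough illustration, the same lookup can be done from the CLI with gcloud logging read (the exact filter depends on the audit log fields; this one is an assumption):

# Admin Activity audit entries for network/instance creation
gcloud logging read \
  'logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName:"insert"' \
  --limit=20 --format=json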

63
Q

162.
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region.
What steps must you take?

A Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region

B Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region

C. Create an image file from the root disk with Linux dd command, create a new virtual machine instance in the US-East region

D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.

A

D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.

“D is correct. A and B talk about attaching the copied file system to a new VM rather than setting it as the root disk. Option C is not workable within GCP because the image must be on the GCP platform before the gcloud or Google Console instructions can create a VM from it.”
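Illustrative commands for option D (projects, zones, and names are placeholders):

# In the production project: snapshot the root disk and build an image from it
gcloud compute disks snapshot prod-vm --snapshot-names=prod-root-snap --zone=us-central1-a
gcloud compute images create prod-root-image --source-snapshot=prod-root-snap
# In the other project: boot the copy in US-East from that image
gcloud compute instances create prod-copy --zone=us-east1-b \
    --image=prod-root-image --image-project=prod-project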

64
Q

163.
Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance.
How should you configure the storage?

A Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots

B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.

C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance, and write backups to the mounted location using mysqldump.

D Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage

A

B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.

“I think it’s B. If you use a tool like gcsfuse, it writes straight to GCS, which is a cost benefit because you don’t need intermediate storage. In this case, however, ‘as quickly as possible’ is the key phrase. gcsfuse writes to GCS, which is much slower than writing directly to an attached SSD, and during the write to GCS it would also keep reading from the production database for a longer period. Therefore, writing to the extra SSD is the recommended solution. Offloading from the SSD to GCS afterwards does not impact the running database because the data is already separated.”
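A minimal sketch of option B, assuming the Local SSD is mounted at /mnt/backup (database, bucket, and paths are placeholders):

# Dump only the target database to the Local SSD; --single-transaction minimizes locking on InnoDB
mysqldump --single-transaction my_database > /mnt/backup/my_database.sql
# Then offload the dump to Cloud Storage without touching the database again
gsutil mv /mnt/backup/my_database.sql gs://my-backup-bucket/$(date +%F)/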

65
Q

164.
You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary cloud services that run on Google Compute Engine with Cloud Bigtable.
Which three requirements should they include? (Choose three.)

A Ensure that the load tests validate the performance of Cloud Bigtable

B Create a separate Google Cloud project to use for the load-testing environment

C. Schedule the load-testing tool to regularly run against the production environment

D. Ensure all third-party systems your services use are capable of handling high load.

E Instrument the production services to record every transaction for replay by the load-testing tool

F. Instrument the load-testing tool and the target services with detailed logging and metrics collection.

A

(A ,B, F)

A. Ensure that the load tests validate the performance of Cloud Bigtable

B. Create a separate Google Cloud project to use for the load-testing environment

F. Instrument the load-testing tool and the target services with detailed logging and metrics collection.

66
Q

165.
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager and set yourself up as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?

A Org viewer, project owner
B Org viewer, project viewer
C. Org admin, project browser
D Project owner, network admin

A

B Org viewer, project viewer

A is not correct because Project owner is too broad. The security team does not need to be able to make changes to projects.

  • B is correct because:
  • Org Viewer grants the security team permissions to view the organization’s display name.
  • Project Viewer grants the security team permissions to see the resources within projects.

C is not correct because Org admin is too broad. The security team does not need to be able to make changes to the organization.

D is not correct because Project owner is too broad. The security team does not need to be able to make changes to projects.

67
Q

166.
Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced.
Which two actions can you take? (Choose two.)

A Ensure every code check-in is peer reviewed by a security SME

B. Use source code security analyzers as part of the CI/CD pipeline.

C. Ensure you have stubs to unit test all interfaces between components

D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline.

E. Run a vulnerability security scanner as part of your continuous integration/continuous delivery (CI/CD) pipeline.

A

D, E

D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline.

E. Run a vulnerability security scanner as part of your continuous integration/continuous delivery (CI/CD) pipeline.

68
Q

167.
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?

A. Add additional nodes to your Kubernetes Engine cluster using the following command:
gcloud container clusters resize CLUSTER_NAME --size 10

B. Add a tag to the instances in the cluster with the following command:
gcloud compute instances add-tags INSTANCE --tags enable-autoscaling,max-nodes-10

C. Update the existing Kubernetes Engine cluster with the following command:
gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10

D. Create a new Kubernetes Engine cluster with the following command:
gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
and redeploy your application.

A

C. Update the existing Kubernetes Engine cluster with the following command:
gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10

69
Q

168.
Your marketing department wants to send out a promotional email campaign. The development team wants to minimize direct operation management. They project a wide range of possible customer responses, from 100 to 500,000 click-throughs per day. The link leads to a simple website that explains the promotion and collects user information and preferences.
Which infrastructure should you recommend?
(Choose two.)

A Use Google App Engine to serve the website and Google Cloud Datastore to store user data

B. Use a Google Container Engine cluster to serve the website and store data to persistent disk.

C Use a managed instance group to serve the website and Google Cloud Bigtable to store user data

D. Use a single Compute Engine virtual machine (VM) to host a web server, backed by Google Cloud SQL.

A

A, C

A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data

C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data

70
Q

169.
Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose?
(Choose two.)

A Compute Engine with containers
B Google Kubernetes Engine with containers
C. Google App Engine Standard Environment
D Compute Engine with custom instance types
E Compute Engine with managed instance groups

A

B,C

B Google Kubernetes Engine with containers
C. Google App Engine Standard Environment

71
Q

170.
One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes to the application data.
How can you design your logging system to verify authenticity of your logs?

A Write the log concurrently in the cloud and on premises

B Use a SQL database and limit who can modify the log table

C. Digitally sign each timestamp and log entry and store the signature

D Create a JSON dump of each log entry and store it in Google Cloud Storage

A

C. Digitally sign each timestamp and log entry and store the signature

“C (correct answer): Digitally sign each timestamp and log entry and store the signature.
Answers A, B, and D add no value for verifying the authenticity of your logs. Besides, logs are best exported to Cloud Storage, BigQuery, or Pub/Sub; a SQL database is neither a good export target nor a good place to store log data.
Simplified explanation:
To verify the authenticity of your logs in case they are tampered with or forged, you can generate a digest by hashing each timestamp or log entry and then digitally sign the digest with a private key to produce a signature. Anybody with your public key can verify that signature to confirm that it was made with your private key, and they can tell if the timestamp or log entry was modified. You can put the signature files into a folder separate from the log files. This separation enables you to enforce granular security policies.”
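A minimal signing sketch with OpenSSL (key and file names are placeholders):

# Sign each log entry (or batch of entries) with the private key...
openssl dgst -sha256 -sign private.pem -out entry.sig entry.log
# ...and let anyone verify it later with the public key
openssl dgst -sha256 -verify public.pem -signature entry.sig entry.log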

72
Q

171.
Your company has a Google Workspace account and Google Cloud Organization. Some developers in the company have created Google Cloud projects outside of the Google Cloud Organization.

You want to create an Organization structure that allows developers to create projects, but prevents them from modifying production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for production projects.

You want to minimize disruption to users and developers when business needs change in the future. You want to follow Google-recommended practices. How should you design the Organization structure?

A.
1. Create a second Google Workspace account and Organization.
2. Grant all developers the Project Creator IAM role on the new Organization.
3. Move the developer projects into the new Organization.
4. Set the policies for all projects on both Organizations.
5. Additionally, set the production policies on the original Organization.

B.
1. Create a folder under the Organization resource named “Production”.
2. Grant all developers the Project Creator IAM role on the Organization.
3. Move the developer projects into the Organization.
4. Set the policies for all projects on the Organization.
5. Additionally, set the production policies on the “Production” folder.

C.
1. Create folders under the Organization resource named “Development” and “Production”.
2. Grant all developers the Project Creator IAM role on the “Development” folder.
3. Move the developer projects into the “Development” folder.
4. Set the policies for all projects on the Organization.
5. Additionally, set the production policies on the “Production” folder.

D.
1. Designate the Organization for production projects only.
2. Ensure that developers do not have the Project Creator IAM role on the Organization.
3. Create development projects outside of the Organization using the developer Google Workspace accounts.
4. Set the policies for all projects on the Organization.
5. Additionally, set the production policies on the individual production projects.
A

C.
1. Create folders under the Organization resource named “Development” and “Production”.
2. Grant all developers the Project Creator IAM role on the “Development” folder.
3. Move the developer projects into the “Development” folder.
4. Set the policies for all projects on the Organization.
5. Additionally, set the production policies on the “Production” folder.

“C, because managing multiple organizations is not a Google best practice”

73
Q

172.
Your company has an application running on Compute Engine that allows users to play their favorite music. There are a fixed number of instances. Files are stored in Cloud Storage, and data is streamed directly to users. Users are reporting that they sometimes need to attempt to play popular songs multiple times before they are successful. You need to improve the performance of the application. What should you do?

A.
1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances.
2. Serve music files directly from the backend Compute Engine instance.

B.
1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances.
2. Download popular songs in Cloud Filestore.
3. Serve music files directly from the backend Compute Engine instance.

C.
1. Copy popular songs into Cloud SQL as a blob.
2. Update application code to retrieve data from Cloud SQL when Cloud Storage is overloaded.

D.
1. Create a managed instance group with Compute Engine instances.
2. Create a global load balancer and configure it with two backends:
   ◦ Managed instance group
   ◦ Cloud Storage bucket
3. Enable Cloud CDN on the bucket backend.

A

D.
1. Create a managed instance group with Compute Engine instances.
2. Create a global load balancer and configure it with two backends:
   ◦ Managed instance group
   ◦ Cloud Storage bucket
3. Enable Cloud CDN on the bucket backend.

“The Answer is D.
This is the meaning of using CND.
Cache content closer to the end user to optimize delivery time and other benefits.”
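Illustrative commands for option D's bucket backend (bucket, map, and backend names are placeholders):

# Expose the bucket through the HTTP(S) load balancer with Cloud CDN enabled
gcloud compute backend-buckets create music-backend \
    --gcs-bucket-name=my-music-bucket --enable-cdn
# Route song requests on the URL map to the CDN-backed bucket
gcloud compute url-maps add-path-matcher web-map \
    --path-matcher-name=songs \
    --default-backend-bucket=music-backend \
    --backend-bucket-path-rules="/songs/*=music-backend"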

74
Q

173.
The operations team in your company wants to save Cloud VPN log events for one year. You need to configure the cloud infrastructure to save the logs. What should you do?

A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.

B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you want to save.

C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period

D Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs

A

A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.

75
Q

174.
You are working with a data warehousing team that performs data analysis. The team needs to process data from external partners, but the data contains personally identifiable information (PII). You need to process and store the data without storing any of the PII data. What should you do?

A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.

B. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, store all non-PII data in BigQuery and store all PII data in a Cloud Storage bucket that has a retention policy set.

C. Ask the external partners to upload all data to Cloud Storage. Configure Bucket Lock for the bucket. Create a Dataflow pipeline to read the data from the bucket. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.

D. Ask the external partners to import all data in your BigQuery dataset. Create a Dataflow pipeline to copy the data into a new table. As part of the Dataflow pipeline, skip all data in columns that have PII data.

A

A.
Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.

“Option A is correct. Although option C sounds good, we ultimately should not store PII data at all, as the question requires.”

76
Q

175.
You want to allow your operations team to store logs from all the production projects in your Organization, without including logs from other projects. All of the production projects are contained in a folder. You want to ensure that all logs for existing and new production projects are captured automatically. What should you do?

A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.

B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in an operations project.

C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations project.

D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production projects, and grant IAM access to the operations team to run queries on the datasets.

A

A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.
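For illustration, an aggregated sink at the folder level (folder ID and bucket are placeholders):

# --include-children captures logs from all existing and future projects in the folder
gcloud logging sinks create prod-logs-sink \
    storage.googleapis.com/ops-logs-bucket \
    --folder=123456789012 --include-children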

77
Q

176.
Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?

A.
1. Install a Cloud Logging agent on all instances.
2. Create a sink to export logs into a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
4. Configure a retention policy at the bucket level using bucket lock.

B.
1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket.
2. Create a sink to export logs into a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.

C.
1. Install a Cloud Logging agent on all instances.
2. Create a sink to export logs into a partitioned BigQuery table.
3. Set a time_partitioning_expiration of 30 days.

D.
1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table.
2. Set a time_partitioning_expiration of 30 days.

A

A.
1. Install a Cloud Logging agent on all instances.
2. Create a sink to export logs into a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month.
4. Configure a retention policy at the bucket level using bucket lock.

“The practice for managing logs generated on Compute Engine on Google Cloud is to install the Cloud Logging agent and send them to Cloud Logging.

The sent logs will be aggregated into a Cloud Logging sink and exported to Cloud Storage.
The reason for using Cloud Storage as the destination for the logs is that the requirement calls for a lifecycle based on the storage period.
In this case, the logs will be used for active queries for 30 days after they are saved, but after that they need to be stored for a longer period of time for auditing purposes.

If the data is to be used for active queries, we can use BigQuery’s Cloud Storage data query feature and move the data past 30 days to Coldline to build a cost-optimal solution.

Therefore, the correct answer is as follows:
1. Install the Cloud Logging agent on all instances.
2. Create a sink that exports the logs to a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.
4. Set up a bucket-level retention policy using bucket locking.”
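Illustrative lifecycle and retention setup for option A (bucket name and rule values are placeholders):

# lifecycle.json: move objects to Coldline 30 days after creation
cat > lifecycle.json <<EOF
{"rule": [{"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
           "condition": {"age": 30}}]}
EOF
gsutil lifecycle set lifecycle.json gs://log-archive-bucket
# Enforce the two-year compliance hold, then lock it (locking is irreversible)
gsutil retention set 2y gs://log-archive-bucket
gsutil retention lock gs://log-archive-bucket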

78
Q

177.
Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit IAM users outside the domain from gaining permissions from now on. What should they do?

A. Configure an organization policy to restrict identities by domain

B. Configure an organization policy to block creation of service accounts

C. Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don’t belong to the Cloud Identity domain from all projects

D. Create a technical user (e.g., crawler@yourdomain.com), and give it the Project Owner role at the root organization level. Write a bash script that:
• Lists all the IAM rules of all projects within the organization.
• Deletes all users that do not belong to the company domain.
Create a Compute Engine instance in a project within the Organization and configure gcloud to be executed with the technical user’s credentials. Configure a cron job that executes the bash script every hour.
Reference: https://sysdig.com/blog/gcp-security-best-practices/

A

A. Configure an organization policy to restrict identities by domain

https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
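For illustration, the constraint takes Cloud Identity customer IDs rather than raw domain names (both IDs below are placeholders):

# Allow only identities from the company's own Cloud Identity account
gcloud resource-manager org-policies allow \
    constraints/iam.allowedPolicyMemberDomains C0abc123 \
    --organization=123456789012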

79
Q

178.
Your company has an application running on Google Cloud that is collecting data from thousands of physical devices that are globally distributed. Data is published to Pub/Sub and streamed in real time into an SSD Cloud Bigtable cluster via a Dataflow pipeline. The operations team informs you that your Cloud Bigtable cluster has a hotspot, and queries are taking longer than expected. You need to resolve the problem and prevent it from happening in the future. What should you do?

A Advise your clients to use HBase APIs instead of NodeJS APIs

B Delete records older than 30 days

C. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet

D Double the number of nodes you currently have

A

C. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet

https://cloud.google.com/bigtable/docs/schema-design#row-keys

80
Q

179.
Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain personally identifiable information (PII). Only the compliance team may access the PII. The other information in the tables must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to the tables. What should you do?
A.
1. From the dataset where you have the source data, create views of tables that you want to share, excluding PII.
2. Assign an appropriate project-level IAM role to the members of the data science team.
3. Assign access controls to the dataset that contains the view.

B.
1. From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII.
2. Assign an appropriate project-level IAM role to the members of the data science team.
3. Assign access controls to the dataset that contains the view.

C.
1. Create a dataset for the data science team.
2. Create views of tables that you want to share, excluding PII.
3. Assign an appropriate project-level IAM role to the members of the data science team.
4. Assign access controls to the dataset that contains the view.
5. Authorize the view to access the source dataset.

D.
1. Create a dataset for the data science team.
2. Create materialized views of tables that you want to share, excluding PII.
3. Assign an appropriate project-level IAM role to the members of the data science team.
4. Assign access controls to the dataset that contains the view.
5. Authorize the view to access the source dataset.
A

C.
1. Create a dataset for the data science team.
2. Create views of tables that you want to share, excluding PII.
3. Assign an appropriate project-level IAM role to the members of the data science team.
4. Assign access controls to the dataset that contains the view.
5. Authorize the view to access the source dataset.
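For illustration, the PII-free view itself can be created with bq; authorizing it on the source dataset is then done in that dataset's access settings (all project, dataset, and column names are placeholders):

# View in the data science dataset that excludes the PII columns
bq mk --use_legacy_sql=false \
    --view='SELECT * EXCEPT (ssn, email) FROM `my-project.source_ds.customers`' \
    sci_ds.customers_no_pii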
81
Q

180.
Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to minimize the cost of this data migration. Which approach should they use?

A Use the gsutil mv command to move the data

B Use the Storage Transfer Service to move the data

C. Download the data to a Transfer Appliance, and ship it to Google

D Download the data to the on-premises data center, and upload it to the Cloud Storage bucket

A

B Use the Storage Transfer Service to move the data
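With a recent gcloud, the transfer job can be created directly from the CLI; a hedged sketch assuming an S3-compatible source (bucket names and the credentials file are placeholders):

# One-time Storage Transfer Service job from the third-party object store
gcloud transfer jobs create s3://source-bucket gs://destination-bucket \
    --source-creds-file=creds.json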