PCA Q's 100-180 Flashcards
100. You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
B is correct. See: https://cloud.google.com/solutions/reliable-task-scheduling-compute-engine
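A minimal sketch of the Pub/Sub plumbing behind option B; the topic and subscription names are hypothetical, and the App Engine cron job (defined in cron.yaml) would call a handler that publishes to this topic:

    # Topic the App Engine cron handler publishes scheduled tasks to
    gcloud pubsub topics create task-schedule
    # Pull subscription consumed by the message-processing service on Compute Engine
    gcloud pubsub subscriptions create task-schedule-worker --topic=task-schedule --ack-deadline=60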
101. Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company's mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived CSV files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-Mbps internet connection.
What actions will meet your company’s needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish one Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a Cloud VPN tunnel to VPC networks over the public internet, and compress and upload files daily.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer the archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
Agree with B. A 100-Mbps connection is far too slow for the daily 10-TB transfer: 10 TB is roughly 80,000,000 Mb, so at 100 Mbps the upload alone would take about 800,000 seconds, or more than 9 days.
https://cloud.google.com/solutions/transferring-big-data-sets-to-gcp#close
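For the daily uploads over the Interconnect, a hedged example of the parallel gsutil copy the options mention (bucket and paths are hypothetical; the composite-upload threshold is optional tuning):

    # Upload the day's compressed CSV exports in parallel
    gsutil -m -o "GSUtil:parallel_composite_upload_threshold=150M" cp -r /exports/daily/*.csv.gz gs://analytics-landing/daily/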
102. You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
A Cloud Pub/Sub alone
B Cloud Pub/Sub to Cloud Dataflow
C Cloud Pub/Sub to Stackdriver
D Cloud Pub/Sub to Cloud SQL
B Cloud Pub/Sub to Cloud Dataflow
I believe the answer is B. “Pub/Sub doesn’t provide guarantees about the order of message delivery. Strict message ordering can be achieved with buffering, often using Dataflow.”
https://cloud.google.com/solutions/data-lifecycle-cloud-platform
103. Your company is planning to perform a lift and shift migration of their Linux RHEL 6.5+ virtual machines. The virtual machines are running in an on-premises VMware environment. You want to migrate them to Compute Engine following Google-recommended practices. What should you do?
A. 1. Define a migration plan based on the list of the applications and their dependencies.
2. Migrate all virtual machines into Compute Engine individually with Migrate for Compute Engine.
B. 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Create images of all disks. Import the disks on Compute Engine.
3. Create standard virtual machines where the boot disks are the ones you have imported.
C. 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Define a migration plan, prepare a Migrate for Compute Engine migration runbook, and execute the migration.
D. 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Install a third-party agent on all selected virtual machines.
3. Migrate all virtual machines into Compute Engine.
C. 1. Perform an assessment of virtual machines running in the current VMware environment.
2. Define a migration plan, prepare a Migrate for Compute Engine migration runbook, and execute the migration.
Answer: C.
Migrate for Compute Engine organizes groups of VMs into Waves. After understanding the dependencies of your applications, create runbooks that contain groups of VMs and begin your migration!
https://cloud.google.com/migrate/compute-engine/docs/4.5/how-to/migrate-on-premises-to-gcp/overview
104. You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the filesystem. The application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent access causes corruption. The business is willing to accept downtime when an incident occurs, but the application must be available 24/7 to support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?
A Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances
B Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances
C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances
D Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
D Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
105. Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?
A Use OpenVPN to configure a VPN tunnel between the on-premises
environment and Google Cloud
B Configure a direct peering connection between the on-premises environment and Google Cloud
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud
D Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud
D Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud
The correct answer is D.
Reason: “requires high throughput via internal IPs, while minimizing latency” - both are aspects you cannot guarantee with using VPN traversing the internet.
106. You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.
C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version.
D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
Correct Answer: A
- Each deployment to a service creates a revision. A revision consists of a specific container image, along with environment settings such as environment variables, memory limits, or concurrency value.
- Once the new revision is deployed to a service, you can manage the traffic using the MANAGE TRAFFIC option on the Revisions tab.
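A hedged sketch of the revision-based rollout in option A, assuming a hypothetical service named my-service running on Cloud Run for Anthos (exact flags can vary by gcloud version):

    # Deploy the new version as a new revision without sending it any traffic yet
    gcloud run deploy my-service --image=gcr.io/my-project/my-app:v2 --no-traffic \
        --platform=gke --cluster=my-cluster --cluster-location=us-central1
    # Shift 10% of traffic to the latest revision for evaluation, keeping 90% on the current revision
    gcloud run services update-traffic my-service --to-revisions=LATEST=10 \
        --platform=gke --cluster=my-cluster --cluster-location=us-central1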
107.
You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly.
What should you do?
A Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies
B Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting
software on a Compute Engine instance
C. Write a shell script that gathers metrics from GKE nodes, publishes these metrics to a Pub/Sub topic, exports the data to BigQuery, and makes a Data Studio dashboard.
D Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.
A Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies
Ans: A
If we created a custom dashboard for every incident, there would be more and more dashboards and we would lose the central point of visibility, so option D is wrong.
108. You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)
A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication
C, D
Cloud SQL: if you use Cloud SQL, the fully managed Google Cloud MySQL database, you should enable automated backups and binary logging for your Cloud SQL instances. This allows you to perform a point-in-time recovery, which restores your database from a backup and recovers it to a fresh Cloud SQL instance.
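A minimal sketch of enabling both features on an existing instance (the instance name and backup window are hypothetical):

    # Automated daily backups plus binary logging, which together enable point-in-time recovery
    gcloud sql instances patch business-critical-mysql --backup-start-time=02:00 --enable-bin-log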
109. You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
B When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
Note that the question states "The association collects a large amount of health data, such as sustained injuries," and then says "Current legislation requires you to delete SUCH information upon request of the subject." From that point of view, the task is not to delete the entire user record but the specific personal health data. With DLP you can use infoTypes and infoType detectors to scan specifically for those entries and decide how to act on them (https://cloud.google.com/dlp/docs/concepts-infotypes).
I would say B.
110.
Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational
overhead of the solution. Which Google Cloud product should you migrate to?
A App Engine
B. GKE On-Prem
C Compute Engine
D Google Kubernetes Engine
A App Engine
Answer should be A, as only App Engine gives us a default service account that allows users to deploy changes per project. For GKE we may have to configure additional permissions for both the dev and operations teams to deploy changes.
111.
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?
A Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
B Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours
C Deploy the development and acceptance applications on a managed instance group and enable autoscaling
D Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
B Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours
B is the answer, assuming VM doesn’t need to be up after office hours
https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
Schedule VMs to auto start and stop: The benefit of a platform like Compute Engine is that you only pay for the compute resources that you use. Production systems tend to run 24/7; however, VMs in development, test or personal environments tend to only be used during business hours, and turning them off can save you a lot of money!
https://cloud.google.com/blog/products/storage-data-transfer/save-money-by-stopping-and-starting-compute-engine-instances-on-schedule
Cloud Scheduler, GCP’s fully managed cron job scheduler, provides a straightforward solution for automatically stopping and starting VMs. By employing Cloud Scheduler with Cloud Pub/Sub to trigger Cloud Functions on schedule, you can stop and start groups of VMs identified with labels of your choice (created in Compute Engine). Here you can see an example schedule that stops all VMs labeled “dev” at 5pm and restarts them at 9am, while leaving VMs labeled “prod” untouched
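A hedged sketch of the scheduling side of option B, following the pattern from the blog post above; topic names, labels, zone, and time zone are hypothetical, and the stop/start Cloud Functions subscribed to the topics are not shown:

    # Stop dev/acceptance VMs (labeled env=dev) at 18:00 on weekdays
    gcloud scheduler jobs create pubsub stop-dev-vms --schedule="0 18 * * 1-5" \
        --topic=stop-instance-event --time-zone="Europe/Amsterdam" \
        --message-body='{"zone":"europe-west1-b","label":"env=dev"}'
    # Start them again at 08:00 on weekdays
    gcloud scheduler jobs create pubsub start-dev-vms --schedule="0 8 * * 1-5" \
        --topic=start-instance-event --time-zone="Europe/Amsterdam" \
        --message-body='{"zone":"europe-west1-b","label":"env=dev"}'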
112.
You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?
A.
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Create a mysqldump of the on-premises MySQL server.
4. Upload the dump to a Cloud Storage bucket.
5. Import the dump into Cloud SQL.
6. Modify the source code of the application to write queries to both databases and read from its local database.
7. Start the Compute Engine application.
8. Stop the on-premises application.
B.
1. Set up Cloud SQL proxy and MySQL proxy.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Stop the on-premises application.
6. Start the Compute Engine application.
C.
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server.
4. Create the replication configuration in Cloud SQL.
5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration.
7. When replication has been completed, stop the Compute Engine application.
8. Promote the Cloud SQL replica to a standalone instance.
9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
D.
1. Stop the on-premises application.
2. Create a mysqldump of the on-premises MySQL server.
3. Upload the dump to a Cloud Storage bucket.
4. Import the dump into Cloud SQL.
5. Start the application on Compute Engine.
C.
1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server.
2. Stop the on-premises application.
3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server.
4. Create the replication configuration in Cloud SQL.
5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration.
7. When replication has been completed, stop the Compute Engine application.
8. Promote the Cloud SQL replica to a standalone instance.
9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
113.
Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
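A hedged sketch of option D using gcloud (project, zone, and instance names are hypothetical; this assumes the resource-manager org-policies commands available in current gcloud releases):

    # Allow external IPs only on the approved instance; the list constraint denies all others
    gcloud resource-manager org-policies allow compute.vmExternalIpAccess \
        projects/my-project/zones/us-central1-a/instances/approved-instance-1 \
        --project=my-project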
114.
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances. You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?
A Enable Virtual Private Cloud (VPC) flow logging
B Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no firewall logs in the command-line output.
Answer is B
When you create a firewall rule, there is an option to turn firewall rule logging on or off; it is set to off by default.
To get Firewall Insights or view the logs for a specific firewall rule, you need to enable logging while creating the rule, or you can enable it later by editing that rule.
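A minimal example of enabling logging on an existing rule so Firewall Insights has data to analyze (the rule name is hypothetical):

    gcloud compute firewall-rules update allow-web-traffic --enable-logging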
115.
Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity and Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets.
2. Create an access level with the CIDR of the office network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range.
2. Use the Classless Inter-Domain Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets.
2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business.
D. 1. Create a Cloud VPN to the office network.
2. Configure Private Google Access for on-premises hosts.
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets.
2. Create an access level with the CIDR of the office network.
Should be A.
For all Google Cloud services secured with VPC Service Controls, you can ensure that:
Resources within a perimeter are accessed only from clients within authorized VPC networks using Private Google Access with either Google Cloud or on-premises.
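A hedged sketch of option A with Access Context Manager; the policy ID, project number, CIDR, and names are hypothetical, and the access level conditions are supplied as a YAML file:

    # corp-network.yaml (access level conditions):
    # - ipSubnetworks:
    #   - 203.0.113.0/24
    gcloud access-context-manager levels create corp_network --policy=POLICY_ID \
        --title="Office network" --basic-level-spec=corp-network.yaml
    # Perimeter that restricts Cloud Storage in the bucket projects to that access level
    gcloud access-context-manager perimeters create sensitive_buckets --policy=POLICY_ID \
        --title="Sensitive buckets" --resources=projects/123456789012 \
        --restricted-services=storage.googleapis.com --access-levels=corp_network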
116.
You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don't want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?
A Start a new rolling restart operation
B Start a new rolling replace operation.
C. Start a new rolling update. Select the Proactive update mode.
D. Start a new rolling update. Select the Opportunistic update mode.
IMHO the correct answer is D (Opportunistic mode), not C (Proactive mode). The requirement is not to update any running instances. See https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups: "For automated rolling updates, you must set the mode to proactive. Alternatively, if an automated update is potentially too disruptive, you can choose to perform an opportunistic update. The MIG applies an opportunistic update only when you manually initiate the update on selected instances or when new instances are created. New instances can be created when you or another service, such as an autoscaler, resizes the MIG."
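A minimal sketch of starting the update in Opportunistic mode (MIG, template, and zone names are hypothetical):

    # Running instances are left untouched; the new template is applied only to instances the MIG creates
    gcloud compute instance-groups managed rolling-action start-update my-mig \
        --version=template=my-template-v2 --type=opportunistic --zone=us-central1-a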
117.
Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as
possible with the latest application data. You need to design the solution to meet this requirement. What should you do?
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.
Answer is B.
The question only requires zonal resiliency.
Regional persistent disk is a storage option that provides synchronous replication of data between two zones in a region. Regional persistent disks can be a good building block to use when you implement HA services in Compute Engine.
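A hedged sketch of the regional persistent disk in option B and the failover attach (disk, instance, region, and zone names are hypothetical):

    # Disk replicated synchronously across two zones of the same region
    gcloud compute disks create app-data --size=200GB --type=pd-ssd \
        --region=us-central1 --replica-zones=us-central1-a,us-central1-b
    # After a zonal outage, force-attach the regional disk to the replacement VM in the surviving zone
    gcloud compute instances attach-disk app-vm-recovered --disk=app-data \
        --disk-scope=regional --force-attach --zone=us-central1-b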
118.
Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your data center IP space.
What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space
B Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP
space
D Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space
Correct Answer: A
- IP ranges should not overlap, so applying new IP addresses is the solution.
119.
You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?
A Create a Dataproc cluster using standard worker instances.
B Create a Dataproc cluster using preemptible worker instances
C Manually deploy a Hadoop cluster on Compute Engine using standard instances
D Manually deploy a Hadoop cluster on Compute Engine using preemptible instances
B. Create a Dataproc cluster using preemptible worker instances
“Should be B, you want to minimize costs.”
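A hedged sketch of option B (cluster name, region, and worker counts are hypothetical; older gcloud releases refer to secondary workers as preemptible workers):

    # Small standard worker pool plus cheaper preemptible secondary workers
    gcloud dataproc clusters create hadoop-migration --region=us-central1 \
        --num-workers=2 --num-secondary-workers=4 --secondary-worker-type=preemptible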
120.
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap
and must remain separated. The network configuration is shown below.
[Network diagram: three separate VPCs, each containing one subnet and one Compute Engine instance - VPC #1 (subnet #1, Instance #1), VPC #2 (subnet #2, Instance #2), and VPC #3 (subnet #3, Instance #3).]
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration: NIC1 - VPC: VPC #2, subnetwork: subnet #2; NIC2 - VPC: VPC #3, subnetwork: subnet #3. Update firewall rules to enable traffic between the instances.
C. Create two VPN tunnels via Cloud VPN:
- 1 between VPC #1 and VPC #2
- 1 between VPC #2 and VPC #3
D. Peer all three VPCs: peer VPC #1 with VPC #2, and peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.
B. Add two additional NICs to Instance #1 with the following configuration: NIC1 - VPC: VPC #2, subnetwork: subnet #2; NIC2 - VPC: VPC #3, subnetwork: subnet #3. Update firewall rules to enable traffic between the instances.
According to my understanding, the requirement is that only VM1 must be able to communicate with VM2 and VM3, but not VM2 with VM3.
We can exclude D, as it would enable VM2 to communicate with VM3 as well; my assumption is that if the question intended D to be correct, it would specify just two peerings, one between VPC #1 and VPC #2 and one between VPC #1 and VPC #3.
We can exclude C as well - there is no connection between VPC #1 and VPC #3.
IMHO A will not work.
So the only correct answer seems to be B. What I don't understand is why we have to update the firewall rules, as IMHO the default firewall rules already allow such communication (maybe some restrictive rules are implemented - there are not enough details in the question to clarify that part). Please correct me if I am wrong.
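A hedged sketch of option B; note that additional NICs can only be defined when an instance is created, so Instance #1 would have to be recreated (all names are hypothetical):

    gcloud compute instances create instance-1 --zone=us-central1-a \
        --network-interface=network=vpc-1,subnet=subnet-1 \
        --network-interface=network=vpc-2,subnet=subnet-2 \
        --network-interface=network=vpc-3,subnet=subnet-3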
121.
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available.
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available.
D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
B Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
“B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.”
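A hedged sketch of kicking off patching through OS patch management (OS Config); the display name is hypothetical, and a recurring schedule would use patch-deployments rather than a one-off patch job:

    # Run a one-off patch job against all instances in the project that have the OS Config agent
    gcloud compute os-config patch-jobs execute --instance-filter-all --display-name="debian-updates"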
122. You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
A. 1. Update your GKE cluster to use Cloud Operations for GKE.
2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus.
2. Set an alert to trigger whenever the application returns an error.
D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus.
2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster.
3. Set an alert to trigger whenever the application returns an error.
A.
1. Update your GKE cluster to use Cloud Operations for GKE.
2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
According to the reference, answer should be A.
https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
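A hedged sketch of option A; the exact flags depend on the gcloud and GKE versions (newer releases use --logging/--monitoring, older ones used --enable-stackdriver-kubernetes), and the cluster name and zone are hypothetical:

    # Enable Cloud Operations (logging and monitoring) on the existing cluster without recreating it
    gcloud container clusters update my-cluster --zone=us-central1-a \
        --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM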
123.
You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?
A Use a persistent disk for each instance.
B Use a regional persistent disk for each instance
C Create a Cloud Filestore instance and mount it in each instance.
D Create a Cloud Storage bucket and mount it in each instance using gcsfuse
C. Create a Cloud Filestore instance and mount it in each instance.
Answer should be C,
https://cloud.google.com/storage/docs/gcs-fuse#notes
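A hedged sketch of option C (instance name, tier, share name, and capacity are hypothetical; Basic SSD instances start at 2.5 TB):

    # Create the NFS share
    gcloud filestore instances create shared-fs --zone=us-central1-a --tier=BASIC_SSD \
        --file-share=name=vol1,capacity=2.5TB --network=name=default
    # On each Compute Engine instance, mount the share as a POSIX filesystem
    sudo mount FILESTORE_IP:/vol1 /mnt/shared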
124.
Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay.
What should you do?
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads.
C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the namespace. Inspect the configurations of the filtered workloads.
D. Reinstall Istio using the default Istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices
The Anthos Service Mesh pages in the Google Cloud Console provide both summary and in-depth metrics, charts, and graphs that enable you to observe service behavior. You can monitor the overall health of your services, or drill down on a specific service to set a service level objective (SLO) or troubleshoot an issue.
125.
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files.
C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years.
D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
- If a bucket has a retention policy, objects in the bucket can only be deleted or replaced once their age is greater than the retention period.
- Once you lock a retention policy, you cannot remove it or reduce the retention period it has.
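A minimal sketch of option A with gsutil (the bucket name is hypothetical; locking is irreversible):

    # Set a 5-year retention policy, then lock it so it can no longer be removed or shortened
    gsutil retention set 5y gs://mortgage-approvals
    gsutil retention lock gs://mortgage-approvals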
126.
Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process.
What should you do?
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
“Questions say “relevant microservice will be deployed automatically in the development environment.” Therefore A and B are out. D says “Rely on Vulnerability Scanning to ensure the code tests succeed.” Vulnerability Scanning is not test so D is out. The correct Answer is therefore C.”
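A hedged sketch of the trigger in option C (repository owner/name and config file are hypothetical; the cloudbuild.yaml itself would run the tests, build the image, and push it to Container Registry):

    gcloud builds triggers create github --repo-owner=my-org --repo-name=my-repo \
        --branch-pattern="^develop$" --build-config=cloudbuild.yaml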
127.
Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again
as quickly as possible. Which action should you recommend?
A. Change the autoscaling metric to agent.googleapis.com/memory/percent_used.
B Restart the affected instances on a staggered schedule
C. SSH to each instance and restart the application process
D Increase the maximum number of instances in the autoscaling group
D Increase the maximum number of instances in the autoscaling group
“Cannot be A), since changing the metric used for autoscaling will not solve the issue, the CPU is already over utilized, hence the unique “workaround” meanwhile the application causing the issue is fixed (connection leaks, infinite loops, etc.) is to allow introducing new nodes/workers/VMs.”
128.
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business
wants to keep costs low. Which web service platform and database should you use for the application?
A Cloud Run and BigQuery
B Cloud Run and Cloud Bigtable
C. A Compute Engine autoscaling managed instance group and BigQuery
D A Compute Engine autoscaling managed instance group and Cloud Bigtable
B Cloud Run and Cloud Bigtable
“Any correct answer must involve Cloud Bigtable over BigQuery since Bigtable is optimized for heavy write loads. That leaves B and D. I would suggest B b/c it is lower cost (“The business wants to keep costs low”)”
129.
You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster.
C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster.
D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address to address the Pod from other microservices within the cluster.
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
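A minimal sketch of option A (names, image, and ports are hypothetical); the Service gives each microservice a stable DNS name regardless of how many replicas back it:

    # Deployment with a fixed number of replicas
    kubectl create deployment orders --image=gcr.io/my-project/orders:v1 --replicas=3
    # Cluster-internal Service; other microservices reach it at http://orders.default.svc.cluster.local
    kubectl expose deployment orders --port=80 --target-port=8080 --type=ClusterIP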
130.
Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.
2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.
3. Use Cloud VPN to join the two VPCs.
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.
2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team.
2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team.
3. Use VPC Peering to join the two VPCs.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team.
2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
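A minimal sketch of the Shared VPC setup in option C (project IDs are hypothetical; the IAM role bindings for the two teams are not shown):

    # In the networking team's host project
    gcloud compute shared-vpc enable network-host-project
    # Attach the development team's project as a Shared VPC service project
    gcloud compute shared-vpc associated-projects add dev-app-project --host-project=network-host-project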
131.
Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don't expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL.
B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner.
C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
“Answer should be D”
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions
https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functions