Professional Cloud Architect (PCA) Flashcards

1
Q

Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer

A

D

2
Q

Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24 hours a day. Your business analysts only have experience using a SQL interface.
How should you store the data to optimize it for ease of analysis?
A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore

A

A

3
Q

The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)
A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable

A

ADE

4
Q

A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?

A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching

A

A

5
Q

An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?
A. Direct them to download and install the Google StackDriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features

A

C

6
Q

You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company’s web hosting platform. Improvements
to the QA/Test processes accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)
A. Introduce a green-blue deployment model
B. Replace the QA environment with canary releases
C. Fragment the monolithic platform into microservices
D. Reduce the platform’s dependency on relational database systems
E. Replace the platform’s relational database systems with a NoSQL database

A

AC

7
Q

To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines
(VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? (Choose two.)
A. Use the --no-auto-delete flag on all persistent disks and stop the VM
B. Use the --auto-delete flag on all persistent disks and terminate the VM
C. Apply VM CPU utilization label and include it in the BigQuery billing export
D. Use Google BigQuery billing export and labels to associate cost to groups
E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM

A

AD
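For reference, a minimal sketch of the gcloud commands behind these two steps; the instance, disk, and label names are placeholders:

  # Keep the disks when the VM is stopped so state persists between start/stop events
  gcloud compute instances set-disk-auto-delete dev-vm --disk=dev-data-disk --no-auto-delete
  gcloud compute instances stop dev-vm
  # Label resources so the BigQuery billing export can attribute cost to groups
  gcloud compute instances add-labels dev-vm --labels=team=payments,env=dev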

8
Q

Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations.
Which database type should you use?
A. Flat file
B. NoSQL
C. Relational
D. Blobstore

A

B

9
Q

You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?
A. Ensure that a firewall rules exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

A

C
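A minimal sketch of such a firewall rule, assuming the backend instances are tagged web-backend and serve on port 80; 130.211.0.0/22 and 35.191.0.0/16 are Google’s documented health-check source ranges:

  gcloud compute firewall-rules create allow-lb-health-checks \
      --network=default \
      --allow=tcp:80 \
      --source-ranges=130.211.0.0/22,35.191.0.0/16 \
      --target-tags=web-backend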

10
Q

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to
BigQuery.
What should you do to fix the script?
A. Install the latest BigQuery API client library for Python
B. Run your script on a new virtual machine with the BigQuery access scope enabled
C. Create a new service account with BigQuery access and execute your script with that user
D. Install the bq component for gcloud with the command gcloud components install bq.

A

B
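A sketch of creating a VM with the BigQuery access scope enabled; the instance name and zone are placeholders (access scopes are set at creation time or while the VM is stopped):

  gcloud compute instances create bq-client-vm \
      --zone=us-central1-a \
      --scopes=bigquery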

11
Q

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.
What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google
B. Federate authentication via SAML 2.0 to the existing Identity Provider
C. Provision users in Google using the Google Cloud Directory Sync tool
D. Ask users to set their Google password to match their corporate password

A

C

12
Q

Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in.
Which technology should they use for this?
A. Google Cloud Dataproc
B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery

A

B

13
Q

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?
A. Work with your ISP to diagnose the problem
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

A

C

14
Q

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?
A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service

A

A
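A sketch of the two steps, assuming the data disk is named db-data and appears (unpartitioned) as /dev/sdb inside the VM:

  # Grow the persistent disk online; no downtime required
  gcloud compute disks resize db-data --size=500GB --zone=us-central1-a
  # On the VM, grow the ext4 filesystem to fill the larger disk
  sudo resize2fs /dev/sdb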

15
Q

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.
How should you design your architecture?
A. Create a tokenizer service and store only tokenized data
B. Create separate projects that only process credit card data
C. Create separate subnetworks and isolate the components that process credit card data
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

A

A

16
Q

You have been asked to select the storage system for the click-data of your company’s large portfolio of websites. This data is streamed in from a
custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for
future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?
A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore

A

B

17
Q

You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing
Cloud Storage spend.
What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron

A

B
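A minimal sketch of the JSON lifecycle rule and the gsutil command that applies it; the bucket name is a placeholder:

  # lifecycle.json:
  # { "rule": [ { "action": { "type": "Delete" }, "condition": { "age": 90 } } ] }
  gsutil lifecycle set lifecycle.json gs://backups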

18
Q

Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You
want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.
Which product should you use?
A. Google Cloud Dataflow
B. Google Cloud Dataproc
C. Google Compute Engine
D. Google Kubernetes Engine

A

B

19
Q

The database administration team has asked you to help them improve the performance of their new database server running on Google Compute
Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an
n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?
A. Increase the virtual machine’s memory to 64 GB
B. Create a new virtual machine running PostgreSQL
C. Dynamically resize the SSD persistent disk to 500 GB
D. Migrate their performance metrics warehouse to BigQuery
E. Modify all of their batch jobs to use bulk inserts into the database

A

C

20
Q

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10
readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?
A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage

A

C

21
Q

Your company’s user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and
uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the
portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter your company will be making the portal available
to all users, including
unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional
user load.
What should you do?
A. Capture existing users’ input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all
resources in one of the zones
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to
the system by terminating random resources on both zones
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the
same time, terminate random resources on both zones
D. Capture existing users’ input, and replay captured user load until resource utilization crosses 80%. Also, derive the estimated number of users
based on existing users’ usage of the app, and deploy enough resources to handle 200% of expected load

A

B

22
Q

One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their
application deployments are taking too long.
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? (Choose two.)
A. Remove Python after running pip
B. Remove dependencies from requirements.txt
C. Use a slimmed-down base image like Alpine Linux
D. Use larger machine types for your Google Container Engine node pools
E. Copy the source after the package dependencies (Python and pip) are installed

A

CE

23
Q

Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and
deployment procedures to avoid this problem in the future.
What should you do?
A. Deploy fewer changes to production
B. Deploy smaller changes to production
C. Increase the load on your test and staging environments
D. Deploy changes to a small subset of users before rolling out to production

A

D

24
Q

A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse
many services.
You want to know which service takes the longest in those cases.
What should you do?
A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice

A

D

25
Q

During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid
this in the future.
What should you do?
A. Use a different database
B. Choose larger instances for your database
C. Create snapshots of your database more regularly
D. Implement routinely scheduled failovers of your databases

A

D

26
Q

Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.
Which approach should you use?
A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage

A

B

27
Q

Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform.
The database is 4 TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?
A. Google Cloud Dedicated Interconnect
B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network

A

A

28
Q

Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in
the previous 12 months. You want to streamline and expedite the analysis and audit process.
What should you do?
A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor’s view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket

A

D
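A sketch of exporting the relevant audit log entries to a bucket with a logging sink; the sink name, bucket, and filter are illustrative (IAM policy changes appear as SetIamPolicy entries in the Admin Activity audit log):

  gcloud logging sinks create iam-audit-sink \
      storage.googleapis.com/iam-audit-archive \
      --log-filter='protoPayload.methodName="SetIamPolicy"'
  # Then delegate read access on the bucket to the auditor, e.g. roles/storage.objectViewer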

29
Q

You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database
back-end. You want to store the credentials securely.
Where should you store the credentials?
A. In the source code
B. In an environment variable
C. In a secret management system
D. In a config file that has restricted access through ACLs

A

C

30
Q

A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud
environment.
You want to advocate for the adoption of Google Cloud Deployment Manager.
What are two business risks of migrating to Cloud Deployment Manager? (Choose two.)
A. Cloud Deployment Manager uses Python
B. Cloud Deployment Manager APIs could be deprecated in the future
C. Cloud Deployment Manager is unfamiliar to the company’s engineers
D. Cloud Deployment Manager requires a Google APIs service account to run
E. Cloud Deployment Manager can be used to permanently delete cloud resources
F. Cloud Deployment Manager only supports automation of Google Cloud resources

A

BF

31
Q

A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to
meet them. The application must:
1. Be based on open-source technology for cloud portability
2. Dynamically scale compute capacity based on demand
3. Support continuous software delivery
4. Run multiple segregated copies of the same application stack
5. Deploy application bundles using dynamic templates
6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
A. Google Kubernetes Engine, Jenkins, and Helm
B. Google Kubernetes Engine and Cloud Load Balancing
C. Google Kubernetes Engine and Cloud Deployment Manager
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing

A

D

32
Q

You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your
application before the virtual machines are preempted.
What should you do?
A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory
B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when
you create the new virtual machine instance
D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify
the service URL as the value for a new metadata entry with the key shutdown-script-url

A

C
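A sketch of attaching a shutdown script at instance creation; the script path and instance name are placeholders, and the same key can also be added to an existing VM with add-metadata:

  gcloud compute instances create batch-worker-1 \
      --preemptible \
      --metadata-from-file shutdown-script=shutdown.sh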

33
Q

Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales
independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow
between the web and the database tier.
How should you configure the network?
A. Add each tier to a different subnetwork
B. Set up software based firewalls on individual VMs
C. Add tags to each tier and set up routes to allow the desired traffic flow
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow

A

D
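A sketch of tag-based firewall rules for the desired flow, assuming the tiers are tagged web, api, and db; the ports are illustrative:

  # web tier may reach the API tier
  gcloud compute firewall-rules create allow-web-to-api \
      --allow=tcp:8080 --source-tags=web --target-tags=api
  # API tier may reach the database tier; no rule allows web -> db
  gcloud compute firewall-rules create allow-api-to-db \
      --allow=tcp:3306 --source-tags=api --target-tags=db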

34
Q

Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to
speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect
details on the failure to pass back to the development team.
Which three actions should you take? (Choose three.)
A. Use Stackdriver Logging to search for the module log entries
B. Read the debug GCE Activity log using the API or Cloud Console
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
D. Identify whether a live migration event of the failed server occurred, using the activity log
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen

A

ACE

35
Q

Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the
analytics features available to them there, while also retaining that data as a long-term disaster recovery backup.
Which two steps should you take? (Choose two.)
A. Load logs into Google BigQuery
B. Load logs into Google Cloud SQL
C. Import logs into Google Stackdriver
D. Insert logs into Google Cloud Bigtable
E. Upload log files into Google Cloud Storage

A

AE

36
Q

You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes
negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week.
What should you do?
A. Log in to a server, and iterate on the fix locally
B. Revert the source code change, and rerun the deployment pipeline
C. Log into the servers with the bad code change, and swap in the previous code
D. Change the instance group template to the previous one, and delete all instances

A

B

37
Q

Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
A. Multiple Organizations with multiple Folders
B. Multiple Organizations, one for each department
C. A single Organization with Folders for each department
D. A single Organization with multiple projects, each with a central owner

A

C

38
Q

You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace.
What should you do?
A. Upload missing JAR files and redeploy your application.
B. Digitally sign all of your JAR files and redeploy your application
C. Recompile the CloakedServlet class using an MD5 hash instead of SHA1

A

B

39
Q

You are designing a mobile chat application. You want to ensure people cannot spoof chat messages, by proving a message was sent by a
specific user.
What should you do?
A. Tag messages client side with the originating user identifier and the destination user.
B. Encrypt the message client side using block-based encryption with a shared key.
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user’s private key.
D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.

A

C

40
Q

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data
center to their
GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the
replication.
What should they do?
A. Configure their replication to use UDP.
B. Configure a Google Cloud Dedicated Interconnect.
C. Restore their database daily using Google Cloud SQL.
D. Add additional VPN connections and load balance them.
E. Send the replicated transaction to Google Cloud Pub/Sub.

A

B

41
Q

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended
approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
A. Hash all data using SHA256
B. Encrypt all data using elliptic curve cryptography
C. De-identify the data with the Cloud Data Loss Prevention API
D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers

A

C

42
Q

You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution
path and persists across sessions?
A. ~/bin
B. Cloud Storage
C. /google/scripts
D. /usr/local/bin

A

A

43
Q

You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of
at least 20 Gbps. You want to follow Google-recommended practices. How should you set up the connection?
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN.
C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.
D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.

A

A

44
Q

You are analyzing and defining business processes to support your startup’s trial usage of GCP, and you don’t yet know what consumer demand
for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do?
A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management.
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
C. Utilize free tier and committed use discounts. Provision a staff position for service cost management.
D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.

A

B

45
Q

You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be
verified before deploying to production. What should you do?
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.
B. Use Spinnaker to deploy builds to production and run tests on production deployments.
C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a
complete rollout.
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for
production and deploy that to the production environment.

A

D

46
Q

You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check
configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can
access the VMs. What should you do?
A. Grant your colleague the IAM role of project Viewer
B. Perform a rolling restart on the instance group
C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys
D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys

A

C

47
Q

Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Google Kubernetes Engine
(GKE) for workload orchestration. Parts of your architecture must also be PCI DSS-compliant. Which of the following is most accurate?
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.
B. GKE cannot be used under PCI DSS because it is considered shared hosting.
C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment.
D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.

A

C

48
Q

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become
degraded over time.
You want to use Google-recommended practices to detect anomalies in your company data. What should you do?
A. Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.
B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.
C. Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.
D. Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.

A

B

49
Q

Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access
Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?
A. The effective policy is determined only by the policy set at the node
B. The effective policy is the policy set at the node and restricted by the policies of its ancestors
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors
D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors

A

C

50
Q

You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your
on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premises systems remain reachable
during this period. How should you organize your networking in Google Cloud?
A. Use the same IP range on Google Cloud as you use on-premises
B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap
with the range you use on-premises
C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises
D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary
range with the same IP range as you use on-premises

A

C

51
Q

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the
required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?
A. Point gcloud datastore create-indexes to your configuration file
B. Upload the configuration file to App Engine’s default Cloud Storage bucket, and have App Engine detect the new indexes
C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file
D. Create an HTTP request to the built-in python module to send the index configuration file to your application

A

A
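A sketch of deploying the index file; index.yaml is the configuration file from the question, and newer SDKs expose the same operation as gcloud datastore indexes create:

  gcloud datastore create-indexes index.yaml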

52
Q

You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that
requires your application to fail over to another region in case of a regional outage. What should you do?
A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic,
and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.
B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over
to an instance on your premises in case of a disaster.
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance
group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
D. Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance
group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.

A

C

53
Q

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises
database must not be accessible through the public internet. What should you do?
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises
database.
B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.
C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.

A

D

54
Q

You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a
VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install
the software?
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an
internal IP address to the VM. Download the installation files to the VM using gsutil.
B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud
Storage. Download the files to the VM using gsutil.
C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet.
Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.
D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for
Cloud Source Repositories. Download the files to the VM using gsutil.

A

A
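A sketch of the key steps, with placeholder subnet, region, and bucket names:

  # Allow VMs with only internal IPs to reach Google APIs such as Cloud Storage
  gcloud compute networks subnets update private-subnet \
      --region=us-central1 \
      --enable-private-ip-google-access
  # From the VM, pull the installer over Private Google Access
  gsutil cp gs://installer-bucket/package.deb .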

55
Q

Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should
you do?
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.
C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.
D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.

A

A

56
Q

You have an application deployed on Google Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a
Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
A. Use kubectl set image deployment/echo-deployment <new-image>
B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster
C. Update the deployment YAML file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file>
D. Update the service YAML file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>

A

A
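A concrete sketch of option A, assuming the container in the pod spec is named echo and the new image tag is v2:

  kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2
  # Watch the rolling update progress
  kubectl rollout status deployment/echo-deployment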

57
Q

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery
need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be
able to query the datasets, but not edit them.
How should you configure users’ access roles?
A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that
contain the data.
B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that
contain the data.
C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that
contain the data.
D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that
contain the data.

A

C
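A sketch of the corresponding IAM bindings, with placeholder project IDs and group address:

  # Users can run (and be billed for) query jobs only in the billing project
  gcloud projects add-iam-policy-binding billing-project \
      --member=group:analysts@example.com --role=roles/bigquery.jobUser
  # Users can read, but not edit, the datasets in each data project
  gcloud projects add-iam-policy-binding data-project \
      --member=group:analysts@example.com --role=roles/bigquery.dataViewer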

58
Q

You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the
application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users
upload images?
A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.
B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
C. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours.
Authenticate users via Cloud Identity.
D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity

A

B
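A sketch of generating a signed upload URL with gsutil, assuming a service account key file and placeholder bucket and object names:

  gsutil signurl -m PUT -d 24h sa-key.json gs://painting-uploads/image.jpg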

59
Q

Your web application must comply with the requirements of the European Union’s General Data Protection Regulation (GDPR). You are responsible
for the technical architecture of your web application. What should you do?
A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various
certifications and provides "pass-on" compliance when you use native features.
B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.
C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.
D. Define a design for the security of data in your web application that meets GDPR requirements

A

D

60
Q

You need to set up Microsoft SQL Server on GCP. Management requires that there’s no downtime in case of a data center outage in any of the
zones within a
GCP region. What should you do?
A. Configure a Cloud SQL instance with high availability enabled.
B. Configure a Cloud Spanner instance with a regional instance configuration.
C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different
subnets.
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones

A

D

61
Q

The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application.
What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.
D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.

A

B

62
Q

You need to evaluate your team readiness for a new GCP project. You must perform the evaluation and create a skills gap plan which incorporates
the business goal of cost optimization. Your team has deployed two GCP projects successfully to date. What should you do?
A. Allocate budget for team training. Set a deadline for the new GCP project.
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project.
D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job
role.

A

A

63
Q

You are designing an application for use only during business hours. For the minimum viable product release, you’d like to use a managed product
that automatically scales to zero so you don’t incur costs when there is no activity.
Which primary compute resource should you choose?
A. Cloud Functions
B. Compute Engine
C. Google Kubernetes Engine
D. App Engine flexible environment

A

A

64
Q

You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which
you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation
B. Create the Key object for each Entity and run multiple get operations, one operation for each entity
C. Use the identifiers to create a query filter and run a batch query operation
D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity

A

A

65
Q

You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using
customer-supplied encryption keys. What should you do?
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files.
B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket.
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.

A

A
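A sketch of the .boto entry and the upload command; the key value is a placeholder for a base64-encoded AES-256 key:

  # ~/.boto:
  # [GSUtil]
  # encryption_key = <base64-encoded 256-bit key>
  gsutil cp ./exports/*.csv gs://secure-bucket/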

66
Q

Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google
Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio.
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data
Studio.
D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.

A

B

67
Q

You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You
want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you
take?
A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository,
check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new
production releases.
B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the
production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and
update the instance template to deploy new production releases.
C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from
the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the
imagePullPolicy set to ‘IfNotPresent’ in the staging namespace, and then promote it to the production namespace after testing.
D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all
of the dependencies, and tag it with ‘latest’. 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to
‘Always’. Restart the pods to automatically deploy new production releases.

A

B

68
Q

Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity
management.
What should you do?
A. Use the Admin Directory API to authenticate against the Active Directory domain controller.
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider.
D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using
Google Cloud Directory Sync.

A

B

69
Q

You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not
responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output.
You want to inspect the logs to find the cause of the issue. Which approach can you take?
A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster.
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs.
D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.

A

B

70
Q

You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you
do?
A. Create a read replica instance in a different region
B. Create a failover replica instance in a different region
C. Create a read replica instance in the same region, but in a different zone
D. Create a failover replica instance in the same region, but in a different zone

A

D
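For reference, current gcloud exposes this as the REGIONAL availability type, which places a standby instance in another zone of the same region; the instance name is a placeholder:

  gcloud sql instances patch app-db --availability-type=REGIONAL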

71
Q

Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and
lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application’s
performance. What should you do?
A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from
the instance template.
B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the
custom image.
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed
instance group from the instance template.
D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed
instance group from the custom image.

A

C

72
Q

Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths
and ports you authorize, but you don’t want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict
communications?
A. Use separate VPCs to restrict traffic
B. Use firewall rules based on network tags attached to the compute instances
C. Use Cloud DNS and only allow connections from authorized hostnames
D. Use service accounts and configure the web application to authorize particular service accounts to have access

A

B

73
Q

You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don’t
run out of storage, maintain 75% CPU usage across cores, and keep replication lag below 60 seconds. What are the correct steps to meet your
requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance
type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below
75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master.
C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy
memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag.
D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy
memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to
reduce replication lag.

A

A
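A sketch of step 1, enabling automatic storage increase on the instance (the name is a placeholder); the alerting steps are configured in Stackdriver Monitoring:

  gcloud sql instances patch crm-db --storage-auto-increase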

74
Q

You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database
that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications?
A. Cloud Spanner, because it is globally distributed
B. Cloud SQL, because it is a fully managed relational database
C. Cloud Firestore, because it offers real-time synchronization across devices
D. BigQuery, because it is designed for large-scale processing of tabular data

A

D

75
Q

You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy container to make the Cloud SQL
database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your
company policies require a post-mortem. What should you do?
A. Use gcloud sql instances restart.
B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role.
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for (GKE) and Cloud SQL.
D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.

A

C

76
Q

Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is
the Google-recommended way for your application to authenticate to the required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub
IAM roles.
C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM.
D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub
IAM roles

A

A

77
Q

You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?
A. Use VPC Network Peering between the VPC and the on-premises network.
B. Expose the VPC to the on-premises network using IAM and VPC Sharing.
C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway.
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.

A

D

78
Q

Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days
should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
A. Configure the expiration time for your tables at 45 days
B. Make the tables time-partitioned, and configure the partition expiration at 45 days
C. Rely on BigQuery’s default behavior to prune application logs older than 45 days
D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days

A

B
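
As a sketch of option B (project, dataset, table, and schema are placeholders), each application's table can be created as day-partitioned with a 45-day partition expiration; 45 days is 3,888,000 seconds:

    bq mk --table \
        --time_partitioning_type=DAY \
        --time_partitioning_expiration=3888000 \
        my_project:app_logs.frontend \
        log_time:TIMESTAMP,severity:STRING,message:STRING

Expired partitions are then dropped automatically, so no cleanup script is needed.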

79
Q

You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the
gcloud command.
C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command.
D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group
from the GCP Console.

A

A
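
The console toggle in the answer enables the GKE cluster autoscaler; the same configuration can also be applied from the command line. A minimal sketch with placeholder names and limits:

    # Scale pods on observed CPU utilization
    kubectl autoscale deployment web --cpu-percent=60 --min=2 --max=20

    # Let GKE add or remove nodes as pod demand changes
    gcloud container clusters update my-cluster \
        --enable-autoscaling --min-nodes=1 --max-nodes=10 \
        --node-pool=default-pool --zone=us-central1-a

Enabling Compute Engine autoscaling directly on the node managed instance group (option B) is discouraged because it conflicts with the cluster autoscaler's pod-aware scaling.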

80
Q

You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?
A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your
networks if Dedicated Interconnect fails.
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your
networks if Dedicated Interconnect fails.
C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your
networks if the Transfer Appliance fails.
D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your
networks if the Transfer Appliance fails.

A

B

81
Q

Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to
use GCP services that are HIPAA-certified and manage service costs.
How should you design to meet Google best practices?
A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant.
D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not
HIPAA-compliant.

A

B
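
For the cost side of the answer, batch workers that are not time-critical can simply be created as preemptible instances (name, machine type, and zone below are illustrative):

    gcloud compute instances create batch-worker-1 \
        --preemptible --machine-type=n1-standard-8 --zone=us-central1-b

Preemptible VMs can be reclaimed at any time and run at a significant discount, which suits batch jobs that can be retried.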

82
Q

Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public
REST API that reads from and writes to a Cloud SQL instance.
What should you do?
A. Engage with a security company to run web scrapers that look for your users’ authentication data on malicious websites and notify you if any is found.
B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access.
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.

A

C

83
Q

Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you
do?
A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user.
B. In the BigQuery interface, execute a query on the JOBS table to get the required information.
C. Use ‘bq show’ to list all jobs. Per job, use ‘bq ls’ to list job information and get the required information.
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.

A

D
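
A sketch of reading the BigQuery audit logs from the command line and counting completed query jobs per user; the filter targets the legacy BigQuery audit log format, and the field names are assumptions to adapt to your project:

    gcloud logging read \
        'resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.jobcompleted"' \
        --freshness=30d \
        --format='value(protoPayload.authenticationInfo.principalEmail)' \
        | sort | uniq -c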

84
Q

You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the
startup time for new
VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies.
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the
VM image.
C. Use Puppet to create the managed instance group and install the OS package dependencies.
D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.

A

B
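
A sketch of the image-baking step, assuming the OS packages were installed once on a build VM whose disk is then captured as a reusable image; all names are placeholders:

    # Capture the prepared boot disk as a custom image
    gcloud compute images create app-base-image \
        --source-disk=packaging-vm --source-disk-zone=us-central1-a

    # Reference the image in the instance template used by the managed instance group
    gcloud compute instance-templates create app-template \
        --image=app-base-image --image-project=my-project --machine-type=n1-standard-2

The Deployment Manager configuration then points the managed instance group at this template, so new VMs boot with their dependencies already installed.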

85
Q

Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has
multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups
as members. Grant the ‘all_analysts’ group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each
respective analyst country-group.
B. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups
as members. Grant the ‘all_analysts’ group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each
respective analyst country-group.
C. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups
as members. Grant the ‘all_analysts’ group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each
respective analyst country-group.
D. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups
as members. Grant the ‘all_analysts’ group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each
respective analyst country-group.

A

A
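
A sketch of the role part of option A (project and group names are placeholders); dataset-level view access is then shared with each country group through the dataset's access controls:

    gcloud projects add-iam-policy-binding my-analytics-project \
        --member="group:all_analysts@example.com" \
        --role="roles/bigquery.jobUser"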

86
Q

You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high-performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as
follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of
customer session state data that allows customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails,
and VM boot/data volumes.
C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud
Storage for log archives and thumbnails.
D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data
volumes. Cloud Storage for log archives and thumbnails.

A

D

87
Q

Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even
after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
A. StatefulSets
B. Role-based access control
C. Container environment variables
D. Persistent Volumes

A

A

88
Q

You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache
hit ratio.
What should you do?
A. Customize the cache keys to omit the protocol from the key.
B. Shorten the expiration time of the cached objects.
C. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users.
D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.

A

A
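
A minimal sketch of option A, assuming the site serves identical content over HTTP and HTTPS; the backend service name is a placeholder:

    gcloud compute backend-services update web-backend \
        --global --no-cache-key-include-protocol

With the protocol removed from the cache key, HTTP and HTTPS requests for the same object share a single cache entry, which raises the hit ratio.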

89
Q

Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
A. All admin and VM system logs are automatically collected by Stackdriver.
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance
to collect system logs.
C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it.
D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.

A

B
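
Admin activity logs arrive without any setup; VM system logs (syslog and similar) need the Logging agent on each instance. A sketch using the install script documented for the legacy Logging agent; verify the URL against the current agent documentation before use:

    curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
    sudo bash install-logging-agent.sh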

90
Q

You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current
application version.
What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
C. Deploy the update in a new VPC, and use Google’s global HTTP load balancing to split traffic between the update and current applications.
D. Deploy the update as a new App Engine application, and use Google’s global HTTP load balancing to split traffic between the new and
current applications.

A

B
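
A sketch of option B with placeholder version IDs: deploy the update without promoting it, then split traffic between the current and new versions:

    # Deploy the update as a new, non-default version
    gcloud app deploy --version=v2 --no-promote

    # Send 10% of production traffic to the new version
    gcloud app services set-traffic default \
        --splits=v1=0.9,v2=0.1 --split-by=random

Once the new version looks healthy, traffic can be shifted to 100% (or rolled back) with another set-traffic call.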

91
Q

All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific ports. Any other traffic emerging
from your instances is not allowed. You want to enforce this using VPC firewall rules.
How should you configure the firewall rules?
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active
Directory traffic for all instances.
B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active
Directory traffic for all instances.
C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block
all traffic for all instances.
D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block
all traffic for all instances.

A

A
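
A sketch of option A; the Active Directory server IP and port list are placeholders to adapt:

    # Higher priority (lower number): allow AD traffic from all instances
    gcloud compute firewall-rules create allow-ad-egress \
        --network=default --direction=EGRESS --action=ALLOW --priority=100 \
        --rules=tcp:389,tcp:636,udp:389 \
        --destination-ranges=10.10.0.5/32

    # Lower priority: deny everything else leaving the VPC
    gcloud compute firewall-rules create deny-all-egress \
        --network=default --direction=EGRESS --action=DENY --priority=1000 \
        --rules=all --destination-ranges=0.0.0.0/0

Because the allow rule has the numerically lower priority, it is evaluated before the broad deny rule.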

92
Q

Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting
with a machine learning model on Google Cloud Platform to improve the quality of results.
What should the customer do to improve their model’s results over time?
A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model.
B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results.
C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are
available for additional performance.
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.

A

D

93
Q

A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes
Engine (GKE) and make sure that the application scales automatically.
How should you deploy to GKE?
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer
to load-balance the HTTPS traffic.
C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic

A

B

94
Q

You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operations reliability and end-to-end in-transit encryption based on Google best practices.
What should you do?
A. Create a cross-region load balancer with URL Maps.
B. Create an HTTPS load balancer with URL Maps.
C. Create appropriate instance groups and instances. Configure SSL proxy load balancing.
D. Create a global forwarding rule. Configure SSL proxy load balancing

A

B

95
Q

You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429.
How should you handle these types of errors?
A. Use gRPC instead of HTTP for better performance.
B. Implement retry logic using a truncated exponential backoff strategy.
C. Make sure the Cloud Storage bucket is multi-regional for geo-redundancy.
D. Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.

A

B
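
Client libraries and gsutil already retry these status codes, but the pattern itself is simple. A sketch of truncated exponential backoff around a download, with a hypothetical bucket and object; production code should also add random jitter:

    delay=1
    for attempt in 1 2 3 4 5 6; do
        if gsutil cp gs://my-bucket/report.csv /tmp/report.csv; then
            break
        fi
        echo "attempt $attempt failed, retrying in ${delay}s" >&2
        sleep "$delay"
        delay=$(( delay * 2 > 32 ? 32 : delay * 2 ))   # cap the wait at 32s
    done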

96
Q

You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-recommended practices and
native capabilities within GCP.
What should you do?
A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests.
B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
C. Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests.
D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.

A

B

97
Q

Your company creates rendering software which users can download from the company website. Your company has customers all over the world.
You want to minimize latency for all your customers. You want to follow Google-recommended practices.
How should you store the files?
A. Save the files in a Multi-Regional Cloud Storage bucket.
B. Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region.
C. Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region.
D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.

A

A

98
Q

Your company acquired a healthcare startup and must retain its customers’ medical information for up to 4 more years, depending on when it was
created. Your corporate policy is to securely retain this data, and then delete it as soon as regulations allow.
Which approach should you take?
A. Store the data in Google Drive and manually delete records as they expire.
B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely.
C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.

A

C
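
A sketch of option C with a placeholder bucket name. With lifecycle.json containing:

    {
      "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": 1460}}
      ]
    }

the policy is applied with:

    gsutil lifecycle set lifecycle.json gs://medical-records-archive

The age condition counts days from each object's creation, so 1460 days covers the 4-year maximum; tighter rules can be added per prefix if some record types expire sooner.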

99
Q

You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the number of queries to the
database.
What should you do?
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before
issuing a query to Cloud SQL.
B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query
results.
C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "queries_cached".
D. Set the memcache service level to shared. Create a key called "cached_queries", and return database values from the key before using a query to Cloud SQL.

A

A

100
Q

You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google
best practices, what should you do?
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute
Engine instances.
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service
running on Compute Engine instances.
D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing
utility service running on Compute Engine instances.

A

B
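
A sketch of option B with placeholder names. Create the topic and a pull subscription for the Compute Engine workers:

    gcloud pubsub topics create scheduled-tasks
    gcloud pubsub subscriptions create task-workers --topic=scheduled-tasks

Then schedule an App Engine handler (assumed here to publish to the topic) in cron.yaml:

    cron:
    - description: "enqueue nightly batch"
      url: /tasks/enqueue-nightly-batch
      schedule: every day 02:00

and deploy it with gcloud app deploy cron.yaml.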

101
Q

Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your
company’s mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to
process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an
existing 100-MB internet connection.
What actions will meet your company’s needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option.
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a
connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud
VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option.
D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud
VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.

A

B

102
Q

You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with
no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
A. Cloud Pub/Sub alone
B. Cloud Pub/Sub to Cloud Dataflow
C. Cloud Pub/Sub to Stackdriver
D. Cloud Pub/Sub to Cloud SQL

A

B

103
Q

Your company is planning to perform a lift and shift migration of their Linux RHEL 6.5+ virtual machines. The virtual machines are running in an
on-premises
VMware environment. You want to migrate them to Compute Engine following Google-recommended practices. What should you do?
A. 1. Define a migration plan based on the list of the applications and their dependencies. 2. Migrate all virtual machines into Compute Engine
individually with Migrate for Compute Engine.
B. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Create images of all disks. Import disks on
Compute Engine. 3. Create standard virtual machines where the boot disks are the ones you have imported.
C. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Define a migration plan, prepare a Migrate for
Compute Engine migration RunBook, and execute the migration.
D. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Install a third-party agent on all selected
virtual machines. 3. Migrate all virtual machines into Compute Engine.

A

C

104
Q

You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the filesystem. The
application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent
access causes corruption. The business is willing to accept a downtime when an incident occurs, but the application must be available 24/7 to
support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?
A. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the
instances.
B. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the
instances.
C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP
load balancer in front of the instances.
D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network
load balancer in front of the instances.

A

D

105
Q

Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with
an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
B. Configure a direct peering connection between the on-premises environment and Google Cloud.
C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud.
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.

A

C

106
Q

You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the
application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you
do?
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services.
C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of
the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new
version.
D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run.
Configure Traffic Director to send a small percentage of traffic to the new version of the application.

A

C

107
Q

You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to
triage incidents quickly. What should you do?
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute
Engine instance.
C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a
Data Studio dashboard.
D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.

A

D

108
Q

You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure
that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)
A. Sharding
B. Read replicas
C. Binary logging
D. Automated backups
E. Semisynchronous replication

A

CD
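
Both features can be enabled on an existing instance in a single patch call (the instance name and backup window are placeholders); binary logging is what makes point-in-time recovery possible on top of the daily automated backups:

    gcloud sql instances patch transactions-db \
        --backup-start-time=02:00 --enable-bin-log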

109
Q

You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such
as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the
subject. You want to design a solution that can accommodate such a request. What should you do?
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part
of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject’s data from
this view. Use this view instead of the source table for all analysis tasks.
D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of
its value.

A

B
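
However the subject's personal data is located (the answer uses DLP findings recorded in Data Catalog to find the relevant columns), the eventual removal is ordinary BigQuery DML, for example deleting the subject's rows; the dataset, table, and column names here are hypothetical:

    bq query --use_legacy_sql=false \
        'DELETE FROM `health_data.injuries` WHERE member_id = "member-12345"'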

110
Q

Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a
cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to
production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?
A. App Engine
B. GKE On-Prem
C. Compute Engine
D. Google Kubernetes Engine

A

A

111
Q

Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and
development environments. The production environment is business-critical and is used 24/7, while the acceptance and development
environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle
times. What should you do?
A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller
machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task.
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start
them just before office hours.
C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling.
D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development
environments

A

B
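
A sketch of option B with hypothetical topic, schedule, and payload; the Cloud Function subscribed to the topic would call the Compute Engine API to stop or start the development and acceptance instances:

    gcloud scheduler jobs create pubsub stop-dev-acc \
        --schedule="0 19 * * 1-5" --time-zone="Europe/Amsterdam" \
        --topic=instance-schedule --message-body='{"action":"stop"}'

    gcloud scheduler jobs create pubsub start-dev-acc \
        --schedule="0 7 * * 1-5" --time-zone="Europe/Amsterdam" \
        --topic=instance-schedule --message-body='{"action":"start"}'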

112
Q

You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud
SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You
want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2.
Stop the on-premises application. 3. Create a mysqldump of the on-premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5.
Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local
database. 7. Start the Compute Engine application. 8. Stop the on-premises application.
B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud
Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises application. 6. Start the Compute Engine application.
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2.
Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4.
Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica.
6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the
Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL
standalone instance.
D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage
bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute Engine.

A

C

113
Q

Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this
requirement across all of your Virtual Private Clouds (VPCs). What should you do?
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway.
B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this
new subnet.
C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely.
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues
list.

A

D
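
A sketch of option D, assuming a hypothetical organization ID and one approved instance. With external-ip-policy.yaml containing:

    constraint: constraints/compute.vmExternalIpAccess
    listPolicy:
      allowedValues:
      - projects/my-project/zones/us-central1-a/instances/approved-bastion

apply it at the organization level:

    gcloud resource-manager org-policies set-policy external-ip-policy.yaml \
        --organization=123456789

Instances not on the allowed list can then no longer be assigned external IP addresses in any project under the organization.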

114
Q

Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute
Engine instances.
You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you
notice that there are no log rows to display. What should you do to troubleshoot the issue?
A. Enable Virtual Private Cloud (VPC) flow logging.
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role.
D. Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.

A

B
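
Firewall Insights only has data for rules that log their matches, so enable logging on each rule of interest (the rule name is a placeholder):

    gcloud compute firewall-rules update allow-ad-egress --enable-logging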

115
Q

Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the
buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office
network.
B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use the Classless Inter-domain
Routing (CIDR) of the office network.
C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the
buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the
end of business.
D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.

A

A
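
A rough sketch of option A using the Access Context Manager CLI; the policy ID, project number, and office CIDR are placeholders, and the flags should be checked against the current gcloud reference. With office-spec.yaml describing the office egress range:

    - ipSubnetworks:
      - 203.0.113.0/24

create the access level and the perimeter:

    gcloud access-context-manager levels create office_only \
        --title="Office network" --basic-level-spec=office-spec.yaml --policy=POLICY_ID

    gcloud access-context-manager perimeters create sensitive_buckets \
        --title="Sensitive buckets" --resources=projects/123456789012 \
        --restricted-services=storage.googleapis.com \
        --access-levels=office_only --policy=POLICY_ID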

116
Q

You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance
template with the update that you want to release. To prevent any possible impact to the application, you don’t want to update any running
instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?
A. Start a new rolling restart operation.
B. Start a new rolling replace operation.
C. Start a new rolling update. Select the Proactive update mode.
D. Start a new rolling update. Select the Opportunistic update mode.

A

D
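
A sketch of option D with placeholder group and template names; in opportunistic mode the managed instance group applies the new template only to instances it creates or recreates for other reasons, leaving running instances untouched:

    gcloud compute instance-groups managed rolling-action start-update web-mig \
        --version=template=web-template-v2 \
        --type=opportunistic --zone=us-central1-a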

117
Q

Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in
another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you
do?
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to
restore the disk in the same zone.
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the
application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region.
Use the regional persistent disk for the application data.
C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to
restore the disk in another zone within the same region.
D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the
application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional
persistent disk for the application data.

A

B
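
A sketch of the data-disk part of option B; a regional persistent disk is synchronously replicated across two zones, so a replacement VM in the surviving zone can attach it after a zonal outage (names, size, and zones are placeholders):

    gcloud compute disks create app-data \
        --region=us-central1 --replica-zones=us-central1-a,us-central1-b \
        --size=200GB --type=pd-ssd

    # Force-attach the regional disk to the replacement instance in the other zone
    gcloud compute instances attach-disk app-server-b \
        --disk=app-data --disk-scope=regional --zone=us-central1-b --force-attach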

118
Q

Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your
company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private
Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts
when connectivity is established?
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no
overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP
space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to
block the overlapping IP space.
D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.

A

A

119
Q

You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize
costs and infrastructure management effort. What should you do?
A. Create a Dataproc cluster using standard worker instances.
B. Create a Dataproc cluster using preemptible worker instances.
C. Manually deploy a Hadoop cluster on Compute Engine using standard instances.
D. Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.

A

A
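
A sketch of option A with placeholder sizing; existing Hadoop and Spark jobs can then be submitted unchanged with gcloud dataproc jobs submit:

    gcloud dataproc clusters create hadoop-migration \
        --region=us-central1 --num-workers=2 \
        --master-machine-type=n1-standard-4 \
        --worker-machine-type=n1-standard-4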

120
Q

Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network
subnets do not overlap and must remain separated. The network configuration is shown below.
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish
this?
A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1.
B. Add two additional NICs to Instance #1 with the following configuration: NIC1 (VPC: VPC #2, Subnetwork: subnet #2) and NIC2 (VPC: VPC #3, Subnetwork: subnet #3). Update firewall rules to enable traffic between the instances.
C. Create two VPN tunnels via Cloud VPN: one between VPC #1 and VPC #2, and one between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances.
D. Peer all three VPCs: peer VPC #1 with VPC #2, and peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.

A

B
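
A sketch of option B; additional NICs can only be attached when the instance is created, each NIC must be in a different VPC, and all names below are placeholders:

    gcloud compute instances create instance-1 --zone=us-central1-a \
        --network-interface=network=vpc-1,subnet=subnet-1 \
        --network-interface=network=vpc-2,subnet=subnet-2 \
        --network-interface=network=vpc-3,subnet=subnet-3

Firewall rules in VPC #2 and VPC #3 then need to allow the relevant traffic to and from Instance #1's internal IPs.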