Practice Test Bank Flashcards

1
Q

You have been tasked with interviewing line-of-business owners about their needs for a new cloud application. Which of the following do you expect to find?
A. A comprehensive list of defined business and technical requirements
B. That their business requirements do not have a one-to-one correlation with technical requirements
C. Business and technical requirements in conflict
D. Clear consensus on all requirements

A

B. The correct answer is B. Business requirements are high-level, business-oriented requirements rarely met by a single technical requirement. Option A is incorrect because business sponsors seldom have a sufficient understanding of technical requirements to provide a comprehensive list. Option C is wrong because business requirements constrain technical options but should not be in conflict. Option D is incorrect because there is rarely a clear consensus on all requirements. Part of an architect’s job is to help stakeholders reach a consensus.

2
Q

You have been asked by stakeholders to suggest ways to reduce operational expenses as part of a cloud migration project. Which of the following would you recommend?
A. Managed services, preemptible machines, access controls
B. Managed services, preemptible machines, autoscaling
C. NoSQL databases, preemptible machines, autoscaling
D. NoSQL databases, preemptible machines, access controls

A

B. The correct answer is B. Managed services reduce the operational workload on DevOps teams, preemptible machines cost significantly less than standard VMs, and autoscaling reduces the chance of paying for unneeded resources. Options A and D are incorrect because access controls do not reduce costs, although they should be used regardless. Options C and D are incorrect because there is no indication that a NoSQL database should be used.

3
Q

Some executives are questioning your recommendation to employ continuous integration/continuous delivery (CI/CD). What reasons would you give to justify your recommendation?
A. CI/CD supports small releases, which are easier to debug and enable faster feedback.
B. CI/CD is used only with preemptible machines and therefore saves money.
C. CI/CD fits well with waterfall methodology but not agile methodologies.
D. CI/CD limits the number of times code is released.

A

A. The correct answer is A. CI/CD supports small releases, which are easier to debug and enable faster feedback. Option B is incorrect, as CI/CD does not use only preemptible machines. Option C is incorrect because CI/CD works well with agile methodologies. Option D is incorrect, as there is no limit to the number of times new versions of code can be released.

4
Q

The finance director has asked your advice about complying with a document retention regulation. What kind of service-level objective (SLO) would you recommend to ensure that the finance director will be able to retrieve sensitive documents for at least the next seven years? When a document is needed, the finance director will have up to seven days to retrieve it. The total storage required will be approximately 100 TB.
A. High availability SLO
B. Durability SLO
C. Reliability SLO
D. Scalability SLO

A

B. The correct answer is B. The finance director needs to have access to documents for seven years. This requires durable storage. Option A is incorrect because the access does not have to be highly available; as long as the finance director can access the document in a reasonable period of time, the requirement can be met. Option C is incorrect because reliability is a measure of being available to meet workload demands successfully. Option D is incorrect because the requirement does not specify the need for increasing and decreasing storage to meet the requirement.

5
Q

You are facilitating a meeting of business and technical managers to solicit requirements for a cloud migration project. The term incident comes up several times. Some of the business managers are unfamiliar with this term in the context of IT. How would you describe an incident?
A. A disruption in the ability of a DevOps team to complete work on time
B. A disruption in the ability of the business managers to approve a project plan on schedule
C. A disruption that causes a service to be degraded or unavailable
D. A personnel problem on the DevOps team

A

C. The correct answer is C. An incident in the context of IT operations and service reliability is a disruption that degrades or stops a service from functioning. Options A and B are incorrect—incidents are not related to scheduling. Option D is incorrect; in this context, incidents are about IT services, not personnel.

6
Q

You have been asked to consult on a cloud migration project that includes moving private medical information to a storage system in the cloud. The project is for a company in the United States. What regulation would you suggest that the team review during the requirements-gathering stages?
A. General Data Protection Regulations (GDPR)
B. Sarbanes–Oxley (SOX)
C. Payment Card Industry Data Security Standard (PCI DSS)
D. Health Insurance Portability and Accountability Act (HIPAA)

A

D. The correct answer is D. HIPAA governs, among other things, privacy and data protection for private medical information. Option A is incorrect, as GDPR is a European Union regulation. Option B is inaccurate, as SOX is a U.S. financial reporting regulation. Option C is inaccurate, as PCI DSS is a payment card industry regulation.

7
Q

You are in the early stages of gathering business and technical requirements. You have noticed several references about needing up-to-date and consistent information regarding product inventory and support for SQL reporting tools. Inventory is managed on a global scale, and the warehouses storing inventory are located in North America, Africa, Europe, and Asia. Which managed database solution in Google Cloud would you include in your set of options for an inventory database?
A. Cloud Storage
B. BigQuery
C. Cloud Spanner
D. Microsoft SQL Server

A

C. The correct answer is C. Cloud Spanner is a globally consistent, horizontally scalable relational database. Option A is incorrect. Cloud Storage does not support SQL. Option B is incorrect because BigQuery is an analytical database used for data warehousing and related operations. Option D is incorrect; Microsoft SQL Server is a Cloud SQL database option, and Cloud SQL is a managed database, but Cloud SQL scales regionally, not globally.

8
Q

A developer at Mountkirk Games is interested in how architects decide which database to use. The developer describes a use case that requires a document store. The developer would rather not manage database servers or have to run backups. What managed service would you suggest the developer consider?
A. Cloud Firestore
B. Cloud Spanner
C. Cloud Storage
D. BigQuery

A

A. The correct answer is A. Cloud Firestore is a managed document database and a good fit for storing documents. Option B is incorrect because Cloud Spanner is a relational database and globally scalable. There is no indication that the developer needs a globally scalable solution, which implies higher cost. Option C is incorrect, as Cloud Storage is an object storage system, not a managed database. Option D is incorrect because BigQuery is an analytical database designed for data warehousing and similar applications.

9
Q

Members of your company’s legal team are concerned about using a public cloud service because other companies, organizations, and individuals will be running their systems in the same cloud. You assure them that your company’s resources will be isolated and not network-accessible to others because of what networking resource in Google Cloud?
A. CIDR blocks
B. Direct connections
C. Virtual private clouds
D. Cloud Pub/Sub

A

C. The correct answer is C. VPCs isolate cloud resources from resources in other VPCs, unless VPCs are intentionally linked. Option A is incorrect because a CIDR block has to do with subnet IP addresses. Option B is incorrect, as direct connections are for transmitting data between a data center and Google Cloud—it does not protect resources in the cloud. Option D is incorrect because Cloud Pub/Sub is a messaging service, not a networking service.

10
Q

A startup has recently migrated to Google Cloud using a lift-and-shift migration. They are now considering replacing a self-managed MySQL database running in Compute Engine with a managed service. Which Google Cloud service would you recommend that they consider?
A. Cloud Dataproc
B. Cloud Dataflow
C. Cloud SQL
D. PostgreSQL

A

C. The correct answer is C. Cloud SQL offers a managed MySQL service. Options A and B are incorrect, as neither is a database. Cloud Dataproc is a managed Hadoop and Spark service. Cloud Dataflow is a stream and batch processing service. Option D is incorrect because PostgreSQL is another relational database, not a managed service, although PostgreSQL is available as an option in Cloud SQL.

11
Q

Which of the following requirements from a customer make you think the application should run in Compute Engine and not App Engine?
A. Dynamically scale up or down based on workload
B. Connect to a database
C. Run a hardened Linux distro on a virtual machine
D. Don’t lose data

A

C. The correct answer is C. In Compute Engine, you create virtual machines and choose which operating system to run. All other requirements can be realized in App Engine.

12
Q

The original videos captured during helicopter races by the Helicopter Racing League are transcoded and stored for frequent access. The original captured videos are not used for viewing but are stored in case they are needed for unanticipated reasons. The files require high durability but are not likely to be accessed more than once in a five-year period. What type of storage would you use for the original video files?
A. BigQuery Long Term Storage
B. BigQuery Active Storage
C. Cloud Storage Nearline class
D. Cloud Storage Archive class

A

D. The correct answer is D. Cloud Storage Archive class is the most cost-effective option that meets the durability requirement. Option C is incorrect; Cloud Storage Nearline class would also meet the durability requirement, but since the videos are accessed less than once per year, Archive class provides the same durability at lower cost. Options A and B are incorrect because videos are large binary objects best stored in object storage, not in an analytical database such as BigQuery.

13
Q

The game analytics platform for Mountkirk Games requires analysts to be able to query up to 10 TB of data. What is the best managed database solution for this requirement?
A. Cloud Spanner
B. BigQuery
C. Cloud Storage
D. Cloud Dataprep

A

B. The correct answer is B. This is a typical use case for BigQuery, and it fits well with its capabilities as an analytic database. Option A is incorrect, as Cloud Spanner is best used for transaction processing on a global scale. Options C and D are not managed databases. Cloud Storage is an object storage service; Cloud Dataprep is a tool for preparing data for analysis.

14
Q

EHR Healthcare business requirements frequently discuss the need to improve system observability. Which of the following Google Cloud Platform services could be used to help improve observability?
A. Cloud Build and Artifact Registry
B. Cloud Pub/Sub and Cloud Dataflow
C. Cloud Monitoring and Cloud Logging
D. Cloud Storage and Cloud Pub/Sub

A

C. The correct answer is C. Cloud Monitoring collects metrics, and Cloud Logging collects event data from infrastructure, services, and other applications that provide insight into the state of those systems. Cloud Build and Artifact Registry are important CI/CD services. Cloud Pub/Sub is a messaging service, Cloud Dataflow is a batch and stream processing service, and Cloud Storage is an object storage system; none of these directly supports improved observability.

15
Q

You need to restrict access to your Google Cloud load-balanced application so that only specific IP addresses can connect.
What should you do?

A. Create a secure perimeter using the Access Context Manager feature of VPC Service Controls and restrict access to the source IP range of the allowed clients and Google health check IP ranges.
B. Create a secure perimeter using VPC Service Controls, and mark the load balancer as a service restricted to the source IP range of the allowed clients and Google health check IP ranges.
C. Tag the backend instances “application,” and create a firewall rule with target tag “application” and the source IP range of the allowed clients and Google health check IP ranges.
D. Label the backend instances “application,” and create a firewall rule with the target label “application” and the source IP range of the allowed clients and Google health check IP ranges.

A

Answer : C

16
Q

Your end users are located in close proximity to us-east1 and europe-west1. Their workloads need to communicate with each other. You want to minimize cost and increase network efficiency.
How should you design this topology?

A. Create 2 VPCs, each with their own regions and individual subnets. Create 2 VPN gateways to establish connectivity between these regions.
B. Create 2 VPCs, each with their own region and individual subnets. Use external IP addresses on the instances to establish connectivity between these regions.
C. Create 1 VPC with 2 regional subnets. Create a global load balancer to establish connectivity between the regions.
D. Create 1 VPC with 2 regional subnets. Deploy workloads in these subnets and have them communicate using private RFC1918 IP addresses.

A

Answer : D
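The reasoning behind option D can be sketched in Python: subnets in one VPC draw from RFC 1918 private space, so the two regional workloads reach each other over internal IPs with no VPN gateways or external addresses involved. The subnet ranges below are hypothetical examples, not values from the question.

```python
import ipaddress

# Hypothetical regional subnet ranges inside a single VPC.
us_east1_subnet = ipaddress.ip_network("10.10.0.0/20")
europe_west1_subnet = ipaddress.ip_network("10.20.0.0/20")

# The three RFC 1918 private address blocks.
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(net):
    """True if the whole subnet falls inside an RFC 1918 private block."""
    return any(net.subnet_of(block) for block in rfc1918)

# Both regional subnets are private and non-overlapping, so instances can
# communicate directly on internal IPs (option D), with no external IPs
# (option B) or VPN gateways (option A) needed.
print(is_rfc1918(us_east1_subnet), is_rfc1918(europe_west1_subnet))  # → True True
```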

17
Q

Your organization is deploying a single project for 3 separate departments. Two of these departments require network connectivity between each other, but the third department should remain in isolation. Your design should create separate network administrative domains between these departments. You want to minimize operational overhead.
How should you design the topology?

A. Create a Shared VPC Host Project and the respective Service Projects for each of the 3 separate departments.
B. Create 3 separate VPCs, and use Cloud VPN to establish connectivity between the two appropriate VPCs.
C. Create 3 separate VPCs, and use VPC peering to establish connectivity between the two appropriate VPCs.
D. Create a single project, and deploy specific firewall rules. Use network tags to isolate access between the departments.

A

Answer : C

18
Q

You are migrating to Cloud DNS and want to import your BIND zone file.
Which command should you use?

A. gcloud dns record-sets import ZONE_FILE --zone MANAGED_ZONE
B. gcloud dns record-sets import ZONE_FILE --replace-origin-ns --zone MANAGED_ZONE
C. gcloud dns record-sets import ZONE_FILE --zone-file-format --zone MANAGED_ZONE
D. gcloud dns record-sets import ZONE_FILE --delete-all-existing --zone MANAGED_ZONE

A

Answer : C
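The key detail is the --zone-file-format flag, which tells gcloud the input is a BIND zone file rather than the default YAML records format. A minimal sketch that assembles the command (the zone file and managed zone names are hypothetical placeholders):

```python
# Hypothetical placeholder names for the BIND file and managed zone.
zone_file = "example.zone"
managed_zone = "my-zone"

cmd = [
    "gcloud", "dns", "record-sets", "import", zone_file,
    "--zone-file-format",      # input is BIND zone-file format, not YAML
    f"--zone={managed_zone}",  # the Cloud DNS managed zone to import into
]
print(" ".join(cmd))
```

The list form could be handed to `subprocess.run` in an environment where the gcloud CLI is installed and authenticated.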

19
Q

You created a VPC network named Retail in auto mode. You want to create a VPC network named Distribution and peer it with the Retail VPC.
How should you configure the Distribution VPC?

A. Create the Distribution VPC in auto mode. Peer both the VPCs via network peering.
B. Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering.
C. Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering.
D. Rename the default VPC as “Distribution” and peer it via network peering.

A

Answer : B
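The CIDR arithmetic behind this answer can be checked with Python's ipaddress module: auto mode VPCs allocate subnets from the documented 10.128.0.0/9 pool, so a custom VPC that is to be peered must use ranges outside it.

```python
import ipaddress

# Auto mode VPCs draw their subnet ranges from this documented pool.
auto_mode_pool = ipaddress.ip_network("10.128.0.0/9")

option_b = ipaddress.ip_network("10.0.0.0/9")    # outside the pool: peerable
option_c = ipaddress.ip_network("10.128.0.0/9")  # collides with Retail's subnets

# VPC peering requires non-overlapping ranges, so only option B works.
print(option_b.overlaps(auto_mode_pool), option_c.overlaps(auto_mode_pool))  # → False True
```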

22
Q

You are using a third-party next-generation firewall to inspect traffic. You created a custom route of 0.0.0.0/0 to route egress traffic to the firewall. You want to allow your VPC instances without public IP addresses to access the BigQuery and Cloud Pub/Sub APIs, without sending the traffic through the firewall.
Which two actions should you take? (Choose two.)

A. Turn on Private Google Access at the subnet level.
B. Turn on Private Google Access at the VPC level.
C. Turn on Private Services Access at the VPC level.
D. Create a set of custom static routes to send traffic to the external IP addresses of Google APIs and services via the default internet gateway.
E. Create a set of custom static routes to send traffic to the internal IP addresses of Google APIs and services via the default internet gateway.


A

Answer : AD
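Two facts behind this answer can be illustrated in Python: the Google API endpoints sit on public (non-RFC 1918) addresses, and a static route for that narrow range is more specific than the 0.0.0.0/0 route to the firewall, so longest-prefix match sends API traffic out the default internet gateway. The 199.36.153.8/30 range below is the private.googleapis.com VIP range as documented by Google at the time of writing; verify against current documentation before relying on it.

```python
import ipaddress

# Documented private.googleapis.com VIP range (verify before use).
google_apis = ipaddress.ip_network("199.36.153.8/30")
to_firewall = ipaddress.ip_network("0.0.0.0/0")  # existing custom route to the appliance

# The API VIPs are public address space, not RFC 1918 internal addresses
# (which is why option E's "internal IP addresses" is wrong):
print(google_apis.is_private)  # → False

# A /30 route is more specific than the /0 route, so longest-prefix match
# steers API traffic via the default internet gateway, bypassing the firewall:
print(google_apis.prefixlen > to_firewall.prefixlen)  # → True
```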

23
Q

All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured. Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance.
What should you do?

A. Open the Cloud Shell SSH into the instance using gcloud compute ssh.
B. Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like putty or ssh.
C. Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like putty or ssh.
D. Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like putty or ssh.

A

Answer : A

24
Q

You work for a university that is migrating to GCP.
These are the cloud requirements:
• On-premises connectivity with 10 Gbps
• Lowest latency access to the cloud
• Centralized Networking Administration Team
New departments are asking for on-premises connectivity to their projects. You want to deploy the most cost-efficient interconnect solution for connecting the campus to Google Cloud.
What should you do?

A. Use Shared VPC, and deploy the VLAN attachments and Interconnect in the host project.
B. Use Shared VPC, and deploy the VLAN attachments in the service projects. Connect the VLAN attachment to the Shared VPC’s host project.
C. Use standalone projects, and deploy the VLAN attachments in the individual projects. Connect the VLAN attachment to the standalone projects’ Interconnects.
D. Use standalone projects and deploy the VLAN attachments and Interconnects in each of the individual projects.

A

Answer : A

25
Q

You have deployed a new internal application that provides HTTP and TFTP services to on-premises hosts. You want to be able to distribute traffic across multiple
Compute Engine instances, but need to ensure that clients are sticky to a particular instance across both services.
Which session affinity should you choose?

A. None
B. Client IP
C. Client IP and protocol
D. Client IP, port and protocol

A

Answer : B
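The property Client IP affinity provides can be sketched with a toy hash-based picker. The backend names are hypothetical, and a real load balancer uses its own hash function, but the point carries over: keying on the source IP alone sends one client's HTTP and TFTP traffic to the same instance, while adding the protocol to the key (as in "Client IP and protocol") may split the two services across instances.

```python
import hashlib

backends = ["instance-1", "instance-2", "instance-3"]  # hypothetical names

def pick_backend(key: str) -> str:
    """Deterministically map an affinity key to a backend."""
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return backends[digest % len(backends)]

client_ip = "203.0.113.7"

# Client IP affinity: the key is the source address alone, so HTTP and
# TFTP sessions from this client always land on the same instance.
assert pick_backend(client_ip) == pick_backend(client_ip)

# "Client IP and protocol" would add the protocol to the key, which may
# map the two services to different instances:
print(pick_backend(client_ip + "/tcp"), pick_backend(client_ip + "/udp"))
```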

26
Q

You created a new VPC network named Dev with a single subnet. You added a firewall rule for the network Dev to allow HTTP traffic only and enabled logging.
When you try to log in to an instance in the subnet via Remote Desktop Protocol, the login fails. You look for the Firewall rules logs in Stackdriver Logging, but you do not see any entries for blocked traffic. You want to see the logs for blocked traffic.
What should you do?

A. Check the VPC flow logs for the instance.
B. Try connecting to the instance via SSH, and check the logs.
C. Create a new firewall rule to allow traffic from port 22, and enable logs.
D. Create a new firewall rule with priority 65500 to deny all traffic, and enable logs.

A

Answer : D
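The rule-evaluation logic behind this answer can be sketched as a first-match loop: GCP evaluates firewall rules from the lowest priority number upward, and the implied deny-ingress rule at priority 65535 cannot have logging enabled. An explicit deny-all at 65500 catches the same traffic first, with logging on. The rule set below is a simplified model, not a real firewall API.

```python
# Simplified model: lower priority number = evaluated first.
rules = [
    {"priority": 1000,  "action": "allow", "ports": {80},  "log": True},   # existing HTTP rule
    {"priority": 65500, "action": "deny",  "ports": None,  "log": True},   # explicit deny-all
    {"priority": 65535, "action": "deny",  "ports": None,  "log": False},  # implied rule: no logging
]

def first_match(port):
    """Return the first rule (lowest priority number) matching the port."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if rule["ports"] is None or port in rule["ports"]:
            return rule

rdp = first_match(3389)  # RDP traffic
print(rdp["action"], rdp["log"])  # → deny True: the blocked login now produces a log entry
```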

27
Q

You are trying to update firewall rules in a shared VPC for which you have been assigned only Network Admin permissions. You cannot modify the firewall rules.
Your organization requires using the least privilege necessary.
Which level of permissions should you request?

A. Security Admin privileges from the Shared VPC Admin.
B. Service Project Admin privileges from the Shared VPC Admin.
C. Shared VPC Admin privileges from the Organization Admin.
D. Organization Admin privileges from the Organization Admin.

A

Answer : A

28
Q

You want to create a service in GCP using IPv6.
What should you do?

A. Create the instance with the designated IPv6 address.
B. Configure a TCP Proxy with the designated IPv6 address.
C. Configure a global load balancer with the designated IPv6 address.
D. Configure an internal load balancer with the designated IPv6 address.

A

Answer : C

29
Q

You want to deploy a VPN Gateway to connect your on-premises network to GCP. You are using a non-BGP-capable on-premises VPN device. You want to minimize downtime and operational overhead when your network grows. The device supports only IKEv2, and you want to follow Google-recommended practices.
What should you do?

A. • Create a Cloud VPN instance. • Create a policy-based VPN tunnel per subnet. • Configure the appropriate local and remote traffic selectors to match your local and remote networks. • Create the appropriate static routes.
B. • Create a Cloud VPN instance. • Create a policy-based VPN tunnel. • Configure the appropriate local and remote traffic selectors to match your local and remote networks. • Configure the appropriate static routes.
C. • Create a Cloud VPN instance. • Create a route-based VPN tunnel. • Configure the appropriate local and remote traffic selectors to match your local and remote networks. • Configure the appropriate static routes.
D. • Create a Cloud VPN instance. • Create a route-based VPN tunnel. • Configure the appropriate local and remote traffic selectors to 0.0.0.0/0. • Configure the appropriate static routes.

A

Answer : B

30
Q

Your company just completed the acquisition of Altostrat (a current GCP customer). Each company has a separate organization in GCP and has implemented a custom DNS solution. Each organization will retain its current domain and host names until after a full transition and architectural review is done in one year.
These are the assumptions for both GCP environments.
• Each organization has enabled full connectivity between all of its projects by using Shared VPC.
• Both organizations strictly use the 10.0.0.0/8 address space for their instances, except for bastion hosts (for accessing the instances) and load balancers for serving web traffic.
• There are no prefix overlaps between the two organizations.
• Both organizations already have firewall rules that allow all inbound and outbound traffic from the 10.0.0.0/8 address space.
• Neither organization has Interconnects to their on-premises environment.
You want to integrate networking and DNS infrastructure of both organizations as quickly as possible and with minimal downtime.
Which two steps should you take? (Choose two.)

A. Provision Cloud Interconnect to connect both organizations together.
B. Set up some variant of DNS forwarding and zone transfers in each organization.
C. Connect VPCs in both organizations using Cloud VPN together with Cloud Router.
D. Use Cloud DNS to create A records of all VMs and resources across all projects in both organizations.
E. Create a third organization with a new host project, and attach all projects from your company and Altostrat to it using shared VPC.

A

Answer : BC

31
Q

Your on-premises data center has 2 routers connected to your Google Cloud environment through a VPN on each router. All applications are working correctly; however, all of the traffic is passing across a single VPN instead of being load-balanced across the 2 connections as desired.
During troubleshooting you find:
• Each on-premises router is configured with a unique ASN.
• Each on-premises router is configured with the same routes and priorities.
• Both on-premises routers are configured with a VPN connected to a single Cloud Router.
• BGP sessions are established between both on-premises routers and the Cloud Router.
• Only the routes from 1 of the on-premises routers are being added to the routing table.
What is the most likely cause of this problem?

A. The on-premises routers are configured with the same routes.
B. A firewall is blocking the traffic across the second VPN connection.
C. You do not have a load balancer to load-balance the network traffic.
D. The ASNs being used on the on-premises routers are different.

A

Answer : D

32
Q

You have ordered Dedicated Interconnect in the GCP Console and need to give the Letter of Authorization/Connecting Facility Assignment (LOA-CFA) to your cross-connect provider to complete the physical connection.
Which two actions can accomplish this? (Choose two.)

A. Open a Cloud Support ticket under the Cloud Interconnect category.
B. Download the LOA-CFA from the Hybrid Connectivity section of the GCP Console.
C. Run gcloud compute interconnects describe <interconnect>.
D. Check the email for the account of the NOC contact that you specified during the ordering process.
E. Contact your cross-connect provider and inform them that Google automatically sent the LOA/CFA to them via email, and to complete the connection.

A

Answer : BD

33
Q

Your company’s web server administrator is migrating on-premises backend servers for an application to GCP. Libraries and configurations differ significantly across these backend servers. The migration to GCP will be lift-and-shift, and all requests to the servers will be served by a single network load balancer frontend.
You want to use a GCP-native solution when possible.
How should you deploy this service in GCP?

A. Create a managed instance group from one of the images of the on-premises servers, and link this instance group to a target pool behind your load balancer.
B. Create a target pool, add all backend instances to this target pool, and deploy the target pool behind your load balancer.
C. Deploy a third-party virtual appliance as frontend to these servers that will accommodate the significant differences between these backend servers.
D. Use GCP’s ECMP capability to load-balance traffic to the backend servers by installing multiple equal-priority static routes to the backend servers.

A

Answer : B

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
34
Q

You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT.
What is the most likely cause of this problem?

A. The instance has been configured with multiple interfaces.
B. An external IP address has been configured on the instance.
C. You have created static routes that use RFC1918 ranges.
D. The instance is accessible by a load balancer external IP address.

A

Answer : B

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
35
Q

You want to set up two Cloud Routers so that one has an active Border Gateway Protocol (BGP) session, and the other one acts as a standby.
Which BGP attribute should you use on your on-premises router?

A. AS-Path
B. Community
C. Local Preference
D. Multi-exit Discriminator

A

Answer : D

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
36
Q

You are increasing your usage of Cloud VPN between on-premises and GCP, and you want to support more traffic than a single tunnel can handle. You want to increase the available bandwidth using Cloud VPN.
What should you do?

A. Double the MTU on your on-premises VPN gateway from 1460 bytes to 2920 bytes.
B. Create two VPN tunnels on the same Cloud VPN gateway that point to the same destination VPN gateway IP address.
C. Add a second on-premises VPN gateway with a different public IP address. Create a second tunnel on the existing Cloud VPN gateway that forwards the same IP range, but points at the new on-premises gateway IP.
D. Add a second Cloud VPN gateway in a different region than the existing VPN gateway. Create a new tunnel on the second Cloud VPN gateway that forwards the same IP range, but points to the existing on-premises VPN gateway IP address.

A

Answer : C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
37
Q

You are disabling DNSSEC for one of your Cloud DNS-managed zones. You removed the DS records from your zone file, waited for them to expire from the cache, and disabled DNSSEC for the zone. You receive reports that DNSSEC-validating resolvers are unable to resolve names in your zone.
What should you do?

A. Update the TTL for the zone.
B. Set the zone to the TRANSFER state.
C. Disable DNSSEC at your domain registrar.
D. Transfer ownership of the domain to a new registrar.

A

Answer : C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
38
Q

You have an application hosted on a Compute Engine virtual machine instance that cannot communicate with a resource outside of its subnet. When you review the flow and firewall logs, you do not see any denied traffic listed.
During troubleshooting you find:
• Flow logs are enabled for the VPC subnet, and all firewall rules are set to log.
• The subnetwork logs are not excluded from Stackdriver.
• The instance that is hosting the application can communicate outside the subnet.
• Other instances within the subnet can communicate outside the subnet.
• The external resource initiates communication.
What is the most likely cause of the missing log lines?

A. The traffic is matching the expected ingress rule.
B. The traffic is matching the expected egress rule.
C. The traffic is not matching the expected ingress rule.
D. The traffic is not matching the expected egress rule.

A

Answer : C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
39
Q

You have configured Cloud CDN using HTTP(S) load balancing as the origin for cacheable content. Compression is configured on the web servers, but responses served by Cloud CDN are not compressed.
What is the most likely cause of the problem?

A. You have not configured compression in Cloud CDN.
B. You have configured the web servers and Cloud CDN with different compression types.
C. The web servers behind the load balancer are configured with different compression types.
D. You have to configure the web servers to compress responses even if the request has a Via header.

A

Answer : D

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
40
Q

You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You’ve configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency.
What should you do?

A. Configure a policy-based route rule to prioritize the traffic.
B. Configure an HTTP load balancer, and direct the traffic to it.
C. Configure Dynamic Routing for the subnet hosting the application.
D. Configure the TTL for the DNS zone to decrease the time between updates.

A

Answer : B

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
41
Q

You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses.
Which two methods can you use to accomplish this? (Choose two.)

A. Enable Private Google Access on all the subnets.
B. Enable Private Google Access on the VPC.
C. Enable Private Services Access on the VPC.
D. Create network peering between your VPC and BigQuery.
E. Create a Cloud NAT, and route the application traffic via NAT gateway.

A

Answer : AE
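
Both mechanisms can be sketched with gcloud. The subnet, network, and router names below are hypothetical, and the commands assume an authenticated gcloud session with permissions on the project:

```shell
# A: enable Private Google Access on the subnet so instances without
# external IPs can reach Google APIs such as BigQuery and Cloud Storage.
gcloud compute networks subnets update app-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# E: create a Cloud Router and a Cloud NAT gateway so the same
# instances can also reach destinations outside Google APIs.
gcloud compute routers create nat-router \
    --network=app-vpc --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```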

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
42
Q

You are designing a shared VPC architecture. Your network and security team has strict controls over which routes are exposed between departments. Your
Production and Staging departments can communicate with each other, but only via specific networks. You want to follow Google-recommended practices.
How should you design this topology?

A. Create 2 shared VPCs within the shared VPC Host Project, and enable VPC peering between them. Use firewall rules to filter access between the specific networks.
B. Create 2 shared VPCs within the shared VPC Host Project, and create a Cloud VPN/Cloud Router between them. Use Flexible Route Advertisement (FRA) to filter access between the specific networks.
C. Create 2 shared VPCs within the shared VPC Service Project, and create a Cloud VPN/Cloud Router between them. Use Flexible Route Advertisement (FRA) to filter access between the specific networks.
D. Create 1 VPC within the shared VPC Host Project, and share individual subnets with the Service Projects to filter access between the specific networks.

A

Answer : D

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
43
Q

You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible.
What should you do?

A. Grant the compute.instanceAdmin to your user account.
B. Grant the iam.serviceAccountUser to your user account.
C. Grant the read-only privilege to the service account for the Cloud Storage bucket.
D. Grant the cloud-platform privilege to the service account for the Cloud Storage bucket.

A

Answer : C

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
44
Q

You converted an auto mode VPC network to custom mode. Since the conversion, some of your Cloud Deployment Manager templates are no longer working.
You want to resolve the problem.
What should you do?

A. Apply an additional IAM role to the Google APIs service account to allow custom mode networks.
B. Update the VPC firewall to allow the Cloud Deployment Manager to access the custom mode networks.
C. Explicitly reference the custom mode networks in the Cloud Armor whitelist.
D. Explicitly reference the custom mode networks in the Deployment Manager templates.

A

Answer : D

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
45
Q

You have recently been put in charge of managing identity and access management for your organization. You have several projects and want to use scripting and automation wherever possible. You want to grant the editor role to a project member.
Which two methods can you use to accomplish this? (Choose two.)

A. GetIamPolicy() via REST API
B. setIamPolicy() via REST API
C. gcloud pubsub add-iam-policy-binding $projectname --member user:$username --role roles/editor
D. gcloud projects add-iam-policy-binding $projectname --member user:$username --role roles/editor
E. Enter an email address in the Add members field, and select the desired role from the drop-down menu in the GCP Console.

A

Answer : BD
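
For reference, the two scriptable methods look roughly like this. The project ID and email are placeholders, and the REST call assumes you have already fetched the current policy and merged in the new binding, since setIamPolicy() replaces the entire policy:

```shell
# D: one-shot binding via gcloud (the read-modify-write of the policy
# is handled for you).
gcloud projects add-iam-policy-binding my-project \
    --member="user:alice@example.com" \
    --role="roles/editor"

# B: setIamPolicy() via the REST API. policy.json must hold the full
# existing policy with the new editor binding appended.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d @policy.json \
    "https://cloudresourcemanager.googleapis.com/v1/projects/my-project:setIamPolicy"
```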

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
46
Q

Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?

A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer

A

Answer : D (most voted)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
47
Q

Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?

A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore

A

Answer : A (most voted)

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
48
Q

The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)

A. Port the application code to run on Google App Engine
B. Integrate Cloud Dataflow into the application to capture real-time metrics
C. Instrument the application with a monitoring tool like Stackdriver Debugger
D. Select an automation framework to reliably provision the cloud infrastructure
E. Deploy a continuous integration tool with automated testing in a staging environment
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable

A

Answer : CDE

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
50
Q

Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?

A. Configure a new load balancer for the new version of the API
B. Reconfigure old clients to use a new endpoint for the new API
C. Have the old API forward traffic to the new API based on the path
D. Use separate backend pools for each API path behind the load balancer

A

D is the answer because HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL. A is not correct because configuring a new load balancer would require a new or different SSL and DNS records which conflicts with the requirements to keep the same SSL and DNS records. B is not correct because it goes against the requirements. The company wants to keep the old API available while new customers and testers try the new API. C is not correct because it is not a requirement to decommission the implementation behind the old API. Moreover, it introduces unnecessary risk in case bugs or incompatibilities are discovered in the new API.
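
Option D maps onto the HTTP(S) load balancer's URL map. A hedged sketch, assuming backend services named api-v1-backend and api-v2-backend already exist and that the new API lives under a /v2/ path prefix:

```shell
# Default all traffic to the old API's backend service.
gcloud compute url-maps create api-map \
    --default-service=api-v1-backend

# Route the new API's path prefix to the new backend pool; everything
# else continues to hit v1, so the SSL and DNS records stay unchanged.
gcloud compute url-maps add-path-matcher api-map \
    --path-matcher-name=api-versions \
    --default-service=api-v1-backend \
    --path-rules="/v2/*=api-v2-backend"
```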

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
51
Q

Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?

A. Load data into Google BigQuery
B. Insert data into Google Cloud SQL
C. Put flat files into Google Cloud Storage
D. Stream data into Google Cloud Datastore

A

This question could go either way between A and B, but BigQuery was designed with exactly this workload in mind, according to numerous Google presentations and videos. Cloud Datastore is a NoSQL database (https://cloud.google.com/datastore/docs/concepts/overview) and Cloud Storage does not have a SQL interface, which eliminates options C and D.
Between A and B, BigQuery excels at analysis where Cloud SQL does not: both accept SQL queries, but BigQuery handles multi-petabyte data sets and offers better analytical tools. It supports ad-hoc analysis like Cloud SQL does, plus geo-spatial and ML analysis, all through its Standard SQL interface. So the answer is A.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
52
Q

A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?

A. The session variable is local to just a single instance
B. The session variable is being overwritten in Cloud Datastore
C. The URL of the API needs to be modified to prevent caching
D. The HTTP Expires header needs to be set to -1 to stop caching

A

It’s A. AppEngine spins up new containers automatically according to the load. During peak traffic, HTTP requests originated by the same user could be served by different containers. Given that the variable sessions is recreated for each container, it might store different data.
The problem here is that this Flask app is stateful. The sessions variable is the state of this app. And stateful variables in AppEngine / Cloud Run / Cloud Functions are problematic.
A solution would be to store the session in some database (e.g. Firestore, Memorystore) and retrieve it from there. This way the app would fetch the session from a single place and would be stateless.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
54
Q

An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?

A. Direct them to download and install the Google StackDriver logging agent
B. Send them a list of online resources about logging best practices
C. Help them define their requirements and assess viable logging tools
D. Help them upgrade their current tool to take advantage of any new features

A

A. This is a GCP exam, and Google will always promote its own services over a third-party solution; offering Stackdriver Logging is what they want from a cloud architect.
Note, however, that the logging agent is only required for non-cloud resources, and the question says the product is cloud-based, so there is an argument that C (helping the team define their requirements and assess viable tools) better fits the scenario.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
55
Q

You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company’s web hosting platform. Improvement to the QA/
Test processes accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)

A. Introduce a green-blue deployment model
B. Replace the QA environment with canary releases
C. Fragment the monolithic platform into microservices
D. Reduce the platform’s dependency on relational database systems
E. Replace the platform’s relational database systems with a NoSQL database

A

D) and E) are pointless in this context.
C) is certainly a good practice.
Now between A) and B):
A) Blue-green deployment is an application release model that gradually transfers user traffic from a previous version of an app or microservice to a nearly identical new release, both of which are running in production.
B) In software, a canary process is usually the first instance that receives live production traffic for a new binary or configuration rollout. The new release only goes to the canary at first. The fact that the canary handles real user traffic is key: if it breaks, real users get affected, so canarying should be the first step in your deployment process, not a replacement for testing before production.
While both blue-green and canary releases are useful, B) suggests replacing QA with canary releases, which is not good: the QA process got rollbacks down by 80%. Hence A) and C).

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
56
Q

To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines
(VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? (Choose two.)

A. Use the - -no-auto-delete flag on all persistent disks and stop the VM
B. Use the - -auto-delete flag on all persistent disks and terminate the VM
C. Apply VM CPU utilization label and include it in the BigQuery billing export
D. Use Google BigQuery billing export and labels to associate cost to groups
E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM
F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM

A

The correct answer is A and D. Google's own practice exam has this question word for word with slightly different options: option A there reads "Use persistent disks to store the state. Start and stop the VM as needed," and the practice exam confirms A and D. What makes A right is the persistent disk, which preserves state across stop/start events; the --no-auto-delete flag makes it safer than B, whose --auto-delete would destroy the state when the VM is terminated. D provides the cost visibility the finance department needs, via BigQuery billing export with labels to associate cost to groups. F is not right because it is a complex way of solving a problem that choosing persistent disks solves up front.
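
A sketch of options A and D with gcloud. The instance, disk, zone, and label values are hypothetical:

```shell
# A: keep the persistent disk (and its state) even if the VM is
# deleted, then stop the VM to pause compute billing.
gcloud compute instances set-disk-auto-delete dev-vm \
    --disk=dev-data-disk --no-auto-delete --zone=us-central1-a
gcloud compute instances stop dev-vm --zone=us-central1-a

# D: label resources so the BigQuery billing export can attribute
# cost to groups for the finance department.
gcloud compute instances update dev-vm \
    --update-labels=team=dev,cost-center=finance-42 \
    --zone=us-central1-a
```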

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
57
Q

Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations.
Which database type should you use?

A. Flat file
B. NoSQL
C. Relational
D. Blobstore

A

This is time-series data, and we have no idea what kinds of data are being captured, so it does not appear structured.

A does not seem reasonable because a flat file is not easy to query and analyze.
B seems reasonable because NoSQL accommodates unstructured data.
C seems unreasonable because we have no idea of the structure of the data.
D seems unreasonable because there is no such Google database type.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
58
Q

You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?

A. Ensure that a firewall rules exists to allow source traffic on HTTP/HTTPS to reach the load balancer.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.

A

"A" and "B" wouldn't turn the VMs on or off; they would just prevent traffic. "C" would explain the terminations, since instances that fail health checks are recreated by the managed instance group. "D" is the start of a pseudo health check without any logic, so like "A" and "B" it isn't the answer. Correct answer: "C".
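
The fix in C is an ingress rule admitting Google's documented health-check source ranges, 130.211.0.0/22 and 35.191.0.0/16. The network and tag names below are hypothetical:

```shell
# Allow load balancer health checks to reach the backends; without
# this, probes fail and the instance group keeps recreating instances.
gcloud compute firewall-rules create allow-lb-health-checks \
    --network=app-vpc \
    --direction=INGRESS --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=web-backend
```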

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
59
Q

You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to
BigQuery.
What should you do to fix the script?

A. Install the latest BigQuery API client library for Python
B. Run your script on a new virtual machine with the BigQuery access scope enabled
C. Create a new service account with BigQuery access and execute your script with that user
D. Install the bq component for gcloud with the command gcloud components install bq.

A

A - If the client library were not installed, the Python script would not run at all; since the question states the script reports "cannot connect," the client library must be installed. So it's B or C.

B - https://cloud.google.com/bigquery/docs/authorization: an access scope is how your client application retrieves an access token with the right permissions via OAuth when calling services through the API. It is possible the script uses a raw API call instead of the client library, in which case the access scope is required.

C - A dedicated service account is Google Cloud's best practice.
So prefer C.
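
Option C, sketched with gcloud. The service account, project, instance, and zone names are hypothetical, and roles/bigquery.user is one reasonable least-privilege choice:

```shell
# Create a dedicated service account for the script.
gcloud iam service-accounts create bq-script \
    --display-name="BigQuery script"

# Grant it BigQuery access on the project.
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:bq-script@my-project.iam.gserviceaccount.com" \
    --role="roles/bigquery.user"

# Attach it to the VM (the instance must be stopped to change this),
# then run the script on the VM as that identity.
gcloud compute instances set-service-account my-vm \
    --zone=us-central1-a \
    --service-account=bq-script@my-project.iam.gserviceaccount.com
```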

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
61
Q

Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.
What authentication strategy should they use?

A. Use G Suite Password Sync to replicate passwords into Google
B. Federate authentication via SAML 2.0 to the existing Identity Provider
C. Provision users in Google using the Google Cloud Directory Sync tool
D. Ask users to set their Google password to match their corporate password

A

The correct answer is B.
The GCDS tool only copies usernames, not passwords, and the strict security requirements mean passwords should not be copied to Google at all.

Federating authentication via SAML 2.0 resolves this: passwords stay with the existing Identity Provider, nothing sensitive is replicated, and users keep signing in the way they already do, which also meets the minimal-disruption requirement.

With GCDS alone, clients would need to provide a new, Google Cloud-specific password after the sync. Google recognizes the user because GCDS populated the user list, and the user is redirected to a standard Google sign-in screen where they enter their username and that Google-specific password. The issue is the two sets of passwords: even if a user manually sets them to the same value, they are not managed in a single place, so updating a password means doing it in AD and then again in Cloud Identity. In some cases this approach allows better separation between the on-premises environment and Google Cloud, but it is one more password for users to manage.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
62
Q

Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live- processing some data as it comes in.
Which technology should they use for this?

A. Google Cloud Dataproc
B. Google Cloud Dataflow
C. Google Container Engine with Bigtable
D. Google Compute Engine with Google BigQuery

A

All four options could be assembled to do batch and stream processing. "A" is for Apache Spark and Hadoop, a juggernaut in data-processing speed but oriented toward migrating existing Spark/Hadoop code. "B" is Google's managed service built explicitly for unified batch and stream pipelines. "C" and "D" would require building the processing yourself, meaning more work and higher risk. Since the company has no existing code and needs a mix of hourly batch jobs and live stream processing, Google wants you to select "B".

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
63
Q

Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?

A. Work with your ISP to diagnose the problem
B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem

A

Key phrase: this behavior was not reported before the update.
A - Not correct, as the application was working before with the same ISP.
B - The new code update caused the issue, so there is no reason to open a support ticket first.
C - Correct: roll back to a known good release immediately, then diagnose with Stackdriver Trace and Logging in a non-production environment.
D - Not correct: pushing the bad release again affects live production and users a second time.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
64
Q

A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?

A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine
C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux
D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk
E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service

A

A is the correct answer because the question asks for the least downtime: a persistent disk can be resized while attached to a running VM, and resize2fs can grow an ext4 filesystem online, so the database never has to stop.
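
Option A, sketched end to end. The disk name, zone, and device path are assumptions; resize2fs grows a mounted ext4 filesystem in place, so no restart is needed:

```shell
# Grow the persistent disk while it stays attached to the running VM.
gcloud compute disks resize db-data-disk \
    --size=500GB --zone=us-central1-a

# On the VM: grow the ext4 filesystem online to fill the larger disk.
sudo resize2fs /dev/sdb
```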

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
65
Q

Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.
How should you design your architecture?

A. Create a tokenizer service and store only tokenized data
B. Create separate projects that only process credit card data
C. Create separate subnetworks and isolate the components that process credit card data
D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data
E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

A

Final decision: go with option A. Having done a PCI DSS audit for a project, tokenization is the best-suited approach: storing tokenized data instead of actual card numbers keeps the PCI compliance scope as small as possible while still allowing analysis of payment-method trends.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
66
Q

You have been asked to select the storage system for the click-data of your company's large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?

A. Google Cloud SQL
B. Google Cloud Bigtable
C. Google Cloud Storage
D. Google Cloud Datastore

A

Google Cloud Bigtable is well-suited for handling high-throughput, low-latency workloads like clickstream data. It is optimized for analytics on time-series and event data at scale, and it supports high write rates, with the capacity to handle thousands of writes per second. This makes it ideal for storing large volumes of clickstream data with bursts, ensuring data is available for analysis by data science and user experience teams.

  1. High throughput for clickstream data: Bigtable is a NoSQL database designed for high write throughput, making it ideal for handling the continuous stream of click data with bursts up to 8,500 clicks per second.
  2. Scalability: Bigtable is highly scalable, allowing you to handle increasing data volumes as your website portfolio grows.
  3. Low latency: Bigtable provides low latency data access, which is important for real-time analysis and reporting on clickstream data.
  4. Integration with BigQuery: Bigtable integrates well with BigQuery, enabling your data science team to perform complex analysis and generate insights from the clickstream data.
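To make the time-series point concrete, here is a sketch of a common Bigtable row-key pattern for clickstream data (the field layout and shard count are illustrative assumptions): placing a hashed shard prefix before the timestamp spreads bursty sequential writes across tablets instead of hotspotting the newest rows:

```python
import zlib
from datetime import datetime, timezone

NUM_SHARDS = 20  # illustrative; sized to the expected write throughput

def click_row_key(site_id: str, ts: datetime, user_id: str) -> str:
    """Build a row key of the form shard#site#timestamp#user.

    Keys that lead with a timestamp send every new write to the same
    tablet; the deterministic shard prefix avoids that hotspot.
    """
    shard = zlib.crc32(user_id.encode()) % NUM_SHARDS
    return f"{shard:02d}#{site_id}#{ts.strftime('%Y%m%d%H%M%S')}#{user_id}"

key = click_row_key(
    "shop.example.com",
    datetime(2024, 5, 1, 12, 0, 0, tzinfo=timezone.utc),
    "u123",
)
```

Readers can still scan a site's clicks in time order within each shard, while bursts of writes fan out across all shards.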
67
Q

You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?

A. Write a lifecycle management rule in XML and push it to the bucket with gsutil
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days
D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron

A

B is correct. gsutil accepts lifecycle configurations in JSON format only; the XML lifecycle format is used with the Cloud Storage XML API, not with gsutil, which rules out A. A lifecycle rule with an Age condition of 90 days and a Delete action removes old backups automatically and at no extra compute cost, whereas the cron-based scripts in C and D add operational overhead and require something to run them.
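For option B, a minimal sketch of the lifecycle configuration (bucket and file names are illustrative), built in Python and written to a file that would then be applied with `gsutil lifecycle set lifecycle.json gs://backups`:

```python
import json

# Delete objects once they are older than 90 days; this mirrors the
# documented Cloud Storage lifecycle JSON shape (rules with an action
# and a condition).
lifecycle = {
    "rule": [
        {
            "action": {"type": "Delete"},
            "condition": {"age": 90},
        }
    ]
}

config_json = json.dumps(lifecycle, indent=2)
# Written to lifecycle.json, this is what gsutil pushes to the bucket;
# Cloud Storage then deletes matching objects with no cron job needed.
```

Once the rule is set, deletion is handled by the service itself, which is what optimizes ongoing spend.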

68
Q

Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.
Which product should you use?

A. Google Cloud Dataflow
B. Google Cloud Dataproc
C. Google Compute Engine
D. Google Kubernetes Engine

A

Dataproc is a managed Spark and Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc automation helps you create clusters quickly, manage them easily, and save money by turning clusters off when you don’t need them. With less time and money spent on administration, you can focus on your jobs and your data.

72
Q

The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?

A. Increase the virtual machine’s memory to 64 GB
B. Create a new virtual machine running PostgreSQL
C. Dynamically resize the SSD persistent disk to 500 GB
D. Migrate their performance metrics warehouse to BigQuery
E. Modify all of their batch jobs to use bulk inserts into the database

A

The answer is C, because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity increases its throughput and IOPS, which in turn improves the performance of MySQL.
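As a back-of-the-envelope illustration of why C helps (the per-GB rate and instance cap below are assumptions based on older SSD persistent disk documentation; verify current figures before relying on them), IOPS scale linearly with provisioned capacity up to a per-instance cap:

```python
# Assumption: ~30 IOPS per GB for SSD persistent disk, capped per
# instance; these figures change over time, so treat this as a sketch.
IOPS_PER_GB = 30
INSTANCE_CAP = 15_000

def ssd_pd_iops(size_gb: int) -> int:
    """Approximate provisioned IOPS for an SSD persistent disk."""
    return min(size_gb * IOPS_PER_GB, INSTANCE_CAP)

before = ssd_pd_iops(80)   # the current 80 GB disk
after = ssd_pd_iops(500)   # after the resize in option C
```

Under these assumed rates, the resize takes the disk from 2,400 to 15,000 IOPS, a large improvement for the same VM and no code change.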

73
Q

You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?

A. Google BigQuery
B. Google Cloud SQL
C. Google Cloud Bigtable
D. Google Cloud Storage

A

Selected Answer: C
Cloud Bigtable is the right solution and correct database choice: it provides the high throughput, low latency, and scalability needed for time-series data such as this.
Selected Answer: C
1. High Write Throughput: Bigtable excels at handling high-volume write operations, which is crucial for your application receiving data from 50,000 sensors sending 10 readings per second.
2. Low Latency: Bigtable offers very low latency for read operations, essential for real-time charting and data visualization.
3. Time-Series Data: Bigtable is well-suited for storing and querying time-series data, like your weather sensor readings with timestamps.
4. Scalability: Bigtable can handle massive amounts of data and scale seamlessly as your application grows.

75
Q

Question #21
Your company’s user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a 99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once the additional user load is introduced.
What should you do?

A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones
D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing user’s usage of the app, and deploy enough resources to handle 200% of expected load

A

Resilience testing is not about load alone; it is about terminating resources and verifying the service is not affected. The answer is B: the best way to test resilience is to introduce chaos into the infrastructure while synthetic load triggers the autoscaling logic.

76
Q

Question #22
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.

You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? (Choose two.)

A. Remove Python after running pip
B. Remove dependencies from requirements.txt
C. Use a slimmed-down base image like Alpine Linux
D. Use larger machine types for your Google Container Engine node pools
E. Copy the source after the package dependencies (Python and pip) are installed

A

C & E:
C: The smaller the base image and the fewer its dependencies, the faster the image is pulled and the container starts.
E: Docker image builds use layer caching, so the order of Dockerfile instructions matters. An application’s dependencies change less frequently than its Python source code, so copying the source after installing the dependencies lets Docker reuse the cached dependency layer and rebuild only the layer containing the code change.

C & E are the correct answers.
Kindly refer - https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/

78
Q

Question #23
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.
What should you do?

A. Deploy fewer changes to production
B. Deploy smaller changes to production
C. Increase the load on your test and staging environments
D. Deploy changes to a small subset of users before rolling out to production

A

D. Deploy changes to a small subset of users before rolling out to production.

As the question statement points out, the fix must cover both test and deployment procedures. Option C only addresses testing; it says nothing about the deployment procedure to production.

The only option that covers both is D. (Yes, it may seem unacceptable to have users do the "testing", but deploying to a small subset first, a canary deployment, surfaces performance bugs under real production traffic before the full rollout.)
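A minimal sketch of the canary idea behind option D (function names and the 5% figure are illustrative): hash each user ID to route a small, stable cohort to the new version, so performance bugs surface on limited traffic before the full rollout:

```python
import zlib

def route_version(user_id: str, canary_percent: int = 5) -> str:
    """Deterministically assign a user to the canary or stable version.

    Hashing keeps each user's assignment stable across requests, so a
    regression consistently affects the same small cohort.
    """
    bucket = zlib.crc32(user_id.encode()) % 100
    return "canary" if bucket < canary_percent else "stable"

assignments = [route_version(f"user-{i}") for i in range(10_000)]
canary_share = assignments.count("canary") / len(assignments)
# canary_share lands near 0.05: only ~5% of users see the new version
```

If the canary cohort shows regressions, the rollout stops before most users are affected.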

79
Q

Question #24
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services.
You want to know which service takes the longest in those cases.
What should you do?

A. Set timeouts on your application so that you can fail requests faster
B. Send custom metrics for each of your requests to Stackdriver Monitoring
C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice

A

D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice

Stackdriver Trace is a distributed tracing system that allows you to understand the relationships between requests and the various microservices that they touch as they pass through your application. By instrumenting your application with Stackdriver Trace, you can get a detailed breakdown of the latencies at each microservice, which can help you identify which service is taking the longest in those cases where a small number of API requests take a very long time.

Setting timeouts on your application or sending custom metrics to Stackdriver Monitoring may not provide the level of detail that you need to identify the specific service that is causing the latency issues. Looking for insights in Stackdriver Monitoring may also not provide the necessary level of detail, as it may not show the individual latencies at each microservice.
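This is not the Stackdriver Trace API itself, but a sketch of the data distributed tracing produces (service names and latencies are invented): per-service spans for a single slow request, from which the worst hop is immediately visible, unlike aggregate monitoring metrics:

```python
# Spans recorded along one request's path: (service, latency_ms).
spans = [
    ("api-gateway", 12),
    ("auth-service", 8),
    ("catalog-service", 640),  # the outlier a trace view would highlight
    ("pricing-service", 25),
]

total_ms = sum(ms for _, ms in spans)
slowest_service, slowest_ms = max(spans, key=lambda s: s[1])
# A trace UI renders exactly this per-request breakdown; monitoring
# dashboards (options B and C) only show aggregate latency.
```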

80
Q

Question #25
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.
What should you do?

A. Use a different database
B. Choose larger instances for your database
C. Create snapshots of your database more regularly
D. Implement routinely scheduled failovers of your databases

A

The answer is D.

The question is admittedly ambiguous. Larger instances (B) have more CPUs and memory, which could reduce pressure during high-traffic periods, but that does not address the real failure: the replica was never promoted. Routinely scheduled failovers (D) let the team exercise the failover path when it is not required, which ensures it actually works when it is required.

81
Q

Question #26
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.
Which approach should you use?

A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage

A

D is correct.

A and C can be quickly ruled out because neither addresses the requirement that metrics be retained for 5 years.

Between B and D, the difference is where to store the data: BigQuery or Cloud Storage. Since the main concern is an extended retention period, D is the better choice, and "retained for 5 years for future analysis" further qualifies it, for example by using the Coldline storage class.

As for BigQuery, while it is also low-cost storage, its main purpose is analysis. Logs stored in Cloud Storage are easy to load into BigQuery, or to query directly in place, if and whenever needed.

82
Q

Question #27
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4 TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?

A. Google Cloud Dedicated Interconnect
B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network

A

Going by option elimination:
A. Google Cloud Dedicated Interconnect
» A secure, fast, private connection from GCP to the data center, suitable for frequent large updates to a 4 TB database. Cost is not mentioned in the requirements, so nothing eliminates this option. This is the choice.
B. Google Cloud VPN connected to the data center network
» Traffic flows over the internet (encrypted), and VPN bandwidth is limited for this volume of replication. See https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview. Eliminate.
C. A NAT and TLS translation gateway installed on-premises
» Does not provide private address space communication. Eliminate.
D. A Google Compute Engine instance with a VPN server installed connected to the data center network
» A slower, self-managed option. Eliminate.

83
Q

Question #28
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process.
What should you do?

A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor’s view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket

A

B. https://cloud.google.com/iam/docs/roles-audit-logging#scenario_external_auditors

84
Q

Question #29
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?

A. In the source code
B. In an environment variable
C. In a secret management system
D. In a config file that has restricted access through ACLs

A

C
Google Secret Manager was designed explicitly for this purpose.

86
Q

What is the primary service offered by Google Cloud for computing?

A

Google Compute Engine

87
Q

True or False: Google Cloud Storage is a service for storing and retrieving any amount of data at any time.

A

True

88
Q

Fill in the blank: Google Cloud’s managed Kubernetes service is called _______.

A

Google Kubernetes Engine

89
Q

What is the purpose of Google Cloud Pub/Sub?

A

To provide messaging services for real-time analytics and event-driven architectures.

90
Q

Which service is used for data warehousing in Google Cloud?

A

Google BigQuery

91
Q

What type of database is Google Cloud Firestore?

A

NoSQL document database

92
Q

True or False: Google Cloud Functions is a serverless execution environment.

A

True

93
Q

What is the main benefit of using Google Cloud VPC?

A

To provide a private network for resources in Google Cloud.

94
Q

Which Google Cloud service is designed for machine learning?

A

AI Platform

95
Q

What does IAM stand for in Google Cloud?

A

Identity and Access Management

96
Q

True or False: Google Cloud Spanner is a relational database service.

A

True

97
Q

What is the key benefit of using Google Cloud Load Balancing?

A

To distribute incoming traffic across multiple resources.

98
Q

Fill in the blank: Google Cloud’s service for managing APIs is called _______.

A

Apigee

99
Q

Which service allows you to manage and deploy containerized applications on Google Cloud?

A

Google Kubernetes Engine

100
Q

What is the primary function of Google Cloud Dataflow?

A

To process and analyze large data sets in real-time.

101
Q

True or False: Google Cloud Identity provides identity management for users and groups.

A

True

102
Q

What is the purpose of Google Cloud Monitoring?

A

To provide visibility into the performance and availability of applications.

103
Q

Which service is specifically designed for serverless application development?

A

Google Cloud Functions

104
Q

What does the term ‘region’ refer to in Google Cloud?

A

A specific geographical location where resources are hosted.

105
Q

Fill in the blank: Google Cloud’s storage solution for unstructured data is called _______.

A

Google Cloud Storage

106
Q

What is the primary use case for Google Cloud Dataproc?

A

To run Apache Spark and Apache Hadoop clusters.

107
Q

True or False: Google Cloud’s Bigtable is designed for real-time analytics.

A

True

108
Q

Which Google Cloud service is ideal for building data pipelines?

A

Google Cloud Dataflow

109
Q

What is the function of Google Cloud Scheduler?

A

To automate the execution of tasks on a scheduled basis.

110
Q

Fill in the blank: Google Cloud’s service for scalable NoSQL databases is called _______.

A

Google Cloud Bigtable

111
Q

What is the purpose of Google Cloud Security Command Center?

A

To provide security and risk management for Google Cloud resources.

112
Q

Question #26
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings. Which approach should you use?

A. Grant the security team access to the logs in each Project
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery
C. Configure Stackdriver Monitoring for all Projects with the default retention policies
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage

A

✅ Answer: B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery

Explanation: Stackdriver (now Cloud Monitoring) has limited retention periods, which are not sufficient for a 5-year requirement. BigQuery provides scalable and cost-effective long-term storage while allowing efficient querying and analysis of historical data.

113
Q

Question #27
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4TB, and large updates are frequent. Replication requires private address space communication. Which networking approach should you use?

A. Google Cloud Dedicated Interconnect
B. Google Cloud VPN connected to the data center network
C. A NAT and TLS translation gateway installed on-premises
D. A Google Compute Engine instance with a VPN server installed connected to the data center network

A

✅ Answer: A. Google Cloud Dedicated Interconnect

Explanation: Dedicated Interconnect provides private, high-bandwidth, low-latency connectivity between on-premises and GCP, making it the best option for frequent large updates. Cloud VPN (option B) is a more affordable but lower-bandwidth alternative, which might not be sufficient for a 4TB database with large updates.

114
Q

Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do?

A. Create custom Google Stackdriver alerts and send them to the auditor
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor’s view
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket

A

✅ Answer: B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor

Explanation: BigQuery allows efficient querying and analysis of logs. Using ACLs and views, you can restrict access to only necessary data for auditors. This is the most efficient way to streamline IAM policy audits without manual intervention.

115
Q

Question #29
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely. Where should you store the credentials?

A. In the source code
B. In an environment variable
C. In a secret management system
D. In a config file that has restricted access through ACLs

A

✅ Answer: C. In a secret management system

Explanation: Secret management systems (such as Google Secret Manager) provide a secure, centralized, and access-controlled way to store sensitive data. Storing credentials in source code (A) is highly insecure. Environment variables (B) can be leaked through logs. Config files with ACLs (D) provide some security but are not ideal for dynamic secret management.
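A sketch of the access pattern (the interfaces below are hypothetical stand-ins, not the Secret Manager client library): each microservice fetches its credentials at startup from an injected secret provider, so secrets never live in source code, images, or config files, and access can be centrally controlled and audited:

```python
from typing import Protocol

class SecretProvider(Protocol):
    def access(self, name: str) -> str: ...

class InMemoryProvider:
    """Stand-in for a real secret manager, which would add IAM
    access control, audit logging, versioning, and rotation."""

    def __init__(self, secrets: dict):
        self._secrets = secrets

    def access(self, name: str) -> str:
        return self._secrets[name]

def build_db_dsn(provider: SecretProvider, service: str) -> str:
    # The credential is fetched at startup, never baked into the image.
    password = provider.access(f"{service}-db-password")
    return f"postgresql://app:{password}@db.internal:5432/appdb"

provider = InMemoryProvider({"orders-db-password": "s3cret"})
dsn = build_db_dsn(provider, "orders")
```

Because all 30 microservices share one provider interface, rotating a credential means updating one secret, not redeploying 30 configs.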

116
Q

Question #30
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment. You want to advocate for the adoption of Google Cloud Deployment Manager. What are two business risks of migrating to Cloud Deployment Manager? (Choose two.)

A. Cloud Deployment Manager uses Python
B. Cloud Deployment Manager APIs could be deprecated in the future
C. Cloud Deployment Manager is unfamiliar to the company’s engineers
D. Cloud Deployment Manager requires a Google APIs service account to run
E. Cloud Deployment Manager can be used to permanently delete cloud resources
F. Cloud Deployment Manager only supports automation of Google Cloud resources

A

✅ Answer: B. Cloud Deployment Manager APIs could be deprecated in the future & C. Cloud Deployment Manager is unfamiliar to the company’s engineers

Explanation:

(B) APIs can be deprecated, which could impact long-term maintenance and require migration efforts later.
(C) If engineers are unfamiliar with Cloud Deployment Manager, there will be a learning curve, which could slow adoption and increase training costs.
Other options:

(A) Python usage is not a risk—many engineers are familiar with it.
(D) Requiring a Google APIs service account is standard practice in cloud automation.
(E) Any deployment tool can delete resources, but proper IAM policies and safeguards mitigate this risk.
(F) While Cloud Deployment Manager is limited to Google Cloud, this is not necessarily a business risk unless multi-cloud support is required.

117
Q

Question #31
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:

Be based on open-source technology for cloud portability
Dynamically scale compute capacity based on demand
Support continuous software delivery
Run multiple segregated copies of the same application stack
Deploy application bundles using dynamic templates
Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?

A. Google Kubernetes Engine, Jenkins, and Helm
B. Google Kubernetes Engine and Cloud Load Balancing
C. Google Kubernetes Engine and Cloud Deployment Manager
D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing

A

✅ Answer: A. Google Kubernetes Engine, Jenkins, and Helm

Explanation:

Google Kubernetes Engine (GKE) provides scalability, portability, and support for segregated application stacks.
Jenkins supports continuous software delivery.
Helm allows for dynamic application deployment using templated configurations.
Cloud Load Balancing is useful for traffic routing but is not enough on its own to meet all the requirements.

118
Q

Question #32
You have created several preemptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted. What should you do?

A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory
B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance
D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url

A

✅ Answer: C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance

Explanation:

Preemptible instances can be terminated at any time, but Google Compute Engine (GCE) provides a short grace period (about 30 seconds) for shutdown scripts to execute.
The correct method is to use the shutdown-script metadata key, which is automatically triggered before termination.
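On the application side, the shutdown script's job is to give the workload a chance to clean up. A sketch of what that cleanup hook can look like in the application itself (the cleanup steps are invented): handle SIGTERM, stop accepting work, and flush state within the preemption grace period:

```python
import os
import signal

shutting_down = False
cleanup_log = []

def handle_term(signum, frame):
    """Stop taking new work and flush state; a shutdown script would
    typically deliver this signal to the app before the VM is preempted."""
    global shutting_down
    shutting_down = True
    cleanup_log.append("flushed-buffers")
    cleanup_log.append("closed-connections")

signal.signal(signal.SIGTERM, handle_term)

# Simulate the shutdown script signalling the process:
os.kill(os.getpid(), signal.SIGTERM)
assert shutting_down
```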

119
Q

Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?

A. Add each tier to a different subnetwork
B. Set up software-based firewalls on individual VMs
C. Add tags to each tier and set up routes to allow the desired traffic flow
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow

A

✅ Answer: D. Add tags to each tier and set up firewall rules to allow the desired traffic flow

Explanation:

Firewall rules are the best way to enforce network segmentation and security while ensuring proper traffic flow.
Tags allow the application of specific rules per tier to allow only the necessary communication (e.g., Web → API, API → Database).
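The tag-based rule set can be pictured as a small allow-list of source/target tag pairs. The sketch below is a conceptual model of the evaluation only, not the actual VPC firewall API; the tier tag names are assumptions:

```python
# Conceptual model of tag-based firewall rules for a 3-tier app.
# Real rules are created with `gcloud compute firewall-rules create`
# using --source-tags and --target-tags; this only models the intent.

# Each entry allows traffic from a source tag to a target tag.
ALLOWED_FLOWS = {
    ("web", "api"),  # web tier may call the API tier
    ("api", "db"),   # API tier may call the database tier
}

def is_allowed(source_tag: str, target_tag: str) -> bool:
    """Return True if a rule permits source -> target traffic."""
    return (source_tag, target_tag) in ALLOWED_FLOWS

print(is_allowed("web", "api"))  # True
print(is_allowed("web", "db"))   # False: no direct web -> database path
```

The key point is that anything not explicitly allowed (such as web → database) is denied, which is exactly the segmentation the question asks for.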

120
Q

Question #34
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team. Which three actions should you take? (Choose three.)

A. Use Stackdriver Logging to search for the module log entries
B. Read the debug GCE Activity log using the API or Cloud Console
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
D. Identify whether a live migration event of the failed server occurred using the activity log
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen

A

✅ Answers: A. Use Stackdriver Logging to search for the module log entries, C. Use gcloud or Cloud Console to connect to the serial console and observe the logs, E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics

Explanation:

Stackdriver Logging (A) helps identify errors in system logs related to the kernel module.
The serial console (C) is useful for diagnosing boot issues or crashes caused by kernel modifications.
Stackdriver Metrics (E) can reveal any performance issues, CPU spikes, or memory leaks after the module was installed.
Other options:

(B) GCE Activity logs do not provide detailed Linux kernel logs.
(D) Live migration would not cause failures if properly configured.
(F) Running a debug VM locally is impractical and does not provide a cloud-native solution.

121
Q

Question #35
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should you take? (Choose two.)

A. Load logs into Google BigQuery
B. Load logs into Google Cloud SQL
C. Import logs into Google Stackdriver
D. Insert logs into Google Cloud Bigtable
E. Upload log files into Google Cloud Storage

A

✅ Answers: A. Load logs into Google BigQuery, E. Upload log files into Google Cloud Storage

Explanation:

Google Cloud Storage (E) is ideal for low-cost, long-term archival and disaster recovery.
BigQuery (A) enables efficient analytics on large datasets, making it ideal for log analysis.
Other options:

(B) Cloud SQL is for relational databases and is not designed for large-scale log analysis.
(C) Stackdriver is useful for monitoring but not designed for long-term storage.
(D) Bigtable is useful for high-throughput operations but is not the best choice for archiving logs.

123
Q

What is the primary purpose of Google Cloud?

A

To provide a suite of cloud computing services that run on the same infrastructure that Google uses internally.

124
Q

True or False: Google Cloud offers both Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).

A

True. Compute Engine is an IaaS offering, while App Engine is a PaaS offering.

125
Q

Fill in the blank: Google Cloud’s compute service is known as _______.

A

Google Compute Engine

126
Q

What is the main benefit of using Google Cloud for businesses?

A

Scalability, flexibility, and cost-effectiveness.

127
Q

Which service allows for the management of application programming interfaces (APIs) in Google Cloud?

A

Google Cloud Endpoints

128
Q

What is the key feature of Google Cloud’s BigQuery?

A

A serverless data warehouse that enables fast SQL queries and interactive analysis of large datasets.

129
Q

Which Google Cloud service is used for data storage?

A

Google Cloud Storage

130
Q

True or False: Google Cloud provides built-in security features for data protection.

A

True. For example, data is encrypted at rest and in transit by default.

131
Q

What is the purpose of Google Kubernetes Engine (GKE)?

A

To manage and orchestrate containerized applications using Kubernetes.

132
Q

Which Google Cloud service is specifically designed for machine learning?

A

Google Cloud AI Platform

133
Q

What is a major compliance certification held by Google Cloud?

A

ISO/IEC 27001 (Google Cloud also maintains certifications and attestations such as SOC 1/2/3 and PCI DSS).

134
Q

Fill in the blank: Google Cloud’s network infrastructure is designed to be _______.

A

highly reliable and low-latency

135
Q

What pricing model does Google Cloud primarily use?

A

Pay-as-you-go

136
Q

Which service in Google Cloud is used for serverless computing?

A

Google Cloud Functions

137
Q

True or False: Google Cloud supports multi-cloud strategies.

A

True. For example, Anthos can manage workloads running on other clouds.

138
Q

What is the role of Google Cloud’s Identity and Access Management (IAM)?

A

To manage access to resources by defining who (identity) has what access (roles) to which resources.

139
Q

Fill in the blank: Google Cloud’s database service for relational databases is called _______.

A

Cloud SQL

140
Q

What is the primary function of Google Cloud Pub/Sub?

A

To enable real-time messaging between applications.

141
Q

Which Google Cloud service allows for the creation of virtual machines?

A

Google Compute Engine

142
Q

True or False: Google Cloud can be used for both development and production environments.

A

True.

143
Q

What is the function of Google Cloud’s Data Loss Prevention (DLP) API?

A

To help discover, classify, and protect sensitive data.

144
Q

Which tool does Google Cloud provide for managing infrastructure as code?

A

Google Cloud Deployment Manager

145
Q

What is the advantage of using Google Cloud’s global network?

A

Improved performance and reduced latency for users worldwide.

146
Q

Fill in the blank: Google Cloud’s serverless analytics service is called _______.

A

Google Cloud Dataflow

147
Q

What type of data storage is Google Cloud Firestore designed for?

A

NoSQL document database

148
Q

What feature does Google Cloud provide to ensure application availability?

A

Load balancing

149
Q

True or False: Google Cloud provides tools for both developers and IT operations teams.

A

True.

150
Q

Your company is migrating a mission-critical application to Google Cloud. The application must:

Provide 99.99% availability
Handle regional failures automatically
Scale based on traffic
Minimize operational overhead
Which architecture should you implement?

A. Deploy the application to a single Google Cloud region with auto-scaling enabled
B. Use a multi-region deployment with a global load balancer and managed services
C. Deploy the application across two zones in a single region with a regional load balancer
D. Use Compute Engine instances in multiple zones and set up a manual failover process

A

✅ Answer: B. Use a multi-region deployment with a global load balancer and managed services

Explanation:

Multi-region deployment ensures availability even during regional failures.
Global load balancing allows traffic to be automatically routed to available regions.
Managed services (e.g., Cloud Run, GKE, or App Engine) reduce operational overhead and improve scalability.
Option A lacks regional redundancy.
Option C is limited to one region and does not handle regional failures.
Option D requires manual intervention, which is not ideal for a mission-critical system.

151
Q

Your company is planning to deploy a new customer-facing application. The compliance team requires that:

All data is encrypted at rest and in transit
Customer data is only stored within a specific geographic region
The application meets SOC 2 compliance standards
Which two Google Cloud solutions should you use? (Choose two.)

A. Enable Customer-Managed Encryption Keys (CMEK) for data storage
B. Use Cloud Key Management Service (KMS) to generate and control encryption keys
C. Deploy Cloud SQL with a multi-region storage configuration
D. Store data in Google Cloud Storage (GCS) with a custom encryption policy
E. Use Cloud Spanner to store customer data across multiple continents

A

✅ Answers: A. Enable CMEK for data storage, B. Use Cloud Key Management Service (KMS) to generate and control encryption keys

Explanation:

CMEK (A) ensures that only your organization controls encryption keys, meeting compliance requirements.
Cloud KMS (B) provides centralized key management, supporting encryption at rest and in transit.
Option C (Cloud SQL multi-region) conflicts with the geographic restriction.
Option D (GCS with custom encryption) helps with security but does not explicitly address regional restrictions.
Option E (Cloud Spanner) stores data globally, which violates the geographic compliance requirement.

152
Q

Your company needs to process real-time financial transactions with the following requirements:

Latency must be below 100ms
System must support ACID transactions
The solution must be highly available with minimal maintenance
Automatic scaling should be enabled
Which database solution should you use?

A. Cloud SQL with a read replica setup
B. Cloud Spanner with multi-region deployment
C. BigQuery with federated queries
D. Firestore in native mode

A

✅ Answer: B. Cloud Spanner with multi-region deployment

Explanation:

Cloud Spanner is the only Google Cloud database that provides strong consistency (ACID transactions) and high availability.
Multi-region deployment ensures low latency and failover protection.
Option A (Cloud SQL) has replication, but read replicas do not support strong consistency.
Option C (BigQuery) is for batch analytics, not real-time transactions.
Option D (Firestore) is a NoSQL document database; although it supports transactions, it is not designed for high-throughput relational financial workloads.

153
Q

A retail company wants to migrate its existing e-commerce platform to Google Cloud with the following requirements:

Minimize downtime during migration
Allow rollback in case of issues
Ensure customer data is not lost during migration
Enable auto-scaling to handle peak traffic periods
Which migration strategy should you use?

A. Lift-and-shift migration with Compute Engine instances
B. Deploy the new application on Google Cloud and switch traffic gradually using a blue/green deployment strategy
C. Migrate all data first, then launch the new application once migration is complete
D. Deploy the new application on a separate VPC and switch traffic using Cloud VPN

A

✅ Answer: B. Deploy the new application on Google Cloud and switch traffic gradually using a blue/green deployment strategy

Explanation:

Blue/Green deployments allow gradual traffic shifting, minimizing risk and enabling rollback if needed.
Option A (Lift-and-shift) does not support rollback and can result in downtime.
Option C (Migrating all data first) risks long downtime and customer impact.
Option D (Cloud VPN) is useful for hybrid connections but does not solve deployment risks.

154
Q

Question #40
Your team is building a cloud-native analytics application that processes streaming data in real-time. The application must:

Process millions of events per second
Provide low-latency transformations
Ensure exactly-once event processing
Scale dynamically based on traffic
Which Google Cloud service should you use?

A. Cloud Pub/Sub + Cloud Functions
B. Dataflow + Cloud Pub/Sub
C. Cloud Data Fusion + BigQuery
D. Cloud Dataproc + Cloud Storage

A

✅ Answer: B. Dataflow + Cloud Pub/Sub

Explanation:

Cloud Pub/Sub handles event ingestion and message distribution at scale.
Dataflow is designed for real-time stream processing with exactly-once processing and auto-scaling.
Option A (Cloud Functions) is serverless but not optimized for large-scale streaming.
Option C (Data Fusion + BigQuery) is more suitable for batch ETL pipelines.
Option D (Dataproc + Cloud Storage) is better for batch processing, not real-time streaming.

155
Q

Your company is planning to migrate its legacy applications to Google Cloud. The CTO emphasizes the importance of aligning cloud solutions with both business and technical requirements.

Which statement best describes the relationship between business and technical requirements in cloud architecture?

A. Business requirements focus on technical features, while technical requirements determine business goals.
B. Business requirements define organizational needs, while technical requirements describe how cloud solutions fulfill them.
C. Business requirements and technical requirements are independent and do not influence each other.
D. Technical requirements should always take priority over business requirements.

A

✅ Answer: B. Business requirements define organizational needs, while technical requirements describe how cloud solutions fulfill them.

Explanation:

Business requirements are high-level objectives (e.g., cost reduction, scalability, security compliance).
Technical requirements define the cloud solutions that meet those business goals (e.g., using Cloud Spanner for scalability or CMEK for security compliance).
Option A is incorrect because business requirements do not focus on technical features—they set the strategic goals.
Option C is incorrect because business and technical requirements are interconnected.
Option D is incorrect because both requirements must be balanced.

156
Q

Business Requirements in Cloud Migration
A financial services company is moving to Google Cloud. The business team has defined the following objectives:

Reduce operational costs by 30%
Improve system uptime to 99.99%
Ensure regulatory compliance with industry standards (e.g., GDPR, PCI DSS)
Provide a seamless customer experience
Which two technical requirements align with these business objectives? (Choose two.)

A. Use preemptible VMs to reduce compute costs
B. Deploy applications in multiple Google Cloud regions for high availability
C. Store customer data in Cloud Storage with multi-region replication
D. Run applications only in a single Google Cloud region to minimize complexity
E. Disable encryption to improve application performance

A

✅ Answers: B. Deploy applications in multiple Google Cloud regions for high availability, C. Store customer data in Cloud Storage with multi-region replication

Explanation:

Objective 1 (Cost Reduction) → Using cost-effective storage like Cloud Storage aligns with this.
Objective 2 (High Availability) → Multi-region deployment ensures 99.99% uptime.
Objective 3 (Compliance) → Cloud Storage supports compliance with regulatory frameworks.
Option A (Preemptible VMs) is cost-effective but not reliable for a financial system.
Option D (Single-region deployment) does not meet high availability needs.
Option E (Disabling encryption) violates security compliance requirements.

157
Q

Balancing Business and Technical Trade-offs
A retail company wants to modernize its e-commerce platform on Google Cloud. The business team wants:

Lower costs
High availability (99.99%)
Fast performance for global users
The technical team suggests using Cloud Spanner for the database, but the CFO is concerned about its high cost.

How should the cloud architect balance business and technical trade-offs?

A. Use Cloud Spanner because it meets the technical requirement of availability, regardless of cost.
B. Use Cloud SQL instead, as it is cheaper and supports replication, even if availability is slightly lower.
C. Deploy an on-premises database and connect it to Google Cloud using a VPN.
D. Store all data in Cloud Storage to avoid database costs.

A

✅ Answer: B. Use Cloud SQL instead, as it is cheaper and supports replication, even if availability is slightly lower.

Explanation:

Cloud Spanner provides high availability but is expensive.
Cloud SQL (with read replicas) provides a balance between cost and availability.
Option A (Cloud Spanner) meets technical needs but ignores cost concerns.
Option C (On-premises database) does not modernize the platform as intended.
Option D (Cloud Storage) is not a database solution and would cause performance issues.

158
Q

Security and Compliance in Business Requirements
A healthcare provider is migrating patient records to Google Cloud. HIPAA compliance requires:

Encryption of all patient data
Access control policies for sensitive information
Audit logs to track access and modifications
Which two Google Cloud services should be used to meet these requirements? (Choose two.)

A. Cloud KMS (Key Management Service)
B. Google Cloud Armor
C. IAM (Identity and Access Management)
D. Firebase for patient data storage
E. Google Cloud CDN

A

✅ Answers: A. Cloud KMS (Key Management Service), C. IAM (Identity and Access Management)

Explanation:

Cloud KMS ensures HIPAA-compliant encryption of patient data.
IAM allows role-based access control to enforce security policies.
Option B (Cloud Armor) protects against DDoS attacks but does not handle data access controls.
Option D (Firebase) is not designed for HIPAA-compliant healthcare records.
Option E (Cloud CDN) speeds up content delivery but does not handle compliance or security.

159
Q

Performance vs. Cost in Cloud Architecture
Your company needs to run a machine learning workload on Google Cloud. The business team wants:

High performance for large datasets
Low operational cost
Scalability to handle unpredictable traffic
Which cloud solution best meets these requirements?

A. Use Compute Engine with GPUs and manually scale instances
B. Use Vertex AI with auto-scaling and managed infrastructure
C. Run the workload on Cloud Functions for cost savings
D. Store data in Cloud SQL and run ML models locally

A

✅ Answer: B. Use Vertex AI with auto-scaling and managed infrastructure

Explanation:

Vertex AI provides high-performance ML with auto-scaling while reducing operational costs.
Option A (Compute Engine with GPUs) provides performance but requires manual scaling, increasing complexity.
Option C (Cloud Functions) is not suitable for ML workloads.
Option D (Cloud SQL + local ML) does not provide cloud-based scalability.
Key Takeaways
Business requirements define strategic goals (e.g., cost reduction, security, scalability).
Technical requirements define cloud solutions that fulfill those goals (e.g., using Cloud SQL for cost-effective scalability).
Cloud architects must balance performance, availability, security, and cost to meet both business and technical needs.

160
Q

Identifying Business and Technical Needs in Cloud Migration
A manufacturing company is planning to migrate to Google Cloud. The business team defines these objectives:

Reduce IT costs by 40%
Improve real-time analytics for production data
Ensure high availability of mission-critical applications
Enhance data security and regulatory compliance
Which two technical solutions best align with these business needs? (Choose two.)

A. Use Google Kubernetes Engine (GKE) to deploy scalable microservices
B. Store production data in Cloud Storage Nearline to reduce costs
C. Deploy databases in Cloud Spanner with multi-region availability
D. Use Compute Engine with preemptible VMs for long-running production workloads
E. Run analytics on BigQuery for real-time production insights

A

Answers: A. Google Kubernetes Engine (GKE), E. BigQuery

Explanation:

Objective 1 (Cost Reduction) → GKE optimizes cost with auto-scaling.
Objective 2 (Real-Time Analytics) → BigQuery is designed for fast, scalable analytics.
Objective 3 (High Availability) → GKE ensures application availability.
Objective 4 (Compliance & Security) → GKE supports security best practices.
Option B (Cloud Storage Nearline) is for archival storage, not real-time data.
Option C (Cloud Spanner) is great for availability but may be too expensive.
Option D (Preemptible VMs) is not ideal for mission-critical workloads.

161
Q

Choosing a Cloud Architecture to Meet Business Goals
A global media company needs a cloud storage solution for video content. The business team has the following requirements:

Fast delivery of videos worldwide
Low storage costs for older content
Scalability for increasing content volume
Which combination of Google Cloud solutions meets these requirements?

A. Store all videos in Cloud SQL and serve them using Cloud CDN
B. Use Cloud Storage Multi-Regional for new videos and Cloud Storage Coldline for archived content
C. Use Bigtable to store video files and Pub/Sub to stream them
D. Store all videos in a Google Cloud Dataproc cluster for processing

A

✅ Answer: B. Use Cloud Storage Multi-Regional for new videos and Cloud Storage Coldline for archived content

Explanation:

Cloud Storage Multi-Regional provides fast delivery and scalability.
Cloud Storage Coldline reduces costs for older content.
Option A (Cloud SQL + CDN) is incorrect because Cloud SQL is not a storage solution for video.
Option C (Bigtable + Pub/Sub) is incorrect because Bigtable is not optimized for video storage.
Option D (Dataproc) is incorrect because Dataproc is for processing, not storage and delivery.

162
Q

Balancing Security and Cost for Cloud-Based Applications
A financial company is designing an online banking application in Google Cloud. The business team requires:

High security to protect financial transactions
Fast response times for users
Cost-effective scaling
Which Google Cloud approach meets these business and technical requirements?

A. Deploy the application on Cloud Run with Cloud SQL
B. Use App Engine with Identity-Aware Proxy (IAP) for authentication
C. Run the application on Google Kubernetes Engine (GKE) with private networking and Cloud Load Balancing
D. Use Compute Engine with Preemptible VMs for cost savings

A

✅ Answer: C. Google Kubernetes Engine (GKE) with private networking and Cloud Load Balancing

Explanation:

GKE with private networking provides secure transactions.
Cloud Load Balancing ensures fast and scalable response times.
Option A (Cloud Run + Cloud SQL) lacks enterprise security controls.
Option B (App Engine + IAP) is good for security but less scalable than GKE.
Option D (Preemptible VMs) is not suitable for a financial application that needs stability.

163
Q

Cloud Cost Optimization Strategy
Your company is moving to Google Cloud and wants to optimize costs. The business team wants to:

Minimize cloud infrastructure expenses
Maintain performance and availability
Scale up or down based on demand
Which two strategies should you recommend? (Choose two.)

A. Use preemptible VMs for non-critical workloads
B. Deploy all workloads in a single region to reduce costs
C. Implement autoscaling for Compute Engine or Kubernetes clusters
D. Use Cloud Spanner for all databases, regardless of workload type
E. Store infrequently accessed data in Cloud Storage Coldline

A

Answers: A. Use preemptible VMs for non-critical workloads, C. Implement autoscaling

Explanation:

Preemptible VMs reduce compute costs for batch jobs.
Autoscaling optimizes resource allocation and minimizes costs.
Option B (Single region) reduces costs but lowers availability.
Option D (Cloud Spanner) is powerful but expensive for small workloads.
Option E (Cloud Storage Coldline) is good for archival storage, but not for active workloads.

164
Q

Question #41: Sanitizing PII Before Storing in Bigtable
Your customer support tool logs email and chat conversations to Cloud Bigtable for retention and analysis.
What is the recommended approach for sanitizing this data of personally identifiable information (PII) or payment card information (PCI)?

A

✅ Answer: C. De-identify the data with the Cloud Data Loss Prevention (DLP) API

Explanation:

Cloud DLP is a Google-managed service that scans and de-identifies PII and PCI before storage.
Option A (SHA-256 hashing) is not true de-identification; hashes of low-entropy values such as card numbers can be reversed by brute force, and the sensitive data is obscured rather than removed.
Option B (Elliptic Curve Encryption) encrypts data but does not remove PII.
Option D (Regex-based redaction) is error-prone and hard to maintain compared to Cloud DLP.
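A quick illustration of why option D is brittle: a naive regex for card numbers misses common formatting variants that Cloud DLP’s managed info-type detectors handle for you. The pattern below is deliberately simplistic:

```python
import re

# Naive PCI redaction: matches only 16 contiguous digits.
# A deliberately simplistic pattern, shown to illustrate the pitfall.
NAIVE_CARD_RE = re.compile(r"\b\d{16}\b")

def naive_redact(text: str) -> str:
    """Replace bare 16-digit runs with a placeholder."""
    return NAIVE_CARD_RE.sub("[REDACTED]", text)

print(naive_redact("card 4111111111111111"))     # redacted
print(naive_redact("card 4111 1111 1111 1111"))  # NOT redacted: spacing defeats the pattern
```

Dashed or spaced card numbers, partial numbers quoted in chat, and other PII types each need their own patterns, which is why maintaining a regex list by hand is error-prone compared to a managed de-identification service.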

165
Q

Storing a Custom Utility in Cloud Shell
You are using Cloud Shell and need to install a custom utility for use in a few weeks.
Where can you store the file so it persists across sessions and is in the default execution path?

A

✅ Answer: A. ~/bin

Explanation:

Cloud Shell sessions reset after a period of inactivity, but the home directory persists across sessions, and ~/bin is included in the default execution path (PATH).
Option B (Cloud Storage) is persistent but not in the execution path.
Option C (/google/scripts) is read-only.
Option D (/usr/local/bin) is outside the home directory, so anything installed there is lost when the Cloud Shell environment is recycled.

166
Q

Private Connection Between Compute Engine and On-Premises
You want to create a private connection between your Compute Engine instances and your on-premises data center.
You require a minimum 20 Gbps connection and want to follow Google best practices.

A

✅ Answer: A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.

Explanation:

Dedicated Interconnect provides private, high-speed (up to 100 Gbps) connections between on-premises and Google Cloud VPCs.
Option B (Cloud VPN) tunnels are limited to roughly 3 Gbps each, so a reliable 20 Gbps private connection would require many tunnels and is not the recommended approach.
Options C & D (Cloud CDN) are for caching web content, not private connections.

167
Q

Cost-Effective GCP Usage for a Startup
You are analyzing and defining business processes to support your startup’s trial usage of Google Cloud.
You do not yet know what consumer demand will be.
Your manager requires you to minimize costs while following Google best practices.

A

✅ Answer: B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.

Explanation:

Google Free Tier provides limited free usage of GCP services.
Sustained Use Discounts (SUDs) automatically lower costs for long-running workloads.
Training employees on cost management is critical for long-term efficiency.
Option C & D (Committed Use Discounts, CUDs) require long-term commitments, which are not ideal for an unpredictable startup.

168
Q

CI/CD Pipeline with Code Verification
You are building a continuous deployment pipeline for a Git repository and want to verify code changes before deployment.

A

✅ Answer: D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Explanation:

Jenkins automates CI/CD workflows, ensuring code changes go through staging before production.
Option A (Spinnaker Red/Black Deployments) ensures safe rollbacks, but it does not explicitly test before production.
Option B (Spinnaker Testing in Production) violates best practices by skipping a staging phase.
Option C (Jenkins with 10% rollout) is useful for gradual releases but does not explicitly test before deployment.

169
Q

Question #46: Compute Engine Instances Restarting
Your Compute Engine managed instance group is experiencing an outage—all instances keep restarting every 5 seconds.
You have a health check configured, but autoscaling is disabled. Your colleague, a Linux expert, needs access to debug the issue.

A

✅ Answer: C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys.

Explanation:

The health check is likely causing restarts because it detects the instances as unhealthy and forces recreation.
Disabling the health check temporarily allows instances to remain running for debugging.
Adding an SSH key to the project-wide SSH keys ensures your colleague can access all instances.
Option A (Project Viewer role) does not grant SSH access.
Option B (Rolling Restart) would not fix the root cause.
Option D (Disable Autoscaling) is unnecessary—autoscaling is already disabled.

170
Q

Question #47: GKE and PCI DSS Compliance
Your company is migrating on-premises workloads to Google Cloud and using Google Kubernetes Engine (GKE) for orchestration.
Some parts of your architecture must be PCI DSS-compliant.

A

✅ Answer: C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment.

Explanation:

Google Cloud itself is PCI DSS-compliant, but it is your responsibility to configure GKE properly to meet PCI DSS requirements.
Option A (Only App Engine is PCI-certified) is false—multiple GCP services can be used for PCI DSS.
Option B (GKE is considered shared hosting and non-compliant) is incorrect—GKE supports PCI DSS compliance when properly configured.
Option D (All GCP services are PCI-compliant by default) is false—compliance depends on your architecture.

171
Q

Question #48: Detecting Data Anomalies
Your company has multiple on-premises systems used for reporting, but the data quality has degraded over time.
You want to detect anomalies using Google-recommended practices.

A

✅ Answer: B. Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.

Explanation:

Cloud Dataprep is a Google-recommended tool for exploring, cleaning, and transforming data using machine learning-based anomaly detection.
Option A (Cloud Datalab + Cloud Storage) is not ideal because Cloud Datalab is a Jupyter-based notebook tool for exploratory analysis, not data cleansing.
Option C & D (Connecting on-premises directly to Dataprep or Datalab) are unnecessary overhead—first upload data to Cloud Storage.

172
Q

Question #49: IAM Policy Inheritance in GCP
Google Cloud Platform (GCP) resources are managed hierarchically using organizations, folders, and projects.
How does Cloud Identity and Access Management (IAM) policy inheritance work?

A

✅ Answer: C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors.

Explanation:

IAM policies in GCP inherit permissions from higher levels (Organization → Folder → Project → Resource).
The effective IAM policy is a combination (union) of the policies applied at that level and inherited policies.
Option A (Only the node’s policy matters) is false—inherited policies apply too.
Option B (Restricted by ancestors) is misleading—permissions accumulate, not restrict.
Option D (Intersection of policies) is false—permissions do not “cancel out”—they add up.
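The union semantics can be shown with a toy model. This is only a sketch of the inheritance rule, not the real IAM API; the roles and members are illustrative assumptions:

```python
# Toy model of IAM policy inheritance: the effective policy at a node is
# the UNION of bindings set on the node and those inherited from ancestors
# (Organization -> Folder -> Project). Roles/members below are hypothetical.

org_policy = {"roles/viewer": {"group:all-staff@example.com"}}
folder_policy = {"roles/logging.viewer": {"user:auditor@example.com"}}
project_policy = {"roles/editor": {"user:dev@example.com"}}

def effective_policy(*policies):
    """Merge role->members bindings from every level into one union."""
    merged = {}
    for policy in policies:
        for role, members in policy.items():
            merged.setdefault(role, set()).update(members)
    return merged

effective = effective_policy(org_policy, folder_policy, project_policy)
# The project's effective policy includes roles granted at every ancestor level.
print(sorted(effective))
```

Note that nothing granted at a higher level is subtracted at a lower level, which is why option D (intersection) is wrong.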

173
Q

Question #50: Cloud VPN and IP Ranges
Your company is migrating in phases to Google Cloud while maintaining Cloud VPN connectivity to on-premises systems.
How should you organize your Google Cloud networking to ensure all on-prem systems remain reachable?

A

✅ Answer: C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises.

Explanation:

GCP VPCs must have non-overlapping IP ranges to avoid routing conflicts with on-premises networks.
Option A (Same IP range as on-prem) will cause routing conflicts.
Option B & D (Using secondary overlapping ranges) still cause conflicts in routing tables.
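Checking for overlap is mechanical; Python’s standard `ipaddress` module can do it. The CIDR ranges below are illustrative examples, not values from the question:

```python
import ipaddress

def ranges_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if two CIDR ranges overlap (a routing conflict over Cloud VPN)."""
    return ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b))

on_prem = "10.0.0.0/16"  # hypothetical on-premises range
print(ranges_overlap(on_prem, "10.0.128.0/20"))  # True: conflicts with on-prem
print(ranges_overlap(on_prem, "172.16.0.0/20"))  # False: safe to use for the VPC subnet
```

Running a check like this against every on-premises range before allocating VPC subnets avoids the routing conflicts described above.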

174
Q

What are the minimum storage durations for Google Cloud Storage classes?

A

Standard Storage:
There is no minimum storage duration. This is designed for frequently accessed data.
Nearline Storage:
Minimum storage duration is 30 days. This is intended for data accessed less frequently, typically once a month or less.
Coldline Storage:
Minimum storage duration is 90 days. This is for data accessed infrequently, ideally once a quarter or less.
Archive Storage:
Minimum storage duration is 365 days. This is for rarely accessed data, such as long-term backups and regulatory archives.
Key points to understand:

These minimum durations mean that even if you delete your data before the specified time, you’ll still be charged for that minimum duration.
These storage classes are designed for different access frequencies, with trade-offs between storage cost and retrieval cost.
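The early-deletion rule can be expressed as a simple maximum. The durations below follow Google Cloud Storage documentation (including the 365-day Archive class); the billing arithmetic is simplified for illustration:

```python
# Sketch of how minimum storage durations affect billing: deleting an
# object before its class's minimum duration still incurs storage charges
# for the full minimum. Billing math is simplified for illustration.

MIN_DURATION_DAYS = {
    "standard": 0,
    "nearline": 30,
    "coldline": 90,
    "archive": 365,
}

def billable_days(storage_class: str, days_stored: int) -> int:
    """Days you are charged for: actual days, or the class minimum if deleted early."""
    return max(days_stored, MIN_DURATION_DAYS[storage_class])

print(billable_days("standard", 5))    # 5: no minimum
print(billable_days("nearline", 10))   # 30: early deletion, charged for 30 days
print(billable_days("coldline", 100))  # 100: past the 90-day minimum
```

This is why cooler classes only save money when data genuinely sits untouched for at least the minimum duration.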

175
Q

Cymbal Direct is working with Cymbal Retail, a separate, autonomous division of Cymbal with different staff, networking teams, and data center. Cymbal Direct and Cymbal Retail are not in the same Google Cloud organization. Cymbal Retail needs access to Cymbal Direct’s web application for making bulk orders, but the application will not be available on the public internet. You want to ensure that Cymbal Retail has access to your application with low latency. You also want to avoid egress network charges if possible.

A. Verify that the subnet range Cymbal Retail is using doesn’t overlap with Cymbal Direct’s subnet range, and then enable VPC Network Peering for the project.
B. If Cymbal Retail does not have access to a Google Cloud data center, use Carrier Peering to connect the two networks.
C. Specify Cymbal Direct’s project as the Shared VPC host project, and then configure Cymbal Retail’s project as a service project.
D. Verify that the subnet Cymbal Retail is using has the same IP address range as Cymbal Direct’s subnet range, and then enable VPC Network Peering for the project.

A

Feedback:
A. Correct! VPC Peering allows for shared networking between organizations.
B. Incorrect. Carrier Peering lets you connect to Google’s public infrastructure when
you can’t satisfy the peering requirements yourself.
C. Incorrect. Because the partner is in a different organization, you can’t use a Shared
VPC.
D. Incorrect. If both subnets use the same IP address range, there could be IP
address conflicts and issues with routing.
Where to look:
https://cloud.google.com/vpc/docs/vpc-peering
Content mapping:
● Architecting with Google Compute Engine (ILT)
○ M8 Interconnecting Networks
● Elastic Google Cloud Infrastructure: Scaling and Automation (On-demand)
○ M1 Interconnecting Networks
Summary:
Use VPC Network Peering to configure private communication between VPC networks in different organizations. You can use it for different projects within the same organization, but that's less common. Make sure that there aren't overlapping IP address ranges. Shared VPC only works within the same organization, so that's an important piece of information you can use to help determine which solution is more appropriate.

176
Q

Customers need to have a good experience when accessing your web application so they will continue to use your service. You want to define key performance indicators (KPIs) to establish a service level objective (SLO). Which KPI could you use?

A Eighty-five percent of requests are successful
B Low latency for > 85% of requests when aggregated over 1 minute
C Eighty-five percent of customers are satisfied users
D Eighty-five percent of requests succeed when aggregated over 1 minute

A

D
This is specific, and you can reasonably expect to meet this KPI.
Don't use vague words like "low": an SLO needs specific, measurable targets.
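Option D's KPI can be computed directly, which is why it works as an SLI: count successes over each 1-minute window. The events below are synthetic sample data:

```python
# Compute the request-success ratio aggregated over 1-minute windows, then
# check it against an 85% target, as in option D.
from collections import defaultdict

def success_ratio_per_minute(events):
    """events: iterable of (epoch_seconds, succeeded) tuples."""
    windows = defaultdict(lambda: [0, 0])          # minute -> [ok, total]
    for ts, ok in events:
        bucket = windows[int(ts // 60)]
        bucket[0] += int(ok)
        bucket[1] += 1
    return {m: ok / total for m, (ok, total) in windows.items()}

events = [(0, True), (10, True), (20, False), (30, True),   # minute 0: 3/4
          (65, True), (70, True)]                           # minute 1: 2/2
ratios = success_ratio_per_minute(events)
slo_met = all(r >= 0.85 for r in ratios.values())   # minute 0 misses the target
```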

177
Q

Cymbal Direct’s employees will use Google Workspace. Your current on-premises network cannot meet the requirements to connect to Google’s public infrastructure. What should you do?
A Connect the on-premises network to Google’s public infrastructure via a partner that supports Carrier Peering.
B Order a Dedicated Interconnect from a Google Cloud partner, and ensure that proper routes are configured.
C Order a Partner Interconnect from a Google Cloud partner, and ensure that proper routes are configured.
D Connect the network to a Google point of presence, and enable Direct Peering.

A

Cymbal Direct’s on-premises network cannot meet the requirements for peering.
Use Carrier peering
Carrier Peering:
This method uses a service provider to obtain enterprise-grade network services that connect your infrastructure to Google, allowing access to Google Workspace applications.
Cloud Interconnect:
This offers two options:
Dedicated Interconnect: Provides a direct physical connection between your on-premises network and the Google network.
Partner Interconnect: Provides connectivity between your on-premises and VPC networks through a supported service provider.
Direct Peering:
Enables you to establish a direct peering connection between your business network and Google’s edge network and exchange high-throughput cloud traffic.

178
Q

You are creating a new project. You plan to set up a Dedicated interconnect between two of your data centers in the near future and want to ensure that your resources are only deployed to the same regions where your data centers are located. You need to make sure that you don’t have any overlapping IP addresses that could cause conflicts when you set up the interconnect. You want to use RFC 1918 class B address space. What should you do?

A Create a new project, delete the default VPC network, set up the network in custom mode, and then use IP addresses in the 192.168.x.x address range to create subnets in your desired zones. Use VPC Network Peering to connect the zones in the same region to create regional networks.

B Create a new project, leave the default network in place, and then use the default 10.x.x.x network range to create subnets in your desired regions.

C Create a new project, delete the default VPC network, set up an auto mode VPC network, and then use the default 10.x.x.x network range to create subnets in your desired regions.

D Create a new project, delete the default VPC network, set up a custom mode VPC network, and then use IP addresses in the 172.16.x.x address range to create subnets in your desired regions.

A

D
Feedback:
A. Incorrect. The 192.168.x.x range is RFC 1918 class C address space, not class B, and subnets are regional, not zonal, so VPC Network Peering between zones is not applicable.
B. Incorrect. The default network is an auto mode VPC network that creates subnets automatically in every region, which could allow people to accidentally provision resources in other regions. Its 10.x.x.x range is also class A, not class B.
C. Incorrect. Auto mode networks create subnets for you automatically in every region and could allow people to accidentally provision resources in other regions.
D. Correct! Custom mode networks give you full control, and 172.16.x.x is the RFC 1918 class B address space.
Where to look:
https://cloud.google.com/vpc/docs/vpc
Content mapping:
● Architecting with Google Compute Engine (ILT)
○ M2 Virtual Networks
● Essential Google Cloud Infrastructure: Foundation (On-demand)
○ M2 Virtual Networks
Summary:
A custom mode VPC network does not automatically create subnets. This type of network provides you with complete control over its subnets and IP address ranges.
You decide which subnets to create, in regions you choose, and which IP address ranges you use for subnets, but only if they fall within the RFC 1918 address space.
RFC 1918 Class B uses the 172.16.x.x address space. Default networks are created when you set up a new project but are a form of auto mode VPC network, and thus do not give you the ability to specify where your subnets are created. Subnets that use default IP address ranges are created automatically in all regions.
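The RFC 1918 class check in this answer can be verified with Python's standard ipaddress module; the candidate subnets below are examples:

```python
# RFC 1918 private ranges: 10.0.0.0/8 (class A), 172.16.0.0/12 (class B),
# 192.168.0.0/16 (class C). Check that a proposed subnet falls inside the
# class B block before creating it in the custom mode VPC.
import ipaddress

CLASS_B = ipaddress.ip_network("172.16.0.0/12")

def in_class_b(cidr: str) -> bool:
    return ipaddress.ip_network(cidr).subnet_of(CLASS_B)

print(in_class_b("172.16.4.0/24"))    # True  - valid class B subnet
print(in_class_b("192.168.1.0/24"))   # False - class C space
```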

179
Q

Cymbal Direct drones continuously send data during deliveries. You need to process and analyze the incoming telemetry data. After processing, the data should be retained, but it will only be accessed once every month or two. Your CIO has issued a directive to incorporate managed services wherever possible. You want a cost-effective solution to process the incoming streams of data. What should you do?
A. Ingest data with ClearBlade IoT Core, process it with Dataprep, and store it in a Coldline Cloud Storage bucket.
B. Ingest data with ClearBlade IoT Core, and then publish to Pub/Sub. Use Dataflow to process the data, and store it in a Nearline Cloud Storage bucket.
C. Ingest data with ClearBlade IoT Core, and then publish to Pub/Sub. Use BigQuery to process the data, and store it in a Standard Cloud Storage bucket.
D. Ingest data with ClearBlade IoT Core, and then store it in BigQuery.

A

B
A. Incorrect. Dataprep is used to normalize data before processing, if necessary.
Coldline could be used, but Nearline is probably a better fit because the data could be
accessed every month. Coldline has a higher cost for data access than Nearline,
which makes it a poor choice for data accessed “every month or two.”
B. Correct! Dataflow is a fully managed service that can be used to process both
streams and batches of data. Nearline is a good fit because the data could be
accessed every month.
C. Incorrect. BigQuery is Google Cloud's serverless data warehousing tool, which
is not used for processing real-time data. Standard could be used but is not the most
cost-effective solution. Nearline would be more cost-effective, because the data is
only accessed once per month at most. Standard storage has a lower access cost,
but has higher storage costs, so it’s a better fit for frequently accessed data which will
not be retained for long periods of time.
D. Incorrect. BigQuery is Google Cloud’s serverless data warehousing tool, which is
not used for processing real-time data.
Where to look:
https://cloud.google.com/architecture/data-lifecycle-cloud-platform?hl=en#process_an
d_analyze
https://www.clearblade.com/iot-core

180
Q

Cymbal Direct is evaluating database options to store the analytics data from its experimental drone deliveries. You’re currently using a small cluster of MongoDB NoSQL database servers. You want to move to a managed NoSQL database service with consistent low latency that can scale throughput seamlessly and can handle the petabytes of data you expect after expanding to additional markets. What should you do?
A Create a Bigtable instance, extract the data from MongoDB, and insert the data into Bigtable.
B Extract the data from MongoDB. Insert the data into Firestore using Native mode.
C Extract the data from MongoDB, and insert the data into BigQuery.
D Extract the data from MongoDB. Insert the data into Firestore using Datastore mode.

A

A
Feedback:
A. Correct! Bigtable is ideal for IoT, gives consistently sub-10ms latency, and can be used at petabyte scale.
B. Incorrect. Firestore in Native mode does not meet the requirements for consistent low latency, seamless throughput scaling, and petabyte-scale data.
C. Incorrect. BigQuery is used for enterprise data warehousing, building reports, and extracting insights, not as an operational NoSQL store.
D. Incorrect. Firestore in Datastore mode does not meet the requirements for consistent low latency, seamless throughput scaling, and petabyte-scale data.
Where to look:
https://cloud.google.com/bigtable/
Content mapping:
● Architecting with Google Compute Engine (ILT)
○ M5 Storage and Database Services
● Essential Google Cloud Infrastructure: Core Services (On-demand)
○ M2 Storage and Database Services
Summary:
Bigtable is often used as a solution when working with IoT and analytics because it’s
very good at storing heavy read/write data and scaling while maintaining linear performance. Bigtable is the basis for several Google products because it scales into the petabyte range.

181
Q

You are working with a client who is using Google Kubernetes Engine (GKE) to migrate applications from a virtual machine–based environment to a microservices-based architecture. Your client has a complex legacy application that stores a significant amount of data on the file system of its VM. You do not want to re-write the application to use an external service to store the file system data.
What should you do?

A In Cloud Shell, create a YAML file defining your Deployment called deployment.yaml. Create a Deployment in GKE by running the command kubectl apply -f deployment.yaml

B In Cloud Shell, create a YAML file defining your Pod called pod.yaml. Create a Pod in GKE by running the command kubectl apply -f pod.yaml

C In Cloud Shell, create a YAML file defining your Container called build.yaml. Create a Container in GKE by running the command gcloud builds submit --config build.yaml

D In Cloud Shell, create a YAML file defining your StatefulSet called statefulset.yaml. Create a StatefulSet in GKE by running the command kubectl apply -f statefulset.yaml

A

Feedback:
A. Incorrect. A Deployment represents multiple identical Pods. A Deployment is used to ensure that Pods are available, but it does not attempt to maintain state. Containers are considered disposable. The kubectl command can be used from gcloud to create or modify Deployments defined in a YAML file or manifest.
B. Incorrect. A Pod is the smallest unit of deployment and contains one or more containers. A Pod doesn't define how many containers must be running, so Pods are generally managed by a Deployment or StatefulSet.
C. Incorrect. A container must run within a Pod. Cloud Build is used to build a container image from either a YAML file or a Dockerfile; the container can then be run in Kubernetes by referencing that image.
D. Correct! A StatefulSet represents a group of persistent Pods. The YAML file defines a PersistentVolumeClaim (PVC) that allows an application to retain state. A StatefulSet is commonly used with applications like databases.
Where to look:
https://cloud.google.com/kubernetes-engine/docs/concepts/pod
Content mapping:
Getting Started with Google Kubernetes Engine, M3
Summary:
As a Cloud Architect, you need to understand the relationships between a
Deployment, Container, StatefulSet, and Pod. A Container must run in an environment, and that environment is the Pod. A Deployment is multiple instances of a Pod, and a StatefulSet ensures that the Pods will persist when state information must be retained. Most containers externalize state so that they can be destroyed and/recreated on demand. A StatefulSet ensures the containers persist so they can maintain state. Although as a Professional Cloud Architect it generally won’t be your responsibility to manage Kubernetes, you should be familiar with the basic Kubernetes commands and concepts.

182
Q

Cymbal Direct developers have written a new application. Based on initial usage estimates, you decide to run the application on Compute Engine instances with 15 Gb of RAM and 4 CPUs. These instances store persistent data locally. After the application runs for several months, historical data indicates that the application requires 30 Gb of RAM. Cymbal Direct management wants you to make adjustments that will minimize costs.

What should you do?
A. Stop the instance, and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-standard-8. Start the instance again.
B. Stop the instance, and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-standard-8. Set the instance's metadata to: preemptible: true. Start the instance again.
C. Stop the instance, and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-custom-4-30720. Start the instance again.
D. Stop the instance, and then use the command gcloud compute instances set-machine-type VM_NAME --machine-type e2-custom-4-30720. Set the instance's metadata to: preemptible: true. Start the instance again.

A

C
Feedback:
A. Incorrect. An e2-standard-8 instance will have the appropriate amount of memory.
However, this instance type will have more CPU than necessary and incur additional unnecessary costs.
B. Incorrect. An e2-standard-8 instance will have the appropriate amount of memory.
However, this instance type will have more CPU than necessary and incur additional unnecessary costs. Although preemptible instances can save substantial money, they are not appropriate for instances that need to store persistent data locally.
C. Correct! Custom instances are a good way to optimize costs. You don’t have to pay for resources you don’t need.
D. Incorrect. Although preemptible instances can save substantial money, they are not appropriate for instances that need to store persistent data locally.
Where to look:
https://cloud.google.com/compute/docs/instances/changing-machine-type-of-stopped-instance
Content mapping:
● Architecting with Google Compute Engine (ILT)
○ M3 Virtual Machines
● Essential Google Cloud Infrastructure: Foundation (On-demand)
○ M3 Virtual Machines
Cost is often a factor when architecting a solution, and you can use tools to optimize your spending in Google Cloud. Use the pricing calculator to calculate the cost of resources, and check for recommendations in the console for over-sized instances.
Use preemptible and committed-use instances where appropriate. Remember that the standard instance sizes are based on what makes sense for most general purpose applications, but your environment may differ. You can use custom instances to make sure you only pay for the resources you need.
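The cost reasoning above can be sketched numerically. The per-unit rates below are placeholders, not real Google Cloud prices; only the shape of the comparison matters:

```python
# Custom machine types bill per vCPU and per GB of RAM, so a custom
# 4-vCPU / 30 GB shape avoids paying for the extra 4 vCPUs that come
# with an e2-standard-8 (8 vCPUs, 32 GB).
VCPU_RATE, GB_RATE = 0.02, 0.003   # hypothetical $/hour rates

def hourly_cost(vcpus: int, ram_gb: int) -> float:
    return vcpus * VCPU_RATE + ram_gb * GB_RATE

standard_8 = hourly_cost(8, 32)    # memory fits, but 4 unneeded vCPUs
custom     = hourly_cost(4, 30)    # same CPU count as before, RAM doubled
assert custom < standard_8         # the custom shape is the cheaper fix
```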

183
Q

You are working in a mixed environment of VMs and Kubernetes. Some of your resources are on-premises, and some are in Google Cloud. Using containers as a part of your CI/CD pipeline has sped up releases significantly. You want to start migrating some of those VMs to containers so you can get similar benefits. You want to automate the migration process where possible.

A. Manually create a GKE cluster, and then use Migrate to Containers (Migrate for Anthos) to set up the cluster, import VMs, and convert them to containers.
B. Use Migrate to Containers (Migrate for Anthos) to automate the creation of Compute Engine instances to import VMs and convert them to containers.
C. Manually create a GKE cluster. Use Cloud Build to import VMs and convert them to containers.
D. Use Migrate for Compute Engine to import VMs and convert them to containers.

A

Feedback:
A. Correct. You must initially create a GKE cluster. Then you can use Migrate to
Containers (Migrate for Anthos) to set up the cluster and import the VMs.
B. Incorrect. Migrate to Containers (Migrate for Anthos) uses containers in GKE to
migrate the VMs; It does not use Compute Engine instances.
C. Incorrect. Cloud Build lets you build Docker-compatible containers, but you can't use it to automate the importing of VMs.
D. Incorrect. Use Migrate for Compute Engine to import VMs and create Compute
Engine instances. You can’t use Migrate for Compute Engine to create containers.
Where to look:
https://cloud.google.com/migrate/containers
Content mapping:
● Getting Started with Google Kubernetes Engine (ILT and On-demand)
○ M3 Kubernetes Architecture
Summary:
Migrate for Anthos is a very powerful tool that can be used to import VMs to GKE. It
automates the configuration of a GKE cluster and the importing of the VMs. Migrate
for Compute Engine is also a useful tool, but instead converts VMs to Compute
Engine instances.
Creating a migration plan involves many considerations. Remember, the diagnostic
question we just reviewed only covers one scenario. Here are some links to resources
that can help you get started learning about migration plans. You’ll find this list in your
workbook.
https://cloud.google.com/migrate/containers
https://cloud.google.com/resources/cloud-migration-checklist
https://cloud.google.com/products/cloud-migration
https://cloud.google.com/solutions/application-migration
https://cloud.google.com/architecture/migration

184
Q

Cymbal Direct has created a proof of concept for a social integration service that highlights images of its products from social media. The proof of concept is a monolithic application running on a single SuSE Linux virtual machine (VM). The current version requires increasing the VM's CPU and RAM in order to scale. You would like to refactor the VM so that you can scale out instead of scaling up.
A. Move the existing codebase and VM provisioning scripts to git, and
attach external persistent volumes to the VMs.
B. Make sure that the application declares any dependent requirements in a requirements.txt or equivalent statement so that they can be referenced in a startup script. Specify the startup script in a managed instance group template, and use an autoscaling policy.
C. Make sure that the application declares any dependent requirements in a requirements.txt or equivalent statement so that they can be referenced in a startup script, and attach external persistent volumes to the VMs.
D. Use containers instead of VMs, and use a GKE autoscaling deployment.

A

Feedback:
A. Incorrect. Version control allows for change control. Backing services allow for
flexibility in design, not concurrency.
B. Incorrect. Concurrency will help, but dependencies refer to declaring and isolating
code dependencies.
C. Incorrect. Backing services allow for flexibility in design, but not concurrency.
Dependencies refer to declaring and isolating code dependencies.
D. Correct! Treating each app as one or more stateless processes means
externalizing state to a separate database service. This allows for more concurrent
processing.
Where to look:
https://cloud.google.com/architecture/twelve-factor-app-development-on-gcp
Content mapping:
● Architecting with Google Cloud: Design and Process (ILT)
○ M2 Microservice Design and Architecture
● Reliable Google Cloud Infrastructure: Design and Process (On-demand)
○ M2 Microservice Design and Architecture
Summary:
Containers are a common and standard tool used to isolate processes. Using “12 factor” application development best practices can be a useful check when designing or redesigning applications. Following best practices such as the 12 factors allows for process isolation by externalizing state. This approach lets you use GKE to automatically scale instances based on CPU load.

185
Q

How can you analyze your storage configuration?

186
Q

Cymbal Direct must meet compliance requirements. You need to ensure that employees with valid accounts cannot access their VPC network from locations outside of its secure corporate network, including from home. You also want a high degree of visibility into network traffic for auditing and forensics purposes.

A. Ensure that all users install Cloud VPN. Enable VPC Flow Logs for the networks you need to monitor.
B. Enable VPC Service Controls, define a network perimeter to restrict access to authorized networks, and enable VPC Flow Logs for the networks you need to monitor.
C. Enable Identity-Aware Proxy (IAP) to allow users to access services securely. Use Google Cloud Observability to view audit logs for the networks you need to monitor.
D. Enable VPC Service Controls, and use Google Cloud Observability to view audit logs for the networks you need to monitor.

A

Feedback:
A. Incorrect. Cloud VPN lets a VPN appliance establish a tunnel, but it is not the type of VPN users run directly on their systems.
B. Correct! Enabling VPC Service Controls lets you define a network perimeter. VPC Flow Logs lets you log network-level communication to Compute Engine instances.
C. Incorrect. IAP secures an application by restricting access to valid, authorized accounts. In this scenario, the intention is to restrict access based on where the request is coming from.
D. Incorrect. Enabling VPC Service Controls lets you define a network perimeter. You also need to enable VPC Flow Logs. If you do not enable it, the network traffic flows will not be logged.
Where to look:
https://cloud.google.com/vpc/docs/flow-logs
https://cloud.google.com/vpc-service-controls/
Content mapping:
NA
Summary:
Using VPC Service Controls to enable a network perimeter lets you restrict access to services behind a private endpoint. You can restrict access to specific network ranges. Although Identity-Aware Proxy (IAP) provides secure access for valid accounts, the network perimeter determines where they can access from. Google Cloud Observability is useful for viewing logs. To enable logging of network traffic, you must first enable VPC Flow Logs.

187
Q

You are working with a client who has built a secure messaging application. The application is open source and consists of two components. The first component is a web app, written in Go, which is used to register an account and authorize the user's IP address. The second is an encrypted chat protocol that uses TCP to talk to the backend chat servers running Debian. If the client's IP address doesn't match the registered IP address, the application is designed to terminate their session. The number of clients using the service varies greatly based on time of day, and the client wants to be able to easily scale as needed.
A. Deploy the web application using the App Engine standard environment using a global external HTTP(S) load balancer and a network endpoint group. Use an unmanaged instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers.
B. Deploy the web application using the App Engine flexible environment using a global external HTTP(S) load balancer and a network endpoint group. Use an unmanaged instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers.
C. Deploy the web application using the App Engine standard environment using a global external HTTP(S) load balancer and a network endpoint group. Use a managed instance group for the backend chat servers. Use a global SSL proxy load balancer to load-balance traffic across the backend chat servers.
D. Deploy the web application using the App Engine standard environment with a global external HTTP(S) load balancer and a network endpoint group. Use a managed instance group for the backend chat servers. Use an external network load balancer to load-balance traffic across the backend chat servers.

A

Feedback:
A. Incorrect. You should use a managed instance group to scale based on demand.
B. Incorrect. Go is supported in the App Engine standard environment, so there is no need to use the App Engine flexible environment. You should use a managed instance group to scale based on demand.
C. Incorrect. The traffic is already encrypted, so there’s no need to offload SSL to the proxy. Additionally, SSL Proxy Load Balancing does not preserve the client’s IP address.
D. Correct! Using App Engine allows for dynamic scaling based on demand, as does a managed instance group. Using an external network load balancer preserves the client’s IP address.
Where to look:
https://cloud.google.com/load-balancing/docs/choosing-load-balancer
Content mapping:
NA
Summary:
Networking and load balancing are key topics for a Professional Cloud Architect. The services you use in one environment can and will differ, but there will always be networking. To take advantage of working in a cloud environment, you need to be able to distribute traffic across multiple resources. You need to understand what options are available for load balancing and how to choose between them. Whenever you're distributing traffic across multiple resources, you're scaling horizontally. You will probably want to be able to horizontally scale dynamically, so understanding managed instance groups is also critical. There are several serverless options in Google Cloud; you should be familiar with all of them.

188
Q

Cymbal Direct's user account management app allows users to delete their accounts whenever they like. Cymbal Direct also has a very generous 60-day return policy for users. The customer service team wants to make sure that they can still refund or replace items for a customer even if the customer's account has been deleted. What can you do to ensure that the customer service team has access to relevant account information?
A. Temporarily disable the account for 30 days. Export account information to Cloud Storage, and enable lifecycle management to delete the data in 60 days.
B. Ensure that the user clearly understands that after they delete their account, all their information will also be deleted. Remind them to download a copy of their order history and account information before deleting their account. Have the support agent copy any open or recent orders to a shared spreadsheet.
C. Restore a previous copy of the user information database from a snapshot. Have a database administrator capture needed information about the customer.
D. Disable the account. Export account information to Cloud Storage. Have the customer service team permanently delete the data after 30 days.

A

Feedback:
A. Correct! This takes a lazy deletion approach and allows support or administrators to restore data later if necessary.
B. Incorrect. This doesn’t achieve the goal of ensuring that the customer service team has access to the account information.
C. Incorrect. Support agents wouldn’t be able to complete this solution, and it would require excessive work by administrators.
D. Incorrect. This will probably introduce human error and would require excessive work.
Where to look:
https://cloud.google.com/storage/docs/lifecycle

Summary:
If information might be needed in the future, or if a user might have deleted it by mistake, it's a good idea not to immediately delete it. Instead, take a "lazy deletion" approach and allow for the user to restore the data, or for support/administrators to do it if necessary. How you implement lazy deletion will depend on what kind of storage solution you are using and the information lifecycle related to the data. No matter which method you choose, think ahead to what will happen when and if you need to restore data as you architect a solution. Lazy deletion can be especially useful when you are dealing with compliance or regulatory environments where data must be retained for specific periods of time.
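One way to implement the 60-day window from option A is an Object Lifecycle Management rule. This sketch builds the JSON document that `gsutil lifecycle set` accepts; the 60-day age comes from the scenario, and the policy shown is a minimal example rather than a complete retention design:

```python
# Build a Cloud Storage lifecycle configuration that deletes exported
# account data 60 days after it was written to the bucket.
import json

lifecycle = {
    "rule": [
        {"action": {"type": "Delete"}, "condition": {"age": 60}}
    ]
}

# Write this to a file and apply it with:
#   gsutil lifecycle set lifecycle.json gs://BUCKET_NAME
print(json.dumps(lifecycle, indent=2))
```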

189
Q

Cymbal Direct wants to create a pipeline to automate the building of new application releases.
What sequence of steps should you use?

A. Set up a source code repository. Run unit tests. Check in code. Deploy. Build a Docker container.

B. Check in code. Set up a source code repository. Run unit tests. Deploy. Build a Docker container.

C. Set up a source code repository. Check in code. Run unit tests. Build a Docker container. Deploy.

D. Run unit tests. Deploy. Build a Docker container. Check in code. Set up a source code repository.

A

Feedback:
A. Incorrect. Unit tests can’t be run unless the code has been checked in for testing.
B. Incorrect. The source code repository must exist to check code in and do any subsequent steps.
C. Correct! Each step is dependent on the previous step. These are in the right order.
D. Incorrect. The source code repository must exist to check code in and do any subsequent steps.
Where to look:
https://cloud.google.com/build/docs/
Summary:
This sequence of steps represents a simple pipeline and could be substantially more complex, depending on the required tasks. To check in code, you must have a source code repository. Next, developers check in the code. Unit tests can be run to determine whether the build should execute. If all tests pass, the Docker image is then built and finally deployed.
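The ordering argument can be modeled as a simple dependency check, with the step names taken from option C:

```python
# Each pipeline step can only run after its prerequisite completes, so the
# sequence forms a chain. A tiny validity check confirms the ordering.
steps = ["set up repo", "check in code", "run unit tests",
         "build container", "deploy"]
depends_on = {step: prev for prev, step in zip(steps, steps[1:])}

def valid_order(order, deps):
    seen = set()
    for step in order:
        if step in deps and deps[step] not in seen:
            return False               # prerequisite hasn't run yet
        seen.add(step)
    return True

print(valid_order(steps, depends_on))                    # True  - option C
print(valid_order(["deploy"] + steps[:-1], depends_on))  # False - deploy first
```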

190
Q

Your existing application runs on Ubuntu Linux VMs in an on-premises hypervisor. You want to deploy the application to Google Cloud with minimal refactoring.
What should you do?
A. Set up a Google Kubernetes Engine (GKE) cluster, and then create a
deployment with an autoscaler.
B. Isolate the core features that the application provides. Use Cloud Run to deploy each feature independently as a microservice.
C. Use Dedicated or Partner Interconnect to connect the on-premises network where your application is running to your VPC. Configure an endpoint for a global external HTTP(S) load balancer that connects to the existing VMs.
D. Write Terraform scripts to deploy the application as Compute Engine instances.

A

A. Incorrect. Changing from a virtual machine–based application deployment to a container-based deployment will probably require refactoring.
B. Incorrect. Changing from a virtual machine–based application deployment to Cloud
Run will probably require refactoring.
C. Incorrect. This approach would allow you to leverage Google Cloud’s load
balancers, but would not be deploying to Google Cloud.
D. Correct! Terraform lets you manage how you deploy and manage a variety of
services in Google Cloud, such as Compute Engine.
Where to look:
https://cloud.google.com/docs/terraform/

Summary:
Although all of these are good ways to deploy, or expose, an application to Google Cloud, Cloud Run and GKE will probably require some refactoring of the application.
The hybrid network approach will make the application available via the load balancer, but will not deploy it to Google Cloud. Because the application is already a virtual machine, migrating to Compute Engine with Terraform will use a lift-and-shift approach.

191
Q

Cymbal Direct needs to use a tool to deploy its infrastructure. You want something that allows for repeatable deployment processes, uses a declarative language, and allows parallel deployment. You also want to deploy infrastructure as code on Google Cloud and other cloud providers.

What should you do?
A. Automate the deployment with Terraform scripts.
B. Automate the deployment using scripts containing gcloud commands.
C. Use Google Kubernetes Engine (GKE) to create deployments and manifests
for your applications.
D. Develop in Docker containers for portability and ease of deployment.

A

Feedback:
A. Correct! Terraform lets you automate and manage resources in multiple clouds.
B. Incorrect. Automation using scripts adds unnecessary complexity and does not have the same benefits of modern infrastructure automation tooling.
C. Incorrect. GKE is Google’s managed Kubernetes service. Deployments accomplish many of these goals, but only within Kubernetes. GKE is only available in Google Cloud, not other clouds.
D. Incorrect. Docker (or Docker-compatible) containers make deploying code much easier, but do not manage or orchestrate the process themselves. This is what a tool like Kubernetes is for.

Summary:
Terraform is one of the most used infrastructure automation tools and has good support for multiple cloud providers.
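As an illustration of the declarative, repeatable style Terraform provides, here is a minimal sketch of a Compute Engine instance (the resource name, machine type, zone, and image are all hypothetical, not from the source):

```terraform
# Minimal sketch: a single Ubuntu VM on Compute Engine, declared in
# Terraform's declarative HCL. All names and values here are illustrative.
resource "google_compute_instance" "app_vm" {
  name         = "cymbal-app-vm"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Because the configuration is declarative, running `terraform apply` repeatedly converges on the same state, and Terraform can create independent resources in parallel.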

192
Q

Cymbal Direct wants to allow partners to make orders
programmatically, without having to speak on the phone
with an agent. What should you consider when designing the API?

A. The API backend should be loosely coupled. Clients should not be required to know too many details of the services they use. REST APIs using gRPC should be used for all external APIs.
B. The API backend should be tightly coupled. Clients should know a significant amount about the services they use. REST APIs using gRPC should be used for all external APIs.
C. The API backend should be loosely coupled. Clients should not be required to know too many details of the services they use. For REST APIs, HTTP(S) is the most common protocol.
D. The API backend should be tightly coupled. Clients should know a significant amount about the services they use. For REST APIs, HTTP(S) is the most common protocol used.

A

A. Incorrect. If clients know extensive information about backend services,
backend systems would be difficult to change or replace. REST APIs are
protocol-agnostic, and HTTP(S) is the most common protocol for external APIs.
B. Incorrect. If an API is not loosely coupled, it can become an issue for
maintenance, with large, complicated monolithic applications. REST APIs are protocol-agnostic, and HTTP(S) is the most common protocol for external APIs.
C. Correct! Loose coupling has several benefits, including maintainability,
versioning, and reduced complexity. Clients not knowing the backend systems means that these systems can be more easily replaced or modified, and HTTP(S) is the most common protocol used for external REST APIs.
D. Incorrect. If an API is not loosely coupled, it can become an issue for
maintenance, with large, complicated monolithic applications. REST APIs are protocol-agnostic, and HTTP(S) is the most common protocol for external APIs.

Summary:
An API is effectively a contract between the API provider and the clients using it. As long as the client makes a request that is valid according to the API’s specification, the request will be fulfilled. This is referred to as a loose coupling or a black box approach. A microservice-based architecture means that many independent parts (the microservices) can change. The client shouldn’t need to know about the different parts.
When making updates, you need to make sure that your changes don’t break older versions of the API specification in use by a client. One way is through a concept called a versioned contract. This common approach specifies which version the client wants to access as part of your API.
Follow the OpenAPI standard to help ensure loose coupling and versioned contracts.
Atlassian and Pact offer tools to test API contracts. Although most modern APIs, especially those designed for external use, use HTTP(S) as the transport protocol, that’s not a requirement. Many internal APIs at Google use gRPC, but that isn’t a requirement. Most modern APIs decouple the transport protocol from the API.
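The versioned-contract idea can be sketched in a few lines. In this hypothetical Python example (the route names and response shapes are invented for illustration), the client selects the contract version in the URL path, so v2 can change the response shape without breaking v1 clients:

```python
# Sketch of a versioned REST contract (hypothetical routes and payloads):
# each version's handler preserves its own response shape.

def handle_v1(order):
    # v1 contract: flat response
    return {"id": order["id"], "status": "received"}

def handle_v2(order):
    # v2 contract: nested response; v1 clients are unaffected
    return {"order": {"id": order["id"]}, "status": "received"}

ROUTES = {
    "/v1/orders": handle_v1,
    "/v2/orders": handle_v2,
}

def dispatch(path, order):
    # Route by the version prefix the client requested.
    handler = ROUTES.get(path)
    if handler is None:
        raise KeyError(f"unknown API version: {path}")
    return handler(order)
```

The client only depends on the contract for the version it calls, not on how the backend fulfills it, which is the loose coupling the correct answer describes.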

193
Q

Cymbal Direct wants a layered approach to security when setting up Compute Engine instances.
What are some options you could use to make your Compute Engine
instances more secure?
A. Use labels to allow traffic only from certain sources and ports. Turn on Secure boot and vTPM.
B. Use labels to allow traffic only from certain sources and ports. Use a Compute Engine service account.
C. Use network tags to allow traffic only from certain sources and ports. Turn on Secure boot and vTPM.
D. Use network tags to allow traffic only from certain sources and ports. Use a Compute Engine service account.

A

Feedback:
A. Incorrect. Labels are often confused with network tags. Tags are used with firewall rules, and labels are used for billing. Secure boot and vTPM protect the OS from being compromised.
B. Incorrect. Labels are often confused with network tags. Tags are used with firewall rules, and labels are used for billing. All Compute Engine instances have an associated service account. Creating an account specifically for an instance or type of instance with limited abilities instead of the default account could be a good approach to the principle of least privilege.
C. Correct! You can use network tags with firewall rules to automatically associate instances when they are created. Secure boot and vTPM protect the OS from being compromised.
D. Incorrect. All Compute Engine instances have an associated service account. Creating an account specifically for an instance or type of instance with limited abilities instead of the default account could be a good approach to the principle of least privilege.
Where to look:
https://cloud.google.com/compute/docs/instances/create-start-instance
Summary:
You can do many things to make a Compute Engine instance more secure; the options mentioned are just a few of them. Remember that network tags are used for determining firewall rules, and labels are used for categorization and insight (such as tracking spending). Secure boot and vTPM allow for validating the operating system at boot time and are supported by several operating systems, but not all.
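A hedged Terraform sketch of the correct option (resource names, ports, zone, and image are assumptions): a firewall rule scoped by network tag, and Shielded VM settings that enable Secure Boot and vTPM:

```terraform
# Illustrative only: a firewall rule that matches instances by network tag,
# plus Shielded VM options on the instance itself.
resource "google_compute_firewall" "allow_https" {
  name    = "allow-https-web"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["web-server"] # only instances carrying this tag match
}

resource "google_compute_instance" "web" {
  name         = "web-1"
  machine_type = "e2-small"
  zone         = "us-central1-a"
  tags         = ["web-server"] # associates the instance with the rule above

  boot_disk {
    initialize_params {
      image = "ubuntu-os-cloud/ubuntu-2204-lts"
    }
  }

  network_interface {
    network = "default"
  }

  shielded_instance_config {
    enable_secure_boot = true
    enable_vtpm        = true
  }
}
```

Note that labels have no place in this configuration's firewall logic, which is exactly why options A and B are wrong.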

194
Q

You have deployed your frontend web application in Kubernetes. Based on historical use, you need three pods to handle normal demand. Occasionally your load will roughly double. A load balancer is already in place. How could you configure your environment to efficiently meet that demand?
A. Edit your pod’s configuration file and change the number of replicas to six.
B. Edit your deployment’s configuration file and change the number of
replicas to six.
C. Use the “kubectl autoscale” command to change the pod’s maximum
number of instances to six.
D. Use the “kubectl autoscale” command to change the deployment’s
maximum number of instances to six.

A

A. Incorrect. A deployment specifies the number of pods, not a pod itself, and setting the number to six means running additional instances when you don’t need them.
B. Incorrect. Managing your deployments as code has a lot of benefits, but setting the number to six means running additional instances when you don’t need them.
C. Incorrect. A deployment specifies the number of pods, not a pod itself.
D. Correct! This will allow Kubernetes to scale the number of pods automatically, based on a condition like CPU load or requests per second.
Where to look:
https://cloud.google.com/kubernetes-engine/docs/how-to/scaling-apps#autoscaling-deployments
Summary:
As with a Compute Engine managed instance group, you can either specify a fixed number of instances or have GKE autoscale them for you. Autoscaling is generally going to be more efficient because unnecessary pods will not be running when they’re not needed.
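Option D corresponds to a command like `kubectl autoscale deployment frontend --min=3 --max=6 --cpu-percent=70` (the deployment name and CPU threshold are illustrative). The equivalent declarative manifest might look like:

```yaml
# Sketch of the HorizontalPodAutoscaler `kubectl autoscale` creates:
# keep 3 pods normally, scale to 6 when average CPU exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 3
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Either form targets the deployment, not an individual pod, and lets Kubernetes add replicas only while demand actually doubles.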

195
Q

You need to deploy a load balancer for a web-based application with multiple backends in different regions. You want to direct traffic to the
backend closest to the end user, but also to different backends based on the URL the user is accessing.

Which of the following could be used to implement this?
A. The request is received by the global external HTTP(S) load balancer. A global forwarding rule sends the request to a target proxy, which checks the URL map and selects the backend service. The backend service sends the request to Compute Engine instance groups in multiple regions.

B. The request is matched by a URL map and then sent to a global external HTTP(S) load balancer. A global forwarding rule sends the request to a target proxy, which selects a backend service. The backend service sends the request to Compute Engine instance groups in multiple regions.

C. The request is received by the SSL proxy load balancer, which uses a global forwarding rule to check the URL map, then sends the request to a backend service. The request is processed by Compute Engine instance groups in multiple regions.

D. The request is matched by a URL map and then sent to a SSL proxy load balancer. A global forwarding rule sends the request to a target proxy, which selects a backend service and sends the request to Compute Engine instance groups in multiple regions.

A

Feedback:
A. Correct! This is the right order of operations.
B. Incorrect. The external global HTTP(S) load balancer must exist to provide the
anycast IP address, and then route the request through the target proxy.
C. Incorrect. The SSL Proxy is not for HTTP(S) traffic. The question specifically states
a web-based application.
D. Incorrect. The SSL Proxy is not for HTTP(S) traffic. The question specifically states
a web-based application.
Where to look:
https://cloud.google.com/load-balancing/docs/load-balancing-overview
Summary:
A request coming from the internet is processed by the HTTP(S) proxy. A URL map is then compared to the request to route it to the appropriate backend service. For example, you could have two backend services: one for serving video and one for serving audio. The URL ending in “/audio” could be mapped to the backend serving audio, and the URL ending in “/video” could be mapped to the backend serving video.
Each backend service could have multiple backends, such as instance groups in different regions. The backend the traffic is sent to is determined by health, capacity, and geographic location.
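The audio/video routing in the summary could be sketched as a Terraform URL map (all names are illustrative, and the referenced backend services are assumed to be defined elsewhere):

```terraform
# Illustrative URL map: route /audio/* and /video/* to different
# backend services, each of which can span instance groups in
# multiple regions.
resource "google_compute_url_map" "web_map" {
  name            = "web-map"
  default_service = google_compute_backend_service.video.id

  host_rule {
    hosts        = ["example.com"]
    path_matcher = "media"
  }

  path_matcher {
    name            = "media"
    default_service = google_compute_backend_service.video.id

    path_rule {
      paths   = ["/audio/*"]
      service = google_compute_backend_service.audio.id
    }

    path_rule {
      paths   = ["/video/*"]
      service = google_compute_backend_service.video.id
    }
  }
}
```

The target proxy consults this map after the global forwarding rule hands it the request, which is the order of operations the correct answer describes.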

196
Q

How do you categorize objectives