Topic 1 Flashcards
Question #: 64
Topic #: 1
You are creating an App Engine application that uses Cloud Datastore as its persistence layer. You need to retrieve several root entities for which you have the identifiers. You want to minimize the overhead in operations performed by Cloud Datastore. What should you do?
A. Create the Key object for each Entity and run a batch get operation B. Create the Key object for each Entity and run multiple get operations, one operation for each entity C. Use the identifiers to create a query filter and run a batch query operation D. Use the identifiers to create a query filter and run multiple query operations, one operation for each entity
https://www.examtopics.com/discussions/google/view/7290-exam-professional-cloud-architect-topic-1-question-64/
A. Create the Key object for each Entity and run a batch get operation
Create the Key object for each Entity and run a batch get operation
https://cloud.google.com/datastore/docs/best-practices
Use batch operations for your reads, writes, and deletes instead of single operations. Batch operations are more efficient because they perform multiple operations with the same overhead as a single operation.
Firestore in Datastore mode supports batch versions of the operations which allow it to operate on multiple objects in a single Datastore mode call.
Such batch calls are faster than making separate calls for each individual entity because they incur the overhead for only one service call. If multiple entity groups are involved, the work for all the groups is performed in parallel on the server side.
Question #: 160
Topic #: 1
The application reliability team at your company has added a debug feature to their backend service to send all server events to Google Cloud Storage for eventual analysis. The event records are at least 50 KB and at most 15 MB and are expected to peak at 3,000 events per second. You want to minimize data loss.
Which process should you implement?
A. • Append metadata to file body • Compress individual files • Name files with serverName-Timestamp • Create a new bucket if bucket is older than 1 hour and save individual files to the new bucket. Otherwise, save files to existing bucket. B. • Batch every 10,000 events with a single manifest file for metadata • Compress event files and manifest file into a single archive file • Name files using serverName-EventSequence • Create a new bucket if bucket is older than 1 day and save the single archive file to the new bucket. Otherwise, save the single archive file to existing bucket. C. • Compress individual files • Name files with serverName-EventSequence • Save files to one bucket • Set custom metadata headers for each object after saving D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
https://www.examtopics.com/discussions/google/view/54369-exam-professional-cloud-architect-topic-1-question-160/
D. • Append metadata to file body • Compress individual files • Name files with a random prefix pattern • Save files to one bucket
https://cloud.google.com/storage/docs/request-rate#naming-convention
“A longer randomized prefix provides more effective auto-scaling when ramping to very high read and write rates. For example, a 1-character prefix using a random hex value provides effective auto-scaling from the initial 5000/1000 reads/writes per second up to roughly 80000/16000 reads/writes per second, because the prefix has 16 potential values. If your use case does not need higher rates than this, a 1-character randomized prefix is just as effective at ramping up request rates as a 2-character or longer randomized prefix.”
Example:
my-bucket/2fa764-2016-05-10-12-00-00/file1
my-bucket/5ca42c-2016-05-10-12-00-00/file2
my-bucket/6e9b84-2016-05-10-12-00-01/file3
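As a minimal sketch of how a server could apply this naming pattern when uploading events (the bucket name is a made-up placeholder, and it assumes the server has just written the event file and has gsutil installed):
~~~
# Compress the event file and upload it under a random hex prefix so writes
# spread across Cloud Storage's key ranges and auto-scaling kicks in sooner.
EVENT_FILE="event-$(date +%s%N).json"      # assumed to have just been written by the server
gzip "${EVENT_FILE}"
PREFIX=$(openssl rand -hex 3)              # e.g. 2fa764
gsutil cp "${EVENT_FILE}.gz" \
  "gs://my-events-bucket/${PREFIX}-$(hostname)-$(date +%Y-%m-%d-%H-%M-%S).json.gz"
~~~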
Question #: 131
Topic #: 1
Your company wants you to build a highly reliable web application with a few public APIs as the backend. You don’t expect a lot of user traffic, but traffic could spike occasionally. You want to leverage Cloud Load Balancing, and the solution must be cost-effective for users. What should you do?
A. Store static content such as HTML and images in Cloud CDN. Host the APIs on App Engine and store the user data in Cloud SQL. B. Store static content such as HTML and images in a Cloud Storage bucket. Host the APIs on a zonal Google Kubernetes Engine cluster with worker nodes in multiple zones, and save the user data in Cloud Spanner. C. Store static content such as HTML and images in Cloud CDN. Use Cloud Run to host the APIs and save the user data in Cloud SQL. D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
https://www.examtopics.com/discussions/google/view/56615-exam-professional-cloud-architect-topic-1-question-131/
D. Store static content such as HTML and images in a Cloud Storage bucket. Use Cloud Functions to host the APIs and save the user data in Firestore.
https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#gcloud:-cloud-functions
https://cloud.google.com/blog/products/networking/better-load-balancing-for-app-engine-cloud-run-and-functions
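A rough sketch of answer D wired up for Cloud Load Balancing, following the serverless NEG guide linked above (the bucket, function, and NEG names are made-up placeholders, not part of the question):
~~~
# Static content (HTML, images) in a Cloud Storage bucket
gsutil mb -l us-central1 gs://my-static-site
gsutil cp -r ./public/* gs://my-static-site

# HTTP API hosted on Cloud Functions
gcloud functions deploy api-fn --runtime=python310 --trigger-http \
  --allow-unauthenticated --region=us-central1

# Serverless NEG so the function can sit behind the external HTTPS load balancer
gcloud compute network-endpoint-groups create api-neg \
  --region=us-central1 --network-endpoint-type=serverless --cloud-function-name=api-fn
~~~
The bucket would be attached to the load balancer as a backend bucket and the NEG as a backend service, per the first link above.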
Question #: 79
Topic #: 1
You want your Google Kubernetes Engine cluster to automatically add or remove nodes based on CPU load.
What should you do?
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console. B. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable autoscaling on the managed instance group for the cluster using the gcloud command. C. Create a deployment and set the maxUnavailable and maxSurge properties. Enable the Cluster Autoscaler using the gcloud command. D. Create a deployment and set the maxUnavailable and maxSurge properties. Enable autoscaling on the cluster managed instance group from the GCP Console.
https://www.examtopics.com/discussions/google/view/7323-exam-professional-cloud-architect-topic-1-question-79/
A. Configure a HorizontalPodAutoscaler with a target CPU usage. Enable the Cluster Autoscaler from the GCP Console.
How does Horizontal Pod Autoscaler work with Cluster Autoscaler?
Horizontal Pod Autoscaler changes the deployment’s or replicaset’s number of replicas based on the current CPU load. If the load increases, HPA will create new replicas, for which there may or may not be enough space in the cluster. If there are not enough resources, CA will try to bring up some nodes, so that the HPA-created pods have a place to run. If the load decreases, HPA will stop some of the replicas. As a result, some nodes may become underutilized or completely empty, and then CA will terminate such unneeded nodes.
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler
“Caution: Do not enable Compute Engine autoscaling for managed instance groups for your cluster nodes. GKE’s cluster autoscaler is separate from Compute Engine autoscaling”
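A quick sketch of the two pieces working together, using the command-line equivalents of answer A (the deployment name, cluster name, and zone are placeholders):
~~~
# Horizontal Pod Autoscaler: scale the Deployment's replica count on CPU usage
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=20

# Cluster Autoscaler: let GKE add or remove nodes so the new Pods have somewhere to run
gcloud container clusters update my-cluster --zone=us-central1-a \
  --enable-autoscaling --min-nodes=1 --max-nodes=10 --node-pool=default-pool
~~~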
Question #: 22
Topic #: 1
One of the developers on your team deployed their application in Google Container Engine with the Dockerfile below. They report that their application deployments are taking too long.
FROM ubuntu:16.04
COPY . /src
RUN apt-get update && apt-get install -y python python-pip
RUN pip install -r requirements.txt
You want to optimize this Dockerfile for faster deployment times without adversely affecting the app’s functionality.
Which two actions should you take? (Choose two.)
A. Remove Python after running pip B. Remove dependencies from requirements.txt C. Use a slimmed-down base image like Alpine Linux D. Use larger machine types for your Google Container Engine node pools E. Copy the source after the package dependencies (Python and pip) are installed
https://www.examtopics.com/discussions/google/view/54406-exam-professional-cloud-architect-topic-1-question-22/
C. Use a slimmed-down base image like Alpine Linux
E. Copy the source after the package dependencies (Python and pip) are installed
C. Use a slimmed-down base image like Alpine Linux: The ubuntu:16.04 image is a full-fledged operating system, which means it’s larger and takes longer to download and build. Alpine Linux is a minimal distribution designed for containers, resulting in significantly smaller images and faster deployments.
E. Copy the source after the package dependencies (Python and pip) are installed: Docker builds images in layers. Each RUN, COPY, and ADD instruction creates a new layer. By copying the source code after installing dependencies, you can take advantage of Docker’s caching mechanism. If your source code changes, only the layers related to the source code need to be rebuilt, not the layers related to dependencies.
Question #: 61
Topic #: 1
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?
A. Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment. B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment. C. Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment. D. Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.
https://www.examtopics.com/discussions/google/view/6330-exam-professional-cloud-architect-topic-1-question-61/
B. Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.
gcloud command to create K8s cluster https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster
Create a Google Kubernetes Engine (GKE) cluster: You can use the Google Cloud Console or the gcloud command-line tool to create a GKE cluster, which will provide the underlying infrastructure for running your application.
Deploy the application to the cluster: You can use the kubectl command-line tool to apply the Kubernetes Deployment file provided by the development team to the cluster.kubectl apply -f deployment.yaml
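Concretely, the two steps look roughly like this (cluster name, zone, node count, and file name are placeholders):
~~~
# Create the GKE cluster with gcloud, then deploy the provided manifest with kubectl
gcloud container clusters create my-cluster --zone=us-central1-a --num-nodes=3
gcloud container clusters get-credentials my-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml
~~~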
Compute Engine: instances in multiple zones >= 99.9% SLA
Cloud SQL: built-in high availability (HA) option, which supports a 99.95% SLA
Question #: 5
Topic #: 1
An application development team believes their current logging tool will not meet their needs for their new cloud-based product. They want a better tool to capture errors and help them analyze their historical log data. You want to help them find a solution that meets their needs.
What should you do?
A. Direct them to download and install the Google StackDriver logging agent B. Send them a list of online resources about logging best practices C. Help them define their requirements and assess viable logging tools D. Help them upgrade their current tool to take advantage of any new features
https://www.examtopics.com/discussions/google/view/6837-exam-professional-cloud-architect-topic-1-question-5/
C. Help them define their requirements and assess viable logging tools
The correct answer is C. Help them define their requirements and assess viable logging tools.
Explanation:
The development team has expressed the need for a better logging tool for their new cloud-based product. As a cloud architect, it is your role to help them find a solution that meets their needs.
Option A, directing them to download and install the Google StackDriver logging agent, is not the best solution as it assumes that StackDriver logging will be the best fit for their needs without proper evaluation of their requirements.
Option B, sending them a list of online resources about logging best practices, may be helpful, but it does not address their specific needs.
Option D, helping them upgrade their current tool, may not be the best solution either since they have already expressed their concerns that their current tool will not meet their needs.
Option C, helping them define their requirements and assess viable logging tools, is the best option. This involves understanding their needs and gathering requirements, evaluating different logging tools available in the market, and selecting the best tool that meets their needs.
By working with the development team to identify their requirements, you can help them choose a logging tool that will enable them to capture errors and analyze their historical log data efficiently. This may involve evaluating various cloud-based logging solutions, such as StackDriver, Splunk, or ELK stack, and comparing their features, functionality, and pricing to identify the most suitable option for their needs.
In summary, the best course of action is to understand the development team’s needs, help them define their requirements, and assess viable logging tools to find a solution that meets their needs.
Question #: 173
Topic #: 1
The operations team in your company wants to save Cloud VPN log events for one year. You need to configure the cloud infrastructure to save the logs. What should you do?
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save. B. Enable the Compute Engine API, and then enable logging on the firewall rules that match the traffic you want to save. C. Set up a Cloud Logging Dashboard titled Cloud VPN Logs, and then add a chart that queries for the VPN metrics over a one-year time period. D. Set up a filter in Cloud Logging and a topic in Pub/Sub to publish the logs.
https://www.examtopics.com/discussions/google/view/68684-exam-professional-cloud-architect-topic-1-question-173/
A. Set up a filter in Cloud Logging and a Cloud Storage bucket as an export target for the logs you want to save.
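A sketch of answer A with gcloud (the bucket and sink names are placeholders, and the exact log filter for Cloud VPN may need adjusting for your environment):
~~~
# Bucket to hold the exported logs (a 365-day lifecycle/retention policy can be added)
gsutil mb -l us-central1 gs://vpn-logs-archive

# Sink that routes matching Cloud VPN log entries to the bucket
gcloud logging sinks create vpn-logs-sink storage.googleapis.com/vpn-logs-archive \
  --log-filter='resource.type="vpn_gateway"'

# Grant the sink's writer identity (printed by the previous command) write access:
# gsutil iam ch serviceAccount:<writer-identity>:roles/storage.objectCreator gs://vpn-logs-archive
~~~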
Question #: 167
Topic #: 1
You want to enable your running Google Kubernetes Engine cluster to scale as demand for your application changes.
What should you do?
A. Add additional nodes to your Kubernetes Engine cluster using the following command: gcloud container clusters resize CLUSTER_Name --size 10 B. Add a tag to the instances in the cluster with the following command: gcloud compute instances add-tags INSTANCE --tags enable-autoscaling max-nodes-10 C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 D. Create a new Kubernetes Engine cluster with the following command: gcloud alpha container clusters create mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and redeploy your application
https://www.examtopics.com/discussions/google/view/7073-exam-professional-cloud-architect-topic-1-question-167/
C. Update the existing Kubernetes Engine cluster with the following command: gcloud alpha container clusters update mycluster --enable-autoscaling --min-nodes=1 --max-nodes=10
https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-autoscaler
A is incorrect: the flag must be written with two hyphens, --size (https://cloud.google.com/sdk/gcloud/reference/container/clusters/resize), and resizing is a one-off manual action rather than autoscaling. B is incorrect because it merely adds a tag string to the instances (https://cloud.google.com/sdk/gcloud/reference/compute/instances/add-tags). C and D suffer from the same mangled flag formatting, and the gcloud alpha command no longer works even though it is still documented (https://cloud.google.com/sdk/gcloud/reference/alpha/container/clusters/update); the same functionality now lives in the GA gcloud container clusters update command. D is further flawed because it creates a brand-new cluster, which is unnecessary since one is already running. That leaves A versus C, and only C actually enables autoscaling on the existing cluster, so select C if you see this question at a test center.
Question #: 85
Topic #: 1
Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables.
You want analysts from each country to be able to see and query only the data for their respective countries.
How should you configure the access rights?
A. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group. B. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group. C. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country- group. D. Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
https://www.examtopics.com/discussions/google/view/6457-exam-professional-cloud-architect-topic-1-question-85/
A. Create a group per country. Add analysts to their respective country-groups. Create a single group ‘all_analysts’, and add all country-groups as members. Grant the ‘all_analysts’ group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
The question requires that users from each country can see and query only their own country's dataset, so BigQuery dataViewer cannot be assigned at the project level. Only A limits each user to querying and viewing the data they are supposed to be allowed to see.
Data viewer role can be applied to a Table and a View.
JobUser can be applied only at a Project level not at a Dataset level
https://cloud.google.com/bigquery/docs/access-control#bigquery.dataViewer
https://cloud.google.com/bigquery/docs/access-control#bigquery.jobUser
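As a rough sketch of the two grants in answer A (the project, group addresses, and dataset names below are made-up placeholders; dataset access is edited via the bq dataset-access workflow):
~~~
# Project-level jobUser for the umbrella group, so all analysts can run query jobs
gcloud projects add-iam-policy-binding my-analytics-project \
  --member="group:all-analysts@example.com" --role="roles/bigquery.jobUser"

# Dataset-level view access for one country group: dump the dataset definition,
# add a READER access entry for the group, then write it back
bq show --format=prettyjson my-analytics-project:analytics_de > de_dataset.json
# ...edit de_dataset.json and append to the "access" list:
#   {"role": "READER", "groupByEmail": "analysts-de@example.com"}
bq update --source de_dataset.json my-analytics-project:analytics_de
~~~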
Question #: 76
Topic #: 1
Your company pushes batches of sensitive transaction data from its application server VMs to Cloud Pub/Sub for processing and storage. What is the Google- recommended way for your application to authenticate to the required Google Cloud services?
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles. B. Ensure that VM service accounts do not have access to Cloud Pub/Sub, and use VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles. C. Generate an OAuth2 access token for accessing Cloud Pub/Sub, encrypt it, and store it in Cloud Storage for access from each VM. D. Create a gateway to Cloud Pub/Sub using a Cloud Function, and grant the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles.
https://www.examtopics.com/discussions/google/view/11818-exam-professional-cloud-architect-topic-1-question-76/
A. Ensure that VM service accounts are granted the appropriate Cloud Pub/Sub IAM roles.
The Google-recommended way for your application to authenticate to Cloud Pub/Sub and other Google Cloud services when running on Compute Engine VMs is to use the VM's service account. Every VM runs as a service account (the Compute Engine default service account unless you attach a dedicated one), and the client libraries on the VM pick up its credentials automatically. To authenticate to Cloud Pub/Sub and other Google Cloud services, you should ensure that the VM service accounts are granted the appropriate IAM roles.
Option B, ensuring that VM service accounts do not have access to Cloud Pub/Sub and using VM access scopes to grant the appropriate Cloud Pub/Sub IAM roles, is not suitable: access scopes are a legacy mechanism and do not grant IAM roles, and the VM's service account still needs permission to authenticate to Google Cloud services.
Option C, generating an OAuth2 access token for accessing Cloud Pub/Sub, encrypting it, and storing it in Cloud Storage for access from each VM, would not be a suitable solution because it would require manual management of access tokens, which can be error-prone and insecure.
Option D, creating a gateway to Cloud Pub/Sub using a Cloud Function and granting the Cloud Function service account the appropriate Cloud Pub/Sub IAM roles, would not be a suitable solution because it would not allow the application to directly authenticate to Cloud Pub/Sub.
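For answer A, the only setup needed is the IAM grant on the VM's service account; on the VM itself no key files are required. A sketch with placeholder project, service account, and topic names:
~~~
# Grant the VM's service account publish rights on the topic
gcloud pubsub topics add-iam-policy-binding transactions \
  --member="serviceAccount:app-server@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.publisher"

# From the VM, credentials are picked up automatically; e.g. a quick test publish:
gcloud pubsub topics publish transactions --message="batch-0001"
~~~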
Question #: 140
Topic #: 1
Your company has a Kubernetes application that pulls messages from Pub/Sub and stores them in Filestore. Because the application is simple, it was deployed as a single pod. The infrastructure team has analyzed Pub/Sub metrics and discovered that the application cannot process the messages in real time. Most of them wait for minutes before being processed. You need to scale the elaboration process that is I/O-intensive. What should you do?
A. Use kubectl autoscale deployment APP_NAME --max 6 --min 2 --cpu-percent 50 to configure Kubernetes autoscaling deployment. B. Configure a Kubernetes autoscaling deployment based on the subscription/push_request_latencies metric. C. Use the --enable-autoscaling flag when you create the Kubernetes cluster. D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.
https://www.examtopics.com/discussions/google/view/60396-exam-professional-cloud-architect-topic-1-question-140/
D. Configure a Kubernetes autoscaling deployment based on the subscription/num_undelivered_messages metric.
num_undelivered_messages metric can indicate if subscribers are keeping up with message submissions.
https://cloud.google.com/pubsub/docs/monitoring#monitoring_the_backlog
Subscription Metric: Scaling based on the subscription/num_undelivered_messages metric directly ties the scaling behavior to the number of unprocessed messages in Pub/Sub. This ensures that your application scales out when there are more messages to process and scales in when the queue is short.
Relevant Metric: This metric is relevant for an I/O-intensive application that processes messages from Pub/Sub, ensuring that the scaling is directly responsive to the message processing demand.
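A rough sketch of answer D, following the pattern in Google's GKE/Pub/Sub autoscaling documentation (the Stackdriver custom-metrics adapter must already be installed in the cluster; the deployment and subscription names are placeholders):
~~~
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pubsub-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pubsub-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: pubsub.googleapis.com|subscription|num_undelivered_messages
        selector:
          matchLabels:
            resource.labels.subscription_id: my-subscription
      target:
        type: AverageValue
        averageValue: "100"
EOF
~~~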
Question #: 122
Topic #: 1
You have an application that runs in Google Kubernetes Engine (GKE). Over the last 2 weeks, customers have reported that a specific part of the application returns errors very frequently. You currently have no logging or monitoring solution enabled on your GKE cluster. You want to diagnose the problem, but you have not been able to replicate the issue. You want to cause minimal disruption to the application. What should you do?
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods. B. 1. Create a new GKE cluster with Cloud Operations for GKE enabled. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Use the GKE Monitoring dashboard to investigate logs from affected Pods. C. 1. Update your GKE cluster to use Cloud Operations for GKE, and deploy Prometheus. 2. Set an alert to trigger whenever the application returns an error. D. 1. Create a new GKE cluster with Cloud Operations for GKE enabled, and deploy Prometheus. 2. Migrate the affected Pods to the new cluster, and redirect traffic for those Pods to the new cluster. 3. Set an alert to trigger whenever the application returns an error.
https://www.examtopics.com/discussions/google/view/56425-exam-professional-cloud-architect-topic-1-question-122/
A. 1. Update your GKE cluster to use Cloud Operations for GKE. 2. Use the GKE Monitoring dashboard to investigate logs from affected Pods.
According to the reference, answer should be A.
https://cloud.google.com/blog/products/management-tools/using-logging-your-apps-running-kubernetes-engine
But doesn't updating the cluster require downtime?
No, it does not require shutting down the cluster: https://cloud.google.com/stackdriver/docs/solutions/gke/installing#console_1
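Enabling Cloud Operations for GKE on a running cluster is a single update, roughly as below (cluster name and zone are placeholders; flag names vary slightly by gcloud version, and older releases used --enable-stackdriver-kubernetes instead):
~~~
gcloud container clusters update my-cluster --zone=us-central1-a \
  --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM
~~~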
Question #: 162
Topic #: 1
You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region.
What steps must you take?
A. Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region. B. Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region. C. Create an image file from the root disk with the Linux dd command, and create a new virtual machine instance in the US-East region. D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
https://www.examtopics.com/discussions/google/view/7018-exam-professional-cloud-architect-topic-1-question-162/
D. Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file as the root disk.
(The other commonly argued answer is B: create a snapshot of the root disk and select the snapshot as the root disk when you create the new instance in the US-East region.)
D is correct. A and B amount to attaching the copied file system to a new VM rather than setting it as the root disk of a repeatable, easily replaced copy. Option C does not work within GCP because the image must exist on the platform before gcloud or the Cloud Console can create a VM from it.
=> Why not B?
https://cloud.google.com/compute/docs/instances/create-start-instance#createsnapshot
This clearly says a snapshot can be used to create a VM instance, and a custom image is only needed when creating many instances; here we create only one.
=> You can't use a snapshot created in another project.
=> According to the documentation we now can: https://cloud.google.com/compute/docs/disks/create-snapshots
=> Only with restrictions: https://cloud.google.com/compute/docs/disks/manage-snapshots#sharing_snapshots
"Note: The disk must be in the same zone as the instance."
But that is not the case here: we have different regions and a different project, hence you must go through an image file in a bucket.
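A sketch of the answer D flow with gcloud (project, zone, and bucket names are placeholders; the images export/create --source-uri pair is one common way to move an image through Cloud Storage):
~~~
# In the production project (us-central1): snapshot the root disk and build an image from it
gcloud compute disks snapshot prod-vm --zone=us-central1-a \
  --snapshot-names=prod-vm-snap --project=prod-project
gcloud compute images create prod-vm-image --source-snapshot=prod-vm-snap --project=prod-project

# Export the image to a Cloud Storage bucket the other project can read
gcloud compute images export --image=prod-vm-image \
  --destination-uri=gs://shared-images/prod-vm-image.tar.gz --project=prod-project

# In the destination project (us-east1): import the image file and boot the copy from it
gcloud compute images create prod-vm-copy \
  --source-uri=gs://shared-images/prod-vm-image.tar.gz --project=dest-project
gcloud compute instances create prod-vm-copy --image=prod-vm-copy \
  --zone=us-east1-b --project=dest-project
~~~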
Question #: 84
Topic #: 1
You want to automate the creation of a managed instance group. The VMs have many OS package dependencies. You want to minimize the startup time for new
VMs in the instance group.
What should you do?
A. Use Terraform to create the managed instance group and a startup script to install the OS package dependencies. B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image. C. Use Puppet to create the managed instance group and install the OS package dependencies. D. Use Deployment Manager to create the managed instance group and Ansible to install the OS package dependencies.
https://www.examtopics.com/discussions/google/view/6873-exam-professional-cloud-architect-topic-1-question-84/
B. Create a custom VM image with all OS package dependencies. Use Deployment Manager to create the managed instance group with the VM image
Managed instance groups are a way to manage a group of Compute Engine instances as a single entity. If you want to automate the creation of a managed instance group, you can use tools such as Terraform, Deployment Manager, or Puppet to automate the process.
To minimize the startup time for new VMs in the instance group, you should create a custom VM image with all of the OS package dependencies pre-installed. This will allow you to create new VMs from the custom image, which will significantly reduce the startup time compared to installing the dependencies on each VM individually. You can then use Deployment Manager to create the managed instance group with the custom VM image.
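A sketch of that flow on the command line (image, template, and group names are placeholders; the same image reference would go into a Deployment Manager template):
~~~
# Bake an image from a disk that already has all the OS packages installed
gcloud compute images create app-baked-image \
  --source-disk=base-builder-vm --source-disk-zone=us-central1-a

# Instance template that boots from the baked image (no startup-time package installs)
gcloud compute instance-templates create app-template \
  --image=app-baked-image --machine-type=e2-medium

# Managed instance group built from the template
gcloud compute instance-groups managed create app-mig \
  --template=app-template --size=3 --zone=us-central1-a
~~~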
Question #: 60
Topic #: 1
You need to set up Microsoft SQL Server on GCP. Management requires that there’s no downtime in case of a data center outage in any of the zones within a
GCP region. What should you do?
A. Configure a Cloud SQL instance with high availability enabled. B. Configure a Cloud Spanner instance with a regional instance configuration. C. Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets. D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
https://www.examtopics.com/discussions/google/view/6443-exam-professional-cloud-architect-topic-1-question-60/
D. Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
could also be A. Configure a Cloud SQL instance with high availability enabled.
Cloud SQL offers high availability configurations, it currently support Microsoft SQL Server
please see;
https://cloud.google.com/sql/docs/sqlserver/high-availability?_ga=2.30855355.-503483612.1582800507
The correct approach is D.
Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.
Here’s why this is the best option:
* SQL Server Always On Availability Groups: This solution provides high availability by automatically failing over to another node in the event of a failure. It's specifically designed for SQL Server and ensures minimal downtime in case of outages.
* Windows Failover Clustering: By configuring Windows Failover Clustering with Always On Availability Groups, you can achieve high availability by ensuring that the SQL Server can fail over to another node in case of a zone or node failure.
* Placing nodes in different zones: By deploying nodes in different zones within the same region, you ensure that your setup is protected from any potential zone-level outages. If one zone experiences a failure, the other zone can take over without downtime.
Question #: 49
Topic #: 1
Google Cloud Platform resources are managed hierarchically using organization, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?
A. The effective policy is determined only by the policy set at the node B. The effective policy is the policy set at the node and restricted by the policies of its ancestors C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors D. The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors
https://www.examtopics.com/discussions/google/view/6846-exam-professional-cloud-architect-topic-1-question-49/
C. The effective policy is the union of the policy set at the node and policies inherited from its ancestors
https://cloud.google.com/iam/docs/resource-hierarchy-access-control
Question #: 46
Topic #: 1
You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do?
A. Grant your colleague the IAM role of project Viewer B. Perform a rolling restart on the instance group C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys D. Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys
https://www.examtopics.com/discussions/google/view/6953-exam-professional-cloud-architect-topic-1-question-46/
C. Disable the health check for the instance group. Add his SSH key to the project-wide SSH Keys
The key element in C is disabling the health check, so that the servers won't keep being restarted automatically.
Before that, though, the first troubleshooting step is to check Cloud Console -> Instance template -> Metadata and see whether a startup script is defined there; if so, review it and possibly remove it, since a script in the metadata could be what keeps restarting the VMs.
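The SSH-key half of answer C can be done like this (the key file is a placeholder; note that add-metadata replaces the ssh-keys value, so the file should contain all project-wide keys, existing ones included):
~~~
# keys.txt contains lines of the form:  USERNAME:ssh-ed25519 AAAA... USERNAME
gcloud compute project-info add-metadata --metadata-from-file=ssh-keys=keys.txt
~~~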
Question #: 6
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company’s web hosting platform. Improvement to the QA/
Test processes accomplished an 80% reduction.
Which additional two approaches can you take to further reduce the rollbacks? (Choose two.)
A. Introduce a green-blue deployment model B. Replace the QA environment with canary releases C. Fragment the monolithic platform into microservices D. Reduce the platform's dependency on relational database systems E. Replace the platform's relational database systems with a NoSQL database
https://www.examtopics.com/discussions/google/view/54383-exam-professional-cloud-architect-topic-1-question-6/
A. Introduce a green-blue deployment model
C. Fragment the monolithic platform into microservices
https://circleci.com/blog/canary-vs-blue-green-downtime/
A. Introduce a green-blue deployment model: This approach involves having two identical environments for the platform, one that is currently live (blue environment) and another that is inactive (green environment). When a new deployment is ready, it is first deployed to the green environment where it can be tested and verified before traffic is switched over to it. This approach reduces the impact of any errors or issues that may arise during deployment since traffic is still being served by the currently live environment. If any issues are identified during testing, the deployment can be rolled back without any impact on users. Once the green environment is verified to be working correctly, traffic is switched over to it, and the blue environment becomes the inactive one. This approach is particularly useful for high-traffic applications where downtime during deployments is not acceptable.
B. Replace the QA environment with canary releases: In this approach, new deployments are first released to a small subset of users (usually 1-5%) before being released to the entire user base. This allows for any issues to be identified and resolved before the deployment is released to everyone. If any issues are identified, the deployment can be rolled back before it affects the entire user base. This approach reduces the risk of deploying faulty code to the entire user base and helps identify issues before they become widespread.
C. Fragmenting the monolithic platform into microservices could improve the overall reliability of the platform, but it may not necessarily reduce the number of unplanned rollbacks of erroneous production deployments. Fragmentation may introduce new challenges in managing and deploying the microservices.
D. Reducing the platform’s dependency on relational database systems could improve the platform’s scalability and performance, but it may not necessarily reduce the number of unplanned rollbacks of erroneous production deployments.
E. Similarly, replacing the platform’s relational database systems with a NoSQL database could improve the platform’s scalability and performance, but it may not necessarily reduce the number of unplanned rollbacks of erroneous production deployments.
In summary, the most effective approaches to reduce the number of unplanned rollbacks of erroneous production deployments would be to introduce a green-blue deployment model and replace the QA environment with canary releases.
Question #: 21
Topic #: 1
Your company’s user-feedback portal comprises a standard LAMP stack replicated across two zones. It is deployed in the us-central1 region and uses autoscaled managed instance groups on all layers, except the database. Currently, only a small group of select customers have access to the portal. The portal meets a
99.99% availability SLA under these conditions. However, next quarter, your company will be making the portal available to all users, including unauthenticated users. You need to develop a resiliency testing strategy to ensure the system maintains the SLA once they introduce additional user load.
What should you do?
A. Capture existing users input, and replay captured user load until autoscale is triggered on all layers. At the same time, terminate all resources in one of the zones B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones C. Expose the new system to a larger group of users, and increase group size each day until autoscale logic is triggered on all layers. At the same time, terminate random resources on both zones D. Capture existing users input, and replay captured user load until resource utilization crosses 80%. Also, derive estimated number of users based on existing user's usage of the app, and deploy enough resources to handle 200% of expected load
https://www.examtopics.com/discussions/google/view/7128-exam-professional-cloud-architect-topic-1-question-21/
B. Create synthetic random user input, replay synthetic load until autoscale logic is triggered on at least one layer, and introduce "chaos" to the system by terminating random resources on both zones
https://cloud.google.com/architecture/scalable-and-resilient-apps?hl=en#test_your_resilience
Test your resilience
It’s critical to test that your app responds to failures in the way you expect. The overarching theme is that the best way to avoid failure is to introduce failure and learn from it.
Simulating and introducing failures is complex. In addition to verifying the behavior of your app or service, you must also ensure that expected alerts are generated, and appropriate metrics are generated. We recommend a structured approach, where you introduce simple failures and then escalate.
For example, you might proceed as follows, validating and documenting behavior at each stage:
1. Introduce intermittent failures.
2. Block access to dependencies of the service.
3. Block all network communication.
4. Terminate hosts.
For details, see the Breaking your systems to make them unbreakable video from Google Cloud Next 2019.
If you’re using a service mesh like Istio to manage your app services, you can inject faults at the application layer instead of killing pods or machines, or you can inject corrupting packets at the TCP layer. You can introduce delays to simulate network latency or an overloaded upstream system. You can also introduce aborts, which mimic failures in upstream systems.
Question #: 55
Topic #: 1
Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage. B. Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage. C. Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage. D. Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.
https://www.examtopics.com/discussions/google/view/7043-exam-professional-cloud-architect-topic-1-question-55/
A. Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.
Transfer Appliance lets you quickly and securely transfer large amounts of data to Google Cloud Platform via a high capacity storage server that you lease from Google and ship to our datacenter. Transfer Appliance is recommended for data that exceeds 20 TB or would take more than a week to upload.
https://cloud.google.com/transfer-appliance/docs/2.2/overview
Question #: 74
Topic #: 1
You are tasked with building an online analytical processing (OLAP) marketing analytics and reporting tool. This requires a relational database that can operate on hundreds of terabytes of data. What is the Google-recommended tool for such applications?
A. Cloud Spanner, because it is globally distributed B. Cloud SQL, because it is a fully managed relational database C. Cloud Firestore, because it offers real-time synchronization across devices D. BigQuery, because it is designed for large-scale processing of tabular data
https://www.examtopics.com/discussions/google/view/11817-exam-professional-cloud-architect-topic-1-question-74/
D. BigQuery, because it is designed for large-scale processing of tabular data
4 reasons to choose BQ (Supports Petabytes of data)
- OLAP Data
- Relational DB (SQL)
- 100s of TB data
- Analytics and Reporting
Question #: 29
Topic #: 1
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database back-end. You want to store the credentials securely.
Where should you store the credentials?
A. In the source code B. In an environment variable C. In a secret management system D. In a config file that has restricted access through ACLs
https://www.examtopics.com/discussions/google/view/7200-exam-professional-cloud-architect-topic-1-question-29/
C. In a secret management system
https://cloud.google.com/kms/docs/secret-management
When designing a distributed application with microservices, it is important to ensure that credentials for accessing the database back-end are stored securely. The storage location should be accessible by the microservices but not by anyone else who is unauthorized.
Out of the options provided, the best option for storing credentials securely is C: In a secret management system. A secret management system is a centralized system that stores and manages sensitive information, such as passwords, API keys, and certificates. This system provides a secure way to manage the credentials, which can be accessed by authorized microservices as needed.
Using A: In the source code, is not a secure way to store credentials because the source code can be accessed by anyone who has access to the code repository. This includes not only authorized developers but also potentially unauthorized users who have gained access to the repository.
Using B: In an environment variable, is better than storing the credentials in source code but still not as secure as using a secret management system. Environment variables can be accessed by any process running on the same machine, so if an attacker gains access to the machine, they could potentially access the credentials stored in environment variables.
Using D: In a config file that has restricted access through ACLs, is better than storing the credentials in source code or environment variables, but it still has limitations. While the access control list (ACL) can restrict access to the config file, it may not be as secure as using a secret management system. Additionally, managing access control lists for multiple microservices can become cumbersome and error-prone.
In summary, when storing credentials for distributed microservices, it is best to use a centralized secret management system that provides secure and controlled access to the credentials.
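For example, with Secret Manager as the centralized store (the secret name, project, and service account are placeholders):
~~~
# Store the database credential once
echo -n "s3cr3t-db-password" | gcloud secrets create db-password --data-file=-

# Allow each microservice's service account to read it
gcloud secrets add-iam-policy-binding db-password \
  --member="serviceAccount:orders-svc@my-project.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

# At runtime a service fetches the latest version
gcloud secrets versions access latest --secret=db-password
~~~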
Question #: 67
Topic #: 1
You have a Python web application with many dependencies that requires 0.1 CPU cores and 128 MB of memory to operate in production. You want to monitor and maximize machine utilization. You also want to reliably deploy new versions of the application. Which set of steps should you take?
A. Perform the following: 1. Create a managed instance group with f1-micro type machines. 2. Use a startup script to clone the repository, check out the production branch, install the dependencies, and start the Python app. 3. Restart the instances to automatically deploy new production releases. B. Perform the following: 1. Create a managed instance group with n1-standard-1 type machines. 2. Build a Compute Engine image from the production branch that contains all of the dependencies and automatically starts the Python app. 3. Rebuild the Compute Engine image, and update the instance template to deploy new production releases. C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to 'IfNotPresent' in the staging namespace, and then promote it to the production namespace after testing. D. Perform the following: 1. Create a GKE cluster with n1-standard-4 type machines. 2. Build a Docker image from the master branch with all of the dependencies, and tag it with 'latest'. 3. Create a Kubernetes Deployment in the default namespace with the imagePullPolicy set to 'Always'. Restart the pods to automatically deploy new production releases.
https://www.examtopics.com/discussions/google/view/6890-exam-professional-cloud-architect-topic-1-question-67/
C. Perform the following: 1. Create a Google Kubernetes Engine (GKE) cluster with n1-standard-1 type machines. 2. Build a Docker image from the production branch with all of the dependencies, and tag it with the version number. 3. Create a Kubernetes Deployment with the imagePullPolicy set to ‘IfNotPresent’ in the staging namespace, and then promote it to the production namespace after testing.
C is correct: the image is tagged with a version number and pushed to Container Registry, and imagePullPolicy 'IfNotPresent' means a node reuses an image it has already pulled rather than re-pulling it on every restart.
C is the best choice. You can create a k8s cluster with just one node and use a different namespaces for staging and production. In staging, you will test the changes
It should be option C because, in the real world, GKE is the best solution for such a case. Furthermore, it's reliable, scalable, and flexible; at the least, it's the best option among the four.
Question #: 181
Topic #: 1
You have a Compute Engine managed instance group that adds and removes Compute Engine instances from the group in response to the load on your application. The instances have a shutdown script that removes REDIS database entries associated with the instance. You see that many database entries have not been removed, and you suspect that the shutdown script is the problem. You need to ensure that the commands in the shutdown script are run reliably every time an instance is shut down. You create a Cloud Function to remove the database entries. What should you do next?
A. Modify the shutdown script to wait for 30 seconds before triggering the Cloud Function. B. Do not use the Cloud Function. Modify the shutdown script to restart if it has not completed in 30 seconds. C. Set up a Cloud Monitoring sink that triggers the Cloud Function after an instance removal log message arrives in Cloud Logging. D. Modify the shutdown script to wait for 30 seconds and then publish a message to a Pub/Sub queue.
https://www.examtopics.com/discussions/google/view/80034-exam-professional-cloud-architect-topic-1-question-181/
C. Set up a Cloud Monitoring sink that triggers the Cloud Function after an instance removal log message arrives in Cloud Logging
C is the intended answer, but it is imprecise in one respect:
You cannot trigger a Cloud Function directly from a Cloud Monitoring sink. Instead, you can set up a Cloud Monitoring alert that sends notifications to a Pub/Sub topic, and then trigger the Cloud Function from that Pub/Sub topic.
In this scenario, you want to ensure that the commands in the shutdown script are run reliably every time an instance is shut down. One way to do this is by setting up a Cloud Monitoring sink that triggers a Cloud Function after an instance removal log message arrives in Cloud Logging. This will allow you to use the Cloud Function to perform the necessary tasks (such as removing database entries) when an instance is shut down, and it will ensure that these tasks are performed reliably and consistently.
Option A: Modifying the shutdown script to wait for 30 seconds before triggering the Cloud Function is not a reliable solution, as it relies on the shutdown script being able to run for at least 30 seconds before the instance is shut down.
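In practice, and in line with the note above that a sink cannot invoke a function directly, the wiring looks roughly like this: a log sink routes instance-deletion entries to Pub/Sub, and the Cloud Function is triggered from that topic (the topic, function name, and log filter are placeholders and depend on how the MIG deletes instances):
~~~
# Route instance-deletion audit log entries to a Pub/Sub topic
gcloud pubsub topics create instance-deletions
gcloud logging sinks create instance-deletion-sink \
  pubsub.googleapis.com/projects/my-project/topics/instance-deletions \
  --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"instances.delete"'

# Deploy the cleanup Cloud Function with the topic as its trigger
gcloud functions deploy cleanup-redis-entries \
  --runtime=python310 --trigger-topic=instance-deletions --region=us-central1
~~~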
Question #: 39
Topic #: 1
You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by proving that a message was sent by a specific user.
What should you do?
A. Tag messages client side with the originating user identifier and the destination user. B. Encrypt the message client side using block-based encryption with a shared key. C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key. D. Use a trusted certificate authority to enable SSL connectivity between the client application and the server.
https://www.examtopics.com/discussions/google/view/6844-exam-professional-cloud-architect-topic-1-question-39/
C. Use public key infrastructure (PKI) to encrypt the message client side using the originating user’s private key.
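Strictly speaking this is signing rather than encrypting: the sender signs each message with their private key and anyone can verify it with the matching public key. A minimal sketch with OpenSSL (key and file names are placeholders):
~~~
# One-time: generate the user's key pair (the private key never leaves the client)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out user.key
openssl pkey -in user.key -pubout -out user.pub

# Sender: sign the chat message with the private key
openssl dgst -sha256 -sign user.key -out message.sig message.txt

# Recipient: verify the signature with the sender's public key
openssl dgst -sha256 -verify user.pub -signature message.sig message.txt
~~~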
Question #: 38
Topic #: 1
You deploy your custom Java application to Google App Engine. It fails to deploy and gives you the following stack trace.
What should you do?
~~~
SHA1 digest error ….
~~~
A. Upload missing JAR files and redeploy your application.
B. Digitally sign all of your JAR files and redeploy your application
C. Recompile the CloakedServlet class using an MD5 hash instead of SHA1
https://www.examtopics.com/discussions/google/view/7209-exam-professional-cloud-architect-topic-1-question-38/
B. Digitally sign all of your JAR files and redeploy your application
- JAR signing and integrity: Digitally signing your JAR files ensures their authenticity and integrity. It adds a digital signature that verifies the origin and confirms that the file hasn’t been tampered with. This is crucial for security and prevents issues like the SHA1 digest error you’re encountering.
- What the error means: a SHA1 digest error during deployment indicates that the digests recorded in a signed JAR's manifest no longer match the JAR's contents (for example, after the archive was modified or repackaged). Re-signing the JARs regenerates consistent digests so the deployment can proceed.
Question #: 108
Topic #: 1
You are implementing a single Cloud SQL MySQL second-generation database that contains business-critical transaction data. You want to ensure that the minimum amount of data is lost in case of catastrophic failure. Which two features should you implement? (Choose two.)
A. Sharding B. Read replicas C. Binary logging D. Automated backups E. Semisynchronous replication
https://www.examtopics.com/discussions/google/view/56369-exam-professional-cloud-architect-topic-1-question-108/
C. Binary logging
D. Automated backups
Ans) C and D
Cloud SQL. If you use Cloud SQL, the fully managed Google Cloud MySQL database, you should enable automated backups and binary logging for your Cloud SQL instances. This allows you to perform a point-in-time recovery, which restores your database from a backup and recovers it to a fresh Cloud SQL instance
Binary Logging: Binary logging in MySQL records changes to the database. It can be used for backup and replication, and it’s essential for point-in-time recovery. With binary logging, you can roll your database forward to any point in time, minimizing data loss.
Automated Backups: Automated backups periodically take a snapshot of your database. In the event of a catastrophic failure, you can restore your database to the state it was in at the time of the last backup. This can also help minimize data loss.
While read replicas and semisynchronous replication can enhance availability and performance, they do not directly minimize data loss.
Also, you cannot create a read replica without enabling automated backups and binary logging.
Sharding can improve performance but it’s not directly aimed at data loss prevention.
https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
Prerequisites for creating a read replica
Before you can create a read replica of a primary Cloud SQL instance, the instance must meet the following requirements:
Automated backups must be enabled. Binary logging must be enabled which requires point-in-time recovery to be enabled. Learn more about the impact of these logs. At least one backup must have been created after binary logging was enabled. https://cloud.google.com/sql/docs/mysql/replication#requirements
Before being able to create a read replica, you have to make sure “binary logging and automated backup” are enabled. So picking only D or C without the other one makes no sense.
https://cloud.google.com/sql/docs/mysql/replication/create-replica
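Both features are enabled on the instance itself, for example (the instance name and backup window are placeholders):
~~~
# Turn on automated backups and binary logging (enables point-in-time recovery)
gcloud sql instances patch critical-txn-db \
  --backup-start-time=23:00 --enable-bin-log
~~~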
Question #: 159
Topic #: 1
A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform.
What should you do?
A. Help the engineer to convert his websocket code to use HTTP streaming B. Review the encryption requirements for websocket connections with the security team C. Meet with the cloud operations team and the engineer to discuss load balancer options D. Help the engineer redesign the application to use a distributed user session service that does not rely on websockets and HTTP sessions.
https://www.examtopics.com/discussions/google/view/8341-exam-professional-cloud-architect-topic-1-question-159/
C. Meet with the cloud operations team and the engineer to discuss load balancer options
https://cloud.google.com/load-balancing/docs/https#websocket_support
Session affinity for WebSockets works the same as for any other request. For information, see Session affinity.
Besides the reasons mentioned above for why A, B, and D are wrong, there are also these:
A and D are wrong because they are about changing the app, whereas the task says "You want to help him ensure his application will run properly on GCP" (not redesign or change it).
B is wrong because you don't need to "Review the encryption requirements for websocket connections with the security team" to make the application run properly.
Question #: 102
Topic #: 1
You are developing a globally scaled frontend for a legacy streaming backend data API. This API expects events in strict chronological order with no repeat data for proper processing.
Which products should you deploy to ensure guaranteed-once FIFO (first-in, first-out) delivery of data?
A. Cloud Pub/Sub alone B. Cloud Pub/Sub to Cloud Dataflow C. Cloud Pub/Sub to Stackdriver D. Cloud Pub/Sub to Cloud SQL
https://www.examtopics.com/discussions/google/view/6747-exam-professional-cloud-architect-topic-1-question-102/
B. Cloud Pub/Sub to Cloud Dataflow
"Pub/Sub doesn't provide guarantees about the order of message delivery. Strict message ordering can be achieved with buffering, often using Dataflow."
Question #: 123
Topic #: 1
You need to deploy a stateful workload on Google Cloud. The workload can scale horizontally, but each instance needs to read and write to the same POSIX filesystem. At high load, the stateful workload needs to support up to 100 MB/s of writes. What should you do?
A. Use a persistent disk for each instance. B. Use a regional persistent disk for each instance. C. Create a Cloud Filestore instance and mount it in each instance. D. Create a Cloud Storage bucket and mount it in each instance using gcsfuse.
https://www.examtopics.com/discussions/google/view/56384-exam-professional-cloud-architect-topic-1-question-123/
C. Create a Cloud Filestore instance and mount it in each instance.
The requirement is explicitly a POSIX filesystem. With gcsfuse, Cloud Storage still remains object storage; gcsfuse brings a lot of downsides compared with Filestore, and there is no indication in the question that a non-POSIX filesystem may be used.
https://cloud.google.com/storage/docs/gcs-fuse#differences-and-limitations
While Cloud Storage FUSE has a file system interface, it is not like an NFS or CIFS file system on the backend. Additionally, Cloud Storage FUSE is not POSIX compliant. For a POSIX file system product in Google Cloud, see Filestore.
When using Cloud Storage FUSE, be aware of its limitations and semantics, which are different than that of POSIX file systems. Cloud Storage FUSE should only be used within its capabilities.
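A sketch of answer C (the instance, share, and mount-point names are placeholders; a Basic SSD tier comfortably covers 100 MB/s of writes):
~~~
# Create the Filestore instance
gcloud filestore instances create shared-fs --zone=us-central1-a \
  --tier=BASIC_SSD --file-share=name=vol1,capacity=2560GB --network=name=default

# On each workload instance: mount the same NFS share
# (use the IP address reported by the create command in place of FILESTORE_IP)
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/shared && sudo mount -t nfs FILESTORE_IP:/vol1 /mnt/shared
~~~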
Question #: 107
Topic #: 1
You are monitoring Google Kubernetes Engine (GKE) clusters in a Cloud Monitoring workspace. As a Site Reliability Engineer (SRE), you need to triage incidents quickly. What should you do?
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies. B. Navigate the predefined dashboards in the Cloud Monitoring workspace, create custom metrics, and install alerting software on a Compute Engine instance. C. Write a shell script that gathers metrics from GKE nodes, publish these metrics to a Pub/Sub topic, export the data to BigQuery, and make a Data Studio dashboard. D. Create a custom dashboard in the Cloud Monitoring workspace for each incident, and then add metrics and create alert policies.
https://www.examtopics.com/discussions/google/view/56365-exam-professional-cloud-architect-topic-1-question-107/
A. Navigate the predefined dashboards in the Cloud Monitoring workspace, and then add metrics and create alert policies.
Yeah, creating a new dashboard for each incident doesn’t seem like the quickest option.
A is correct. Option D is highly inefficient and time-consuming. Creating individual dashboards for every incident is impractical and slows down the triage process.
(A dissenting comment: triaging “quickly” would call for custom dashboards, i.e. option D.)
Question #: 183
Topic #: 1
Your company recently acquired a company that has infrastructure in Google Cloud. Each company has its own Google Cloud organization. Each company is using a Shared Virtual Private Cloud (VPC) to provide network connectivity for its applications. Some of the subnets used by both companies overlap. In order for both businesses to integrate, the applications need to have private network connectivity. These applications are not on overlapping subnets. You want to provide connectivity with minimal re-engineering. What should you do?
A. Set up VPC peering and peer each Shared VPC together. B. Migrate the projects from the acquired company into your company's Google Cloud organization. Re-launch the instances in your companies Shared VPC. C. Set up a Cloud VPN gateway in each Shared VPC and peer Cloud VPNs. D. Configure SSH port forwarding on each application to provide connectivity between applications in the different Shared VPCs.
https://www.examtopics.com/discussions/google/view/79697-exam-professional-cloud-architect-topic-1-question-184/
https://www.examtopics.com/discussions/google/view/80075-exam-professional-cloud-architect-topic-1-question-183/
C. Set up a Cloud VPN gateway in each Shared VPC and peer Cloud VPNs.
VPC peering cannot be established between VPCs whose IP ranges overlap. C works because you can establish a VPN between the Shared VPCs and include only the applications’ IP ranges, which, as stated, do not overlap.
Question #: 97
Topic #: 1
Your company creates rendering software which users can download from the company website. Your company has customers all over the world. You want to minimize latency for all your customers. You want to follow Google-recommended practices.
How should you store the files?
A. Save the files in a Multi-Regional Cloud Storage bucket. B. Save the files in a Regional Cloud Storage bucket, one bucket per zone of the region. C. Save the files in multiple Regional Cloud Storage buckets, one bucket per zone per region. D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
https://www.examtopics.com/discussions/google/view/6466-exam-professional-cloud-architect-topic-1-question-97/
D. Save the files in multiple Multi-Regional Cloud Storage buckets, one bucket per multi-region.
There are three multi-regions: ASIA, EU, and US. To serve customers globally, there must be a multi-region bucket in each of these three locations.
Question #: 47
Topic #: 1
Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Google Kubernetes Engine (GKE) for workload orchestration. Parts of your architecture must also be PCI DSS-compliant. Which of the following is most accurate?
A. App Engine is the only compute platform on GCP that is certified for PCI DSS hosting. B. GKE cannot be used under PCI DSS because it is considered shared hosting. C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment. D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.
https://www.examtopics.com/discussions/google/view/54735-exam-professional-cloud-architect-topic-1-question-47/
C. GKE and GCP provide the tools you need to build a PCI DSS-compliant environment.
(An alternative answer argued in the discussion: D. All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.)
https://cloud.google.com/security/compliance/pci-dss
Question #: 110
Topic #: 1
Your company has announced that they will be outsourcing operations functions. You want to allow developers to easily stage new versions of a cloud-based application in the production environment and allow the outsourced operations team to autonomously promote staged versions to production. You want to minimize the operational overhead of the solution. Which Google Cloud product should you migrate to?
A. App Engine B. GKE On-Prem C. Compute Engine D. Google Kubernetes Engine
https://www.examtopics.com/discussions/google/view/56840-exam-professional-cloud-architect-topic-1-question-110/
A. App Engine
The answer should be A: only App Engine gives us a default service account that lets users deploy changes per project, whereas with GKE we may have to configure additional permissions for both the Dev and Operations teams to deploy changes.
https://cloud.google.com/appengine/docs/standard/php/service-account
Question #: 45
Topic #: 1
You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do?
A. Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back. B. Use Spinnaker to deploy builds to production and run tests on production deployments. C. Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout. D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
https://www.examtopics.com/discussions/google/view/8197-exam-professional-cloud-architect-topic-1-question-45/
D. Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.
Question #: 113
Topic #: 1
Your organization has decided to restrict the use of external IP addresses on instances to only approved instances. You want to enforce this requirement across all of your Virtual Private Clouds (VPCs). What should you do?
A. Remove the default route on all VPCs. Move all approved instances into a new subnet that has a default route to an internet gateway. B. Create a new VPC in custom mode. Create a new subnet for the approved instances, and set a default route to the internet gateway on this new subnet. C. Implement a Cloud NAT solution to remove the need for external IP addresses entirely. D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
https://www.examtopics.com/discussions/google/view/56576-exam-professional-cloud-architect-topic-1-question-113/
D. Set an Organization Policy with a constraint on constraints/compute.vmExternalIpAccess. List the approved instances in the allowedValues list.
https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address#disableexternalip
you might want to restrict external IP address so that only specific VM instances can use them. This option can help to prevent data exfiltration or maintain network isolation. Using an Organization Policy, you can restrict external IP addresses to specific VM instances with constraints to control use of external IP addresses for your VM instances within an organization or a project.
Question #: 194
Topic #: 1
Your company is planning to migrate their Windows Server 2022 from their on-premises data center to Google Cloud. You need to bring the licenses that are currently in use in on-premises virtual machines into the target cloud environment. What should you do?
A. 1. Create an image of the on-premises virtual machines and upload into Cloud Storage. 2. Import the image as a virtual disk on Compute Engine. B. 1. Create standard instances on Compute Engine. 2. Select as the OS the same Microsoft Windows version that is currently in use in the on-premises environment. C. 1. Create an image of the on-premises virtual machine. 2. Import the image as a virtual disk on Compute Engine. 3. Create a standard instance on Compute Engine, selecting as the OS the same Microsoft Windows version that is currently in use in the on-premises environment. 4. Attach a data disk that includes data that matches the created image. D. 1. Create an image of the on-premises virtual machines. 2. Import the image as a virtual disk on Compute Engine using --os=windows-2022-dc-v. 3. Create a sole-tenancy instance on Compute Engine that uses the imported disk as a boot disk.
https://www.examtopics.com/discussions/google/view/121314-exam-professional-cloud-architect-topic-1-question-194/
D. 1. Create an image of the on-premises virtual machines.
2. Import the image as a virtual disk on Compute Engine using --os=windows-2022-dc-v.
3. Create a sole-tenancy instance on Compute Engine that uses the imported disk as a boot disk.
You need a sole-tenant node to run Windows Server with your own license.
Many links were posted in the discussion, but this one explicitly states:
‘To create a VM instance that uses the custom BYOL image, you must provision the VM instance on a sole-tenant node.’
https://cloud.google.com/compute/docs/images/creating-custom-windows-byol-images#use_the_custom_image
Question #: 121
Topic #: 1
You need to deploy an application on Google Cloud that must run on a Debian Linux environment. The application requires extensive configuration in order to operate correctly. You want to ensure that you can install Debian distribution updates with minimal manual intervention whenever they become available. What should you do?
A. Create a Compute Engine instance template using the most recent Debian image. Create an instance from this template, and install and configure the application as part of the startup script. Repeat this process whenever a new Google-managed Debian image becomes available. B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates. C. Create an instance with the latest available Debian image. Connect to the instance via SSH, and install and configure the application on the instance. Repeat this process whenever a new Google-managed Debian image becomes available. D. Create a Docker container with Debian as the base image. Install and configure the application as part of the Docker image creation process. Host the container on Google Kubernetes Engine and restart the container whenever a new update is available.
https://www.examtopics.com/discussions/google/view/57270-exam-professional-cloud-architect-topic-1-question-121/
B. Create a Debian-based Compute Engine instance, install and configure the application, and use OS patch management to install available updates.
Question #: 86
Topic #: 1
You have been engaged by your client to lead the migration of their application infrastructure to GCP. One of their current problems is that the on-premises high performance SAN is requiring frequent and expensive upgrades to keep up with the variety of workloads that are identified as follows: 20 TB of log archives retained for legal reasons; 500 GB of VM boot/data volumes and templates; 500 GB of image thumbnails; 200 GB of customer session state data that allows customers to restart sessions even if off-line for several days.
Which of the following best reflects your recommendations for a cost-effective storage allocation?
A. Local SSD for customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes. B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes. C. Memcache backed by Cloud SQL for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails. D. Memcache backed by Persistent Disk SSD storage for customer session state data. Assorted local SSD-backed instances for VM boot/data volumes. Cloud Storage for log archives and thumbnails.
https://www.examtopics.com/discussions/google/view/7468-exam-professional-cloud-architect-topic-1-question-86/
B. Memcache backed by Cloud Datastore for the customer session state data. Lifecycle-managed Cloud Storage for log archives, thumbnails, and VM boot/data volumes.
Why not the others?
A is wrong: Local SSD is non-persistent, so it cannot hold session state (the question also requires data to survive customers being offline for several days).
C: Again, Local SSD cannot be used for boot volumes (it is non-persistent) and is meant for temporary data storage.
D: Same reason as C.
Why B?
B is what remains, but the open question was how to store boot/data volumes in Cloud Storage:
- Storing the other data types is straightforward; most comments were about the boot volumes.
- Boot volumes can be stored in Cloud Storage by creating a custom image.
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#selecting_image_storage_location
Question #: 33
Topic #: 1
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier and then on to the database tier. Traffic should not flow between the web and the database tier.
How should you configure the network?
A. Add each tier to a different subnetwork B. Set up software based firewalls on individual VMs C. Add tags to each tier and set up routes to allow the desired traffic flow D. Add tags to each tier and set up firewall rules to allow the desired traffic flow
https://www.examtopics.com/discussions/google/view/9033-exam-professional-cloud-architect-topic-1-question-33/
D. Add tags to each tier and set up firewall rules to allow the desired traffic flow
Refer to target filtering: https://cloud.google.com/solutions/best-practices-vpc-design
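As a rough illustration of option D (all names are assumptions, and only one of the needed rules is shown), a rule allowing traffic from instances tagged web to instances tagged api could be created with the Compute Engine Python client; an analogous rule would cover api to db, and no rule would exist between web and db.

```python
# Hypothetical sketch: allow web-tier -> API-tier traffic on port 8080 only.
# Project, network, tag names, and port are assumptions.
from google.cloud import compute_v1

firewall = compute_v1.Firewall(
    name="allow-web-to-api",
    network="global/networks/default",
    direction="INGRESS",
    source_tags=["web"],   # traffic may only originate from web-tier VMs
    target_tags=["api"],   # and may only reach API-tier VMs
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["8080"])],
)

operation = compute_v1.FirewallsClient().insert(
    project="my-project", firewall_resource=firewall
)
operation.result()  # wait for the rule to be created
```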
Question #: 35
Topic #: 1
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup.
Which two steps should you take? (Choose two.)
A. Load logs into Google BigQuery B. Load logs into Google Cloud SQL C. Import logs into Google Stackdriver D. Insert logs into Google Cloud Bigtable E. Upload log files into Google Cloud Storage
https://www.examtopics.com/discussions/google/view/54534-exam-professional-cloud-architect-topic-1-question-35/
A. Load logs into Google BigQuery
E. Upload log files into Google Cloud Storage
A. Load logs into Google BigQuery: BigQuery is Google Cloud’s serverless, highly scalable, and cost-effective multicloud data warehouse designed for data analytics. It’s ideal for storing and analyzing large volumes of log data (100 TB in this case). You can use BigQuery’s powerful SQL capabilities to run queries, generate reports, and gain insights from your logs.
E. Upload log files into Google Cloud Storage: Cloud Storage provides durable, scalable, and secure object storage. It’s perfect for storing your log data as a long-term disaster recovery backup. Cloud Storage offers different storage classes to optimize costs based on your data access frequency and retention needs.
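A hedged sketch of combining the two chosen steps: the raw log files are uploaded to a Cloud Storage bucket (E) and then loaded into BigQuery for analysis (A). The bucket, dataset, and table names, and the CSV/autodetect settings, are assumptions.

```python
# Hypothetical load of archived logs from Cloud Storage into BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,  # let BigQuery infer the log schema
)

load_job = client.load_table_from_uri(
    "gs://archive-logs/2024/*.csv",          # placeholder bucket/prefix
    "my-project.log_archive.events",         # placeholder dataset.table
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete
```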
Question #: 34
Topic #: 1
Your development team has installed a new Linux kernel module on the batch servers in Google Compute Engine (GCE) virtual machines (VMs) to speed up the nightly batch process. Two days after the installation, 50% of the batch servers failed the nightly batch run. You want to collect details on the failure to pass back to the development team.
Which three actions should you take? (Choose three.)
A. Use Stackdriver Logging to search for the module log entries B. Read the debug GCE Activity log using the API or Cloud Console C. Use gcloud or Cloud Console to connect to the serial console and observe the logs D. Identify whether a live migration event of the failed server occurred, using the activity log E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics F. Export a debug VM into an image, and run the image on a local server where kernel log messages will be displayed on the native screen
https://www.examtopics.com/discussions/google/view/54535-exam-professional-cloud-architect-topic-1-question-34/
A. Use Stackdriver Logging to search for the module log entries
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics
A. Use Stackdriver Logging to search for the module log entries = Check logs
C. Use gcloud or Cloud Console to connect to the serial console and observe the logs = Check the boot and kernel (GRUB) messages; remember, a new kernel module was installed.
E. Adjust the Google Stackdriver timeline to match the failure time, and observe the batch server metrics = Zoom into the time window when problem happened.
Question #: 101
Topic #: 1
Your company is building a new architecture to support its data-centric business focus. You are responsible for setting up the network. Your company’s mobile and web-facing applications will be deployed on-premises, and all data analysis will be conducted in GCP. The plan is to process and load 7 years of archived .csv files totaling 900 TB of data and then continue loading 10 TB of data daily. You currently have an existing 100-MB internet connection.
What actions will meet your company’s needs?
A. Compress and upload both archived files and files uploaded daily using the gsutil -m option. B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily. C. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish one Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily using the gsutil -m option. D. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a Cloud VPN Tunnel to VPC networks over the public internet, and compress and upload files daily.
https://www.examtopics.com/discussions/google/view/6306-exam-professional-cloud-architect-topic-1-question-101/
B. Lease a Transfer Appliance, upload archived files to it, and send it to Google to transfer archived data to Cloud Storage. Establish a connection with Google using a Dedicated Interconnect or Direct Peering connection and use it to upload files daily.
Dedicated Interconnect will be a new connection and will not run over the existing internet connection. With dedicated interconnect the existing ISP becomes irrelevant. If you were trying to use VPN the existing internet connection would be relevant
Question #: 143
Topic #: 1
Your company is designing its data lake on Google Cloud and wants to develop different ingestion pipelines to collect unstructured data from different sources.
After the data is stored in Google Cloud, it will be processed in several data pipelines to build a recommendation engine for end users on the website. The structure of the data retrieved from the source systems can change at any time. The data must be stored exactly as it was retrieved for reprocessing purposes in case the data structure is incompatible with the current processing pipelines. You need to design an architecture to support the use case after you retrieve the data. What should you do?
A. Send the data through the processing pipeline, and then store the processed data in a BigQuery table for reprocessing. B. Store the data in a BigQuery table. Design the processing pipelines to retrieve the data from the table. C. Send the data through the processing pipeline, and then store the processed data in a Cloud Storage bucket for reprocessing. D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
https://www.examtopics.com/discussions/google/view/60682-exam-professional-cloud-architect-topic-1-question-143/
D. Store the data in a Cloud Storage bucket. Design the processing pipelines to retrieve the data from the bucket.
The data needs to be stored as it is retrieved. This would mean that any processing should be done after it is stored.
Store raw unstructured data as-is in Cloud Storage, and then decide how to process it.
Classic data lake ELT (Extract -> Load -> Transform).
https://cloud.google.com/architecture/big-data-analytics/analytics-lakehouse
Question #: 82
Topic #: 1
Your customer wants to do resilience testing of their authentication layer. This consists of a regional managed instance group serving a public REST API that reads from and writes to a Cloud SQL instance.
What should you do?
A. Engage with a security company to run web scrapers that look for your users' authentication data on malicious websites and notify you if any is found. B. Deploy intrusion detection software to your virtual machines to detect and log unauthorized access. C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves. D. Configure a read replica for your Cloud SQL instance in a different zone than the master, and then manually trigger a failover while monitoring KPIs for your REST API.
https://www.examtopics.com/discussions/google/view/6709-exam-professional-cloud-architect-topic-1-question-82/
C. Schedule a disaster simulation exercise during which you can shut off all VMs in a zone to see how your application behaves.
https://cloud.google.com/solutions/scalable-and-resilient-apps
C: A well-designed application should scale seamlessly as demand increases and decreases, and be resilient enough to withstand the loss of one or more compute resources.
Resilience: designed to withstand the unexpected
A highly-available, or resilient, application is one that continues to function despite expected or unexpected failures of components in the system. If a single instance fails or an entire zone experiences a problem, a resilient application remains fault tolerant—continuing to function and repairing itself automatically if necessary. Because stateful information isn’t stored on any single instance, the loss of an instance—or even an entire zone—should not impact the application’s performance.
D. is not correct as this tests the resilience of the database (Cloud SQL) but not necessarily the authentication layer. The authentication layer might have separate components or dependencies that need to be tested under failure conditions.
Question #: 95
Topic #: 1
You have an application that makes HTTP requests to Cloud Storage. Occasionally the requests fail with HTTP status codes of 5xx and 429.
How should you handle these types of errors?
A. Use gRPC instead of HTTP for better performance. B. Implement retry logic using a truncated exponential backoff strategy. C. Make sure the Cloud Storage bucket is multi-regional for geo-redundancy. D. Monitor https://status.cloud.google.com/feed.atom and only make requests if Cloud Storage is not reporting an incident.
https://www.examtopics.com/discussions/google/view/7962-exam-professional-cloud-architect-topic-1-question-95/
B. Implement retry logic using a truncated exponential backoff strategy.
You should use exponential backoff to retry your requests when receiving errors with 5xx or 429 response codes from Cloud Storage.
https://cloud.google.com/storage/docs/request-rate
HTTP 408, 429, and 5xx response codes.
Exponential backoff algorithm
For requests that meet both the response and idempotency criteria, you should generally use truncated exponential backoff.
Truncated exponential backoff is a standard error handling strategy for network applications in which a client periodically retries a failed request with increasing delays between requests.
An exponential backoff algorithm retries requests exponentially, increasing the waiting time between retries up to a maximum backoff time. See the following workflow example to learn how exponential backoff works:
You make a request to Cloud Storage.
If the request fails, wait 1 + random_number_milliseconds seconds and retry the request.
If the request fails, wait 2 + random_number_milliseconds seconds and retry the request.
If the request fails, wait 4 + random_number_milliseconds seconds and retry the request.
And so on, up to a maximum_backoff time.
Continue waiting and retrying up to a maximum amount of time (deadline), but do not increase the maximum_backoff wait period between retries
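The Cloud Storage client libraries already apply this retry policy for you, but as a rough, generic illustration of the workflow above (the HTTP client, the set of retryable codes, and the timing constants are assumptions):

```python
# Minimal sketch of truncated exponential backoff with jitter.
import random
import time

import requests  # assumed HTTP client for the sketch

RETRYABLE = {408, 429, 500, 502, 503, 504}


def fetch_with_backoff(url, max_backoff=32.0, deadline=120.0):
    """GET `url`, retrying retryable errors with truncated exponential backoff."""
    start = time.monotonic()
    attempt = 0
    while True:
        resp = requests.get(url)
        if resp.status_code not in RETRYABLE:
            return resp  # success, or a non-retryable error

        # Wait min(2^attempt + random jitter, max_backoff) seconds.
        wait = min(2 ** attempt + random.random(), max_backoff)
        if time.monotonic() + wait - start > deadline:
            resp.raise_for_status()  # give up once the overall deadline is hit
        time.sleep(wait)
        attempt += 1
```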
Question #: 164
Topic #: 1
You are helping the QA team to roll out a new load-testing tool to test the scalability of your primary cloud services that run on Google Compute Engine with Cloud Bigtable.
Which three requirements should they include? (Choose three.)
A. Ensure that the load tests validate the performance of Cloud Bigtable B. Create a separate Google Cloud project to use for the load-testing environment C. Schedule the load-testing tool to regularly run against the production environment D. Ensure all third-party systems your services use is capable of handling high load E. Instrument the production services to record every transaction for replay by the load-testing tool F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
https://www.examtopics.com/discussions/google/view/54371-exam-professional-cloud-architect-topic-1-question-164/
B. Create a separate Google Cloud project to use for the load-testing environment
D. Ensure all third-party systems your services use is capable of handling high load
F. Instrument the load-testing tool and the target services with detailed logging and metrics collection
Selected Answer: BDF
A: No. Not needed since it’s a managed GCP product. It’ll scale to satisfy demand.
B: Yes. You could leave it in the same project as the app, but it’ll eventually be deployed to production and be a risk if anyone accidentally runs it against prod.
C: No. You mustn’t run load testing against prod.
D: Yes. The capability of the third party systems should be tested. They are another link in the chain and if they are not up to the task, they may be replaced.
E: No. There is no need to replay real transactions in the requests; this is a load test, not a behavior test.
F: Yes. Having detailed logs and metrics helps diagnosing problems during the tests.
Selected Answer: ABF
After reading https://cloud.google.com/bigtable/docs/performance:
A: “Run your typical workloads against Bigtable: Always run your own typical workloads against a Bigtable cluster when doing capacity planning, so you can figure out the best resource allocation for your applications.”
B. Create a separate Google Cloud project to use for the load-testing environment
F: The most standard element of testing; gather logs and metrics in the test environment to guide further scaling.
Question #: 166
Topic #: 1
Your company places a high value on being responsive and meeting customer needs quickly. Their primary business objectives are release speed and agility. You want to reduce the chance of security errors being accidentally introduced.
Which two actions can you take? (Choose two.)
A. Ensure every code check-in is peer reviewed by a security SME B. Use source code security analyzers as part of the CI/CD pipeline C. Ensure you have stubs to unit test all interfaces between components D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline E. Run a vulnerability security scanner as part of your continuous-integration /continuous-delivery (CI/CD) pipeline
https://www.examtopics.com/discussions/google/view/54372-exam-professional-cloud-architect-topic-1-question-166/
B. Use source code security analyzers as part of the CI/CD pipeline
E. Run a vulnerability security scanner as part of your continuous-integration /continuous-delivery (CI/CD) pipeline
An alternative answer given in the discussion: D and E.
D. Enable code signing and a trusted binary repository integrated with your CI/CD pipeline
E. Run a vulnerability security scanner as part of your continuous-integration/continuous-delivery (CI/CD) pipeline
Selected Answer: BE
B. Source Code Security Analyzers:
Integrating source code security analyzers into the CI/CD pipeline helps identify vulnerabilities in the codebase early in the development cycle. This ensures that security errors are caught and addressed before they make it into production.
E. Vulnerability Security Scanner:
Running a vulnerability scanner as part of the CI/CD pipeline identifies weaknesses in dependencies, configurations, and deployed artifacts. This provides an additional layer of security by detecting risks that might not be evident in the source code alone.
The argument for D & E:
Speed will not be hampered if images are verified and attested; those checks need to be there. If the speed argument were valid, why introduce a vulnerability scanner at all, since that also adds delay to deployment?
Also, selecting E may effectively include B, because some vulnerability-scanning tools also perform static analysis (SAST). So why choose both B and E?
D makes more sense alongside E: a trusted repository adds an additional layer of security with verified images and artifacts.
Question #: 30
Topic #: 1
A lead engineer wrote a custom tool that deploys virtual machines in the legacy data center. He wants to migrate the custom tool to the new cloud environment.
You want to advocate for the adoption of Google Cloud Deployment Manager.
What are two business risks of migrating to Cloud Deployment Manager? (Choose two.)
A. Cloud Deployment Manager uses Python B. Cloud Deployment Manager APIs could be deprecated in the future C. Cloud Deployment Manager is unfamiliar to the company's engineers D. Cloud Deployment Manager requires a Google APIs service account to run E. Cloud Deployment Manager can be used to permanently delete cloud resources F. Cloud Deployment Manager only supports automation of Google Cloud resources
https://www.examtopics.com/discussions/google/view/54125-exam-professional-cloud-architect-topic-1-question-30/
C. Cloud Deployment Manager is unfamiliar to the company’s engineers
F. Cloud Deployment Manager only supports automation of Google Cloud resources
Question #: 112
Topic #: 1
You are moving an application that uses MySQL from on-premises to Google Cloud. The application will run on Compute Engine and will use Cloud SQL. You want to cut over to the Compute Engine deployment of the application with minimal downtime and no data loss to your customers. You want to migrate the application with minimal modification. You also need to determine the cutover strategy. What should you do?
A. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Create a mysqldump of the on-premises MySQL server. 4. Upload the dump to a Cloud Storage bucket. 5. Import the dump into Cloud SQL. 6. Modify the source code of the application to write queries to both databases and read from its local database. 7. Start the Compute Engine application. 8. Stop the on-premises application. B. 1. Set up Cloud SQL proxy and MySQL proxy. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Stop the on-premises application. 6. Start the Compute Engine application. C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance. D. 1. Stop the on-premises application. 2. Create a mysqldump of the on-premises MySQL server. 3. Upload the dump to a Cloud Storage bucket. 4. Import the dump into Cloud SQL. 5. Start the application on Compute Engine.
https://www.examtopics.com/discussions/google/view/56692-exam-professional-cloud-architect-topic-1-question-112/
C. 1. Set up Cloud VPN to provide private network connectivity between the Compute Engine application and the on-premises MySQL server. 2. Stop the on-premises application. 3. Start the Compute Engine application, configured to read and write to the on-premises MySQL server. 4. Create the replication configuration in Cloud SQL. 5. Configure the source database server to accept connections from the Cloud SQL replica. 6. Finalize the Cloud SQL replica configuration. 7. When replication has been completed, stop the Compute Engine application. 8. Promote the Cloud SQL replica to a standalone instance. 9. Restart the Compute Engine application, configured to read and write to the Cloud SQL standalone instance.
External replica promotion migration
In the migration strategy of external replica promotion, you create an external database replica and synchronize the existing data to that replica. This can happen with minimal downtime to the existing database.
When you have a replica database, the two databases have different roles that are referred to in this document as primary and replica.
After the data is synchronized, you promote the replica to be the primary in order to move the management layer with minimal impact to database uptime.
In Cloud SQL, an easy way to accomplish the external replica promotion is to use the automated migration workflow. This process automates many of the steps that are needed for this type of migration.
Question #: 88
Topic #: 1
You are using Cloud CDN to deliver static HTTP(S) website content hosted on a Compute Engine instance group. You want to improve the cache hit ratio.
What should you do?
A. Customize the cache keys to omit the protocol from the key. B. Shorten the expiration time of the cached objects. C. Make sure the HTTP(S) header "Cache-Region" points to the closest region of your users. D. Replicate the static content in a Cloud Storage bucket. Point Cloud CDN toward a load balancer on that bucket.
https://www.examtopics.com/discussions/google/view/9188-exam-professional-cloud-architect-topic-1-question-88/
A. Customize the cache keys to omit the protocol from the key.
https://cloud.google.com/cdn/docs/caching#cache-keys
Cache Keys and Protocols: Cloud CDN uses cache keys to identify and store content in its cache. By default, the protocol (HTTP or HTTPS) is included in the cache key. This means that the same content served over HTTP and HTTPS will be cached separately, reducing the cache hit ratio.
Omitting the Protocol: Customizing the cache keys to omit the protocol allows Cloud CDN to treat HTTP and HTTPS requests for the same content as identical. This increases the chance of a cache hit, as the CDN can serve the cached content regardless of the protocol used in the request.
Improved Cache Hit Ratio: By consolidating the cache entries for HTTP and HTTPS versions of the content, you effectively increase the cache hit ratio. This leads to better performance, reduced latency, and lower costs.
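A hedged sketch of what option A can look like with the Compute Engine Python client: patch the backend service behind the load balancer so its CDN cache keys no longer include the protocol, letting HTTP and HTTPS requests share cache entries. The project and backend service names are placeholders; the console or gcloud achieves the same result.

```python
# Hypothetical cache-key change; project and backend service names are assumptions.
from google.cloud import compute_v1

patch_body = compute_v1.BackendService(
    cdn_policy=compute_v1.BackendServiceCdnPolicy(
        cache_key_policy=compute_v1.CacheKeyPolicy(
            include_protocol=False,   # HTTP and HTTPS now map to the same key
            include_host=True,
            include_query_string=True,
        )
    )
)

operation = compute_v1.BackendServicesClient().patch(
    project="my-project",
    backend_service="web-backend-service",
    backend_service_resource=patch_body,
)
operation.result()  # wait for the patch to complete
```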
Question #: 69
Topic #: 1
You are running a cluster on Kubernetes Engine (GKE) to serve a web application. Users are reporting that a specific part of the application is not responding anymore. You notice that all pods of your deployment keep restarting after 2 seconds. The application writes logs to standard output. You want to inspect the logs to find the cause of the issue. Which approach can you take?
A. Review the Stackdriver logs for each Compute Engine instance that is serving as a node in the cluster. B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application. C. Connect to the cluster using gcloud credentials and connect to a container in one of the pods to read the logs. D. Review the Serial Port logs for each Compute Engine instance that is serving as a node in the cluster.
https://www.examtopics.com/discussions/google/view/6892-exam-professional-cloud-architect-topic-1-question-69/
B. Review the Stackdriver logs for the specific GKE container that is serving the unresponsive part of the application.
Stackdriver Logging seems to be enabled by default for GKE.
Looking here:
https://cloud.google.com/monitoring/kubernetes-engine/legacy-stackdriver/logging
For container and system logs, GKE deploys a per-node logging agent that reads container logs, adds helpful metadata, and then stores them. The logging agent checks for container logs in the following sources:
Standard output and standard error logs from containerized processes
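A hedged sketch of pulling those container logs programmatically with the Cloud Logging Python client (the predefined GKE log views in the console use the same k8s_container resource type). The project, cluster, namespace, and container names are placeholders.

```python
# Hypothetical query for one container's stdout/stderr log entries.
from google.cloud import logging  # Cloud Logging client, not the stdlib module

client = logging.Client(project="my-project")

log_filter = (
    'resource.type="k8s_container" '
    'resource.labels.cluster_name="my-cluster" '
    'resource.labels.namespace_name="default" '
    'resource.labels.container_name="frontend" '
    'severity>=ERROR'
)

for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    print(entry.timestamp, entry.payload)
```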
Question #: 144
Topic #: 1
You are responsible for the Google Cloud environment in your company. Multiple departments need access to their own projects, and the members within each department will have the same project responsibilities. You want to structure your Google Cloud environment for minimal maintenance and maximum overview of IAM permissions as each department’s projects start and end. You want to follow Google-recommended practices. What should you do?
A. Grant all department members the required IAM permissions for their respective projects. B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders. C. Create a folder per department and grant the respective members of the department the required IAM permissions at the folder level. Structure all projects for each department under the respective folders. D. Create a Google Group per department and add all department members to their respective groups. Grant each group the required IAM permissions for their respective projects.
https://www.examtopics.com/discussions/google/view/60743-exam-professional-cloud-architect-topic-1-question-144/
B. Create a Google Group per department and add all department members to their respective groups. Create a folder per department and grant the respective group the required IAM permissions at the folder level. Add the projects under the respective folders.
https://cloud.google.com/resource-manager/docs/access-control-folders#best-practices-folders-iam
Use groups whenever possible to manage principals.
https://cloud.google.com/resource-manager/docs/creating-managing-folders
A folder can contain projects, other folders, or a combination of both. Organizations can use folders to group projects under the organization node in a hierarchy. For example, your organization might contain multiple departments, each with its own set of Google Cloud resources. Folders allow you to group these resources on a per-department basis.
Question #: 177
Topic #: 1
Your company has just recently activated Cloud Identity to manage users. The Google Cloud Organization has been configured as well. The security team needs to secure projects that will be part of the Organization. They want to prohibit IAM users outside the domain from gaining permissions from now on. What should they do?
A. Configure an organization policy to restrict identities by domain. B. Configure an organization policy to block creation of service accounts. C. Configure Cloud Scheduler to trigger a Cloud Function every hour that removes all users that don't belong to the Cloud Identity domain from all projects. D. Create a technical user (e.g., crawler@yourdomain.com), and give it the project owner role at root organization level. Write a bash script that: • Lists all the IAM rules of all projects within the organization. • Deletes all users that do not belong to the company domain. Create a Compute Engine instance in a project within the Organization and configure gcloud to be executed with technical user credentials. Configure a cron job that executes the bash script every hour.
https://www.examtopics.com/discussions/google/view/68690-exam-professional-cloud-architect-topic-1-question-177/
A. Configure an organization policy to restrict identities by domain.
https://cloud.google.com/resource-manager/docs/organization-policy/restricting-domains
Domain restricted sharing lets you limit resource sharing based on a domain or organization resource. When domain restricted sharing is active, only principals that belong to allowed domains or organizations can be granted IAM roles in your Google Cloud organization.
Question #: 75
Topic #: 1
You have deployed an application to Google Kubernetes Engine (GKE), and are using the Cloud SQL proxy container to make the Cloud SQL database available to the services running on Kubernetes. You are notified that the application is reporting database connection issues. Your company policies require a post-mortem. What should you do?
A. Use gcloud sql instances restart. B. Validate that the Service Account used by the Cloud SQL proxy container still has the Cloud Build Editor role. C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for (GKE) and Cloud SQL. D. In the GCP Console, navigate to Cloud SQL. Restore the latest backup. Use kubectl to restart all pods.
https://www.examtopics.com/discussions/google/view/6452-exam-professional-cloud-architect-topic-1-question-75/
C. In the GCP Console, navigate to Stackdriver Logging. Consult logs for (GKE) and Cloud SQL.
A post-mortem always includes log analysis.
Question #: 128
Topic #: 1
You are implementing the infrastructure for a web service on Google Cloud. The web service needs to receive and store the data from 500,000 requests per second. The data will be queried later in real time, based on exact matches of a known set of attributes. There will be periods where the web service will not receive any requests. The business wants to keep costs low. Which web service platform and database should you use for the application?
A. Cloud Run and BigQuery B. Cloud Run and Cloud Bigtable C. A Compute Engine autoscaling managed instance group and BigQuery D. A Compute Engine autoscaling managed instance group and Cloud Bigtable
https://www.examtopics.com/discussions/google/view/56612-exam-professional-cloud-architect-topic-1-question-128/
B. Cloud Run and Cloud Bigtable
Any correct answer must involve Cloud Bigtable over BigQuery, since Bigtable is optimized for heavy write loads. That leaves B and D. I would suggest B because it is lower cost (“The business wants to keep costs low”).
Additionally, the data must be stored immediately and queried by exact match; the question is not about analytics, so Bigtable fits.
Occasionally there will be no requests, so Cloud Run will scale to zero and keep costs down.
we are talking about a predefined set of queries. For any predefined list of (simple) queries, we use Bigtable, and for any (complex) queries that we do not know ahead of time, we use BigQuery.
At first I thought Cloud Run could not handle this request rate and then chose D. After a little bit of research on the docs I changed my mind to B.
On per-instance concurrency, the documentation clearly says:
> By default each Cloud Run instance can receive up to 80 requests at the same time; you can increase this to a maximum of 1000
https://cloud.google.com/run/docs/about-concurrency
The default maximum number of autoscaling instances is 100, which can be raised subject to regional quota. Even with the defaults, that is 100 * 1000 = 100,000 concurrent request slots; with short-lived requests (and a higher max-instances setting if needed), this supports the 500,000 requests-per-second target.
https://cloud.google.com/run/docs/about-instance-autoscaling
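A minimal sketch of the write path such a Cloud Run service might use: each request is persisted as one Bigtable row keyed by an identifier that supports the later exact-match lookups. The project, instance, table, column family, and row-key scheme are assumptions.

```python
# Hypothetical write path for the Cloud Run service; names are placeholders.
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("events-instance").table("events")


def store_event(event_id: str, attributes: dict) -> None:
    """Persist one request as a single row keyed by event_id."""
    row = table.direct_row(event_id)
    for name, value in attributes.items():
        # One cell per attribute in the 'attrs' column family.
        row.set_cell("attrs", name, str(value))
    row.commit()


# Later, an exact-match lookup is a direct row read:
# row = table.read_row("some-event-id")
```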
Question #: 41
Topic #: 1
Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?
A. Hash all data using SHA256 B. Encrypt all data using elliptic curve cryptography C. De-identify the data with the Cloud Data Loss Prevention API D. Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers
https://www.examtopics.com/discussions/google/view/11803-exam-professional-cloud-architect-topic-1-question-41/
C. De-identify the data with the Cloud Data Loss Prevention API
https://cloud.google.com/dlp
The recommended approach for sanitizing data of personally identifiable information or payment card information before storing it in Cloud Bigtable is option C: De-identify the data with the Cloud Data Loss Prevention API.
The Cloud Data Loss Prevention (DLP) API is a powerful tool that allows you to automatically discover, classify, and redact sensitive data in your organization. It uses advanced machine learning techniques to accurately identify and protect a wide range of sensitive data types, including personal information such as names, addresses, phone numbers, and payment card information.
Using the DLP API to de-identify your data before storing it in Cloud Bigtable is the most effective way to ensure that sensitive information is protected and not accessible to unauthorized users.
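A hedged sketch of the de-identification call with the Cloud DLP Python client, run before the conversation text is written to Bigtable. The project, the infoTypes list, and the replace-with-infoType transformation are assumptions; Cloud DLP supports many other transformations (masking, tokenization, and so on).

```python
# Hypothetical de-identification step; project and infoTypes are assumptions.
from google.cloud import dlp_v2

dlp = dlp_v2.DlpServiceClient()
parent = "projects/my-project/locations/global"


def deidentify(text: str) -> str:
    """Replace detected PII/payment data with its infoType name."""
    response = dlp.deidentify_content(
        request={
            "parent": parent,
            "inspect_config": {
                "info_types": [
                    {"name": "EMAIL_ADDRESS"},
                    {"name": "PHONE_NUMBER"},
                    {"name": "CREDIT_CARD_NUMBER"},
                ]
            },
            "deidentify_config": {
                "info_type_transformations": {
                    "transformations": [
                        {"primitive_transformation": {"replace_with_info_type_config": {}}}
                    ]
                }
            },
            "item": {"value": text},
        }
    )
    return response.item.value
```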
Question #: 174
Topic #: 1
You are working with a data warehousing team that performs data analysis. The team needs to process data from external partners, but the data contains personally identifiable information (PII). You need to process and store the data without storing any of the PII data. What should you do?
A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery. B. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, store all non-PII data in BigQuery and store all PII data in a Cloud Storage bucket that has a retention policy set. C. Ask the external partners to upload all data on Cloud Storage. Configure Bucket Lock for the bucket. Create a Dataflow pipeline to read the data from the bucket. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery. D. Ask the external partners to import all data in your BigQuery dataset. Create a dataflow pipeline to copy the data into a new table. As part of the Dataflow bucket, skip all data in columns that have PII data
https://www.examtopics.com/discussions/google/view/68685-exam-professional-cloud-architect-topic-1-question-174/
A. Create a Dataflow pipeline to retrieve the data from the external sources. As part of the pipeline, use the Cloud Data Loss Prevention (Cloud DLP) API to remove any PII data. Store the result in BigQuery.
Option C seems plausible, but it has two problems: PII is stored in Cloud Storage in the first place, and it is then improperly retained.
C is wrong because the PII data is uploaded and the bucket is locked, which means the data cannot be deleted.
B and D are wrong because they do not use Cloud Data Loss Prevention to protect the data.
Question #: 114
Topic #: 1
Your company uses the Firewall Insights feature in the Google Network Intelligence Center. You have several firewall rules applied to Compute Engine instances.
You need to evaluate the efficiency of the applied firewall ruleset. When you bring up the Firewall Insights page in the Google Cloud Console, you notice that there are no log rows to display. What should you do to troubleshoot the issue?
A. Enable Virtual Private Cloud (VPC) flow logging. B. Enable Firewall Rules Logging for the firewall rules you want to monitor. C. Verify that your user account is assigned the compute.networkAdmin Identity and Access Management (IAM) role. D. Install the Google Cloud SDK, and verify that there are no Firewall logs in the command line output.
https://www.examtopics.com/discussions/google/view/56375-exam-professional-cloud-architect-topic-1-question-114/
B. Enable Firewall Rules Logging for the firewall rules you want to monitor.
When you create a firewall rule, there is an option to turn firewall rule logging on or off; it is off by default.
To get Firewall Insights or view the logs for a specific firewall rule, you need to enable logging when creating the rule, or enable it later by editing the rule.
https://cloud.google.com/network-intelligence-center/docs/firewall-insights/how-to/using-firewall-insights#enabling-fw-rules-logging
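As a rough sketch, Firewall Rules Logging can also be turned on for an existing rule programmatically (the project and rule names are placeholders); the console toggle or gcloud achieves the same thing.

```python
# Hypothetical: enable logging on one firewall rule so Firewall Insights
# has log rows to analyze. Names are placeholders.
from google.cloud import compute_v1

patch_body = compute_v1.Firewall(
    log_config=compute_v1.FirewallLogConfig(enable=True)
)

operation = compute_v1.FirewallsClient().patch(
    project="my-project",
    firewall="allow-web-to-api",
    firewall_resource=patch_body,
)
operation.result()  # wait for the change to apply
```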
Question #: 186
Topic #: 1
Your company uses Google Kubernetes Engine (GKE) as a platform for all workloads. Your company has a single large GKE cluster that contains batch, stateful, and stateless workloads. The GKE cluster is configured with a single node pool with 200 nodes. Your company needs to reduce the cost of this cluster but does not want to compromise availability. What should you do?
A. Create a second GKE cluster for the batch workloads only. Allocate the 200 original nodes across both clusters. B. Configure CPU and memory limits on the namespaces in the cluster. Configure all Pods to have a CPU and memory limits. C. Configure a HorizontalPodAutoscaler for all stateless workloads and for all compatible stateful workloads. Configure the cluster to use node auto scaling. D. Change the node pool to use preemptible VMs.
https://www.examtopics.com/discussions/google/view/79736-exam-professional-cloud-architect-topic-1-question-186/
C. Configure a HorizontalPodAutoscaler for all stateless workloads and for all compatible stateful workloads. Configure the cluster to use node auto scaling.
A: Is not necessary because you can have multiple node pools with different configurations.
B: Optimizes resource usage of CPU/memory in your existing node pool but does not necessarily improve cost - still an option that should be considered.
C: This looks really good. Autoscaling workloads and the node pools makes your whole infrastructure more elastic and gives you the option to rely on the same node pool.
D: This might not be a good option for every type of workload. Batch and stateless workloads can often handle this quite well, but stateful workloads are not well-suited for operation on preemptible VMs.
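A hedged sketch of the HorizontalPodAutoscaler part of option C using the official Kubernetes Python client (an equivalent YAML manifest applied with kubectl is more common). The Deployment name, namespace, replica bounds, and CPU target are assumptions; node auto scaling itself is enabled on the GKE node pool, not through this API.

```python
# Hypothetical HPA for one stateless Deployment; names and targets are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

hpa = client.V2HorizontalPodAutoscaler(
    api_version="autoscaling/v2",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="frontend-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="frontend"
        ),
        min_replicas=2,
        max_replicas=20,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=60
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```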
Question #: 32
Topic #: 1
You have created several pre-emptible Linux virtual machine instances using Google Compute Engine. You want to properly shut down your application before the virtual machines are preempted.
What should you do?
A. Create a shutdown script named k99.shutdown in the /etc/rc.6.d/ directory B. Create a shutdown script registered as a xinetd service in Linux and configure a Stackdriver endpoint check to call the service C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance D. Create a shutdown script, registered as a xinetd service in Linux, and use the gcloud compute instances add-metadata command to specify the service URL as the value for a new metadata entry with the key shutdown-script-url
https://www.examtopics.com/discussions/google/view/7202-exam-professional-cloud-architect-topic-1-question-32/
C. Create a shutdown script and use it as the value for a new metadata entry with the key shutdown-script in the Cloud Platform Console when you create the new virtual machine instance
https://cloud.google.com/compute/docs/shutdownscript
Question #: 58
Topic #: 1
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
A. Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours. B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours. C. Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity. D. Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.
https://www.examtopics.com/discussions/google/view/6889-exam-professional-cloud-architect-topic-1-question-58/
B. Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.
“When should you use a signed URL? In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage” “Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource”
https://cloud.google.com/storage/docs/access-control/signed-urls
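A minimal sketch of generating such a URL with the Cloud Storage Python client. The bucket and object names are placeholders, and the code assumes it runs with a service account that can sign (or that a signing key is supplied).

```python
# Hypothetical: issue a 24-hour upload URL for a user without a Google account.
import datetime

from google.cloud import storage

client = storage.Client(project="my-project")
blob = client.bucket("painting-uploads").blob("uploads/user-42/image.jpg")

upload_url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(hours=24),  # link stops working after 24 hours
    method="PUT",
    content_type="image/jpeg",
)
print(upload_url)  # hand this to the tester; no Google account required
```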
Question #: 182
Topic #: 1
You are managing several projects on Google Cloud and need to interact on a daily basis with BigQuery, Bigtable, and Kubernetes Engine using the gcloud CLI tool. You are travelling a lot and work on different workstations during the week. You want to avoid having to manage the gcloud CLI manually. What should you do?
A. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud. B. Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to always use the same gcloud installation when interacting with Google Cloud. C. Install gcloud on all of your workstations. Run the command gcloud components auto-update on each workstation D. Use a package manager to install gcloud on your workstations instead of installing it manually.
https://www.examtopics.com/discussions/google/view/80035-exam-professional-cloud-architect-topic-1-question-182/
A. Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud.
C and D are out: they involve a lot of work on every workstation, and the requirement is to avoid managing the gcloud CLI manually.
B is not a cost-effective option, and in the end you are still managing a gcloud installation manually. So A is the only correct answer left.
Question #: 136
Topic #: 1
Your company is developing a new application that will allow globally distributed users to upload pictures and share them with other selected users. The application will support millions of concurrent users. You want to allow developers to focus on just building code without having to create and maintain the underlying infrastructure. Which service should you use to deploy the application?
A. App Engine B. Cloud Endpoints C. Compute Engine D. Google Kubernetes Engine
https://www.examtopics.com/discussions/google/view/56754-exam-professional-cloud-architect-topic-1-question-136/
A. App Engine
Question #: 99
Topic #: 1
You are deploying a PHP App Engine Standard service with Cloud SQL as the backend. You want to minimize the number of queries to the database.
What should you do?
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL. B. Set the memcache service level to dedicated. Create a cron task that runs every minute to populate the cache with keys containing query results. C. Set the memcache service level to shared. Create a cron task that runs every minute to save all expected queries to a key called "cached_queries". D. Set the memcache service level to shared. Create a key called "cached_queries", and return database values from the key before using a query to Cloud SQL.
https://www.examtopics.com/discussions/google/view/7377-exam-professional-cloud-architect-topic-1-question-99/
A. Set the memcache service level to dedicated. Create a key from the hash of the query, and return database values from memcache before issuing a query to Cloud SQL
A dedicated memcache is preferable to shared unless the exam explicitly states cost-effectiveness as the objective, so options C and D are ruled out.
Between A and B, option B repopulates the cache every minute, which is overkill. The reasonable option left is A, which balances performance and cost.
https://cloud.google.com/appengine/docs/standard/php/memcache/using
Question #: 189
Topic #: 1
You want to store critical business information in Cloud Storage buckets. The information is regularly changed, but previous versions need to be referenced on a regular basis. You want to ensure that there is a record of all changes to any information in these buckets. You want to ensure that accidental edits or deletions can be easily rolled back. Which feature should you enable?
A. Bucket Lock B. Object Versioning C. Object change notification D. Object Lifecycle Management
https://www.examtopics.com/discussions/google/view/80304-exam-professional-cloud-architect-topic-1-question-189/
B. Object Versioning
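A minimal sketch of enabling the feature on an existing bucket (the bucket name is a placeholder):
gsutil versioning set on gs://my-critical-data-bucket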
Question #: 146
Topic #: 1
Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity, the overall cost, and database load. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?
A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage. B. Use the Data Transfer appliance to perform an offline migration. C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage. D. Compress the data and upload it with gsutil -m to enable multi-threaded copy.
https://www.examtopics.com/discussions/google/view/60720-exam-professional-cloud-architect-topic-1-question-146/
B. Use the Data Transfer appliance to perform an offline migration.
Question #: 25
Topic #: 1
During a high traffic portion of the day, one of your relational databases crashes, but the replica is never promoted to a master. You want to avoid this in the future.
What should you do?
A. Use a different database B. Choose larger instances for your database C. Create snapshots of your database more regularly D. Implement routinely scheduled failovers of your databases
https://www.examtopics.com/discussions/google/view/7118-exam-professional-cloud-architect-topic-1-question-25/
D. Implement routinely scheduled failovers of your databases
In order to avoid a similar situation in the future where a replica is never promoted to a master during a high traffic portion of the day, the best option is to implement routinely scheduled failovers of your databases.
Option A, using a different database, may not solve the issue as other databases may have similar issues with replica promotion during high traffic periods. Option B, choosing larger instances for your database, may improve performance but will not necessarily prevent a replica from being promoted to a master. Option C, creating snapshots of your database more regularly, is a good practice for backup and recovery purposes, but it does not address the issue of replica promotion.
Implementing routinely scheduled failovers of your databases is a proactive approach that ensures that replicas are periodically promoted to masters, allowing for more reliable database performance. This can be achieved through the use of automated failover mechanisms, such as Amazon RDS Multi-AZ or Cloud SQL High Availability, which automatically promote a replica to a master in the event of a failure.
In addition to implementing routinely scheduled failovers, it is important to ensure that your database environment is properly configured for high availability and resilience. This may involve the use of load balancing, redundant network connections, and multiple availability zones. Regular monitoring and testing of your database environment can also help identify potential issues before they become critical.
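For Cloud SQL specifically, a minimal sketch of enabling the HA configuration and of triggering a manual failover as a routine test (the instance name is a placeholder):
gcloud sql instances patch my-db-instance --availability-type=REGIONAL
gcloud sql instances failover my-db-instance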
Question #: 151
Topic #: 1
Your company has a support ticketing solution that uses App Engine Standard. The project that contains the App Engine application already has a Virtual Private
Cloud (VPC) network fully connected to the company’s on-premises environment through a Cloud VPN tunnel. You want to enable the App Engine application to communicate with a database that is running in the company’s on-premises environment. What should you do?
A. Configure private Google access for on-premises hosts only. B. Configure private Google access. C. Configure private services access. D. Configure serverless VPC access.
https://www.examtopics.com/discussions/google/view/60436-exam-professional-cloud-architect-topic-1-question-151/
D. Configure serverless VPC access.
https://cloud.google.com/vpc/docs/serverless-vpc-access
Serverless VPC Access makes it possible for you to connect directly to your Virtual Private Cloud (VPC) network from serverless environments such as Cloud Run, App Engine, or Cloud Run functions. Configuring Serverless VPC Access allows your serverless environment to send requests to your VPC network by using internal DNS and internal IP addresses
Answer is D. Here is the explanation, since I didn't see a good one elsewhere:
1. We have a VPC.
2. We have an on-premises DB.
3. We have App Engine (which runs on an isolated network that does not belong to the VPC).
4. We can connect the VPC to the on-premises network using Cloud VPN, which is the main purpose of Cloud VPN (to simplify this answer).
5. Now, how do we connect App Engine, which is isolated from the VPC, so that it can reach the on-premises DB directly (no public IP, only private IP)? We need something that gives access to the VPC, then the VPN, then the on-premises DB. That is Serverless VPC Access.
6. So the flow is: App Engine -> Serverless VPC Access -> Cloud VPN -> on-premises DB over private IP.
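A minimal sketch of the connector setup (the connector name, region, VPC name, and IP range are placeholders; the App Engine service then references the connector from its app.yaml via the vpc_access_connector setting):
gcloud compute networks vpc-access connectors create app-connector --region=us-central1 --network=my-vpc --range=10.8.0.0/28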
Question #: 117
Topic #: 1
Your company is designing its application landscape on Compute Engine. Whenever a zonal outage occurs, the application should be restored in another zone as quickly as possible with the latest application data. You need to design the solution to meet this requirement. What should you do?
A. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in the same zone. B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data. C. Create a snapshot schedule for the disk containing the application data. Whenever a zonal outage occurs, use the latest snapshot to restore the disk in another zone within the same region. D. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another region. Use the regional persistent disk for the application data.
https://www.examtopics.com/discussions/google/view/56403-exam-professional-cloud-architect-topic-1-question-117/
B. Configure the Compute Engine instances with an instance template for the application, and use a regional persistent disk for the application data. Whenever a zonal outage occurs, use the instance template to spin up the application in another zone in the same region. Use the regional persistent disk for the application data.
Regional persistent disk is a storage option that provides synchronous replication of data between two zones in a region. Regional persistent disks can be a good building block to use when you implement HA services in Compute Engine.
https://cloud.google.com/compute/docs/disks/high-availability-regional-persistent-disk
A regional persistent disk is designed to provide synchronous replication of data between two zones in the same region, ensuring that data remains available even if one zone is affected by an outage. By using an instance template along with a regional disk, you can quickly create new instances in an available zone during a zonal outage and attach the regional persistent disk to continue operations with the latest application data.
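A minimal sketch of the regional disk setup and the failover attach, assuming placeholder disk, zone, and instance names:
gcloud compute disks create app-data --region=us-central1 --replica-zones=us-central1-a,us-central1-b --size=200GB
gcloud compute instances attach-disk new-app-vm --zone=us-central1-b --disk=app-data --disk-scope=regional --force-attach
The --force-attach flag lets you attach the regional disk to a replacement VM even if it is still attached to the VM in the failed zone.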
Question #: 44
Topic #: 1
You are analyzing and defining business processes to support your startup’s trial usage of GCP, and you don’t yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do?
A. Utilize free tier and sustained use discounts. Provision a staff position for service cost management. B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management. C. Utilize free tier and committed use discounts. Provision a staff position for service cost management. D. Utilize free tier and committed use discounts. Provide training to the team about service cost management.
https://www.examtopics.com/discussions/google/view/7190-exam-professional-cloud-architect-topic-1-question-44/
B. Utilize free tier and sustained use discounts. Provide training to the team about service cost management.
Sustained use discounts are automatic discounts applied for running specific Compute Engine resources for a significant portion of the billing month: https://cloud.google.com/compute/docs/sustained-use-discounts
Committed use discounts are for workloads with predictable resource needs over a 1-year or 3-year term; the discount is up to 57% for most resources: https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
Question #: 15
Topic #: 1
Your application needs to process credit card transactions. You want the smallest scope of Payment Card Industry (PCI) compliance without compromising the ability to analyze transactional data and trends relating to which payment methods are used.
How should you design your architecture?
A. Create a tokenizer service and store only tokenized data B. Create separate projects that only process credit card data C. Create separate subnetworks and isolate the components that process credit card data D. Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data E. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
https://www.examtopics.com/discussions/google/view/7147-exam-professional-cloud-architect-topic-1-question-15/
A. Create a tokenizer service and store only tokenized data
Final decision: go with Option A. I have done a PCI DSS audit for my project, and this is the best-suited approach. I'm 100% sure you should use tokenized data instead of actual card numbers.
To minimize the scope of Payment Card Industry (PCI) compliance while still being able to analyze transactional data and trends, the best design approach is tokenization.
Option A, “Create a tokenizer service and store only tokenized data,” is the correct answer. Tokenization involves replacing sensitive data with a randomly generated token, which can then be stored and analyzed without compromising the security of the original data. By tokenizing credit card data, the application can analyze transactional data and trends without having to worry about PCI compliance for the actual credit card data.
Option B, “Create separate projects that only process credit card data,” is not the best approach as it would require creating multiple projects and duplicating the necessary components for each project. This can lead to higher costs and management complexity.
Option C, “Create separate subnetworks and isolate the components that process credit card data,” does reduce risk by isolating the components that handle card data, but network segmentation alone does not remove cardholder data from the environment the way tokenization does, so it is not the best answer.
Option D, “Streamline the audit discovery phase by labeling all of the virtual machines (VMs) that process PCI data,” is not a comprehensive solution to minimize PCI compliance scope. Labeling VMs can help with auditing, but it does not provide a complete solution for protecting sensitive data.
Option E, “Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor,” is not the best approach as it still requires PCI compliance for the actual credit card data. Logging export to BigQuery can provide valuable insights into transactional data and trends, but it is not a complete solution for minimizing the PCI compliance scope.
In summary, the best approach is to use tokenization to replace sensitive credit card data with randomly generated tokens, allowing the application to analyze transactional data and trends without having to worry about PCI compliance for the actual credit card data. Additionally, isolating the components that process credit card data into their own subnetworks can further reduce the risk of security breaches or unauthorized access.
Question #: 65
Topic #: 1
You need to upload files from your on-premises environment to Cloud Storage. You want the files to be encrypted on Cloud Storage using customer-supplied encryption keys. What should you do?
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files. B. Supply the encryption key using gcloud config. Use gsutil to upload the files to that bucket. C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key. D. Use gsutil to create a bucket, and use the flag --encryption-key to supply the encryption key. Use gsutil to upload the files to that bucket.
https://www.examtopics.com/discussions/google/view/6308-exam-professional-cloud-architect-topic-1-question-65/
A. Supply the encryption key in a .boto configuration file. Use gsutil to upload the files
C. Use gsutil to upload the files, and use the flag --encryption-key to supply the encryption key.
Answer A:
In the GCP documentation, the key can be configured in .boto.
I didn't find any information showing that gsutil supports a --encryption-key flag.
https://cloud.google.com/storage/docs/encryption/customer-supplied-keys
Answer C
https://cloud.google.com/storage/docs/encryption/using-customer-supplied-keys#upload-encrypt
Option C is correct. You can upload a file using customer-supplied encryption with the command:
gcloud storage cp SOURCE_DATA gs://BUCKET_NAME/OBJECT_NAME --encryption-key=YOUR_ENCRYPTION_KEY
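For the .boto-based approach in answer A, a minimal sketch (the key value and bucket name are placeholders): add the following to the [GSUtil] section of your .boto configuration file, then upload with gsutil as usual.
[GSUtil]
encryption_key = <base64-encoded 256-bit AES key>
gsutil cp ./export/* gs://BUCKET_NAME/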
Question #: 145
Topic #: 1
Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. You have separate clusters for development, staging, and production. You have discovered that the team is able to deploy a Docker image to the production cluster without first testing the deployment in development and then staging. You want to allow the team to have autonomy but want to prevent this from happening. You want a Google Cloud solution that can be implemented quickly with minimal effort. What should you do?
A. Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment. B. Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the Docker image was tested in an earlier environment. C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline. D. Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for usage in the given environment.
https://www.examtopics.com/discussions/google/view/60438-exam-professional-cloud-architect-topic-1-question-145/
C. Configure binary authorization policies for the development, staging, and production clusters. Create attestations as part of the continuous integration pipeline.
https://cloud.google.com/binary-authorization/docs/overview#policy_model
Option A, “Configure a Kubernetes lifecycle hook to prevent the container from starting if it is not approved for usage in the given environment,” would not be a good choice because it would not prevent the deployment of the container to the cluster in the first place.
Option D, “Create a Kubernetes admissions controller to prevent the container from starting if it is not approved for usage in the given environment,” would also not be a good choice because it would not prevent the deployment of the container to the cluster in the first place.
Option B, “Implement a corporate policy to prevent teams from deploying Docker images to an environment unless the Docker image was tested in an earlier environment,” would be a good option, but it would not be as effective as using binary authorization policies, as it would rely on the team following the policy rather than enforcing it automatically.
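A rough sketch of how the per-cluster policies are managed (the YAML editing step is summarized in a comment; attestor and cluster names would be placeholders):
gcloud container binauthz policy export > policy.yaml
# edit policy.yaml to add cluster-specific rules that require an attestation for staging and production
gcloud container binauthz policy import policy.yaml
The attestations themselves are created by the CI pipeline after tests pass, for example with gcloud container binauthz attestations sign-and-create.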
Question #: 28
Topic #: 1
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process.
What should you do?
A. Create custom Google Stackdriver alerts and send them to the auditor B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor C. Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor's view D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
https://www.examtopics.com/discussions/google/view/6884-exam-professional-cloud-architect-topic-1-question-28/
B. Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor
or
D. Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket
The best option for streamlining and expediting the analysis and audit process of Google Cloud Identity and Access Management (Cloud IAM) policy changes over the previous 12 months is to enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket. Option D is the correct answer.
Here’s why:
Google Cloud IAM is a critical component of managing cloud resources on the Google Cloud Platform. It enables you to control access to resources in your cloud projects and manage the permissions of users, service accounts, and Google groups. It is important to track any changes made to Cloud IAM policies to ensure that access to resources is granted or revoked appropriately.
Option A: Create custom Google Stackdriver alerts and send them to the auditor Creating custom Google Stackdriver alerts for Cloud IAM policy changes could help notify the auditor of any changes. However, this option does not provide the auditor with direct access to the data they need to review the policy changes. It only notifies them when there is a change, which means that they will need to request access to the relevant data before conducting an audit. This could potentially slow down the audit process and is not the most efficient option.
Option B: Enable Logging export to Google BigQuery and use ACLs and views to scope the data shared with the auditor Enabling Logging export to Google BigQuery is a good option to store and manage the audit logs. It provides an efficient way to search, analyze and visualize the logs. However, this option requires additional work to set up the ACLs and views to limit an auditor’s view of the data. This could be time-consuming and complex to implement. Furthermore, BigQuery may not be the ideal tool for auditors who are only interested in reviewing Cloud IAM policy changes.
Option C: Use cloud functions to transfer log entries to Google Cloud SQL and use ACLs and views to limit an auditor’s view Using Cloud Functions to transfer log entries to Google Cloud SQL is an interesting option as it could provide a relational database solution for storing audit logs. However, as with option B, it requires additional work to set up the ACLs and views to limit the auditor’s view of the data. This option is also more complex and potentially more expensive to set up and maintain than option D.
Option D: Enable Google Cloud Storage (GCS) log export to audit logs into a GCS bucket and delegate access to the bucket Enabling GCS log export to audit logs into a GCS bucket is the most straightforward and efficient option for this scenario. It provides auditors with direct access to the audit logs in a simple and cost-effective way. Additionally, GCS buckets offer robust security features that can be used to control access to the audit logs. Delegating access to the bucket can be done using IAM roles and permissions, which simplifies the setup process. Overall, this option is the most practical and efficient solution to streamline and expedite the analysis and audit process.
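As an illustration of option D, a minimal sketch of exporting Admin Activity audit logs to a bucket (the sink and bucket names are placeholders, and the filter is only an approximation of the IAM-relevant audit logs):
gcloud logging sinks create iam-audit-sink storage.googleapis.com/my-audit-logs-bucket --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'
After creating the sink, grant its writer identity permission to write to the bucket, then delegate read access on the bucket to the auditors.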
Question #: 26
Topic #: 1
Your organization requires that metrics from all applications be retained for 5 years for future analysis in possible legal proceedings.
Which approach should you use?
A. Grant the security team access to the logs in each Project B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery C. Configure Stackdriver Monitoring for all Projects with the default retention policies D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
https://www.examtopics.com/discussions/google/view/7172-exam-professional-cloud-architect-topic-1-question-26/
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage
The correct answer for this scenario is D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage.
Here’s why:
A. Grant the security team access to the logs in each Project: This approach does not meet the requirement of retaining metrics for five years, as logs in each project may be deleted after a certain period. Moreover, granting access to the security team alone may not be sufficient to retain and analyze the logs.
B. Configure Stackdriver Monitoring for all Projects, and export to BigQuery: This approach can work, but it may be expensive since BigQuery charges for the storage and processing of data. Additionally, you may need to configure a specific retention policy to retain the data for 5 years, which can further increase the cost.
C. Configure Stackdriver Monitoring for all Projects with the default retention policies: By default, Stackdriver Monitoring retains metric data for only a limited period (weeks, not years), which does not meet the requirement of retaining metrics for 5 years.
D. Configure Stackdriver Monitoring for all Projects, and export to Google Cloud Storage: This approach is the most appropriate because it allows for the configuration of retention policies that meet the requirement of retaining metrics for 5 years. Also, Google Cloud Storage is a cost-effective solution for long-term data storage. Exporting the logs to Google Cloud Storage can be automated and scheduled for regular intervals, reducing the manual effort required to ensure compliance with the retention policy. The exported logs can then be analyzed using various tools, including BigQuery, if needed.
In conclusion, the best approach for this scenario is to configure Stackdriver Monitoring for all projects and export logs to Google Cloud Storage, which can meet the requirement of retaining metrics for 5 years, while also being cost-effective and manageable.
Question #: 62
Topic #: 1
You need to evaluate your team readiness for a new GCP project. You must perform the evaluation and create a skills gap plan which incorporates the business goal of cost optimization. Your team has deployed two GCP projects successfully to date. What should you do?
A. Allocate budget for team training. Set a deadline for the new GCP project. B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role. C. Allocate budget to hire skilled external consultants. Set a deadline for the new GCP project. D. Allocate budget to hire skilled external consultants. Create a roadmap for your team to achieve Google Cloud certification based on job role.
https://www.examtopics.com/discussions/google/view/8373-exam-professional-cloud-architect-topic-1-question-62/
B. Allocate budget for team training. Create a roadmap for your team to achieve Google Cloud certification based on job role.
Question #: 57
Topic #: 1
Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users’ access roles?
A. Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data. B. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data. C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data. D. Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
https://www.examtopics.com/discussions/google/view/68708-exam-professional-cloud-architect-topic-1-question-57/
C. Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
BigQuery User: (roles/bigquery.user)
When applied to a dataset, this role provides the ability to read the dataset’s metadata and list tables in the dataset.
When applied to a project, this role also provides the ability to run jobs, including queries, within the project. A principal with this role can enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project. Additionally, it allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets.
Lowest-level resources where you can grant this role: Dataset
BigQuery Job User: (roles/bigquery.jobUser)
Provides permissions to run jobs, including queries, within the project.
Lowest-level resources where you can grant this role: Project
Source: https://cloud.google.com/bigquery/docs/access-control
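A minimal sketch of the two bindings, assuming a group named analysts@example.com and placeholder project IDs:
gcloud projects add-iam-policy-binding BILLING_PROJECT_ID --member='group:analysts@example.com' --role='roles/bigquery.jobUser'
gcloud projects add-iam-policy-binding DATA_PROJECT_ID --member='group:analysts@example.com' --role='roles/bigquery.dataViewer'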
Question #: 190
Topic #: 1
You have a Compute Engine application that you want to autoscale when total memory usage exceeds 80%. You have installed the Cloud Monitoring agent and configured the autoscaling policy as follows:
✑ Metric identifier: agent.googleapis.com/memory/percent_used
✑ Filter: metric.label.state = ‘used’
✑ Target utilization level: 80
✑ Target type: GAUGE
You observe that the application does not scale under high load. You want to resolve this. What should you do?
A. Change the Target type to DELTA_PER_MINUTE. B. Change the Metric identifier to agent.googleapis.com/memory/bytes_used. C. Change the filter to metric.label.state = 'used' AND metric.label.state = 'buffered' AND metric.label.state = 'cached' AND metric.label.state = 'slab'. D. Change the filter to metric.label.state = 'free' and the Target utilization to 20.
https://www.examtopics.com/discussions/google/view/80040-exam-professional-cloud-architect-topic-1-question-190/
C. Change the filter to metric.label.state = ‘used’ AND metric.label.state = ‘buffered’ AND metric.label.state = ‘cached’ AND metric.label.state = ‘slab’.
C is the correct answer:
A. Change the Target type to DELTA_PER_MINUTE: not applicable; in that case the utilization target would have to be a per-minute rate, but here it is a percentage, not time-based.
B. Change the Metric identifier to agent.googleapis.com/memory/bytes_used: not applicable.
C. Change the filter to metric.label.state = 'used' AND metric.label.state = 'buffered' AND metric.label.state = 'cached' AND metric.label.state = 'slab': this captures total memory used.
D. Change the filter to metric.label.state = 'free' and the Target utilization to 20: you would still need to change percent_used to percent_free.
A dissenting comment selected answer A and explained the target types:
TARGET_TYPE: the value type for the metric.
gauge: the autoscaler computes the average value of the data collected in the last couple of minutes and compares that to the utilization target.
delta-per-minute: the autoscaler calculates the average rate of growth per minute and compares that to the utilization target.
delta-per-second: the autoscaler calculates the average rate of growth per second and compares that to the utilization target. For accurate comparisons, if you set the utilization target in seconds, use delta-per-second as the target type. Likewise, use delta-per-minute for a utilization target in minutes.
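For reference, a rough sketch of where the metric, filter, and target type plug into the autoscaler configuration (the instance group name, zone, and replica count are placeholders, and the filter shown is the single-state filter from the question, i.e. the part the discussion says needs to change):
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a --max-num-replicas=10 \
  --update-stackdriver-metric=agent.googleapis.com/memory/percent_used \
  --stackdriver-metric-filter='metric.labels.state = "used"' \
  --stackdriver-metric-utilization-target=80 \
  --stackdriver-metric-utilization-target-type=gauge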
Question #: 8
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. The data from the motion detector includes only a sensor ID and several different discrete items of information. Analysts will use this data, together with information about account owners and office locations.
Which database type should you use?
A. Flat file B. NoSQL C. Relational D. Blobstore
https://www.examtopics.com/discussions/google/view/7126-exam-professional-cloud-architect-topic-1-question-8/
B. NoSQL
- High Volume and Velocity of Data: You have 1000 rooms reporting data every second, resulting in a massive amount of data with high velocity. NoSQL databases are designed to handle this kind of volume and speed efficiently.
- Simple Data Structure: The data from the motion sensor is relatively simple (sensor ID and discrete information). NoSQL databases are well-suited for storing and processing this type of data without the need for complex schemas.
- Flexible Schema: NoSQL databases offer schema flexibility, allowing you to easily adapt to changes in the data structure if needed. This is important as your tracking requirements might evolve over time.
- Scalability: NoSQL databases are highly scalable, making it easy to accommodate future growth in the number of meeting rooms or data volume.
Relational databases were not designed to cope with the scale and agility challenges that face modern applications, nor were they built to take advantage of the commodity storage and processing power available today.
NoSQL fits well for: -> Developers are working with applications that create massive volumes of new, rapidly changing data types “ structured, semi-structured, unstructured and polymorphic data.
Incorrect Answers: D: The Blobstore API allows your application to serve data objects, called blobs, that are much larger than the size allowed for objects in the Datastore service.
Blobs are useful for serving large files, such as video or image files, and for allowing users to upload large data files.
https://www.mongodb.com/nosql-explained
Based on the provided scenario, the recommended database type would be NoSQL. Here are the reasons why:
- Scalability: The system needs to track data from 1000 meeting rooms, which generates a large amount of data every second. With such a large scale of data, a NoSQL database can handle data more efficiently than a relational database, which has a fixed schema and requires expensive scaling operations.
- Flexibility: NoSQL databases can accommodate a wide variety of data types, including unstructured and semi-structured data, which is well-suited to the motion detector data that contains sensor IDs and different types of information. Moreover, NoSQL databases allow for easy updates to the data schema, making them adaptable to evolving data requirements.
- High availability: The system requires 24/7 availability for detecting and tracking room occupancy in real time. NoSQL databases are designed to be highly available, with built-in replication and failover mechanisms.
- Performance: NoSQL databases are optimized for high-performance, low-latency data processing, making them ideal for real-time analytics and data processing.
In summary, NoSQL databases are better suited for the scenario described in the question, due to their scalability, flexibility, high availability, and performance characteristics.
Question #: 40
Topic #: 1
As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their
GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.
What should they do?
A. Configure their replication to use UDP. B. Configure a Google Cloud Dedicated Interconnect. C. Restore their database daily using Google Cloud SQL. D. Add additional VPN connections and load balance them. E. Send the replicated transaction to Google Cloud Pub/Sub.
https://www.examtopics.com/discussions/google/view/7211-exam-professional-cloud-architect-topic-1-question-40/
B. Configure a Google Cloud Dedicated Interconnect
Adding VPN connections may improve bandwidth but does not resolve latency or packet loss issues caused by public internet routing… though not mentioned, we must ‘think’ beyond the scope of the question and ask ‘what is causing the latency’…
Question #: 2
Topic #: 1
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using a SQL interface.
How should you store the data to optimize it for ease of analysis?
A. Load data into Google BigQuery B. Insert data into Google Cloud SQL C. Put flat files into Google Cloud Storage D. Stream data into Google Cloud Datastore
https://www.examtopics.com/discussions/google/view/7080-exam-professional-cloud-architect-topic-1-question-2/
A. Load data into Google BigQuery
IMHO, it should be A only. The reason is that they want to perform analysis on the data, and BigQuery excels at that over Cloud SQL. You can run SQL queries in both, but BigQuery has better analytical tools. It can do ad-hoc analysis like Cloud SQL using standard SQL, and it can also do geospatial and ML analysis via its standard SQL interface.
Question #: 18
Topic #: 1
Your company is forecasting a sharp increase in the number and size of Apache Spark and Hadoop jobs being run on your local datacenter. You want to utilize the cloud to help you scale this upcoming demand with the least amount of operations work and code change.
Which product should you use?
A. Google Cloud Dataflow B. Google Cloud Dataproc C. Google Compute Engine D. Google Kubernetes Engine
https://www.examtopics.com/discussions/google/view/7154-exam-professional-cloud-architect-topic-1-question-18/
B. Google Cloud Dataproc
Dataproc is a managed Spark and Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming, and machine learning. Dataproc automation helps you create clusters quickly, manage them easily, and save money by turning clusters off when you don’t need them. With less time and money spent on administration, you can focus on your jobs and your data.
https://cloud.google.com/dataproc/docs/concepts/overview
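A minimal sketch of spinning up a cluster and submitting an existing Spark job with little code change (the cluster name, region, and job arguments are placeholders; the example jar ships with Dataproc images):
gcloud dataproc clusters create spark-cluster --region=us-central1 --num-workers=2
gcloud dataproc jobs submit spark --cluster=spark-cluster --region=us-central1 --class=org.apache.spark.examples.SparkPi --jars=file:///usr/lib/spark/examples/jars/spark-examples.jar -- 1000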
Question #: 98
Topic #: 1
Your company acquired a healthcare startup and must retain its customers’ medical information for up to 4 more years, depending on when it was created. Your corporate policy is to securely retain this data, and then delete it as soon as regulations allow.
Which approach should you take?
A. Store the data in Google Drive and manually delete records as they expire. B. Anonymize the data using the Cloud Data Loss Prevention API and store it indefinitely. C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire. D. Store the data in Cloud Storage and run a nightly batch script that deletes all expired data.
https://www.examtopics.com/discussions/google/view/7376-exam-professional-cloud-architect-topic-1-question-98/
C. Store the data in Cloud Storage and use lifecycle management to delete files when they expire.
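A minimal sketch of a lifecycle rule that deletes objects once the retention limit is reached (here 4 years, approximated as 1460 days; the bucket name is a placeholder):
cat > lifecycle.json <<'EOF'
{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 1460}}]}
EOF
gsutil lifecycle set lifecycle.json gs://medical-records-bucket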
Question #: 20
Topic #: 1
You want to optimize the performance of an accurate, real-time, weather-charting application. The data comes from 50,000 sensors sending 10 readings a second, in the format of a timestamp and sensor reading.
Where should you store the data?
A. Google BigQuery B. Google Cloud SQL C. Google Cloud Bigtable D. Google Cloud Storage
https://www.examtopics.com/discussions/google/view/22386-exam-professional-cloud-architect-topic-1-question-20/
C. Google Cloud Bigtable
- High Write Throughput: Bigtable excels at handling high-volume write operations, which is crucial for your application receiving data from 50,000 sensors sending 10 readings per second.
- Low Latency: Bigtable offers very low latency for read operations, essential for real-time charting and data visualization.
- Time-Series Data: Bigtable is well-suited for storing and querying time-series data, like your weather sensor readings with timestamps.
- Scalability: Bigtable can handle massive amounts of data and scale seamlessly as your application grows.
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for: -> Low-latency read/write access -> High-throughput analytics -> Native time series support Common workloads: -> IoT, finance, adtech -> Personalization, recommendations -> Monitoring -> Geospatial datasets -> Graphs Reference: https://cloud.google.com/storage-options/
Question #: 188
Topic #: 1
Your company and one of its partners each have a Google Cloud project in separate organizations. Your company’s project (prj-a) runs in Virtual Private Cloud
(vpc-a). The partner’s project (prj-b) runs in vpc-b. There are two instances running on vpc-a and one instance running on vpc-b. Subnets defined in both VPCs are not overlapping. You need to ensure that all instances communicate with each other via internal IPs, minimizing latency and maximizing throughput. What should you do?
A. Set up a network peering between vpc-a and vpc-b. B. Set up a VPN between vpc-a and vpc-b using Cloud VPN. C. Configure IAP TCP forwarding on the instance in vpc-b, and then launch the following gcloud command from one of the instances in vpc-a: gcloud compute start-iap-tunnel INSTANCE_NAME_IN_VPC_B 22 --local-host-port=localhost:22 D. 1. Create an additional instance in vpc-a. 2. Create an additional instance in vpc-b. 3. Install OpenVPN in newly created instances. 4. Configure a VPN tunnel between vpc-a and vpc-b with the help of OpenVPN.
https://www.examtopics.com/discussions/google/view/80000-exam-professional-cloud-architect-topic-1-question-188/
A. Set up a network peering between vpc-a and vpc-b.
https://cloud.google.com/vpc/docs/vpc-peering
Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization.
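A minimal sketch of the peering setup; it must be created from both sides (the network and project names follow the question, the peering names are placeholders):
gcloud compute networks peerings create peer-ab --network=vpc-a --peer-project=prj-b --peer-network=vpc-b
gcloud compute networks peerings create peer-ba --network=vpc-b --peer-project=prj-a --peer-network=vpc-a
The second command is run in prj-b; the peering becomes active only once both sides exist.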
Question #: 23
Topic #: 1
Your solution is producing performance bugs in production that you did not see in staging and test environments. You want to adjust your test and deployment procedures to avoid this problem in the future.
What should you do?
A. Deploy fewer changes to production B. Deploy smaller changes to production C. Increase the load on your test and staging environments D. Deploy changes to a small subset of users before rolling out to production
https://www.examtopics.com/discussions/google/view/7167-exam-professional-cloud-architect-topic-1-question-23/
C. Increase the load on your test and staging environments
According to the question, your solution is producing “performance” bugs in production, so I think it is about load. Also, a canary test will not reproduce bugs related to high load, so I vote for C.
Question #: 94
Topic #: 1
You need to design a solution for global load balancing based on the URL path being requested. You need to ensure operations reliability and end-to-end in- transit encryption based on Google best practices.
What should you do?
A. Create a cross-region load balancer with URL Maps. B. Create an HTTPS load balancer with URL Maps. C. Create appropriate instance groups and instances. Configure SSL proxy load balancing. D. Create a global forwarding rule. Configure SSL proxy load balancing.
https://www.examtopics.com/discussions/google/view/10289-exam-professional-cloud-architect-topic-1-question-94/
B. Create an HTTPS load balancer with URL Maps.
https://cloud.google.com/load-balancing/docs/url-map-concepts
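A rough sketch of the URL-map piece, assuming backend services named web-backend and api-backend already exist (all names are placeholders):
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute url-maps add-path-matcher web-map --path-matcher-name=api-paths --default-service=web-backend --path-rules='/api/*=api-backend' --new-hosts='*'
The URL map is then attached to a target HTTPS proxy with an SSL certificate and exposed through a global forwarding rule, which gives end-to-end in-transit encryption.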
Question #: 68
Topic #: 1
Your company wants to start using Google Cloud resources but wants to retain their on-premises Active Directory domain controller for identity management.
What should you do?
A. Use the Admin Directory API to authenticate against the Active Directory domain controller. B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO. C. Use Cloud Identity-Aware Proxy configured to use the on-premises Active Directory domain controller as an identity provider. D. Use Compute Engine to create an Active Directory (AD) domain controller that is a replica of the on-premises AD domain controller using Google Cloud Directory Sync.
https://www.examtopics.com/discussions/google/view/6528-exam-professional-cloud-architect-topic-1-question-68/
B. Use Google Cloud Directory Sync to synchronize Active Directory usernames with cloud identities and configure SAML SSO.
And in the documentation (https://cloud.google.com/iap/docs/concepts-overview), it says:
If you need to create Google Accounts for your existing users, you can use Google Cloud Directory Sync to synchronize with your Active Directory or LDAP server.
Question #: 89
Topic #: 1
Your architecture calls for the centralized collection of all admin activity and VM system logs within your project.
How should you collect these logs from both VMs and services?
A. All admin and VM system logs are automatically collected by Stackdriver. B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs. C. Launch a custom syslogd compute instance and configure your GCP project and VMs to forward all logs to it. D. Install the Stackdriver Logging agent on a single compute instance and let it collect all audit and access logs for your environment.
https://www.examtopics.com/discussions/google/view/6712-exam-professional-cloud-architect-topic-1-question-89/
B. Stackdriver automatically collects admin activity logs for most services. The Stackdriver Logging agent must be installed on each instance to collect system logs.
Admin and event logs are configured by default. VM System logs require a logging agent to be configured. So A is not valid. Answer is B
Now it is recommended to use OpsAgent as a replacement. Although you can create a VM instance with OpsAgent automatically enabled, which makes it look like ‘the logging is automatically enabled’, under the hood you need to install the agent on the instance.
https://cloud.google.com/stackdriver/docs/solutions/agents/ops-agent/install-agent-vm-creation
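A minimal sketch of the per-VM installation, using the installation script documented by Google (run on the instance itself):
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install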
Question #: 93
Topic #: 1
A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically.
How should you deploy to GKE?
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic. B. Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic. C. Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic. D. Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
https://www.examtopics.com/discussions/google/view/7123-exam-professional-cloud-architect-topic-1-question-93/
A. Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
B is incorrect because of this:
“service of type LoadBalancer to load-balance the HTTPS traffic.”
A GKE Service of type LoadBalancer creates an L4 network (or internal) load balancer, which does not terminate HTTPS traffic.
“Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.
On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application.”
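A minimal sketch of the two autoscaling pieces, assuming a Deployment named web-app, a cluster named my-cluster, and a node pool named default-pool (all placeholders):
gcloud container clusters update my-cluster --zone=us-central1-a --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=5
kubectl autoscale deployment web-app --cpu-percent=70 --min=2 --max=10
The HTTPS load balancing itself comes from an Ingress manifest backed by a Service, which GKE turns into a Cloud HTTP(S) load balancer.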
Question #: 178
Topic #: 1
Your company has an application running on Google Cloud that is collecting data from thousands of physical devices that are globally distributed. Data is published to Pub/Sub and streamed in real time into an SSD Cloud Bigtable cluster via a Dataflow pipeline. The operations team informs you that your Cloud
Bigtable cluster has a hotspot, and queries are taking longer than expected. You need to resolve the problem and prevent it from happening in the future. What should you do?
A. Advise your clients to use HBase APIs instead of NodeJS APIs. B. Delete records older than 30 days. C. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet. D. Double the number of nodes you currently have.
https://www.examtopics.com/discussions/google/view/68691-exam-professional-cloud-architect-topic-1-question-178/
C. Review your RowKey strategy and ensure that keys are evenly spread across the alphabet.
https://cloud.google.com/bigtable/docs/schema-design#row-keys
The RowKey is used to sort data within a Cloud Bigtable cluster. If the keys are not evenly spread across the key space, writes can concentrate on a single node, resulting in a hotspot and slow queries. To prevent this from happening in the future, you should review your RowKey strategy and ensure that keys are evenly spread. This will help to distribute the data evenly across the cluster and improve query performance. Other potential solutions to consider include adding more nodes to the cluster or optimizing your query patterns. However, deleting records older than 30 days or advising clients to use HBase APIs instead of NodeJS APIs would not address the issue of a hotspot in the cluster.
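As a hypothetical illustration of the row-key choice (the device ID and timestamp below are made up): leading with a timestamp pushes all new writes to the same end of the key space, while leading with the device identifier spreads them out:
hotspot-prone: 2024-05-01T12:00:00Z#device-17
better: device-17#2024-05-01T12:00:00Z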
Question #: 11
Topic #: 1
Your customer is moving an existing corporate application to Google Cloud Platform from an on-premises data center. The business owners require minimal user disruption. There are strict security team requirements for storing passwords.
What authentication strategy should they use?
A. Use G Suite Password Sync to replicate passwords into Google B. Federate authentication via SAML 2.0 to the existing Identity Provider C. Provision users in Google using the Google Cloud Directory Sync tool D. Ask users to set their Google password to match their corporate password
https://www.examtopics.com/discussions/google/view/7133-exam-professional-cloud-architect-topic-1-question-11/
B. Federate authentication via SAML 2.0 to the existing Identity Provider
Provision users to Google's directory: the global Directory is available to both Cloud Platform and G Suite resources and can be provisioned by a number of means.
Provisioned users can take advantage of rich authentication features including single sign-on (SSO), OAuth, and two-factor verification.
You can provision users automatically using one of the following tools and services: Google Cloud Directory Sync (GCDS), the Google Admin SDK, or a third-party connector. GCDS is a connector that can provision users and groups on your behalf for both Cloud Platform and G Suite.
Using GCDS, you can automate the addition, modification, and deletion of users, groups, and non-employee contacts.
You can synchronize the data from your LDAP directory server to your Cloud Platform domain by using LDAP queries.
This synchronization is one-way: the data in your LDAP directory server is never modified.
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#authentication-and-identity
The best authentication strategy for this scenario is to federate authentication via SAML 2.0 to the existing Identity Provider (option B).
Federated authentication allows users to authenticate to Google Cloud Platform using their existing corporate credentials. This means that users do not need to create new accounts or remember additional passwords, minimizing user disruption. Additionally, this approach enables the organization to maintain control over user access and permissions.
SAML 2.0 is a widely adopted standard for federated authentication. It allows for secure exchange of authentication and authorization data between an Identity Provider (IdP) and a Service Provider (SP). In this case, the on-premises corporate IdP would be the source of truth for user authentication, and Google Cloud Platform would be the SP.
Using G Suite Password Sync (option A) would replicate passwords into Google, but this approach does not support federated authentication and requires additional effort to manage password synchronization. Furthermore, storing passwords in this way may not meet the security team requirements for password storage.
Provisioning users in Google using the Google Cloud Directory Sync tool (option C) is a viable option, but it may require users to remember and manage additional passwords, which could lead to user disruption. Additionally, this approach would require ongoing management and synchronization of user data between the corporate directory and Google.
Asking users to set their Google password to match their corporate password (option D) is not recommended because it may not meet the security team’s requirements for password storage. Furthermore, this approach would require users to remember and manage yet another password, which could lead to user disruption.
In summary, federating authentication via SAML 2.0 to the existing Identity Provider is the best authentication strategy for this scenario because it minimizes user disruption, enables the organization to maintain control over user access and permissions, and supports secure password storage.
Question #: 193
Topic #: 1
You are configuring the cloud network architecture for a newly created project in Google Cloud that will host applications in Compute Engine. Compute Engine virtual machine instances will be created in two different subnets (sub-a and sub-b) within a single region:
* Instances in sub-a will have public IP addresses.
* Instances in sub-b will have only private IP addresses.
To download updated packages, instances must connect to a public repository outside the boundaries of Google Cloud. You need to allow sub-b to access the external repository. What should you do?
A. Enable Private Google Access on sub-b. B. Configure Cloud NAT and select sub-b in the NAT mapping section. C. Configure a bastion host instance in sub-a to connect to instances in sub-b. D. Enable Identity-Aware Proxy for TCP forwarding for instances in sub-b.
https://www.examtopics.com/discussions/google/view/121322-exam-professional-cloud-architect-topic-1-question-193/
B. Configure Cloud NAT and select sub-b in the NAT mapping section.
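A minimal sketch, assuming no Cloud Router exists yet (router, NAT, VPC, and region names are placeholders except for sub-b, which follows the question):
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config --router=nat-router --region=us-central1 --nat-custom-subnet-ip-ranges=sub-b --auto-allocate-nat-external-ips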
Question #: 103
Topic #: 1
Your company is planning to perform a lift and shift migration of their Linux RHEL 6.5+ virtual machines. The virtual machines are running in an on-premises
VMware environment. You want to migrate them to Compute Engine following Google-recommended practices. What should you do?
A. 1. Define a migration plan based on the list of the applications and their dependencies. 2. Migrate all virtual machines into Compute Engine individually with Migrate for Compute Engine. B. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Create images of all disks. Import disks on Compute Engine. 3. Create standard virtual machines where the boot disks are the ones you have imported. C. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration. D. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Install a third-party agent on all selected virtual machines. 3. Migrate all virtual machines into Compute Engine.
https://www.examtopics.com/discussions/google/view/56841-exam-professional-cloud-architect-topic-1-question-103/
C. 1. Perform an assessment of virtual machines running in the current VMware environment. 2. Define a migration plan, prepare a Migrate for Compute Engine migration RunBook, and execute the migration.
Migrate for Compute Engine organizes groups of VMs into Waves. After understanding the dependencies of your applications, create runbooks that contain groups of VMs and begin your migration!
https://cloud.google.com/migrate/compute-engine/docs/4.5/how-to/migrate-on-premises-to-gcp/overview
Question #: 192
Topic #: 1
Your company has an application running on App Engine that allows users to upload music files and share them with other people. You want to allow users to upload files directly into Cloud Storage from their browser session. The payload should not be passed through the backend. What should you do?
A. 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. 2. Use the Cloud Storage Signed URL feature to generate a POST URL. B. 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin. 2. Assign the Cloud Storage WRITER role to users who upload files. C. 1. Use the Cloud Storage Signed URL feature to generate a POST URL. 2. Use App Engine default credentials to sign requests against Cloud Storage. D. 1. Assign the Cloud Storage WRITER role to users who upload files. 2. Use App Engine default credentials to sign requests against Cloud Storage.
https://www.examtopics.com/discussions/google/view/121240-exam-professional-cloud-architect-topic-1-question-192/
A. 1. Set a CORS configuration in the target Cloud Storage bucket where the base URL of the App Engine application is an allowed origin.
2. Use the Cloud Storage Signed URL feature to generate a POST URL.
Here’s why this approach is most suitable:
* CORS configuration: This allows cross-origin requests from your App Engine application to access the Cloud Storage bucket for uploads. Setting the App Engine base URL as an allowed origin ensures secure communication.
* Cloud Storage Signed URL: This feature generates a temporary URL with specific permissions and expiration time. You can provide this signed URL to the user’s browser for uploading files directly to Cloud Storage. The payload (music file) doesn’t pass through your backend, reducing server load.
The same-origin policy is a security policy enforced on client-side web applications (like web browsers) to prevent interactions between resources from different origins. While useful for preventing malicious behavior, this security measure also prevents legitimate interactions between known origins. For example, a script on a page hosted at example.appspot.com might need to use resources stored in a Cloud Storage bucket at example.storage.googleapis.com. However, because these are two different origins from the perspective of the browser, the browser won't allow a script from example.appspot.com to fetch resources from example.storage.googleapis.com.
https://cloud.google.com/storage/docs/cross-origin
The Cross Origin Resource Sharing (CORS) spec was developed by the World Wide Web Consortium (W3C) to get around this limitation. Cloud Storage supports this specification by allowing you to configure your buckets to support CORS. Continuing the previous example, you can configure the example.storage.googleapis.com bucket so that a browser can share its resources with scripts from example.appspot.com.
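A rough sketch of the two steps, assuming the app is served from https://my-app.appspot.com and uploads go to gs://my-upload-bucket (both placeholders); the signed URL here is generated with a service account key file:
cat > cors.json <<'EOF'
[{"origin": ["https://my-app.appspot.com"], "method": ["PUT", "POST"], "responseHeader": ["Content-Type"], "maxAgeSeconds": 3600}]
EOF
gsutil cors set cors.json gs://my-upload-bucket
gcloud storage sign-url gs://my-upload-bucket/song.mp3 --private-key-file=sa-key.json --http-verb=PUT --duration=15m
The browser then uploads directly to the signed URL, so the payload never passes through the App Engine backend.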
Question #: 126
Topic #: 1
Your team will start developing a new application using microservices architecture on Kubernetes Engine. As part of the development lifecycle, any code change that has been pushed to the remote develop branch on your GitHub repository should be built and tested automatically. When the build and test are successful, the relevant microservice will be deployed automatically in the development environment. You want to ensure that all code deployed in the development environment follows this process. What should you do?
A. Have each developer install a pre-commit hook on their workstation that tests the code and builds the container when committing on the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster. B. Install a post-commit hook on the remote git repository that tests the code and builds the container when code is pushed to the development branch. After a successful commit, have the developer deploy the newly built container image on the development cluster. C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions. D. Create a Cloud Build trigger based on the development branch to build a new container image and store it in Container Registry. Rely on Vulnerability Scanning to ensure the code tests succeed. As the final step of the Cloud Build process, deploy the new container image on the development cluster. Ensure only Cloud Build has access to deploy new versions.
https://www.examtopics.com/discussions/google/view/56702-exam-professional-cloud-architect-topic-1-question-126/
C. Create a Cloud Build trigger based on the development branch that tests the code, builds the container, and stores it in Container Registry. Create a deployment pipeline that watches for new images and deploys the new image on the development cluster. Ensure only the deployment tool has access to deploy new versions.
The question says the relevant microservice must be "deployed automatically in the development environment," so A and B are out (both rely on a developer deploying manually). D says "Rely on Vulnerability Scanning to ensure the code tests succeed," but vulnerability scanning is not testing, so D is out. The correct answer is therefore C.
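A minimal sketch of such a trigger, assuming a GitHub repository already connected to Cloud Build (repository and file names are hypothetical):
# Build, test, and push the container on every push to the develop branch.
gcloud builds triggers create github \
  --name="develop-ci" \
  --repo-owner="my-org" \
  --repo-name="my-microservice" \
  --branch-pattern="^develop$" \
  --build-config="cloudbuild.yaml"
The referenced cloudbuild.yaml would run the tests, build the container, and push it to Container Registry; a separate deployment tool (for example, a GitOps operator) then rolls the new image out to the development cluster.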
Question #: 51
Topic #: 1
You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?
A. Point gcloud datastore create-indexes to your configuration file B. Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes C. In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file D. Create an HTTP request to the built-in python module to send the index configuration file to your application
https://www.examtopics.com/discussions/google/view/7231-exam-professional-cloud-architect-topic-1-question-51/
A. Point gcloud datastore create-indexes to your configuration file
Here’s how you can do it:
- Ensure you have the gcloud CLI installed: you need the Google Cloud SDK installed and set up on your local machine. If you haven’t done this yet, follow the installation guide.
- Navigate to the directory containing your index.yaml file: this file contains the definitions of the indexes you want to deploy.
- Run the following command:
gcloud datastore indexes create index.yaml
This command deploys the indexes defined in the index.yaml file to Cloud Datastore.
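A minimal sketch of what the index file might contain (kind and property names are hypothetical):
cat > index.yaml <<'EOF'
indexes:
- kind: Task
  properties:
  - name: done
  - name: created
    direction: desc
EOF
gcloud datastore indexes create index.yaml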
Question #: 116
Topic #: 1
You have developed a non-critical update to your application that is running in a managed instance group, and have created a new instance template with the update that you want to release. To prevent any possible impact to the application, you don’t want to update any running instances. You want any new instances that are created by the managed instance group to contain the new update. What should you do?
A. Start a new rolling restart operation. B. Start a new rolling replace operation. C. Start a new rolling update. Select the Proactive update mode. D. Start a new rolling update. Select the Opportunistic update mode.
https://www.examtopics.com/discussions/google/view/56399-exam-professional-cloud-architect-topic-1-question-116/
D. Start a new rolling update. Select the Opportunistic update mode.
In Google Cloud, the main difference between proactive and opportunistic updates in a managed instance group (MIG) is when they are applied:
Proactive updates: Automatically apply updates to existing VMs.
Opportunistic updates: Only apply updates when you manually select a VM to update or when new instances are created.
see: https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
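A hedged sketch of the command (MIG, template, and zone names are hypothetical; check the flag names against your gcloud version):
# Opportunistic mode: existing instances are left untouched; only instances the MIG
# creates or recreates later use the new template.
gcloud compute instance-groups managed rolling-action start-update my-mig \
  --version=template=my-template-v2 \
  --type=opportunistic \
  --zone=us-central1-a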
Question #: 137
Topic #: 1
Your company provides a recommendation engine for retail customers. You are providing retail customers with an API where they can submit a user ID and the
API returns a list of recommendations for that user. You are responsible for the API lifecycle and want to ensure stability for your customers in case the API makes backward-incompatible changes. You want to follow Google-recommended practices. What should you do?
A. Create a distribution list of all customers to inform them of an upcoming backward-incompatible change at least one month before replacing the old API with the new API. B. Create an automated process to generate API documentation, and update the public API documentation as part of the CI/CD process when deploying an update to the API. C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change. D. Use a versioning strategy for the APIs that adds the suffix "DEPRECATED" to the current API version number on every backward-incompatible change. Use the current version number for the new API.
https://www.examtopics.com/discussions/google/view/56656-exam-professional-cloud-architect-topic-1-question-137/
C. Use a versioning strategy for the APIs that increases the version number on every backward-incompatible change.
Deprecating API functionality
API elements (fields, messages, RPCs) may be marked deprecated in any channel to indicate that they should no longer be used:
// Represents a scroll. Books are preferred over scrolls.
message Scroll {
option deprecated = true;
// …
}
Deprecated API functionality must not graduate from alpha to beta, nor beta to stable. In other words, functionality must not arrive “pre-deprecated” in any channel.
The beta channel’s functionality may be removed after it has been deprecated for a sufficient period; we recommend 180 days. For functionality that exists only in the alpha channel, deprecation is optional, and functionality may be removed without notice. If functionality is deprecated in an API’s alpha channel before removal, the API should apply the same annotation, and may use any timeframe it wishes.
https://cloud.google.com/apis/design/versioning#release-based_versioning
Question #: 148
Topic #: 1
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate the encryption keys outside of Google Cloud. You need to implement a solution. What should you do?
A. Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset. B. Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key. C. Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset. D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
https://www.examtopics.com/discussions/google/view/60439-exam-professional-cloud-architect-topic-1-question-148/
D. Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
The requirement is that the encryption keys are generated outside of Google Cloud.
This automatically eliminates A and B, which generate a new key in Cloud KMS.
Option C decrypts the data before storing it in BigQuery, but the point is to keep the data protected by the imported key while it sits in BigQuery, so D is the only possible answer.
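A rough sketch of the final step, assuming the externally generated key material has already been imported into Cloud KMS via an import job (all resource names are hypothetical):
# Create the BigQuery dataset with the imported key as its default CMEK.
bq mk --dataset \
  --default_kms_key=projects/my-project/locations/us/keyRings/dw-ring/cryptoKeys/dw-key \
  my-project:sensitive_dw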
Question #: 156
Topic #: 1
Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the on-premises environment and Google
Cloud that is configured with Cloud VPN. The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing.
What should they do?
A. Configure Private Google Access for on-premises only. B. Perform the following tasks: 1. Create a service account. 2. Give the BigQuery JobUser role and Storage Reader role to the service account. 3. Remove all other IAM access from the project. C. Configure VPC Service Controls and configure Private Google Access. D. Configure Private Google Access.
https://www.examtopics.com/discussions/google/view/60416-exam-professional-cloud-architect-topic-1-question-156/
C. Configure VPC Service Controls and configure Private Google Access.
Security benefits of VPC Service Controls
Access from unauthorized networks using stolen credentials
Data exfiltration by malicious insiders or compromised code
https://cloud.google.com/vpc-service-controls/docs/overview#benefits
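A hedged sketch of creating the perimeter (project number and access policy ID are hypothetical):
gcloud access-context-manager perimeters create dw_perimeter \
  --title="Data warehouse perimeter" \
  --resources=projects/123456789012 \
  --restricted-services=bigquery.googleapis.com \
  --policy=987654321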
Question #: 100
Topic #: 1
You need to ensure reliability for your application and operations by supporting reliable task scheduling for compute on GCP. Leveraging Google best practices, what should you do?
A. Using the Cron service provided by App Engine, publish messages directly to a message-processing utility service running on Compute Engine instances. B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances. C. Using the Cron service provided by Google Kubernetes Engine (GKE), publish messages directly to a message-processing utility service running on Compute Engine instances. D. Using the Cron service provided by GKE, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
https://www.examtopics.com/discussions/google/view/7233-exam-professional-cloud-architect-topic-1-question-100/
B. Using the Cron service provided by App Engine, publish messages to a Cloud Pub/Sub topic. Subscribe to that topic using a message-processing utility service running on Compute Engine instances.
The answer is B, but this question is outdated. Today the best practice for cron is Cloud Scheduler, a fully managed, enterprise-grade cron job scheduler.
https://cloud.google.com/scheduler/
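A minimal sketch of the modern approach, with Cloud Scheduler publishing to a Pub/Sub topic (job name, topic, and schedule are hypothetical):
gcloud scheduler jobs create pubsub process-tasks \
  --schedule="*/10 * * * *" \
  --topic=task-topic \
  --message-body='{"action":"run"}' \
  --location=us-central1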
Question #: 180
Topic #: 1
Your operations team currently stores 10 TB of data in an object storage service from a third-party provider. They want to move this data to a Cloud Storage bucket as quickly as possible, following Google-recommended practices. They want to minimize the cost of this data migration. Which approach should they use?
A. Use the gsutil mv command to move the data. B. Use the Storage Transfer Service to move the data. C. Download the data to a Transfer Appliance, and ship it to Google. D. Download the data to the on-premises data center, and upload it to the Cloud Storage bucket.
https://www.examtopics.com/discussions/google/view/68693-exam-professional-cloud-architect-topic-1-question-180/
B. Use the Storage Transfer Service to move the data.
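A hedged sketch of what this could look like with the gcloud transfer CLI, assuming the source is an S3-compatible store (bucket names are hypothetical and exact flags may differ by gcloud version):
# Storage Transfer Service pulls directly from the source, so the data never has to
# transit your own infrastructure.
gcloud transfer jobs create s3://source-bucket gs://destination-bucket \
  --source-creds-file=aws-creds.json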
Question #: 7
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines
(VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department.
Which two steps should you take? (Choose two.)
A. Use the --no-auto-delete flag on all persistent disks and stop the VM B. Use the --auto-delete flag on all persistent disks and terminate the VM C. Apply VM CPU utilization label and include it in the BigQuery billing export D. Use Google BigQuery billing export and labels to associate cost to groups E. Store all state into local SSD, snapshot the persistent disks, and terminate the VM F. Store all state in Google Cloud Storage, snapshot the persistent disks, and terminate the VM
https://www.examtopics.com/discussions/google/view/54304-exam-professional-cloud-architect-topic-1-question-7/
A. Use the --no-auto-delete flag on all persistent disks and stop the VM
D. Use Google BigQuery billing export and labels to associate cost to groups
The question describes a scenario where developers are required to move their development infrastructure resources from on-premises VMs to Google Cloud Platform (GCP) to reduce costs. The resources go through multiple start/stop events during the day and require state to persist. The challenge is to design a process of running a development environment in GCP while providing cost visibility to the finance department.
There are two steps that can be taken to address this challenge, which are described below:
Use the --no-auto-delete flag on all persistent disks and stop the VM (Option A). This ensures that the data stored in the persistent disks is not deleted when the VM is stopped, and the state is preserved for the next start. By stopping the VM instead of terminating it, developers can quickly resume their work from where they left off without having to recreate their environment, and costs are saved because the environment does not have to be rebuilt from scratch every time developers start working.
Use Google BigQuery billing export and labels to associate cost to groups (Option D). By applying labels to the development resources and including them in the BigQuery billing export, the finance department gets visibility into the cost associated with each group of resources. This enables accurate cost allocation, gives developers visibility into the cost of their environment, and helps the finance department identify areas where cost optimization can be applied.
In summary, the recommended approach is to use the --no-auto-delete flag on all persistent disks and stop the VM instead of terminating it to preserve the state of the environment, and to use BigQuery billing export and labels to associate costs with groups for accurate cost allocation and optimization. Options A and D are the correct answers to the question.
Options B and E are incorrect because terminating the VM or storing the state into local SSD would result in the loss of data, which is not desirable in this scenario. Option C is incorrect because it only provides visibility into CPU utilization, which is not sufficient to associate costs with groups accurately. Option F is incorrect because storing state in Google Cloud Storage and snapshotting the persistent disks can be costly, and it does not address the issue of associating costs with groups.
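A rough sketch of how the finance team might slice costs by label once the billing export is enabled (the export table name is hypothetical and depends on your billing account configuration):
bq query --use_legacy_sql=false '
SELECT
  (SELECT value FROM UNNEST(labels) WHERE key = "team") AS team,
  ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
GROUP BY team
ORDER BY total_cost DESC'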
Question #: 42
Topic #: 1
You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions?
A. ~/bin B. Cloud Storage C. /google/scripts D. /usr/local/bin
https://www.examtopics.com/discussions/google/view/7212-exam-professional-cloud-architect-topic-1-question-42/
A. ~/bin
Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance. This storage is on a per-user basis and is available across projects. Unlike the instance itself, this storage does not time out on inactivity. All files you store in your home directory, including installed software, scripts and user configuration files like .bashrc and .vimrc, persist between sessions. Your $HOME directory is private to you and cannot be accessed by other users.
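A minimal sketch of installing a utility so it persists and sits on the default path (on the Debian-based Cloud Shell image, ~/bin is typically added to PATH by ~/.profile when the directory exists):
mkdir -p ~/bin
cp ./my-utility ~/bin/
chmod +x ~/bin/my-utility
# Open a new session (or source ~/.profile) so the directory is picked up on PATH.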
Question #: 134
Topic #: 1
Your company is using Google Cloud. You have two folders under the Organization: Finance and Shopping. The members of the development team are in a
Google Group. The development team group has been assigned the Project Owner role on the Organization. You want to prevent the development team from creating resources in projects in the Finance folder. What should you do?
A. Assign the development team group the Project Viewer role on the Finance folder, and assign the development team group the Project Owner role on the Shopping folder. B. Assign the development team group only the Project Viewer role on the Finance folder. C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization. D. Assign the development team group only the Project Owner role on the Shopping folder.
https://www.examtopics.com/discussions/google/view/56975-exam-professional-cloud-architect-topic-1-question-134/
C. Assign the development team group the Project Owner role on the Shopping folder, and remove the development team group Project Owner role from the Organization.
Answer C is correct.
Answers A and B are both overridden by the less restrictive Project Owner role granted at the Organization level.
Answer D grants a role the group already inherits from the Organization level and does not remove the Project Owner role there, so the team could still create resources in the Finance folder.
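A hedged sketch of the two IAM changes (folder ID, organization ID, and group address are hypothetical):
# Grant Project Owner on the Shopping folder only.
gcloud resource-manager folders add-iam-policy-binding 111111111111 \
  --member="group:dev-team@example.com" --role="roles/owner"
# Remove the inherited Project Owner role at the Organization level.
gcloud organizations remove-iam-policy-binding 222222222222 \
  --member="group:dev-team@example.com" --role="roles/owner"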
Question #: 37
Topic #: 1
Your organization wants to control IAM policies for different departments independently, but centrally.
Which approach should you take?
A. Multiple Organizations with multiple Folders B. Multiple Organizations, one for each department C. A single Organization with Folders for each department D. A single Organization with multiple projects, each with a central owner
https://www.examtopics.com/discussions/google/view/7208-exam-professional-cloud-architect-topic-1-question-37/
C. A single Organization with Folders for each department
https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
https://cloud.google.com/architecture/identity/best-practices-for-planning#use_organizations_to_delineate_administrative_authority
Question #: 170
Topic #: 1
One of your primary business objectives is being able to trust the data stored in your application. You want to log all changes to the application data.
How can you design your logging system to verify authenticity of your logs?
A. Write the log concurrently in the cloud and on premises B. Use a SQL database and limit who can modify the log table C. Digitally sign each timestamp and log entry and store the signature D. Create a JSON dump of each log entry and store it in Google Cloud Storage
https://www.examtopics.com/discussions/google/view/7082-exam-professional-cloud-architect-topic-1-question-170/
C. Digitally sign each timestamp and log entry and store the signature
C (Correct answer) - Digitally sign each timestamp and log entry and store the signature.
Answers A, B, and D add no value for verifying the authenticity of your logs. Besides, logs are typically exported to Cloud Storage, BigQuery, or Pub/Sub; a SQL database is neither a good export target nor a good store for log data.
Simplified Explanation
To verify the authenticity of your logs if they are tampered with or forged, you can use a certain algorithm to generate digest by hashing each timestamp or log entry and then digitally sign the digest with a private key to generate a signature. Anybody with your public key can verify that signature to confirm that it was made with your private key and they can tell if the timestamp or log entry was modified. You can put the signature files into a folder separate from the log files. This separation enables you to enforce granular security policies.
C - Digitally sign each timestamp and log entry and store the signature.
This is a fun question where all options are technically workable, but the point is to find the most efficient one. Since the question asks about verifying each log entry, you don't need to duplicate the log. Storing a much shorter timestamp-and-hash pair addresses the requirement: when reading an entry from the original log, you also read the hash for that timestamp and verify the entry's body against it.
By the way, this is a general-purpose question that is not directly tied to GCP; it mainly checks your attentiveness.
A - is about duplication; it can work, but it is redundant.
B / D - both have a similar design but don't allow verification of an entry. There is no cross-checking, so a person with access to the log can change it in one place.
C - stores the log in one place and the hash or signature in another. Even if a "trusted" person modifies the original log, the change breaks the correspondence with the hash stored elsewhere. That storage should be accessible only to the verification program (via a service account).
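A minimal sketch of the signing and verification steps using openssl (key and file names are hypothetical; in practice the signatures would be written to separate, tightly controlled storage):
# Sign the log entry (or a digest of timestamp + entry) with the private key.
openssl dgst -sha256 -sign signer_private.pem \
  -out entry-20240101T120000.sig entry-20240101T120000.json
# Later, anyone with the public key can verify the entry has not been altered.
openssl dgst -sha256 -verify signer_public.pem \
  -signature entry-20240101T120000.sig entry-20240101T120000.json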
Question #: 191
Topic #: 1
You are deploying an application to Google Cloud. The application is part of a system. The application in Google Cloud must communicate over a private network with applications in a non-Google Cloud environment. The expected average throughput is 200 kbps. The business requires:
✑ as close to 100% system availability as possible
✑ cost optimization
You need to design the connectivity between the locations to meet the business requirements. What should you provision?
A. An HA Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway B. Two Classic Cloud VPN gateways connected to two on-premises VPN gateways Configure each Classic Cloud VPN gateway to have two tunnels, each connected to different on-premises VPN gateways C. Two HA Cloud VPN gateways connected to two on-premises VPN gateways Configure each HA Cloud VPN gateway to have two tunnels, each connected to different on-premises VPN gateways D. A single Cloud VPN gateway connected to an on-premises VPN gateway
https://www.examtopics.com/discussions/google/view/80419-exam-professional-cloud-architect-topic-1-question-191/
A. An HA Cloud VPN gateway connected with two tunnels to an on-premises VPN gateway
A is true only if the on-premises (peer) gateway has two separate external IP addresses. The HA VPN gateway uses two tunnels, one to each external IP address on the peer device, as described in https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies#configurations_that_support_9999_availability
https://cloud.google.com/network-connectivity/docs/vpn/concepts/topologies-increase-bandwidth
Question #: 155
Topic #: 1
You are migrating third-party applications from optimized on-premises virtual machines to Google Cloud. You are unsure about the optimum CPU and memory options. The applications have a consistent usage pattern across multiple weeks. You want to optimize resource usage for the lowest cost. What should you do?
A. Create an instance template with the smallest available machine type, and use an image of the third-party application taken from a current on-premises virtual machine. Create a managed instance group that uses average CPU utilization to autoscale the number of instances in the group. Modify the average CPU utilization threshold to optimize the number of instances running. B. Create an App Engine flexible environment, and deploy the third-party application using a Dockerfile and a custom runtime. Set CPU and memory options similar to your application's current on-premises virtual machine in the app.yaml file. C. Create multiple Compute Engine instances with varying CPU and memory options. Install the Cloud Monitoring agent, and deploy the third-party application on each of them. Run a load test with high traffic levels on the application, and use the results to determine the optimal settings. D. Create a Compute Engine instance with CPU and memory options similar to your application's current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud Console.
https://www.examtopics.com/discussions/google/view/60494-exam-professional-cloud-architect-topic-1-question-155/
D. Create a Compute Engine instance with CPU and memory options similar to your application’s current on-premises virtual machine. Install the Cloud Monitoring agent, and deploy the third-party application. Run a load test with normal traffic levels on the application, and follow the Rightsizing Recommendations in the Cloud Console.
“Rightsizing provides two types of recommendations:
- Performance-based recommendations: Recommends Compute Engine instances based on the CPU and RAM currently allocated to the on-premises VM. This recommendation is the default.
- Cost-based recommendations: Recommends Compute Engine instances based on:
- The current CPU and RAM configuration of the on-premises VM.
- The average usage of this VM during a given period. To use this option, you must activate rightsizing monitoring with vSphere for this group of VMs and allow time for Migrate for Compute Engine to analyze usage.
https://cloud.google.com/compute/docs/instances/apply-machine-type-recommendations-for-instances
Use the monitoring agent for more precise recommendations
Cloud Monitoring offers a Monitoring agent that collects additional disk, CPU, network, and process metrics from your VM instances. To collect this data, install the Monitoring agent on your VM instances so it can access system resources and app services.
If the Monitoring agent is installed and running on a VM instance, the CPU and memory metrics collected by the agent are automatically used to compute machine type recommendations. The agent metrics provided by the Monitoring agent give better insights into resource utilization of the instance than the default Compute Engine metrics. This allows the recommendation engine to estimate resource requirements better and make more precise recommendations.
Question #: 115
Topic #: 1
Your company has sensitive data in Cloud Storage buckets. Data analysts have Identity Access Management (IAM) permissions to read the buckets. You want to prevent data analysts from retrieving the data in the buckets from outside the office network. What should you do?
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network. B. 1. Create a firewall rule for all instances in the Virtual Private Cloud (VPC) network for source range. 2. Use the Classless Inter-domain Routing (CIDR) of the office network. C. 1. Create a Cloud Function to remove IAM permissions from the buckets, and another Cloud Function to add IAM permissions to the buckets. 2. Schedule the Cloud Functions with Cloud Scheduler to add permissions at the start of business and remove permissions at the end of business. D. 1. Create a Cloud VPN to the office network. 2. Configure Private Google Access for on-premises hosts.
https://www.examtopics.com/discussions/google/view/56376-exam-professional-cloud-architect-topic-1-question-115/
A. 1. Create a VPC Service Controls perimeter that includes the projects with the buckets. 2. Create an access level with the CIDR of the office network
For all Google Cloud services secured with VPC Service Controls, you can ensure that:
Resources within a perimeter are accessed only from clients within authorized VPC networks using Private Google Access with either Google Cloud or on-premises.
https://cloud.google.com/vpc-service-controls/docs/overview
Enforce a security perimeter with VPC Service Controls to isolate resources of multi-tenant Google Cloud services—reducing the risk of data exfiltration or data breach.
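A hedged sketch of the access level piece (policy ID and office CIDR are hypothetical); the level is then attached to the perimeter that protects the projects containing the buckets:
cat > office-spec.yaml <<'EOF'
- ipSubnetworks:
  - 203.0.113.0/24
EOF
gcloud access-context-manager levels create office_network \
  --title="Office network" \
  --basic-level-spec=office-spec.yaml \
  --policy=987654321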
Question #: 70
Topic #: 1
You are using a single Cloud SQL instance to serve your application from a specific zone. You want to introduce high availability. What should you do?
A. Create a read replica instance in a different region B. Create a failover replica instance in a different region C. Create a read replica instance in the same region, but in a different zone D. Create a failover replica instance in the same region, but in a different zone
https://www.examtopics.com/discussions/google/view/11815-exam-professional-cloud-architect-topic-1-question-70/
D. Create a failover replica instance in the same region, but in a different zone
https://cloud.google.com/sql/docs/mysql/replication
As a best practice, put read replicas in a different zone than the primary instance when you use HA on your primary instance. This practice ensures that read replicas continue to operate when the zone that contains the primary instance has an outage.
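For reference, current Cloud SQL configures this with the REGIONAL availability type rather than an explicitly created failover replica; a minimal sketch (instance name is hypothetical):
gcloud sql instances patch my-instance --availability-type=REGIONAL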
Question #: 142
Topic #: 1
An application development team has come to you for advice. They are planning to write and deploy an HTTP(S) API using Go 1.12. The API will have a very unpredictable workload and must remain reliable during peaks in traffic. They want to minimize operational overhead for this application. Which approach should you recommend?
A. Develop the application with containers, and deploy to Google Kubernetes Engine. B. Develop the application for App Engine standard environment. C. Use a Managed Instance Group when deploying to Compute Engine. D. Develop the application for App Engine flexible environment, using a custom runtime.
https://www.examtopics.com/discussions/google/view/60437-exam-professional-cloud-architect-topic-1-question-142/
B. Develop the application for App Engine standard environment.
https://cloud.google.com/appengine/docs/the-appengine-environments
Question #: 63
Topic #: 1
You are designing an application for use only during business hours. For the minimum viable product release, you’d like to use a managed product that automatically scales to zero
so you don’t incur costs when there is no activity.
Which primary compute resource should you choose?
A. Cloud Functions B. Compute Engine C. Google Kubernetes Engine D. AppEngine flexible environment
https://www.examtopics.com/discussions/google/view/68714-exam-professional-cloud-architect-topic-1-question-63/
A. Cloud Functions
A. Cloud Functions - managed service that scales down to 0
B. Compute Engine - not a managed service
C. Google Kubernetes Engine - not a fully managed service and won't scale down to 0
D. App Engine flexible environment - managed service but won't scale down to 0
Question #: 72
Topic #: 1
Your web application has several VM instances running within a VPC. You want to restrict communications between instances to only the paths and ports you authorize, but you don’t want to rely on static IP addresses or subnets because the app can autoscale. How should you restrict communications?
A. Use separate VPCs to restrict traffic B. Use firewall rules based on network tags attached to the compute instances C. Use Cloud DNS and only allow connections from authorized hostnames D. Use service accounts and configure the web application to authorize particular service accounts to have access
https://www.examtopics.com/discussions/google/view/11816-exam-professional-cloud-architect-topic-1-question-72/
B. Use firewall rules based on network tags attached to the compute instances
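A minimal sketch (VPC, tags, and port are hypothetical); because the rule follows the tags rather than IP addresses, it keeps working as instances are added or removed by autoscaling:
gcloud compute firewall-rules create allow-web-to-db \
  --network=my-vpc \
  --allow=tcp:5432 \
  --source-tags=web \
  --target-tags=db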
Question #: 109
Topic #: 1
You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier. B. When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information. C. Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks. D. Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
https://www.examtopics.com/discussions/google/view/56381-exam-professional-cloud-architect-topic-1-question-109/
A. Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
In my view, the question states that "the association collects a large amount of health data, such as sustained injuries," and then requires you to delete "such" information upon request of the subject. From that point of view, the task is not to delete the entire user record but the specific personal health data. With DLP you can use infoTypes and infoType detectors to scan specifically for those entries and act on them (https://cloud.google.com/dlp/docs/concepts-infotypes), so I would say B.
I also vote for B. I had some doubts about whether A was correct, but:
- I'm not convinced by the argument "only A talks about deleting" (it would be too easy if it were just about picking the answer containing the word "delete").
- The question says "design a solution that can accommodate such a request"; "accommodate" here means more "facilitate" than "accomplish".
- I think the task is about deleting health data, not everything related to the unique identifier.
- Data Catalog lets you manage data by knowing which datasets, tables, and columns store which kinds of data. Answer A implicitly imposes a data model in which every table holding data about an individual must contain that individual's ID, which does not have to be the case in a real data model.
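For answer A, the deletion itself is a straightforward DML statement; a minimal sketch with hypothetical table and identifier names:
bq query --use_legacy_sql=false \
  'DELETE FROM `my-project.health.injuries` WHERE member_id = "member-12345"'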
Question #: 127
Topic #: 1
Your operations team has asked you to help diagnose a performance issue in a production application that runs on Compute Engine. The application is dropping requests that reach it when under heavy load. The process list for affected instances shows a single application process that is consuming all available CPU, and autoscaling has reached the upper limit of instances. There is no abnormal load on any other related systems, including the database. You want to allow production traffic to be served again as quickly as possible. Which action should you recommend?
A. Change the autoscaling metric to agent.googleapis.com/memory/percent_used. B. Restart the affected instances on a staggered schedule. C. SSH to each instance and restart the application process. D. Increase the maximum number of instances in the autoscaling group.
https://www.examtopics.com/discussions/google/view/56603-exam-professional-cloud-architect-topic-1-question-127/
D. Increase the maximum number of instances in the autoscaling group.
The question is not asking for a permanent solution to the problem, it is asking what to do to have the production traffic to be served again as quickly as possible. Therefore, the best answer is D.
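A minimal sketch of raising the cap (MIG name, zone, and limits are hypothetical):
gcloud compute instance-groups managed set-autoscaling my-mig \
  --zone=us-central1-a \
  --max-num-replicas=30 \
  --target-cpu-utilization=0.75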
Question #: 90
Topic #: 1
You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.
What should you do?
A. Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing. B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions. C. Deploy the update in a new VPC, and use Google's global HTTP load balancing to split traffic between the update and current applications. D. Deploy the update as a new App Engine application, and use Google's global HTTP load balancing to split traffic between the new and current applications.
https://www.examtopics.com/discussions/google/view/6462-exam-professional-cloud-architect-topic-1-question-90/
B. Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.
To test an update to an App Engine application with production traffic before replacing the current version, deploy the update as a new version in the App Engine application and split traffic between the new and current versions. This canary-style rollout lets you test the new version with a portion of production traffic while the current version still serves the remainder.
To split traffic between the new and current versions, you can use the App Engine traffic splitting feature. This feature allows you to specify the percentage of traffic that should be sent to each version, and it can be used to gradually ramp up traffic to the new version over time. This allows you to test the new version with a small portion of traffic initially, and gradually increase the traffic as you become more confident in the update.
https://cloud.google.com/appengine/docs/standard/splitting-traffic
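A minimal sketch of the deploy-then-split flow (service and version names are hypothetical):
# Deploy the update as a new version without sending traffic to it yet.
gcloud app deploy --version=v2 --no-promote
# Send 10% of traffic to the new version, 90% to the current one.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=random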
Question #: 120
Topic #: 1
Your company has a project in Google Cloud with three Virtual Private Clouds (VPCs). There is a Compute Engine instance on each VPC. Network subnets do not overlap and must remain separated. The network configuration is shown below.
Instance #1 is an exception and must communicate directly with both Instance #2 and Instance #3 via internal IPs. How should you accomplish this?
A. Create a cloud router to advertise subnet #2 and subnet #3 to subnet #1. B. Add two additional NICs to Instance #1 with the following configuration: • NIC1 ○ VPC: VPC #2 ○ SUBNETWORK: subnet #2 • NIC2 ○ VPC: VPC #3 ○ SUBNETWORK: subnet #3 Update firewall rules to enable traffic between instances. C. Create two VPN tunnels via Cloud VPN: • 1 between VPC #1 and VPC #2. • 1 between VPC #2 and VPC #3. Update firewall rules to enable traffic between the instances. D. Peer all three VPCs: • Peer VPC #1 with VPC #2. • Peer VPC #2 with VPC #3. Update firewall rules to enable traffic between the instances.
https://www.examtopics.com/discussions/google/view/56416-exam-professional-cloud-architect-topic-1-question-120/
B. Add two additional NICs to Instance #1 with the following configuration: • NIC1 ○ VPC: VPC #2 ○ SUBNETWORK: subnet #2 • NIC2 ○ VPC: VPC #3 ○ SUBNETWORK: subnet #3 Update firewall rules to enable traffic between instances.
As I understand it, the requirement is that only VM1 must be able to communicate with VM2 and VM3, but not VM2 with VM3.
We can exclude D because it would also enable VM2 to communicate with VM3; if the question intended D to be correct, it would presumably describe only two peerings, VPC #1 with VPC #2 and VPC #1 with VPC #3.
We can exclude C as well: it creates no connection between VPC #1 and VPC #3.
In my opinion A will not work either.
So the only correct answer seems to be B. What I don't understand is why the firewall rules need updating, since in my view the default rules would allow such communication (perhaps more restrictive rules are in place; the question doesn't give enough detail to clarify that part). Please correct me if I am wrong.
https://cloud.google.com/vpc/docs/multiple-interfaces-concepts#firewall_rules_and_multiple_network_interfaces
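A hedged sketch of recreating Instance #1 with one NIC per VPC (names are hypothetical); note that network interfaces can only be configured when the instance is created:
gcloud compute instances create instance-1 \
  --zone=us-central1-a \
  --network-interface=network=vpc-1,subnet=subnet-1 \
  --network-interface=network=vpc-2,subnet=subnet-2 \
  --network-interface=network=vpc-3,subnet=subnet-3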
Question #: 71
Topic #: 1
Your company is running a stateless application on a Compute Engine instance. The application is used heavily during regular business hours and lightly outside of business hours. Users are reporting that the application is slow during peak hours. You need to optimize the application’s performance. What should you do?
A. Create a snapshot of the existing disk. Create an instance template from the snapshot. Create an autoscaled managed instance group from the instance template. B. Create a snapshot of the existing disk. Create a custom image from the snapshot. Create an autoscaled managed instance group from the custom image. C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template. D. Create an instance template from the existing disk. Create a custom image from the instance template. Create an autoscaled managed instance group from the custom image.
https://www.examtopics.com/discussions/google/view/11301-exam-professional-cloud-architect-topic-1-question-71/
C. Create a custom image from the existing disk. Create an instance template from the custom image. Create an autoscaled managed instance group from the instance template.
(image -> template -> MIG).
It is not recommended to create a template with --source-instance, because if the existing instance has a static external IP address, that address is copied into the instance template and might limit the use of the template.
Templates are best created from images or other templates. Creating a template from a running instance may require cleanup work before it can be used for a MIG.
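A minimal sketch of the image -> template -> MIG flow (all names are hypothetical):
gcloud compute images create app-image \
  --source-disk=app-disk --source-disk-zone=us-central1-a
gcloud compute instance-templates create app-template --image=app-image
gcloud compute instance-groups managed create app-mig \
  --zone=us-central1-a --template=app-template --size=2
gcloud compute instance-groups managed set-autoscaling app-mig \
  --zone=us-central1-a --min-num-replicas=2 --max-num-replicas=10 \
  --target-cpu-utilization=0.6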
Question #: 111
Topic #: 1
Your company is running its application workloads on Compute Engine. The applications have been deployed in production, acceptance, and development environments. The production environment is business-critical and is used 24/7, while the acceptance and development environments are only critical during office hours. Your CFO has asked you to optimize these environments to achieve cost savings during idle times. What should you do?
A. Create a shell script that uses the gcloud command to change the machine type of the development and acceptance instances to a smaller machine type outside of office hours. Schedule the shell script on one of the production instances to automate the task. B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours. C. Deploy the development and acceptance applications on a managed instance group and enable autoscaling. D. Use regular Compute Engine instances for the production environment, and use preemptible VMs for the acceptance and development environments.
https://www.examtopics.com/discussions/google/view/56686-exam-professional-cloud-architect-topic-1-question-111/
B. Use Cloud Scheduler to trigger a Cloud Function that will stop the development and acceptance environments after office hours and start them just before office hours.
https://cloud.google.com/blog/products/it-ops/best-practices-for-optimizing-your-cloud-costs
Schedule VMs to auto start and stop: The benefit of a platform like Compute Engine is that you only pay for the compute resources that you use. Production systems tend to run 24/7; however, VMs in development, test or personal environments tend to only be used during business hours, and turning them off can save you a lot of money!
https://cloud.google.com/blog/products/storage-data-transfer/save-money-by-stopping-and-starting-compute-engine-instances-on-schedule
Cloud Scheduler, GCP’s fully managed cron job scheduler, provides a straightforward solution for automatically stopping and starting VMs. By employing Cloud Scheduler with Cloud Pub/Sub to trigger Cloud Functions on schedule, you can stop and start groups of VMs identified with labels of your choice (created in Compute Engine). Here you can see an example schedule that stops all VMs labeled “dev” at 5pm and restarts them at 9am, while leaving VMs labeled “prod” untouched
Question #: 171
Topic #: 1
Your company has a Google Workspace account and Google Cloud Organization. Some developers in the company have created Google Cloud projects outside of the Google Cloud Organization.
You want to create an Organization structure that allows developers to create projects, but prevents them from modifying production projects. You want to manage policies for all projects centrally and be able to set more restrictive policies for production projects.
You want to minimize disruption to users and developers when business needs change in the future. You want to follow Google-recommended practices. Now should you design the Organization structure?
A. 1. Create a second Google Workspace account and Organization. 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on both Organizations. 5. Additionally, set the production policies on the original Organization. B. 1. Create a folder under the Organization resource named "Production." 2. Grant all developers the Project Creator IAM role on the new Organization. 3. Move the developer projects into the new Organization. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder. C. 1. Create folders under the Organization resource named "Development" and "Production." 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder. D. 1. Designate the Organization for production projects only. 2. Ensure that developers do not have the Project Creator IAM role on the Organization. 3. Create development projects outside of the Organization using the developer Google Workspace accounts. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the individual production projects.
https://www.examtopics.com/discussions/google/view/68682-exam-professional-cloud-architect-topic-1-question-171/
C. 1. Create folders under the Organization resource named "Development" and "Production." 2. Grant all developers the Project Creator IAM role on the "Development" folder. 3. Move the developer projects into the "Development" folder. 4. Set the policies for all projects on the Organization. 5. Additionally, set the production policies on the "Production" folder.
I would recommend option C, creating two folders under the Organization resource named “Development” and “Production” and placing developer and production projects in the respective folders. This approach would allow you to centrally manage policies for all projects, while also being able to set more restrictive policies for production projects. It would also allow you to easily move projects between the Development and Production folders as business needs change, without disrupting users or developers.
Option D, designating the Organization for production projects only, would not allow developers to create projects within the Organization and could lead to confusion around project ownership and management. It would also make it more difficult to move projects between development and production environments.
Option A, creating a second Google Workspace account and Organization, would not be a recommended practice as it would create unnecessary complexity and make it more difficult to manage policies and move projects between environments.
Option B, creating a single folder under the Organization resource and placing all projects in that folder, would not allow you to set different policies for development and production projects.
Question #: 179
Topic #: 1
Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain personally identifiable information (PII).
Only the compliance team may access the PII. The other information in the tables must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to the tables. What should you do?
A. 1. From the dataset where you have the source data, create views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view. B. 1. From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII. 2. Assign an appropriate project-level IAM role to the members of the data science team. 3. Assign access controls to the dataset that contains the view. C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset. D. 1. Create a dataset for the data science team. 2. Create materialized views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
https://www.examtopics.com/discussions/google/view/68692-exam-professional-cloud-architect-topic-1-question-179/
C. 1. Create a dataset for the data science team. 2. Create views of tables that you want to share, excluding PII. 3. Assign an appropriate project-level IAM role to the members of the data science team. 4. Assign access controls to the dataset that contains the view. 5. Authorize the view to access the source dataset.
Materialized view is too costly for the requirement.
Authorized views should be created in a different dataset from the source data. That way, data owners can give users access to the authorized view without simultaneously granting access to the underlying data.
https://cloud.google.com/bigquery/docs/share-access-views
views
A view is a virtual table defined by a SQL query. You can use views to provide an easily reusable name for a complex query or a limited set of data that you can then authorize other users to access. Once you create a view, a user can then query the view as they would a table. Query results contain only the data from the tables and fields specified in the query that defines the view.
The query that defines the view is run each time the view is queried. If you frequently query a large or computationally expensive view, then you should consider creating a materialized view.
materialized views
In BigQuery, materialized views are precomputed views that periodically cache the results of a query for increased performance and efficiency. BigQuery leverages precomputed results from materialized views and whenever possible reads only changes from the base tables to compute up-to-date results. Materialized views can be queried directly or can be used by the BigQuery optimizer to process queries to the base tables.
Queries that use materialized views are generally faster and consume fewer resources than queries that retrieve the same data only from the base tables. Materialized views can significantly improve the performance of workloads that have the characteristic of common and repeated queries.
Question #: 152
Topic #: 1
Your company is planning to upload several important files to Cloud Storage. After the upload is completed, they want to verify that the uploaded content is identical to what they have on-premises. You want to minimize the cost and effort of performing this check. What should you do?
A. 1. Use Linux shasum to compute a digest of files you want to upload. 2. Use gsutil -m to upload all the files to Cloud Storage. 3. Use gsutil cp to download the uploaded files. 4. Use Linux shasum to compute a digest of the downloaded files. 5. Compare the hashes. B. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Develop a custom Java application that computes CRC32C hashes. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4. Compare the hashes. C. 1. Use gsutil -m to upload all the files to Cloud Storage. 2. Use gsutil cp to download the uploaded files. 3. Use Linux diff to compare the content of the files. D. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4. Compare the hashes.
https://www.examtopics.com/discussions/google/view/60415-exam-professional-cloud-architect-topic-1-question-152/
D. 1. Use gsutil -m to upload the files to Cloud Storage. 2. Use gsutil hash -c FILE_NAME to generate CRC32C hashes of all on-premises files. 3. Use gsutil ls -L gs://[YOUR_BUCKET_NAME] to collect CRC32C hashes of the uploaded files. 4. Compare the hashes.
https://cloud.google.com/storage/docs/data-validation
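A minimal sketch of option D with hypothetical paths and bucket names:
gsutil -m cp -r ./important-files gs://my-bucket/important-files
gsutil hash -c ./important-files/contract.pdf
gsutil ls -L gs://my-bucket/important-files/contract.pdf | grep -i crc32c
# Compare the CRC32C values reported by the two commands.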
Question #: 197
Topic #: 1
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file. You need to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy. B. Create a retention policy organizational constraint constraints/storage.retentionPolicySeconds at the organization level. Set the duration to 5 years. C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years. D. Create a retention policy organizational constraint constraints/storage.retentionPolicySeconds at the project level. Set the duration to 5 years.
https://www.examtopics.com/discussions/google/view/138677-exam-professional-cloud-architect-topic-1-question-197/
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
A is correct:
- the retention policy must be locked
- there is no need for a retention policy on all buckets in the organization or all buckets in the project
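A minimal sketch of option A (bucket name is hypothetical); note that locking a retention policy is irreversible:
gsutil retention set 5y gs://loan-approval-docs
gsutil retention lock gs://loan-approval-docs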
Question #: 161
Topic #: 1
A recent audit revealed that a new network was created in your GCP project. In this network, a GCE instance has an SSH port open to the world. You want to discover this network’s origin.
What should you do?
A. Search for Create VM entry in the Stackdriver alerting console B. Navigate to the Activity page in the Home section. Set category to Data Access and search for Create VM entry C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry D. Connect to the GCE instance using project SSH keys. Identify previous logins in system logs, and match these with the project owners list
https://www.examtopics.com/discussions/google/view/7016-exam-professional-cloud-architect-topic-1-question-161/
C. In the Logging section of the console, specify GCE Network as the logging section. Search for the Create Insert entry
A doesn't fit because the Stackdriver alerting console is for alerts, not for searching when a resource was created.
B focuses on Data Access logs, which doesn't fit: creating a network or a firewall rule is an Admin Activity, not a data access event.
D focuses on who logged in, which is good to know but doesn't answer how the network was created.
C focuses on the logs themselves, filtering on GCE network events and searching for the create/insert entry.
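A hedged sketch of the kind of log query this implies (the exact filter fields can vary):
gcloud logging read \
  'resource.type="gce_network" AND protoPayload.methodName:"networks.insert"' \
  --freshness=30d --limit=10 --format=json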
Question #: 73
Topic #: 1
You are using Cloud SQL as the database backend for a large CRM deployment. You want to scale as usage increases and ensure that you don’t run out of storage, maintain 75% CPU usage cores, and keep replication lag below 60 seconds. What are the correct steps to meet your requirements?
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time. B. 1. Enable automatic storage increase for the instance. 2. Change the instance type to a 32-core machine type to keep CPU usage below 75%. 3. Create a Stackdriver alert for replication lag, and deploy memcache to reduce load on the master. C. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Change the instance type to a 32-core machine type to reduce replication lag. D. 1. Create a Stackdriver alert when storage exceeds 75%, and increase the available storage on the instance to create more space. 2. Deploy memcached to reduce CPU load. 3. Create a Stackdriver alert for replication lag, and change the instance type to a 32-core machine type to reduce replication lag.
https://www.examtopics.com/discussions/google/view/6529-exam-professional-cloud-architect-topic-1-question-73/
A. 1. Enable automatic storage increase for the instance. 2. Create a Stackdriver alert when CPU usage exceeds 75%, and change the instance type to reduce CPU usage. 3. Create a Stackdriver alert for replication lag, and shard the database to reduce replication time.
Sharding the database reduces replication lag.
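For step 1 of answer A, a minimal sketch (the instance name is hypothetical):
# let Cloud SQL grow the storage automatically as usage increases
gcloud sql instances patch crm-mysql-prod --storage-auto-increase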
Question #: 81
Topic #: 1
Your company operates nationally and plans to use GCP for multiple batch workloads, including some that are not time-critical. You also need to use GCP services that are HIPAA-certified and manage service costs.
How should you design to meet Google best practices?
A. Provision preemptible VMs to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant. C. Provision standard VMs in the same region to reduce cost. Discontinue use of all GCP services and APIs that are not HIPAA-compliant. D. Provision standard VMs to the same region to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
https://www.examtopics.com/discussions/google/view/6896-exam-professional-cloud-architect-topic-1-question-81/
B. Provision preemptible VMs to reduce cost. Disable and then discontinue use of all GCP services and APIs that are not HIPAA-compliant.
Disabling and then discontinuing allows you to see the effects of not using the APIs, so you can gauge alternatives. That leaves B and D as viable answers. The question says only some workloads are not time-critical, which implies others are; preemptible VMs are a good fit for the non-time-critical batch workloads and reduce cost. So I’m also going to choose B.
Question #: 153
Topic #: 1
You have deployed an application on Anthos clusters (formerly Anthos GKE). According to the SRE practices at your company, you need to be alerted if request latency is above a certain threshold for a specified amount of time. What should you do?
A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO), and create an alerting policy based on this SLO. B. Enable the Cloud Trace API on your project, and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics. C. Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting policy in case this metric exceeds the threshold. D. Configure Anthos Config Management on your cluster, and create a yaml file that defines the SLO and alerting policy you want to deploy in your cluster.
https://www.examtopics.com/discussions/google/view/60624-exam-professional-cloud-architect-topic-1-question-153/
A. Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO), and create an alerting policy based on this SLO.
Cloud Service Mesh displays a Latency graph on the Metrics page for each of your services. The Latency graph shows you the latency over time, which can help you determine a latency threshold or upper bound for a service.
https://cloud.google.com/service-mesh/docs/observability/slo-overview
Question #: 157
Topic #: 1
You are working at an institution that processes medical data. You are migrating several workloads onto Google Cloud. Company policies require all workloads to run on physically separated hardware, and workloads from different clients must also be separated. You created a sole-tenant node group and added a node for each client. You need to deploy the workloads on these dedicated hosts. What should you do?
A. Add the node group name as a network tag when creating Compute Engine instances in order to host each workload on the correct node group. B. Add the node name as a network tag when creating Compute Engine instances in order to host each workload on the correct node. C. Use node affinity labels based on the node group name when creating Compute Engine instances in order to host each workload on the correct node group. D. Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node.
https://www.examtopics.com/discussions/google/view/60495-exam-professional-cloud-architect-topic-1-question-157/
D. Use node affinity labels based on the node name when creating Compute Engine instances in order to host each workload on the correct node.
Y’all are not reading the fine details. The question is about aligning EACH client to their dedicated nodes (D), not to a node group (C).
https://cloud.google.com/compute/docs/nodes/sole-tenant-nodes#default_affinity_labels
The above reference clearly articulates the default affinity labels for the node group name and the node name. Unless we’re planning to grow each client onto their own dedicated node group (not in the current requirement), the answer is not C but D.
Compute Engine assigns two default affinity labels to each node:
A label for the node group name:
Key: compute.googleapis.com/node-group-name
Value: Name of the node group.
A label for the node name:
Key: compute.googleapis.com/node-name
Value: Name of the individual node.
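A minimal sketch of answer D, assuming a sole-tenant node named client-a-node-1 already exists (all names are placeholders); the VM is pinned to that node through its default node-name affinity label:
# schedule the workload VM onto one client's dedicated node
gcloud compute instances create client-a-workload \
  --zone=us-central1-a \
  --node=client-a-node-1
# equivalently, a node-affinity file can match the label
# compute.googleapis.com/node-name with the node's name as its value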
Question #: 105
Topic #: 1
Your company has an application running on multiple Compute Engine instances. You need to ensure that the application can communicate with an on-premises service that requires high throughput via internal IPs, while minimizing latency. What should you do?
A. Use OpenVPN to configure a VPN tunnel between the on-premises environment and Google Cloud. B. Configure a direct peering connection between the on-premises environment and Google Cloud. C. Use Cloud VPN to configure a VPN tunnel between the on-premises environment and Google Cloud. D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.
https://www.examtopics.com/discussions/google/view/56368-exam-professional-cloud-architect-topic-1-question-105/
D. Configure a Cloud Dedicated Interconnect connection between the on-premises environment and Google Cloud.
Reason : high throughput via internal IPs
Question #: 118
Topic #: 1
Your company has just acquired another company, and you have been asked to integrate their existing Google Cloud environment into your company’s data center. Upon investigation, you discover that some of the RFC 1918 IP ranges being used in the new company’s Virtual Private Cloud (VPC) overlap with your data center IP space. What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space. B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space. C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space. D. Create a Cloud VPN connection from the new VPC to the data center, and apply a firewall rule that blocks the overlapping IP space.
https://www.examtopics.com/discussions/google/view/56680-exam-professional-cloud-architect-topic-1-question-118/
A. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply new IP addresses so there is no overlapping IP space.
B. Create a Cloud VPN connection from the new VPC to the data center, and create a Cloud NAT instance to perform NAT on the overlapping IP space.
C. Create a Cloud VPN connection from the new VPC to the data center, create a Cloud Router, and apply a custom route advertisement to block the overlapping IP space.
Correct Answer: A
- IP ranges should not overlap, so applying new IP addresses is the solution.
A counterpoint: A is not correct. “What should you do to enable connectivity and make sure that there are no routing conflicts when connectivity is established?” If you set up the VPN with BGP, the existing addresses will be propagated to the on-premises environment, overlapping RFC 1918 space and all. C, with a custom route advertisement, solves this.
Answer is C.
https://cloud.google.com/network-connectivity/docs/router/how-to/advertising-custom-ip
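A hedged sketch of the custom route advertisement argued for in C (router name, region, and range are placeholders): advertise only non-overlapping ranges from the Cloud Router so the conflicting space is never propagated over BGP:
gcloud compute routers update acquired-vpc-router \
  --region=us-central1 \
  --advertisement-mode=custom \
  --set-advertisement-ranges=10.50.0.0/16
# only 10.50.0.0/16 is advertised to the data center; overlapping ranges are simply omitted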
Question #: 83
Topic #: 1
Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you do?
A. Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user. B. In the BigQuery interface, execute a query on the JOBS table to get the required information. C. Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information. D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
https://www.examtopics.com/discussions/google/view/8378-exam-professional-cloud-architect-topic-1-question-83/
D. Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
1. Cloud Audit Logs maintains audit logs for Admin Activity, Data Access, and System Events, and BigQuery activity is automatically sent to Cloud Audit Logs.
2. In the filter you can select the relevant BigQuery audit messages, and you can express filters as part of the export.
https://cloud.google.com/logging/docs/audit
https://cloud.google.com/bigquery/docs/reference/auditlogs#ids
https://cloud.google.com/bigquery/docs/reference/auditlogs#auditdata_examples
Option A, connecting Google Data Studio to BigQuery and creating a dimension for the users and a metric for the amount of queries per user, is a valid method of visualizing data, but it would not provide the specific information about the number of queries that were run by each user in the last month.
Option B, executing a query on the JOBS table to get the required information, is not a viable option because the JOBS table does not contain information about the user who ran the query.
Option C, using the ‘bq show’ and ‘bq ls’ commands to list job information, is not a viable option because these commands do not provide information about the user who ran the query.
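To illustrate answer D, a hedged example of reading the query-job audit entries from Cloud Logging (the method name shown is the commonly documented value for the legacy BigQuery audit format; verify against your logs before relying on it):
# pull the last 30 days of BigQuery job-completion audit entries
gcloud logging read \
  'resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.jobcompleted"' \
  --freshness=30d --format=json
# group the results by the caller's principal email to count queries per user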
Question #: 80
Topic #: 1
You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.
What should you do?
A. Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails. B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails. C. Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails. D. Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
https://www.examtopics.com/discussions/google/view/6399-exam-professional-cloud-architect-topic-1-question-80/
B. Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.
Transfer Appliance is a physical appliance for transferring a huge bulk of data; it does not fit disaster recovery testing. Between A and B, B is the closer answer: one would not have both direct peering and Dedicated Interconnect in a single solution.
Question #: 31
Topic #: 1
A development manager is building a new application. He asks you to review his requirements and identify what cloud technologies he can use to meet them. The application must:
1. Be based on open-source technology for cloud portability
2. Dynamically scale compute capacity based on demand
3. Support continuous software delivery
4. Run multiple segregated copies of the same application stack
5. Deploy application bundles using dynamic templates
6. Route network traffic to specific services based on URL
Which combination of technologies will meet all of his requirements?
A. Google Kubernetes Engine, Jenkins, and Helm B. Google Kubernetes Engine and Cloud Load Balancing C. Google Kubernetes Engine and Cloud Deployment Manager D. Google Kubernetes Engine, Jenkins, and Cloud Load Balancing
https://www.examtopics.com/discussions/google/view/54389-exam-professional-cloud-architect-topic-1-question-31/
A. Google Kubernetes Engine, Jenkins, and Helm
It should be A: Helm is needed for “Deploy application bundles using dynamic templates”.
Load balancing is already part of GKE.
Kubernetes Engine offers integrated support for two types of Cloud Load Balancing (Ingress and external network load balancing), hence option A.
Reference : https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
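As a small illustration of why Helm covers requirement 5 (chart name and values are hypothetical), a chart's dynamic templates are rendered per release, so each segregated copy of the stack is just another install with its own values:
helm install recommender-staging ./recommender-chart \
  --set replicaCount=3 --set image.tag=v2.1.0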
Question #: 129
Topic #: 1
You are developing an application using different microservices that should remain internal to the cluster. You want to be able to configure each microservice with a specific number of replicas. You also want to be able to address a specific microservice from any other microservice in a uniform way, regardless of the number of replicas the microservice scales to. You need to implement this solution on Google Kubernetes Engine. What should you do?
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster. B. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using an Ingress, and use the Ingress IP address to address the Deployment from other microservices within the cluster. C. Deploy each microservice as a Pod. Expose the Pod in the cluster using a Service, and use the Service DNS name to address the microservice from other microservices within the cluster. D. Deploy each microservice as a Pod. Expose the Pod in the cluster using an Ingress, and use the Ingress IP address name to address the Pod from other microservices within the cluster.
https://www.examtopics.com/discussions/google/view/57424-exam-professional-cloud-architect-topic-1-question-129/
A. Deploy each microservice as a Deployment. Expose the Deployment in the cluster using a Service, and use the Service DNS name to address it from other microservices within the cluster.
B is incorrect: an Ingress provisions an HTTP(S) load balancer with an external IP, which is not needed for communication that stays inside the cluster.
each microservice with a specific number of replicas = Deployment
internal to the cluster = Service
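A minimal sketch of answer A using kubectl (image and names are hypothetical): each microservice is a Deployment with its own replica count, exposed by a ClusterIP Service whose DNS name the other microservices use:
kubectl create deployment orders --image=gcr.io/my-project/orders:v1
kubectl scale deployment orders --replicas=3
kubectl expose deployment orders --port=80 --target-port=8080
# other microservices call it at http://orders.default.svc.cluster.local,
# regardless of how many replicas are behind the Service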
Question #: 4
Topic #: 1
A news feed web service has the following code running on Google App Engine. During peak load, users report that they can see news articles they already viewed.
What is the most likely cause of this problem?
A. The session variable is local to just a single instance B. The session variable is being overwritten in Cloud Datastore C. The URL of the API needs to be modified to prevent caching D. The HTTP Expires header needs to be set to -1 stop caching
https://www.examtopics.com/discussions/google/view/7085-exam-professional-cloud-architect-topic-1-question-4/
A. The session variable is local to just a single instance
It’s A. App Engine spins up new containers automatically according to the load. During peak traffic, HTTP requests originated by the same user could be served by different containers. Given that the sessions variable is recreated for each container, it might store different data.
The problem here is that this Flask app is stateful: the sessions variable is the state of the app, and stateful variables in App Engine / Cloud Run / Cloud Functions are problematic.
A solution would be to store the session in some database (e.g. Firestore, Memorystore) and retrieve it from there. This way the app would fetch the session from a single place and would be stateless.
Question #: 66
Topic #: 1
Your customer wants to capture multiple GBs of aggregate real-time key performance indicators (KPIs) from their game servers running on Google Cloud Platform and monitor the KPIs with low latency. How should they capture the KPIs?
A. Store time-series data from the game servers in Google Bigtable, and view it using Google Data Studio. B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them. C. Schedule BigQuery load jobs to ingest analytics files uploaded to Cloud Storage every ten minutes, and visualize the results in Google Data Studio. D. Insert the KPIs into Cloud Datastore entities, and run ad hoc analysis and visualizations of them in Cloud Datalab.
https://www.examtopics.com/discussions/google/view/9222-exam-professional-cloud-architect-topic-1-question-66/
B. Output custom metrics to Stackdriver from the game servers, and create a Dashboard in Stackdriver Monitoring Console to view them.
Data studio cannot be used with BigTable
https://datastudio.google.com/data
Question #: 77
Topic #: 1
You want to establish a Compute Engine application in a single VPC across two regions. The application must communicate over VPN to an on-premises network.
How should you deploy the VPN?
A. Use VPC Network Peering between the VPC and the on-premises network. B. Expose the VPC to the on-premises network using IAM and VPC Sharing. C. Create a global Cloud VPN Gateway with VPN tunnels from each region to the on-premises peer gateway. D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
https://www.examtopics.com/discussions/google/view/11819-exam-professional-cloud-architect-topic-1-question-77/
D. Deploy Cloud VPN Gateway in each region. Ensure that each region has at least one VPN tunnel to the on-premises peer gateway.
It can’t be A: VPC Network Peering only allows private RFC 1918 connectivity across two Virtual Private Cloud (VPC) networks, and in this example there is one VPC and an on-premises network.
https://cloud.google.com/vpc/docs/vpc-peering
It is definitely not B.
It is not C, because Cloud VPN gateways and tunnels are regional objects, not global.
So the answer is D.
https://cloud.google.com/vpn/docs/how-to/creating-static-vpns
Question #: 135
Topic #: 1
You are developing your microservices application on Google Kubernetes Engine. During testing, you want to validate the behavior of your application in case a specific microservice should suddenly crash. What should you do?
A. Add a taint to one of the nodes of the Kubernetes cluster. For the specific microservice, configure a pod anti-affinity label that has the name of the tainted node as a value. B. Use Istio's fault injection on the particular microservice whose faulty behavior you want to simulate. C. Destroy one of the nodes of the Kubernetes cluster to observe the behavior. D. Configure Istio's traffic management features to steer the traffic away from a crashing microservice.
https://www.examtopics.com/discussions/google/view/56640-exam-professional-cloud-architect-topic-1-question-135/
B. Use Istio’s fault injection on the particular microservice whose faulty behavior you want to simulate.
The question is about an application (microservice) crash, not a node failure.
Fault injection is a technique used in chaos engineering to deliberately introduce errors into a system to test its resilience and observe its behavior under failure conditions. Istio is a service mesh that can manage the traffic between microservices. It includes fault injection capabilities that enable you to simulate failures such as delays or crashed services without actually stopping the service or damaging the environment. This allows you to validate how the rest of your application reacts to the failure of a specific microservice.
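A hedged sketch of answer B, assuming Istio (or Anthos Service Mesh) is installed and a service named payments exists (names are hypothetical): a VirtualService aborts every request to the microservice with a 503 so you can observe how its callers behave:
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments-crash-test
spec:
  hosts:
  - payments
  http:
  - fault:
      abort:
        percentage:
          value: 100
        httpStatus: 503
    route:
    - destination:
        host: payments
EOF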
Question #: 185
Topic #: 1
Your company has an application running as a Deployment in a Google Kubernetes Engine (GKE) cluster. When releasing new versions of the application via a rolling deployment, the team has been causing outages. The root cause of the outages is misconfigurations with parameters that are only used in production. You want to put preventive measures for this in the platform to prevent outages. What should you do?
A. Configure liveness and readiness probes in the Pod specification. B. Configure health checks on the managed instance group. C. Create a Scheduled Task to check whether the application is available. D. Configure an uptime alert in Cloud Monitoring.
https://www.examtopics.com/discussions/google/view/79702-exam-professional-cloud-architect-topic-1-question-185/
A. Configure liveness and readiness probes in the Pod specification.
A: Configuring the right liveness and readiness probes prevents outages when rolling out a new ReplicaSet of a Deployment, because Pods are only getting traffic when they are considered ready.
B: With GKE, you do not deal with MIGs.
C: Does not use GKE tools and is therefore not the best option.
D: Does alert you but does not prevent the outage.
https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes?hl=en
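A minimal sketch of answer A (image, port, and health path are hypothetical): the readiness probe keeps traffic away from Pods that are not ready, and the liveness probe restarts Pods that stop responding, so a misconfigured rollout never receives production traffic:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:v2
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
EOF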
Question #: 91
Topic #: 1
All Compute Engine instances in your VPC should be able to connect to an Active Directory server on specific ports. Any other traffic emerging from your instances is not allowed. You want to enforce this using VPC firewall rules.
How should you configure the firewall rules?
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances. B. Create an egress rule with priority 100 to deny all traffic for all instances. Create another egress rule with priority 1000 to allow the Active Directory traffic for all instances. C. Create an egress rule with priority 1000 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 100 to block all traffic for all instances. D. Create an egress rule with priority 100 to allow the Active Directory traffic. Rely on the implied deny egress rule with priority 1000 to block all traffic for all instances.
https://www.examtopics.com/discussions/google/view/6817-exam-professional-cloud-architect-topic-1-question-91/
A. Create an egress rule with priority 1000 to deny all traffic for all instances. Create another egress rule with priority 100 to allow the Active Directory traffic for all instances.
It should be A: there is no implied deny egress rule, only an implied allow egress rule.
https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
Every VPC network has two implied firewall rules. These rules exist, but are not shown in the Cloud Console:
The implied allow egress rule: An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination, except for traffic blocked by GCP. Outbound access may be restricted by a higher priority firewall rule. Internet access is allowed if no other firewall rules deny outbound traffic and if the instance has an external IP address or uses a NAT instance. Refer to Internet access requirements for more details.
The implied deny ingress rule: An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming traffic to them. Incoming access may be allowed by a higher priority rule. Note that the default network includes some additional rules that override this one, allowing certain types of incoming traffic.
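A rough sketch of answer A (network name, Active Directory address, and port list are placeholders; adjust the ports to your AD deployment):
# deny all egress at the default priority
gcloud compute firewall-rules create deny-all-egress \
  --network=prod-vpc --direction=EGRESS --action=DENY \
  --rules=all --destination-ranges=0.0.0.0/0 --priority=1000
# allow only Active Directory traffic at a higher (lower-numbered) priority
gcloud compute firewall-rules create allow-ad-egress \
  --network=prod-vpc --direction=EGRESS --action=ALLOW \
  --rules=tcp:88,udp:88,tcp:389,tcp:636,tcp:445 \
  --destination-ranges=10.20.0.5/32 --priority=100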
Question #: 56
Topic #: 1
You have an application deployed on Google Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?
A. Use kubectl set image deployment/echo-deployment <new-image> B. Use the rolling update functionality of the Instance Group behind the Kubernetes cluster C. Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file> D. Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>
https://www.examtopics.com/discussions/google/view/7184-exam-professional-cloud-architect-topic-1-question-56/
A. Use kubectl set image deployment/echo-deployment <new-image>
https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps#updating_an_application
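Usage example for answer A (the container name and image are hypothetical); the Deployment performs a rolling update, so echo-service keeps serving throughout:
kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2
kubectl rollout status deployment/echo-deployment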
Question #: 87
Topic #: 1
Your web application uses Google Kubernetes Engine to manage several workloads. One workload requires a consistent set of hostnames even after pod scaling and relaunches.
Which feature of Kubernetes should you use to accomplish this?
A. StatefulSets B. Role-based access control C. Container environment variables D. Persistent Volumes
https://www.examtopics.com/discussions/google/view/7328-exam-professional-cloud-architect-topic-1-question-87/
A. StatefulSets
To ensure that a workload in Kubernetes has a consistent set of hostnames even after pod scaling and relaunches, you should use StatefulSets. StatefulSets are a type of controller in Kubernetes that is used to manage stateful applications. They provide a number of features that are specifically designed to support stateful applications, including:
Stable, unique network identifiers for each pod in the set
Persistent storage that is automatically attached to pods
Ordered, graceful deployment and scaling of pods
Ordered, graceful deletion and termination of pods
By using StatefulSets, you can ensure that your workload has a consistent set of hostnames even if pods are scaled or relaunched, which can be important for applications that rely on stable network identifiers.
StatefulSets are a feature of Kubernetes, which is what the question asks about. Persistent Volumes are commonly used with StatefulSets (https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/). See the Google documentation for its mention of hostnames (https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset).
Question #: 150
Topic #: 1
Your team needs to create a Google Kubernetes Engine (GKE) cluster to host a newly built application that requires access to third-party services on the internet.
Your company does not allow any Compute Engine instance to have a public IP address on Google Cloud. You need to create a deployment strategy that adheres to these guidelines. What should you do?
A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet. B. Configure the GKE cluster as a private cluster. Configure Private Google Access on the Virtual Private Cloud (VPC). C. Configure the GKE cluster as a route-based cluster. Configure Private Google Access on the Virtual Private Cloud (VPC). D. Create a Compute Engine instance, and install a NAT Proxy on the instance. Configure all workloads on GKE to pass through this proxy to access third-party services on the Internet.
https://www.examtopics.com/discussions/google/view/60441-exam-professional-cloud-architect-topic-1-question-150/
A. Configure the GKE cluster as a private cluster, and configure Cloud NAT Gateway for the cluster subnet.
Cloud NAT is the correct answer
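A hedged sketch of the Cloud NAT piece of answer A (router, network, and region names are placeholders): a Cloud Router plus Cloud NAT gives the private GKE nodes outbound internet access without any public IPs:
gcloud compute routers create gke-nat-router \
  --network=gke-vpc --region=us-central1
gcloud compute routers nats create gke-nat \
  --router=gke-nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges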
Question #: 54
Topic #: 1
You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install the software?
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil. B. Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil. C. Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud. D. Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.
https://www.examtopics.com/discussions/google/view/7180-exam-professional-cloud-architect-topic-1-question-54/
A. Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil
Option B: Uploading the required installation files to Cloud Storage and using firewall rules to block all traffic except the IP address range for Cloud Storage is not a valid option, as it does not allow the VM to access the installation files without public internet access.
Option C: Uploading the required installation files to Cloud Source Repositories and using gcloud to download the files to the VM is not a valid option, as Cloud Source Repositories does not support storing large binary files such as installation files.
Option D: Uploading the required installation files to Cloud Source Repositories and using firewall rules to block all traffic except the IP address range for Cloud Source Repositories is not a valid option, as it does not allow the VM to access the installation files without public internet access.
To install specific software on a Compute Engine instance in a highly secured environment where public Internet access is not allowed, you can follow these steps:
Upload the required installation files to Cloud Storage.
Configure the VM on a subnet with a Private Google Access subnet. This will allow the VM to access Google APIs and services, such as Cloud Storage, without requiring a public IP address or internet access.
Assign only an internal IP address to the VM. This will ensure that the VM is not accessible from the public internet.
Download the installation files to the VM using gsutil, which is a command-line tool that allows you to access Cloud Storage from the VM.
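A minimal sketch of answer A (subnet, VM, and bucket names are placeholders): enable Private Google Access on the subnet, create the VM without an external IP, and pull the installer from Cloud Storage:
gcloud compute networks subnets update secure-subnet \
  --region=us-central1 --enable-private-ip-google-access
gcloud compute instances create install-target-vm \
  --zone=us-central1-a --subnet=secure-subnet --no-address
# from inside the VM, reached over Private Google Access:
gsutil cp gs://installer-bucket/package.deb .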
Question #: 130
Topic #: 1
Your company has a networking team and a development team. The development team runs applications on Compute Engine instances that contain sensitive data. The development team requires administrative permissions for Compute Engine. Your company requires all network resources to be managed by the networking team. The development team does not want the networking team to have access to the sensitive data on the instances. What should you do?
A. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use Cloud VPN to join the two VPCs. B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team. C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team. D. 1. Create a project with a standalone VPC and assign the Network Admin role to the networking team. 2. Create a second project with a standalone VPC and assign the Compute Admin role to the development team. 3. Use VPC Peering to join the two VPCs.
https://www.examtopics.com/discussions/google/view/57301-exam-professional-cloud-architect-topic-1-question-130/
B. 1. Create a project with a standalone Virtual Private Cloud (VPC), assign the Network Admin role to the networking team, and assign the Compute Admin role to the development team.
C. 1. Create a project with a Shared VPC and assign the Network Admin role to the networking team. 2. Create a second project without a VPC, configure it as a Shared VPC service project, and assign the Compute Admin role to the development team.
For the same project and the same VPC, give the Network Admin role to the networking team and the Compute Admin role to the development team; what is the need for another project?
For full separation of the teams you will need to use a shared VPC in this case. If you compare the two roles you will see that Compute Admin includes the permissions of the Network Admin so with option B you don’t separate the teams as Compute Admin includes compute.network.* permissions (and others). https://cloud.google.com/iam/docs/understanding-roles
Complete separation was not required. However, the networking team shouldn’t have access to the Compute Engine instances, and for that a full separation isn’t needed. Any better idea?
They’re getting the Compute Admin permissions either way. The key words in the statement are actually “Create a second project without a VPC, configure it as a Shared VPC service project.” Since the VPC being used doesn’t exist in their project, they’re unable to manage network changes.
Because Compute Admin has compute.* permissions, which includes Network Admin’s.
This is tricky. Both B & C could seem okay, but C is the right answer.
The compute.networkAdmin role gives broad permissions on the project, which also affects compute instances. For instance, it gives the compute.instances.get and compute.instances.use permissions. Even though this role does not grant permissions to start/stop/create/delete instances, it still gives broad permissions on compute instances.
This gets much clearer if we do the same analysis on the other role: compute.admin.
This role gives permissions on compute.*, which also includes compute.networks.*. That is exactly what we don’t want to happen. If we spawn the VPC and the compute VMs in the same project, then compute admins will be able to mess around with the VPC.
That is why we need to separate networks and compute within 2 projects, unless creating custom roles, etc. Shared VPC are aimed at that. Therefore, C is the right answer.
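A hedged sketch of the Shared VPC wiring in C (project IDs are placeholders): the networking team's project becomes the host project and the development team's project is attached as a service project; IAM roles are then granted per project:
gcloud compute shared-vpc enable network-host-project
gcloud compute shared-vpc associated-projects add dev-service-project \
  --host-project=network-host-project
# grant roles/compute.networkAdmin in the host project (networking team)
# and roles/compute.admin in the service project (development team)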
Question #: 96
Topic #: 1
You need to develop procedures to test a disaster plan for a mission-critical application. You want to use Google-recommended practices and native capabilities within GCP.
What should you do?
A. Use Deployment Manager to automate service provisioning. Use Activity Logs to monitor and debug your tests. B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests. C. Use gcloud scripts to automate service provisioning. Use Activity Logs to monitor and debug your tests. D. Use gcloud scripts to automate service provisioning. Use Stackdriver to monitor and debug your tests.
https://www.examtopics.com/discussions/google/view/7125-exam-professional-cloud-architect-topic-1-question-96/
B. Use Deployment Manager to automate service provisioning. Use Stackdriver to monitor and debug your tests.
Google Best practice —> never use scripts. They do not trust anyone else’s code it seems.
https://cloud.google.com/architecture/dr-scenarios-planning-guide?hl=en
Question #: 92
Topic #: 1
Your customer runs a web service used by e-commerce sites to offer product recommendations to users. The company has begun experimenting with a machine learning model on Google Cloud Platform to improve the quality of results.
What should the customer do to improve their model’s results over time?
A. Export Cloud Machine Learning Engine performance metrics from Stackdriver to BigQuery, to be used to analyze the efficiency of the model. B. Build a roadmap to move the machine learning model training from Cloud GPUs to Cloud TPUs, which offer better results. C. Monitor Compute Engine announcements for availability of newer CPU architectures, and deploy the model to them as soon as they are available for additional performance. D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
https://www.examtopics.com/discussions/google/view/68718-exam-professional-cloud-architect-topic-1-question-92/
D. Save a history of recommendations and results of the recommendations in BigQuery, to be used as training data.
Model performance is generally based on the volume of its training data input. The more the data, the better the model.
A, B, and C are about the performance of the ML system, not the quality of its results; only more training data will improve the model’s results and predictions.
Question #: 187
Topic #: 1
Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover the most costly queries and which users spend the most. What should you do?
A. 1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter by the user you want to check. B. 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need. C. 1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow pipeline to compute the cost of queries split by users. D. 1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the information you need.
https://www.examtopics.com/discussions/google/view/80112-exam-professional-cloud-architect-topic-1-question-187/
B. 1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need
D also can be continuous https://cloud.google.com/billing/docs/how-to/export-data-bigquery#setup. I think D is the right answer.
Why not D: Billing export can provide cost data in BigQuery, but it doesn’t capture details about individual queries or users, making it insufficient for the specific needs of identifying costly queries and high-spending users.
B is the correct answer https://cloud.google.com/blog/products/data-analytics/taking-a-practical-approach-to-bigquery-cost-monitoring
A is incorrect: there is no billing page for a project; it’s the billing account that handles all org billing.
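A rough sketch of answer B (sink, project, and dataset names are placeholders; the filter shown targets the legacy job-completion audit entries and should be verified against your logs): export the BigQuery audit logs back into BigQuery and query the resulting table for cost per user:
gcloud logging sinks create bq-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/bq_audit \
  --log-filter='resource.type="bigquery_resource" AND protoPayload.methodName="jobservice.jobcompleted"'
# grant the sink's writer identity edit access on the destination dataset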
Question #: 3
Topic #: 1
The operations manager asks you for a list of recommended practices that she should consider when migrating a J2EE application to the cloud.
Which three practices should you recommend? (Choose three.)
A. Port the application code to run on Google App Engine B. Integrate Cloud Dataflow into the application to capture real-time metrics C. Instrument the application with a monitoring tool like Stackdriver Debugger D. Select an automation framework to reliably provision the cloud infrastructure E. Deploy a continuous integration tool with automated testing in a staging environment F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
https://www.examtopics.com/discussions/google/view/54378-exam-professional-cloud-architect-topic-1-question-3/
ACE
A. Port the application code to run on Google App Engine C. Instrument the application with a monitoring tool like Stackdriver Debugger E. Deploy a continuous integration tool with automated testing in a staging environment
ADE
A. Port the application code to run on Google App Engine D. Select an automation framework to reliably provision the cloud infrastructure E. Deploy a continuous integration tool with automated testing in a staging environment
ACE
This is talking about the APPLICATION not the infrastructure, therefore I believe we should focus on the APP-side of things:
1. port the app to app engine for content delivery
2. add monitoring for troubleshooting
3. use a CI/CD workflow for continuous delivery w/testing for a stable application
———–
Let’s go with option elimination
A. Port the application code to run on Google App Engine
» PaaS serverless managed service, so all my infra provisioning is taken care of by GCP.
B. Integrate Cloud Dataflow into the application to capture real-time metrics
» Good to have
C. Instrument the application with a monitoring tool like Stackdriver Debugger
» A must for debugging issues and monitoring application logs; this is now GCP Cloud Monitoring and Logging.
D. Select an automation framework to reliably provision the cloud infrastructure
» App Engine is a PaaS, so the infrastructure is taken care of by App Engine. I would select this if I had not selected A, hence I will eliminate this option for now.
E. Deploy a continuous integration tool with automated testing in a staging environment
» Good to have
F. Migrate from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable
» There is no requirement for DB enhancement, hence I will eliminate this option.
A and C are must-have.
B and E are good to have, but E is more important than B.
——–
ADE.
References: https://cloud.google.com/appengine/docs/standard/java/tools/uploadinganapp https://cloud.google.com/appengine/docs/standard/java/building-app/cloud-sql.
When migrating a J2EE (Java Enterprise Edition) application to the cloud, there are several best practices that should be considered. Here are three of the most important ones:
Instrument the application with a monitoring tool like Stackdriver Debugger (Option C): Monitoring is essential for any application running in the cloud. In order to ensure that the application is performing as expected, you need to monitor it for issues like performance bottlenecks, errors, and crashes. Stackdriver Debugger allows you to troubleshoot issues in real time; it enables you to take snapshots of your application's state at any point in time, which can be extremely helpful when debugging problems.
Select an automation framework to reliably provision the cloud infrastructure (Option D): Automation is key when it comes to cloud infrastructure management. When you're dealing with large, complex systems, it's easy for things to go wrong if you're relying on manual processes. That's why it's important to use an automation framework to provision and manage your cloud infrastructure. There are several automation frameworks available, such as Terraform and CloudFormation, that make it easy to manage your infrastructure as code.
Deploy a continuous integration tool with automated testing in a staging environment (Option E): Continuous integration (CI) is the practice of continuously building and testing your application as you develop it. This helps you catch issues early in the development process, before they become bigger problems. When migrating a J2EE application to the cloud, it's important to deploy a CI tool like Jenkins or CircleCI and set up automated testing in a staging environment. This will allow you to catch issues before they're deployed to production and ensure that your application is working as expected.
While the other options listed (porting the application code to run on Google App Engine, integrating Cloud Dataflow into the application to capture real-time metrics, and migrating from MySQL to a managed NoSQL database like Google Cloud Datastore or Bigtable) may also be important considerations, they are not as essential as the three practices listed above.
Question #: 176
Topic #: 1
Your company has an application that is running on multiple instances of Compute Engine. It generates 1 TB per day of logs. For compliance reasons, the logs need to be kept for at least two years. The logs need to be available for active query for 30 days. After that, they just need to be retained for audit purposes. You want to implement a storage solution that is compliant, minimizes costs, and follows Google-recommended practices. What should you do?
A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock. B. 1. Write a daily cron job, running on all instances, that uploads logs into a Cloud Storage bucket. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. C. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a partitioned BigQuery table. 3. Set a time_partitioning_expiration of 30 days. D. 1. Create a daily cron job, running on all instances, that uploads logs into a partitioned BigQuery table. 2. Set a time_partitioning_expiration of 30 days.
https://www.examtopics.com/discussions/google/view/68688-exam-professional-cloud-architect-topic-1-question-176/
A. 1. Install a Cloud Logging agent on all instances. 2. Create a sink to export logs into a regional Cloud Storage bucket. 3. Create an Object Lifecycle rule to move files into a Coldline Cloud Storage bucket after one month. 4. Configure a retention policy at the bucket level using bucket lock.
The practice for managing logs generated on Compute Engine on Google Cloud is to install the Cloud Logging agent and send them to Cloud Logging.
The sent logs will be aggregated into a Cloud Logging sink and exported to Cloud Storage.
The reason for using Cloud Storage as the destination for the logs is that the requirement in question requires setting up a lifecycle based on the storage period.
In this case, the log will be used for active queries for 30 days after it is saved, but after that, it needs to be stored for a longer period of time for auditing purposes.
If the data is to be used for active queries, we can use BigQuery’s Cloud Storage data query feature and move the data past 30 days to Coldline to build a cost-optimal solution.
Therefore, the correct answer is as follows
1. Install the Cloud Logging agent on all instances.
2. Create a sink that exports the logs to a regional Cloud Storage bucket.
3. Create an Object Lifecycle rule to move the files to a Coldline Cloud Storage bucket after one month.
4. Set up a bucket-level retention policy using bucket lock.
https://cloud.google.com/logging/docs/agent/logging/installation
https://cloud.google.com/logging/docs/export/configure_export_v2
https://cloud.google.com/bigquery/external-data-cloud-storage
A - It should arguably be Archive (>= 365 days) and not Coldline (>= 90 days), so the proposed solution is more expensive than what is possible. Also, there is no way to query the data unless you use BigQuery external tables.
B & C - Wrong because CRON is not the way to do this.
D - Wrong because data is deleted after 30 days and not retained for 2 years.
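For steps 3 and 4 of answer A, a hedged sketch (the bucket name is a placeholder): an Object Lifecycle rule moves objects to Coldline after 30 days, and a locked two-year retention policy covers the compliance requirement:
cat > lifecycle.json <<EOF
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
     "condition": {"age": 30}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://prod-app-logs
gsutil retention set 2y gs://prod-app-logs
gsutil retention lock gs://prod-app-logs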
Question #: 19
Topic #: 1
The database administration team has asked you to help them improve the performance of their new database server running on Google Compute Engine. The database is for importing and normalizing their performance statistics and is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD persistent disk.
What should they change to get better performance from this system?
A. Increase the virtual machine's memory to 64 GB B. Create a new virtual machine running PostgreSQL C. Dynamically resize the SSD persistent disk to 500 GB D. Migrate their performance metrics warehouse to BigQuery E. Modify all of their batch jobs to use bulk inserts into the database
https://www.examtopics.com/discussions/google/view/7161-exam-professional-cloud-architect-topic-1-question-19/
C. Dynamically resize the SSD persistent disk to 500 GB
The answer is C because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Increasing the persistent disk capacity will increase its throughput and IOPS, which in turn improves the performance of MySQL.
C. Dynamically resize the SSD persistent disk to 500 GB: Increasing the disk size can improve the database’s performance because larger persistent disks get higher throughput and IOPS limits. However, the gain depends on the current disk utilization and the rate at which data is being added to the database.
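A minimal sketch of answer C (disk, zone, and device names are placeholders): resize the persistent disk online, then grow the filesystem inside the guest:
gcloud compute disks resize mysql-data-disk --size=500GB --zone=us-central1-a
# inside the VM, extend the filesystem (ext4 example; the device name will vary):
sudo resize2fs /dev/sdb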
Question #: 168
Topic #: 1
Your marketing department wants to send out a promotional email campaign. The development team wants to minimize direct operation management. They project a wide range of possible customer responses, from 100 to 500,000 click-through per day. The link leads to a simple website that explains the promotion and collects user information and preferences.
Which infrastructure should you recommend? (Choose two.)
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data. B. Use a Google Container Engine cluster to serve the website and store data to persistent disk. C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data. D. Use a single Compute Engine virtual machine (VM) to host a web server, backend by Google Cloud SQL.
https://www.examtopics.com/discussions/google/view/54373-exam-professional-cloud-architect-topic-1-question-168/
A. Use Google App Engine to serve the website and Google Cloud Datastore to store user data.
C. Use a managed instance group to serve the website and Google Cloud Bigtable to store user data.
A: Google App Engine is a fully managed platform for building and running web applications and APIs. It can automatically scale to meet high traffic demands, making it a good choice for serving the website for the promotional email campaign. Google Cloud Datastore can also scale automatically to meet high traffic demands, making it a good choice for storing user data.
C: A managed instance group are managed as a single entity and can automatically scale up or down based on demand. This makes it a good choice for serving the website for the promotional email campaign. Google Cloud Bigtable is a fully managed, high-performance NoSQL database that can store and serve large amounts of structured data with low latency. It is designed to scale horizontally and can handle high traffic demands, making it a good choice for storing user data.
Question #: 53
Topic #: 1
You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public internet. What should you do?
A. Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database. B. Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database. C. Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database. D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.
https://www.examtopics.com/discussions/google/view/6314-exam-professional-cloud-architect-topic-1-question-53/
D. Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database
https://cloud.google.com/appengine/docs/the-appengine-environments
https://cloud.google.com/appengine/docs/flexible/python/using-third-party-databases
Standard requires more setup compared to Flexible.
Standard Environment:
To connect from the standard environment, you primarily use “Serverless VPC Access” which allows your App Engine app to reach your VPC network over private IP addresses without exposing it directly to the public internet.
Flexible Environment:
In the flexible environment, you can directly connect to your VPC network by deploying your app within the same VPC as your Cloud VPN gateway, enabling a more seamless connection using the private IP addresses of your network resources.
I just had the same confusion. Serverless VPC Access is newer than this exam question, so it’s probably safer to assume that a VPC connection is not supported (at least directly) by App Engine standard here.
Besides, this would add extra overhead, and would also increase the costs for the solution.
Most of these questions haven’t been updated or repurposed according to newer products and services. For this particular question, using a Serverless VPC Connector would add unnecessary complexity and the solution would become more expensive.
I swore to god it was B lol, but after a few hours of reading the documentation, I changed my mind and switched to option D. You might want to do the same.
Question #: 1
Topic #: 1
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs.
What should they do?
A. Configure a new load balancer for the new version of the API B. Reconfigure old clients to use a new endpoint for the new API C. Have the old API forward traffic to the new API based on the path D. Use separate backend pools for each API path behind the load balancer
https://www.examtopics.com/discussions/google/view/7083-exam-professional-cloud-architect-topic-1-question-1/
D. Use separate backend pools for each API path behind the load balancer
D is the answer because HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL. A is not correct because configuring a new load balancer would require a new or different SSL and DNS records which conflicts with the requirements to keep the same SSL and DNS records. B is not correct because it goes against the requirements. The company wants to keep the old API available while new customers and testers try the new API. C is not correct because it is not a requirement to decommission the implementation behind the old API. Moreover, it introduces unnecessary risk in case bugs or incompatibilities are discovered in the new API.
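A hedged sketch of answer D (URL map, host, and backend service names are placeholders): one HTTP(S) load balancer keeps the existing SSL certificate and DNS record while a path matcher routes each API version to its own backend pool:
gcloud compute url-maps add-path-matcher api-url-map \
  --path-matcher-name=api-versions \
  --default-service=old-api-backend \
  --path-rules="/v2/*=new-api-backend" \
  --new-hosts=api.example.com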
Question #: 10
Topic #: 1
You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to
BigQuery.
What should you do to fix the script?
A. Install the latest BigQuery API client library for Python B. Run your script on a new virtual machine with the BigQuery access scope enabled C. Create a new service account with BigQuery access and execute your script with that user D. Install the bq component for gcloud with the command gcloud components install bq.
https://www.examtopics.com/discussions/google/view/7041-exam-professional-cloud-architect-topic-1-question-10/
C. Create a new service account with BigQuery access and execute your script with that user
C - service account is Google Cloud’s best practice
Access scopes are the legacy method of specifying permissions for your instance; read https://cloud.google.com/compute/docs/access/service-accounts. So I would go with C.
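A rough sketch of answer C (project, VM, and account names are placeholders); note that an instance normally has to be stopped before its service account can be changed:
gcloud iam service-accounts create bq-script-sa
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bq-script-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
  --service-account=bq-script-sa@my-project.iam.gserviceaccount.com \
  --scopes=cloud-platform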
Question #: 175
Topic #: 1
You want to allow your operations team to store logs from all the production projects in your Organization, without including logs from other projects. All of the production projects are contained in a folder. You want to ensure that all logs for existing and new production projects are captured automatically. What should you do?
A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project. B. Create an aggregated export on the Organization resource. Set the log sink to be a Cloud Storage bucket in an operations project. C. Create log exports in the production projects. Set the log sinks to be a Cloud Storage bucket in an operations project. D. Create log exports in the production projects. Set the log sinks to be BigQuery datasets in the production projects, and grant IAM access to the operations team to run queries on the datasets.
https://www.examtopics.com/discussions/google/view/68686-exam-professional-cloud-architect-topic-1-question-175/
A. Create an aggregated export on the Production folder. Set the log sink to be a Cloud Storage bucket in an operations project.
This solution will allow the operations team to store logs from all the production projects in your Organization, without including logs from other projects. All of the production projects are contained in a folder, so you can create an aggregated export on the Production folder. You can then set the log sink to be a Cloud Storage bucket in an operations project. This will allow the operations team to store all of the logs from the production projects in one place.
Option B is not the correct solution because it creates an aggregated export on the Organization resource, which will capture logs from all projects in the Organization, including those outside the Production folder.
Option C is not the correct solution because it requires you to create log exports in each production project, which can be time-consuming and error-prone. Additionally, setting the log sink to a Cloud Storage bucket in an operations project will not automatically capture logs for new production projects.
Option D is not the correct solution because it requires you to create log exports in each production project, which can be time-consuming and error-prone. Additionally, storing the logs in BigQuery datasets in the production projects will not allow the operations team to easily access the logs. Instead, they would need to be granted IAM access to run queries on the datasets.
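A hedged sketch of option A with the Cloud Logging Python client (folder number, sink name, and bucket are placeholders; the include_children flag is what makes the folder-level sink aggregated):

    from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
    from google.cloud.logging_v2.types import LogSink

    client = ConfigServiceV2Client()
    sink = LogSink(
        name="prod-folder-sink",
        destination="storage.googleapis.com/ops-prod-logs",  # bucket in the operations project
        include_children=True,  # capture logs from existing and future projects under the folder
    )
    created = client.create_sink(parent="folders/123456789012", sink=sink)

    # Grant this service account the Storage Object Creator role on the destination bucket.
    print(created.writer_identity)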
Question #: 78
Topic #: 1
Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed.
You want to optimize storage and follow Google-recommended practices. What should you do?
A. Configure the expiration time for your tables at 45 days B. Make the tables time-partitioned, and configure the partition expiration at 45 days C. Rely on BigQuery's default behavior to prune application logs older than 45 days D. Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
https://www.examtopics.com/discussions/google/view/6455-exam-professional-cloud-architect-topic-1-question-78/
B. Make the tables time-partitioned, and configure the partition expiration at 45 days
https://cloud.google.com/bigquery/docs/managing-tables#updating_a_tables_expiration_time
When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
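A minimal sketch of option B in Python (project, dataset, table, and field names are placeholders):

    from google.cloud import bigquery

    client = bigquery.Client()

    schema = [
        bigquery.SchemaField("message", "STRING"),
        bigquery.SchemaField("log_time", "TIMESTAMP"),
    ]
    table = bigquery.Table("my-project.app_logs.checkout_service", schema=schema)

    # Partition by the log timestamp and expire partitions after 45 days.
    table.time_partitioning = bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY,
        field="log_time",
        expiration_ms=45 * 24 * 60 * 60 * 1000,
    )
    client.create_table(table)

Expired partitions are dropped automatically, so storage only ever holds the most recent 45 days of each application's logs.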
Question #: 184
Topic #: 1
You are managing several internal applications that are deployed on Compute Engine. Business users inform you that an application has become very slow over the past few days. You want to find the underlying cause in order to solve the problem. What should you do first?
A. Inspect the logs and metrics from the instances in Cloud Logging and Cloud Monitoring. B. Change the Compute Engine Instances behind the application to a machine type with more CPU and memory. C. Restore a backup of the application database from a time before the application became slow. D. Deploy the applications on a managed instance group with autoscaling enabled. Add a load balancer in front of the managed instance group, and have the users connect to the IP of the load balancer.
https://www.examtopics.com/discussions/google/view/79697-exam-professional-cloud-architect-topic-1-question-184/
A. Inspect the logs and metrics from the instances in Cloud Logging and Cloud Monitoring.
First thing to do is to inspect logs and monitoring to see what is happening
A is the only option that involves inspection.
You want to inspect and find the underlying cause in order to solve the problem.
B & D are possible solutions, not inspection.
C is neither a solution nor an inspection; restoring a backup would just lead to the issue recurring.
Question #: 13
Topic #: 1
Your customer is receiving reports that their recently updated Google App Engine application is taking approximately 30 seconds to load for some of their users.
This behavior was not reported before the update.
What strategy should you take?
A. Work with your ISP to diagnose the problem B. Open a support ticket to ask for network capture and flow data to diagnose the problem, then roll back your application C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment D. Roll back to an earlier known good release, then push the release again at a quieter period to investigate. Then use Stackdriver Trace and Logging to diagnose the problem
https://www.examtopics.com/discussions/google/view/7137-exam-professional-cloud-architect-topic-1-question-13/
C. Roll back to an earlier known good release initially, then use Stackdriver Trace and Logging to diagnose the problem in a development/test/staging environment
- Prioritize User Experience: Rolling back to a stable version quickly minimizes user impact and restores the application to a functional state. This should be the immediate first step.
- Controlled Environment: Diagnosing the issue in a development/test/staging environment allows you to investigate without affecting real users. You can reproduce the problem, gather data, and test potential solutions safely.
- Powerful Diagnostic Tools: Stackdriver Trace helps you pinpoint performance bottlenecks by tracing requests across your application. Stackdriver Logging provides detailed logs to understand application behavior and identify errors.
Question #: 133
Topic #: 1
You have deployed several instances on Compute Engine. As a security requirement, instances cannot have a public IP address. There is no VPN connection between Google Cloud and your office, and you need to connect via SSH into a specific machine without violating the security requirements. What should you do?
A. Configure Cloud NAT on the subnet where the instance is hosted. Create an SSH connection to the Cloud NAT IP address to reach the instance. B. Add all instances to an unmanaged instance group. Configure TCP Proxy Load Balancing with the instance group as a backend. Connect to the instance using the TCP Proxy IP. C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance. D. Create a bastion host in the network to SSH into the bastion host from your office location. From the bastion host, SSH into the desired instance.
https://www.examtopics.com/discussions/google/view/56751-exam-professional-cloud-architect-topic-1-question-133/
C. Configure Identity-Aware Proxy (IAP) for the instance and ensure that you have the role of IAP-secured Tunnel User. Use the gcloud command line tool to ssh into the instance.
https://cloud.google.com/iap/docs/using-tcp-forwarding#tunneling_with_ssh
Question #: 169
Topic #: 1
Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. Specifically, you want a system that is no-ops and auto-scaling.
Which two compute products should you choose? (Choose two.)
A. Compute Engine with containers B. Google Kubernetes Engine with containers C. Google App Engine Standard Environment D. Compute Engine with custom instance types E. Compute Engine with managed instance groups
https://www.examtopics.com/discussions/google/view/54374-exam-professional-cloud-architect-topic-1-question-169/
B. Google Kubernetes Engine with containers
C. Google App Engine Standard Environment
Option B, Google Kubernetes Engine (GKE) with containers, is a managed Kubernetes service that automatically manages and scales containerized applications. GKE handles cluster management tasks like scaling, upgrades, and security patches, allowing you to focus on the application itself.
Option C, Google App Engine Standard Environment, is a fully managed platform for building and deploying applications. It automatically scales applications based on demand and provides a no-ops experience. With App Engine Standard Environment, you don’t need to worry about infrastructure management, as Google handles it for you.
Question #: 27
Topic #: 1
Your company has decided to build a backup replica of their on-premises user authentication PostgreSQL database on Google Cloud Platform. The database is 4
TB, and large updates are frequent. Replication requires private address space communication.
Which networking approach should you use?
A. Google Cloud Dedicated Interconnect B. Google Cloud VPN connected to the data center network C. A NAT and TLS translation gateway installed on-premises D. A Google Compute Engine instance with a VPN server installed connected to the data center network
https://www.examtopics.com/discussions/google/view/11781-exam-professional-cloud-architect-topic-1-question-27/
A. Google Cloud Dedicated Interconnect
Google Cloud Dedicated Interconnect provides direct physical connections and RFC 1918 communication between your on-premises network and Google’s network.
Dedicated Interconnect enables you to transfer large amounts of data between networks, which can be more cost effective than purchasing additional bandwidth over the public Internet or using VPN tunnels.
Benefits: -> Traffic between your on-premises network and your VPC network doesn’t traverse the public Internet.
Traffic traverses a dedicated connection with fewer hops, meaning there are fewer points of failure where traffic might get dropped or disrupted.
-> Your VPC network’s internal (RFC 1918) IP addresses are directly accessible from your on-premises network.
You don’t need to use a NAT device or VPN tunnel to reach internal IP addresses.
Currently, you can only reach internal IP addresses over a dedicated connection.
To reach Google external IP addresses, you must use a separate connection.
-> You can scale your connection to Google based on your needs.
Connection capacity is delivered over one or more 10 Gbps Ethernet connections, with a maximum of eight connections (80 Gbps total per interconnect)
-> The cost of egress traffic from your VPC network to your on-premises network is reduced.
A dedicated connection is generally the least expensive method if you have a high-volume of traffic to and from Google’s network.
https://cloud.google.com/interconnect/docs/details/dedicated
Question #: 43
Topic #: 1
You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20
Gbps. You want to follow Google-recommended practices. How should you set up the connection?
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect. B. Create a VPC and connect it to your on-premises data center using a single Cloud VPN. C. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect. D. Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises datacenter using a single Cloud VPN.
https://www.examtopics.com/discussions/google/view/11804-exam-professional-cloud-architect-topic-1-question-43/
A. Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.
Question #: 125
Topic #: 1
You are working at a financial institution that stores mortgage loan approval documents on Cloud Storage. Any change to these approval documents must be uploaded as a separate approval file, so you want to ensure that these documents cannot be deleted or overwritten for the next 5 years. What should you do?
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy. B. Create the bucket with uniform bucket-level access, and grant a service account the role of Object Writer. Use the service account to upload new files. C. Use a customer-managed key for the encryption of the bucket. Rotate the key after 5 years. D. Create the bucket with fine-grained access control, and grant a service account the role of Object Writer. Use the service account to upload new files.
https://www.examtopics.com/discussions/google/view/57007-exam-professional-cloud-architect-topic-1-question-125/
A. Create a retention policy on the bucket for the duration of 5 years. Create a lock on the retention policy.
o If a bucket has a retention policy, objects in the bucket can only be deleted or replaced once their age is greater than the retention period.
o Once you lock a retention policy, you cannot remove it or reduce the retention period it has.
https://cloud.google.com/storage/docs/bucket-lock#policy-locks
Question #: 16
Topic #: 1
You have been asked to select the storage system for the click-data of your company’s large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams.
Which storage infrastructure should you choose?
A. Google Cloud SQL B. Google Cloud Bigtable C. Google Cloud Storage D. Google Cloud Datastore
https://www.examtopics.com/discussions/google/view/20238-exam-professional-cloud-architect-topic-1-question-16/
B. Google Cloud Bigtable
Google Cloud Bigtable is a scalable, fully-managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Good for: low-latency read/write access, high-throughput analytics, and native time series support. Common workloads: IoT, finance, adtech, personalization, recommendations, monitoring, geospatial datasets, and graphs.
Incorrect answers:
C: Google Cloud Storage is a scalable, fully managed, highly reliable, and cost-efficient object/blob store. It is good for images, pictures, and videos, objects and blobs, and other unstructured data.
D: Google Cloud Datastore is a scalable, fully managed NoSQL document database for your web and mobile applications. It is good for semi-structured application data, hierarchical data, and durable key-value data. Common workloads: user profiles, product catalogs, and game state.
Reference: https://cloud.google.com/storage-options/
Given the scenario, we need to select a storage system for click-data that can handle high traffic and large volumes of data. It should also be able to store the data for future analysis.
A. Google Cloud SQL: Google Cloud SQL is a fully managed relational database service that uses MySQL or PostgreSQL. It is suitable for storing structured data and handling moderate traffic. However, it may not be the best option for storing click-data that requires high write rates and can quickly grow to large volumes. Additionally, it is not designed to handle unstructured data, which could be a requirement for click-data. Therefore, Google Cloud SQL is not the best option for this scenario.
B. Google Cloud Bigtable: Google Cloud Bigtable is a scalable, fully managed NoSQL database service that can handle large volumes of structured and unstructured data. It is designed to handle high write rates and is ideal for storing click-data. With its ability to handle up to 8,500 clicks per second, it is the most suitable option for this scenario.
C. Google Cloud Storage: Google Cloud Storage is a scalable object storage service that is designed for unstructured data. It can handle high volumes of data and can be used to store click-data. However, it may not be the best option for real-time data ingestion and analysis. Google Cloud Storage is best suited for storing data that does not require immediate access.
D. Google Cloud Datastore: Google Cloud Datastore is a NoSQL document database service that is designed for storing structured data. It is suitable for handling moderate traffic and can handle data volumes of up to a few terabytes. However, it may not be the best option for storing click-data, which requires high write rates and can quickly grow to large volumes.
In summary, the most suitable storage infrastructure for storing click-data in this scenario is Google Cloud Bigtable.
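For illustration, a minimal write path to Bigtable in Python (project, instance, table, and the 'click' column family are placeholders and must already exist):

    import datetime
    from google.cloud import bigtable

    client = bigtable.Client(project="my-project")
    instance = client.instance("click-analytics")
    table = instance.table("clicks")

    # A row key combining site and timestamp spreads writes and supports time-range scans.
    now = datetime.datetime.utcnow()
    row = table.direct_row(f"site-042#{now.isoformat()}".encode("utf-8"))
    row.set_cell("click", b"url", b"/products/42", timestamp=now)
    row.commit()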
Question #: 52
Topic #: 1
You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do?
A. Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster. B. Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster. C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster. D. Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
https://www.examtopics.com/discussions/google/view/7273-exam-professional-cloud-architect-topic-1-question-52/
C. Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.
To set up a load balancer with a Compute Engine backend, your VMs need to be in an instance group. The managed instance group provides the VMs running the backend servers of an external HTTP load balancer.
You can now also route traffic to instance groups in different GCP projects using cross-project service referencing. Welcome to cloud :-)
https://cloud.google.com/blog/products/networking/cloud-load-balancing-gets-cross-project-service-referencing
Question #: 196
Topic #: 1
Your company wants to migrate their 10-TB on-premises database export into Cloud Storage. You want to minimize the time it takes to complete this activity and the overall cost. The bandwidth between the on-premises environment and Google Cloud is 1 Gbps. You want to follow Google-recommended practices. What should you do?
A. Develop a Dataflow job to read data directly from the database and write it into Cloud Storage. B. Use the Data Transfer appliance to perform an offline migration. C. Use a commercial partner ETL solution to extract the data from the on-premises database and upload it into Cloud Storage. D. Upload the data with gcloud storage cp.
https://www.examtopics.com/discussions/google/view/121324-exam-professional-cloud-architect-topic-1-question-196/
D. Upload the data with gcloud storage cp
B. Use the Data Transfer appliance to perform an offline migration.
The Transfer Appliance docs say it is suitable when "It would take more than one week to upload your data over the network."
At 1 Gbps, 10 TB (about 80,000 gigabits) takes roughly 80,000 seconds, or about a day, which is far less than a week, so I would go for D.
https://cloud.google.com/transfer-appliance/docs/4.0/overview#suitability
The current maximum object size supported by GCS is 5 TB, so it should be B
Question #: 17
Topic #: 1
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend.
What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil C. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days D. Schedule a cron script using gsutil ls -l gs://backups/** to find and remove items older than 90 days and schedule it with cron
https://www.examtopics.com/discussions/google/view/7150-exam-professional-cloud-architect-topic-1-question-17/
B. Write a lifecycle management rule in JSON and push it to the bucket with gsutil
https://cloud.google.com/storage/docs/lifecycle
To remove backup files older than 90 days from the Cloud Storage bucket and optimize ongoing Cloud Storage spend, you can use a lifecycle management rule. A lifecycle management rule defines the lifecycle of an object in a Cloud Storage bucket, and allows you to automate the deletion of older files.
- Lifecycle Management: Google Cloud Storage offers built-in lifecycle management rules specifically designed for automated data retention and deletion. This is the most efficient and cost-effective way to manage your backup files.
- JSON Format: Lifecycle rules are defined in JSON format.
- gsutil: The gsutil command-line tool is used to interact with Cloud Storage, including setting lifecycle configuration.
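The Python client offers an equivalent of the JSON rule that the answer describes pushing with gsutil; a minimal sketch (the bucket name is a placeholder):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.get_bucket("backups")  # placeholder bucket name

    # Same effect as the JSON rule {"action": {"type": "Delete"}, "condition": {"age": 90}}.
    bucket.add_lifecycle_delete_rule(age=90)
    bucket.patch()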
Question #: 172
Topic #: 1
Your company has an application running on Compute Engine that allows users to play their favorite music. There are a fixed number of instances. Files are stored in Cloud Storage, and data is streamed directly to users. Users are reporting that they sometimes need to attempt to play popular songs multiple times before they are successful. You need to improve the performance of the application. What should you do?
A. 1. Mount the Cloud Storage bucket using gcsfuse on all backend Compute Engine instances. 2. Serve music files directly from the backend Compute Engine instance. B. 1. Create a Cloud Filestore NFS volume and attach it to the backend Compute Engine instances. 2. Download popular songs in Cloud Filestore. 3. Serve music files directly from the backend Compute Engine instance. C. 1. Copy popular songs into CloudSQL as a blob. 2. Update application code to retrieve data from CloudSQL when Cloud Storage is overloaded. D. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and configure it with two backends: a managed instance group and a Cloud Storage bucket. 3. Enable Cloud CDN on the bucket backend.
https://www.examtopics.com/discussions/google/view/68683-exam-professional-cloud-architect-topic-1-question-172/
D. 1. Create a managed instance group with Compute Engine instances. 2. Create a global load balancer and configure it with two backends: a managed instance group and a Cloud Storage bucket. 3. Enable Cloud CDN on the bucket backend.
This solution will improve the performance of the application by:
Automatically scaling the number of Compute Engine instances to meet demand.
Distributing traffic across multiple instances to reduce load on each instance.
Caching popular songs at Cloud CDN edge locations to reduce the number of times they need to be fetched from Cloud Storage.
Using a global load balancer to distribute traffic evenly across all regions.
Using Cloud CDN to deliver files to users from a location that is closer to them.
This solution is the most efficient and cost-effective way to improve the performance of the application.
Question #: 132
Topic #: 1
Your company sends all Google Cloud logs to Cloud Logging. Your security team wants to monitor the logs. You want to ensure that the security team can react quickly if an anomaly such as an unwanted firewall change or server breach is detected. You want to follow Google-recommended practices. What should you do?
A. Schedule a cron job with Cloud Scheduler. The scheduled job queries the logs every minute for the relevant events. B. Export logs to BigQuery, and trigger a query in BigQuery to process the log data for the relevant events. C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events. D. Export logs to a Cloud Storage bucket, and trigger Cloud Run with the relevant log events.
https://www.examtopics.com/discussions/google/view/57043-exam-professional-cloud-architect-topic-1-question-132/
C. Export logs to a Pub/Sub topic, and trigger Cloud Function with the relevant log events.
https://cloud.google.com/blog/products/management-tools/automate-your-response-to-a-cloud-logging-event
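A minimal sketch of the Cloud Function side of option C, assuming a log sink whose filter matches firewall-change audit logs and a Pub/Sub trigger (the function name and alerting action are placeholders):

    import base64
    import json

    def process_log_event(event, context):
        """Background Cloud Function triggered by the Pub/Sub log sink."""
        entry = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        method = entry.get("protoPayload", {}).get("methodName", "")

        # Placeholder reaction: in practice, page the security team or call a chat webhook.
        if "firewalls" in method.lower():
            print(f"ALERT: firewall change detected: {method}")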
Question #: 158
Topic #: 1
Your company’s test suite is a custom C++ application that runs tests throughout each day on Linux virtual machines. The full test suite takes several hours to complete, running on a limited number of on-premises servers reserved for testing. Your company wants to move the testing infrastructure to the cloud, to reduce the amount of time it takes to fully test a change to the system, while changing the tests as little as possible.
Which cloud infrastructure should you recommend?
A. Google Compute Engine unmanaged instance groups and Network Load Balancer B. Google Compute Engine managed instance groups with auto-scaling C. Google Cloud Dataproc to run Apache Hadoop jobs to process each test D. Google App Engine with Google StackDriver for logging
https://www.examtopics.com/discussions/google/view/8340-exam-professional-cloud-architect-topic-1-question-158/
B. Google Compute Engine managed instance groups with auto-scaling
https://cloud.google.com/compute/docs/autoscaler/
Changing the tests as little as possible rules out C & D.
The test suite takes several hours and you need to improve performance; autoscaling with a MIG will do that.
An unmanaged instance group cannot autoscale, and a load balancer alone will not improve performance.
A: The Load Balancer offers no benefit.
C: Dataproc/Hadoop would require rewriting the custom C++ test suite as Hadoop jobs.
D: App Engine is for web apps.
Question #: 139
Topic #: 1
Your team is developing a web application that will be deployed on Google Kubernetes Engine (GKE). Your CTO expects a successful launch and you need to ensure your application can handle the expected load of tens of thousands of users. You want to test the current deployment to ensure the latency of your application stays below a certain threshold. What should you do?
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results. B. Enable autoscaling on the GKE cluster and enable horizontal pod autoscaling on your application deployments. Send curl requests to your application, and validate if the auto scaling works. C. Replicate the application over multiple GKE clusters in every Google Cloud region. Configure a global HTTP(S) load balancer to expose the different clusters over a single global IP address. D. Use Cloud Debugger in the development environment to understand the latency between the different microservices.
https://www.examtopics.com/discussions/google/view/56706-exam-professional-cloud-architect-topic-1-question-139/
A. Use a load testing tool to simulate the expected number of concurrent users and total requests to your application, and inspect the results.
A load testing tool can be used to simulate the expected number of concurrent users and total requests to your application. This will allow you to test how your application handles the expected load and to identify any potential problems.
Enabling autoscaling on the GKE cluster and enabling horizontal pod autoscaling on your application deployments will not help you to test the latency of your application. This will only help to ensure that your application can handle the expected load.
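As one concrete example of option A, a minimal Locust script (Locust is a Python load-testing tool; the endpoint, wait times, and user counts are placeholders chosen when you launch the test):

    from locust import HttpUser, task, between

    class ApiUser(HttpUser):
        # Simulated users wait 1-3 seconds between requests.
        wait_time = between(1, 3)

        @task
        def browse(self):
            # Placeholder endpoint exposed by the GKE service.
            self.client.get("/api/v1/products")

Running it against the GKE service's external address with the expected number of concurrent users lets you compare the reported latency percentiles against your threshold.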
Question #: 124
Topic #: 1
Your company has an application deployed on Anthos clusters (formerly Anthos GKE) that is running multiple microservices. The cluster has both Anthos Service
Mesh and Anthos Config Management configured. End users inform you that the application is responding very slowly. You want to identify the microservice that is causing the delay. What should you do?
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices. B. Use Anthos Config Management to create a ClusterSelector selecting the relevant cluster. On the Google Cloud Console page for Google Kubernetes Engine, view the Workloads and filter on the cluster. Inspect the configurations of the filtered workloads. C. Use Anthos Config Management to create a namespaceSelector selecting the relevant cluster namespace. On the Google Cloud Console page for Google Kubernetes Engine, visit the workloads and filter on the namespace. Inspect the configurations of the filtered workloads. D. Reinstall istio using the default istio profile in order to collect request latency. Evaluate the telemetry between the microservices in the Cloud Console.
https://www.examtopics.com/discussions/google/view/57009-exam-professional-cloud-architect-topic-1-question-124/
A. Use the Service Mesh visualization in the Cloud Console to inspect the telemetry between the microservices.
The Anthos Service Mesh pages in the Google Cloud Console provide both summary and in-depth metrics, charts, and graphs that enable you to observe service behavior. You can monitor the overall health of your services, or drill down on a specific service to set a service level objective (SLO) or troubleshoot an issue.
https://cloud.google.com/service-mesh/docs/observability/explore-dashboard
Question #: 163
Topic #: 1
Your company runs several databases on a single MySQL instance. They need to take backups of a specific database at regular intervals. The backup activity needs to complete as quickly as possible and cannot be allowed to impact disk performance.
How should you configure the storage?
A. Configure a cron job to use the gcloud tool to take regular backups using persistent disk snapshots. B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage. C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump. D. Mount additional persistent disk volumes onto each virtual machine (VM) instance in a RAID10 array and use LVM to create snapshots to send to Cloud Storage.
https://www.examtopics.com/discussions/google/view/7020-exam-professional-cloud-architect-topic-1-question-163/
B. Mount a Local SSD volume as the backup location. After the backup is complete, use gsutil to move the backup to Google Cloud Storage.
C. Use gcsfuse to mount a Google Cloud Storage bucket as a volume directly on the instance and write backups to the mounted location using mysqldump.
I think it's B. If you use a tool like gcsfuse it writes straight to GCS, which saves cost because you don't need intermediate storage. In this case, however, "as quickly as possible" is the key phrase: gcsfuse writes to GCS, which is much slower than writing directly to an attached SSD, and during the write to GCS it would keep reading from the production database for a longer period. Therefore writing to the extra SSD is my recommended solution. Offloading from the SSD to GCS afterwards does not impact the running database because the data is already separated.
I think it’s C (GCFUSE)
We cannot attach and mount a local SSD to a running instance.
“Because Local SSDs are located on the physical machine where your virtual machine instance is running, they can be created only during the instance creation process.”
The above is from https://cloud.google.com/compute/docs/disks/local-ssd#formatandmount
Question #: 50
Topic #: 1
You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premise systems remain reachable during this period. How should you organize your networking in Google Cloud?
A. Use the same IP range on Google Cloud as you use on-premises B. Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises D. Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises
https://www.examtopics.com/discussions/google/view/7042-exam-professional-cloud-architect-topic-1-question-50/
C. Use an IP range on Google Cloud that does not overlap with the range you use on-premises
https://cloud.google.com/vpc/docs/using-vpc
“Primary and secondary ranges can’t conflict with on-premises IP ranges if you have connected your VPC network to another network with Cloud VPN, Dedicated Interconnect, or Partner Interconnect.”
Question #: 9
Topic #: 1
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address.
You have verified the appropriate web response is coming from each instance using the curl command. You want to ensure the backend is configured correctly.
What should you do?
A. Ensure that a firewall rules exists to allow source traffic on HTTP/HTTPS to reach the load balancer. B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP. C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination.
https://www.examtopics.com/discussions/google/view/7130-exam-professional-cloud-architect-topic-1-question-9/
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group.
A. Ensure that a firewall rule exists to allow source traffic on HTTP/HTTPS to reach the load balancer. » Not correct; the load balancer is not the issue here.
B. Assign a public IP to each instance and configure a firewall rule to allow the load balancer to reach the instance public IP. » Not correct; it defeats the purpose of using a load balancer.
C. Ensure that a firewall rule exists to allow load balancer health checks to reach the instances in the instance group. » Correct. An appropriate firewall rule must be in place so the load balancer's health checks can reach the backend instances. If health-check traffic is blocked, the instances are marked unhealthy and are restarted.
D. Create a tag on each instance with the name of the load balancer. Configure a firewall rule with the name of the load balancer as the source and the instance tag as the destination. » Tagging with the load balancer name is not useful here; only the health-check port needs to be opened on the firewall.
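For reference, a sketch of option C with the Compute Engine Python client; 130.211.0.0/22 and 35.191.0.0/16 are Google's documented health-check source ranges, while the project, network, port, and target tag are placeholders:

    from google.cloud import compute_v1

    firewall = compute_v1.Firewall(
        name="allow-health-checks",
        network="global/networks/default",
        direction="INGRESS",
        # Google Cloud health-check probe source ranges.
        source_ranges=["130.211.0.0/22", "35.191.0.0/16"],
        target_tags=["web-backend"],
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["80"])],
    )

    client = compute_v1.FirewallsClient()
    operation = client.insert(project="my-project", firewall_resource=firewall)
    operation.result()  # wait for the firewall rule to be created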
Question #: 12
Topic #: 1
Your company has successfully migrated to the cloud and wants to analyze their data stream to optimize operations. They do not have any existing code for this analysis, so they are exploring all their options. These options include a mix of batch and stream processing, as they are running some hourly jobs and live-processing some data as it comes in.
Which technology should they use for this?
A. Google Cloud Dataproc B. Google Cloud Dataflow C. Google Container Engine with Bigtable D. Google Compute Engine with Google BigQuery
https://www.examtopics.com/discussions/google/view/7134-exam-professional-cloud-architect-topic-1-question-12/
B. Google Cloud Dataflow
- Unified Batch and Stream Processing: Dataflow is a fully managed service designed for both batch and stream data processing. This makes it ideal for your company’s needs, as they require both hourly batch jobs and live stream processing.
- No Existing Code: Dataflow provides a unified programming model and SDKs (Java, Python) for building data pipelines, which is beneficial since your company doesn’t have existing code and needs to develop new solutions.
- Serverless and Scalable: Dataflow is serverless, meaning you don’t need to manage infrastructure. It automatically scales resources based on the workload, ensuring efficient processing of both batch and stream data.
- Cost-Effective: Dataflow’s autoscaling and pay-per-use model optimize costs by only utilizing resources when needed.
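A minimal streaming sketch with the Apache Beam Python SDK, which Dataflow runs as a managed job (topic names are placeholders; the same SDK also reads batch sources such as Cloud Storage for the hourly jobs):

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms.window import FixedWindows
    from apache_beam.transforms.combiners import Count

    options = PipelineOptions(streaming=True)  # add --runner=DataflowRunner to run on Dataflow

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
            | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
            | "Window" >> beam.WindowInto(FixedWindows(60))  # 1-minute windows
            | "CountPerEvent" >> Count.PerElement()
            | "Encode" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}".encode("utf-8"))
            | "Publish" >> beam.io.WriteToPubSub(topic="projects/my-project/topics/event-counts")
        )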
Question #: 36
Topic #: 1
You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week.
What should you do?
A. Log in to a server, and iterate on the fix locally B. Revert the source code change, and rerun the deployment pipeline C. Log into the servers with the bad code change, and swap in the previous code D. Change the instance group template to the previous one, and delete all instances
https://www.examtopics.com/discussions/google/view/10522-exam-professional-cloud-architect-topic-1-question-36/
B. Revert the source code change, and rerun the deployment pipeline
B. The keyword is "self-healing", not "auto-healing", which suggests a managed instance group is not necessarily in use, so the correct answer is B.
A. Log in to a server, and iterate on the fix locally
» A slow, manual step, hence eliminate.
B. Revert the source code change and rerun the deployment pipeline
» The revert is recorded in the source repo, so this is the preferred path, although D would also work.
C. Log in to the servers with the bad code change, and swap in the previous code
» C manually does what B and D do automatically, hence eliminate.
D. Change the instance group template to the previous one and delete all instances
» Similar to B, but it manually does something the pipeline already automates; B is better from a code-lifecycle perspective.
Question #: 104
Topic #: 1
You need to deploy an application to Google Cloud. The application receives traffic via TCP and reads and writes data to the filesystem. The application does not support horizontal scaling. The application process requires full control over the data on the file system because concurrent access causes corruption. The business is willing to accept a downtime when an incident occurs, but the application must be available 24/7 to support their business operations. You need to design the architecture of this application on Google Cloud. What should you do?
A. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use an HTTP load balancer in front of the instances. B. Use a managed instance group with instances in multiple zones, use Cloud Filestore, and use a network load balancer in front of the instances. C. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use an HTTP load balancer in front of the instances. D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
https://www.examtopics.com/discussions/google/view/56360-exam-professional-cloud-architect-topic-1-question-104/
D. Use an unmanaged instance group with an active and standby instance in different zones, use a regional persistent disk, and use a network load balancer in front of the instances.
Since the traffic is TCP, options A and C are eliminated because an HTTP load balancer does not handle raw TCP traffic.
B - The file storage system is Cloud Filestore (a shared NFS volume), which does not give the application full control over the data, hence eliminated.
D - An unmanaged instance group behind a network load balancer, with a regional persistent disk for storage, gives the application the full control it requires.
Since the application does not support horizontal scaling, a managed instance group is not required. Instead, an unmanaged instance group can be used to ensure that the application runs on multiple instances in different zones for high availability.
The network load balancer is designed to handle TCP and UDP traffic
The HTTP(S) load balancer is designed specifically for HTTP and HTTPS traffic
Question #: 141
Topic #: 1
Your company is developing a web-based application. You need to make sure that production deployments are linked to source code commits and are fully auditable. What should you do?
A. Make sure a developer is tagging the code commit with the date and time of commit. B. Make sure a developer is adding a comment to the commit that links to the deployment. C. Make the container tag match the source code commit hash. D. Make sure the developer is tagging the commits with latest.
https://www.examtopics.com/discussions/google/view/60698-exam-professional-cloud-architect-topic-1-question-141/
C. Make the container tag match the source code commit hash.
Linking Deployments to Commits: By tagging the container image with the source code commit hash, you create a direct link between the deployed container and the specific state of the source code. This provides a clear and auditable trail from the deployed application back to the exact source code that was used to build it.
Auditability: Using the commit hash as the container tag ensures that each deployment can be traced back to a unique and immutable source code commit. This makes it easy to audit deployments and verify which version of the code is running in production.
Question #: 59
Topic #: 1
Your web application must comply with the requirements of the European Union’s General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do?
A. Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features. B. Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application. C. Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps. D. Define a design for the security of data in your web application that meets GDPR requirements.
https://www.examtopics.com/discussions/google/view/7285-exam-professional-cloud-architect-topic-1-question-59/
D. Define a design for the security of data in your web application that meets GDPR requirements.
The General Data Protection Regulation (GDPR) is a comprehensive data protection law that applies to any company that processes the personal data of individuals in the European Union (EU). As the technical architect of your web application, it is your responsibility to ensure that the application is compliant with GDPR requirements.
https://cloud.google.com/security/gdpr
The GDPR lays out specific requirements for businesses and organizations who are established in Europe or who serve users in Europe. It:
Regulates how businesses can collect, use, and store personal data
Builds upon current documentation and reporting requirements to increase accountability
Authorizes fines on businesses who fail to meet its requirements
Option A: While it is true that Google has various certifications and provides pass-on compliance when you use native features, simply using native features and services of Google Cloud Platform is not sufficient to ensure compliance with GDPR. You still need to implement appropriate controls and safeguards to protect personal data and meet GDPR requirements.
Option B: Enabling the relevant GDPR compliance setting within the GCP console for each of the services in use within your application may help ensure compliance with GDPR, but it is not sufficient on its own. You still need to implement appropriate controls and safeguards to protect personal data and meet GDPR requirements.
Option C: Using Cloud Security Scanner as part of your test planning strategy can help identify potential security vulnerabilities and compliance gaps in your web application, but it is not sufficient on its own to ensure compliance with GDPR. You still need to implement appropriate controls and safeguards to protect personal data and meet GDPR requirements.
Question #: 154
Topic #: 1
Your company has a stateless web API that performs scientific calculations. The web API runs on a single Google Kubernetes Engine (GKE) cluster. The cluster is currently deployed in us-central1. Your company has expanded to offer your API to customers in Asia. You want to reduce the latency for users in Asia.
What should you do?
A. Create a second GKE cluster in asia-southeast1, and expose both APIs using a Service of type LoadBalancer. Add the public IPs to the Cloud DNS zone. B. Use a global HTTP(s) load balancer with Cloud CDN enabled. C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(s) load balancer. D. Increase the memory and CPU allocated to the application in the cluster.
https://www.examtopics.com/discussions/google/view/60627-exam-professional-cloud-architect-topic-1-question-154/
C. Create a second GKE cluster in asia-southeast1, and use kubemci to create a global HTTP(s) load balancer.
The problem with A is that a Service of type LoadBalancer is not an L7 HTTP(S) load balancer. The question is outdated; the answer would have been C. Today you would use Anthos Multi Cluster Ingress instead:
https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress
Question #: 14
Topic #: 1
A production database virtual machine on Google Compute Engine has an ext4-formatted persistent disk for data files. The database is about to run out of storage space.
How can you remediate the problem with the least amount of downtime?
A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux. B. Shut down the virtual machine, use the Cloud Platform Console to increase the persistent disk size, then restart the virtual machine C. In the Cloud Platform Console, increase the size of the persistent disk and verify the new space is ready to use with the fdisk command in Linux D. In the Cloud Platform Console, create a new persistent disk attached to the virtual machine, format and mount it, and configure the database service to move the files to the new disk E. In the Cloud Platform Console, create a snapshot of the persistent disk restore the snapshot to a new larger disk, unmount the old disk, mount the new disk and restart the database service
https://www.examtopics.com/discussions/google/view/7142-exam-professional-cloud-architect-topic-1-question-14/
A. In the Cloud Platform Console, increase the size of the persistent disk and use the resize2fs command in Linux.
- Online Resizing: Google Cloud Platform allows you to increase the size of a persistent disk while it’s attached to a running VM. This means you don’t need to shut down the database server.
- resize2fs: This Linux command extends the file system to utilize the newly added space on the disk. It can be run while the file system is mounted, minimizing downtime
Question #: 147
Topic #: 1
Your company has an enterprise application running on Compute Engine that requires high availability and high performance. The application has been deployed on two instances in two zones in the same region in active-passive mode. The application writes data to a persistent disk. In the case of a single zone outage, that data should be immediately made available to the other instance in the other zone. You want to maximize performance while minimizing downtime and data loss.
What should you do?
A. 1. Attach a persistent SSD disk to the first instance. 2. Create a snapshot every hour. 3. In case of a zone outage, recreate a persistent SSD disk in the second instance where data is coming from the created snapshot. B. 1. Create a Cloud Storage bucket. 2. Mount the bucket into the first instance with gcs-fuse. 3. In case of a zone outage, mount the Cloud Storage bucket to the second instance with gcs-fuse. C. 1. Attach a regional SSD persistent disk to the first instance. 2. In case of a zone outage, force-attach the disk to the other instance. D. 1. Attach a local SSD to the first instance disk. 2. Execute an rsync command every hour where the target is a persistent SSD disk attached to the second instance. 3. In case of a zone outage, use the second instance.
https://www.examtopics.com/discussions/google/view/60583-exam-professional-cloud-architect-topic-1-question-147/
C. 1. Attach a regional SSD persistent disk to the first instance. 2. In case of a zone outage, force-attach the disk to the other instance.
https://cloud.google.com/compute/docs/disks/repd-failover
Question #: 165
Topic #: 1
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager and set up yourself as the org admin.
What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
A. Org viewer, project owner B. Org viewer, project viewer C. Org admin, project browser D. Project owner, network admin
https://www.examtopics.com/discussions/google/view/7068-exam-professional-cloud-architect-topic-1-question-165/
B. Org viewer, project viewer
A is not correct because Project owner is too broad. The security team does not need to be able to make changes to projects.
B is correct because:
- Org viewer grants the security team permission to view the organization's display name.
- Project viewer grants the security team permission to see the resources within projects.
C is not correct because Org admin is too broad. The security team does not need to be able to make changes to the organization.
D is not correct because Project owner is too broad. The security team does not need to be able to make changes to projects.
Question #: 106
Topic #: 1
You are managing an application deployed on Cloud Run for Anthos, and you need to define a strategy for deploying new versions of the application. You want to evaluate the new code with a subset of production traffic to decide whether to proceed with the rollout. What should you do?
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions. B. Deploy a new service to Cloud Run with the new version. Add a Cloud Load Balancing instance in front of both services. C. In the Google Cloud Console page for Cloud Run, set up continuous deployment using Cloud Build for the development branch. As part of the Cloud Build trigger, configure the substitution variable TRAFFIC_PERCENTAGE with the percentage of traffic you want directed to a new version. D. In the Google Cloud Console, configure Traffic Director with a new Service that points to the new version of the application on Cloud Run. Configure Traffic Director to send a small percentage of traffic to the new version of the application.
https://www.examtopics.com/discussions/google/view/56373-exam-professional-cloud-architect-topic-1-question-106/
A. Deploy a new revision to Cloud Run with the new version. Configure traffic percentage between revisions.
o Each deployment to a service creates a revision. A revision consists of a specific container image, along with environment settings such as environment variables, memory limits, or concurrency value.
o Once the new revision is deployed to a Service you can manage the traffic using MANAGE TRAFFIC option inside the revision tab
https://cloud.google.com/run/docs/resource-model
Cloud Run for Anthos allows you to deploy new revisions of your application with a specific percentage of traffic, which allows you to perform a gradual rollout of the new version. To do this, you can follow these steps:
Deploy a new revision of your application to Cloud Run with the new version.
In the Cloud Run for Anthos console, navigate to the service that you want to roll out the new version for.
In the “Revisions” tab, you should see the new revision listed alongside the current revision.
Use the traffic percentage slider to specify the percentage of traffic that you want to send to the new revision. You can set the percentage to a small value initially, such as 5%, and gradually increase it over time as you evaluate the new version.
Once you have set the traffic percentage, Cloud Run for Anthos will start directing a portion of the traffic to the new revision, allowing you to evaluate the new version with a subset of production traffic.
Question #: 149
Topic #: 1
Your organization has stored sensitive data in a Cloud Storage bucket. For regulatory reasons, your company must be able to rotate the encryption key used to encrypt the data in the bucket. The data will be processed in Dataproc. You want to follow Google-recommended practices for security. What should you do?
A. Create a key with Cloud Key Management Service (KMS). Encrypt the data using the encrypt method of Cloud KMS. B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key. C. Generate a GPG key pair. Encrypt the data using the GPG key. Upload the encrypted data to the bucket. D. Generate an AES-256 encryption key. Encrypt the data in the bucket using the customer-supplied encryption keys feature.
https://www.examtopics.com/discussions/google/view/60440-exam-professional-cloud-architect-topic-1-question-149/
B. Create a key with Cloud Key Management Service (KMS). Set the encryption key on the bucket to the Cloud KMS key.
https://cloud.google.com/storage/docs/encryption/customer-managed-keys#key-rotation
Key rotation
Cloud KMS supports both automatic and manual key rotation to a new version. After rotating a key, Cloud Storage uses the new version for all operations that encrypt using the key, such as:
Object uploads when the destination bucket uses the key as its default encryption key. Object upload, copy, and rewrite operations that specifically use the key in the operation.
Previous versions of the key are not disabled or destroyed, so Cloud Storage can still decrypt existing objects that were previously encrypted using those versions.
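A brief sketch of option B with the Cloud Storage Python client (bucket and key names are placeholders; the bucket and key must be in compatible locations, and the Cloud Storage service agent needs the Encrypter/Decrypter role on the key):

    from google.cloud import storage

    KMS_KEY = (  # placeholder resource name of a rotatable Cloud KMS key
        "projects/my-project/locations/us-central1/"
        "keyRings/storage-ring/cryptoKeys/bucket-key"
    )

    client = storage.Client()
    bucket = client.get_bucket("sensitive-loan-data")  # placeholder bucket
    bucket.default_kms_key_name = KMS_KEY
    bucket.patch()

New objects written by Dataproc are then encrypted with the current key version, and key rotation happens in Cloud KMS without touching the bucket.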
Question #: 24
Topic #: 1
A small number of API requests to your microservices-based application take a very long time. You know that each request to the API can traverse many services.
You want to know which service takes the longest in those cases.
What should you do?
A. Set timeouts on your application so that you can fail requests faster B. Send custom metrics for each of your requests to Stackdriver Monitoring C. Use Stackdriver Monitoring to look for insights that show when your API latencies are high D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
https://www.examtopics.com/discussions/google/view/10898-exam-professional-cloud-architect-topic-1-question-24/
D. Instrument your application with Stackdriver Trace in order to break down the request latencies at each microservice
https://cloud.google.com/trace/docs/quickstart#find_a_trace
In a microservices-based application, a single API request can traverse multiple services. Therefore, when a small number of API requests take a very long time, it becomes difficult to identify the bottleneck or the service that is causing the delay. In order to identify the problematic service, it is important to instrument the application with a performance monitoring and tracing tool.
Option A suggests setting timeouts on the application so that requests fail faster. While this can help reduce the overall response time for the user, it does not provide insights into the specific service that is causing the delay.
Option B suggests sending custom metrics for each request to Stackdriver Monitoring. This can help identify patterns in request latency and can provide insights into which requests are taking the longest. However, it does not specifically identify the service that is causing the delay.
Option C suggests using Stackdriver Monitoring to look for insights when API latencies are high. While this can help identify when the API is experiencing high latency, it does not provide insights into the specific service that is causing the delay.
Option D is the correct answer. It suggests instrumenting the application with Stackdriver Trace to break down request latencies at each microservice. Stackdriver Trace can help identify the performance of each service in the microservices-based application and provide insights into which service is causing the delay. This can help identify the root cause of the delay and help in optimizing the service. Therefore, option D is the best approach to identify the problematic service.
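A minimal instrumentation sketch in Python using OpenTelemetry with the Cloud Trace exporter, one current way to send Stackdriver/Cloud Trace spans from application code (span name and handler are placeholders):

    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.cloud_trace import CloudTraceSpanExporter

    # Export spans to Cloud Trace so per-service latency shows up in the trace waterfall.
    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(CloudTraceSpanExporter())
    )
    tracer = trace.get_tracer(__name__)

    def handle_request(order_id: str) -> None:
        with tracer.start_as_current_span("orders.lookup"):
            ...  # call the next microservice; its span shows how long that hop took

With each microservice instrumented like this, the trace view breaks a slow request down hop by hop and points at the service contributing the most latency.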
Question #: 119
Topic #: 1
You need to migrate Hadoop jobs for your company’s Data Science team without modifying the underlying infrastructure. You want to minimize costs and infrastructure management effort. What should you do?
A. Create a Dataproc cluster using standard worker instances. B. Create a Dataproc cluster using preemptible worker instances. C. Manually deploy a Hadoop cluster on Compute Engine using standard instances. D. Manually deploy a Hadoop cluster on Compute Engine using preemptible instances.
https://www.examtopics.com/discussions/google/view/56684-exam-professional-cloud-architect-topic-1-question-119/
B. Create a Dataproc cluster using preemptible worker instances.
Should be B, you want to minimize costs.
https://cloud.google.com/dataproc/docs/concepts/compute/secondary-vms#preemptible_and_non-preemptible_secondary_workers
Agree, the migration guide also recommends considering preemptible worker nodes: https://cloud.google.com/architecture/hadoop/hadoop-gcp-migration-jobs#using_preemptible_worker_nodes
Using preemptible worker nodes
You can gain low-cost processing power for your jobs by adding preemptible worker nodes to your cluster. These nodes use preemptible virtual machines.
Consider the inherent unreliability of preemptible nodes before choosing to use them. Dataproc attempts to smoothly handle preemption, but jobs might fail if they lose too many nodes. Only use preemptible nodes for jobs that are fault-tolerant or that are low enough priority that occasional job failure won’t disrupt your business.
If you decide to use preemptible worker nodes, consider the ratio of regular nodes to preemptible nodes. There is no universal formula to get the best results, but in general, the more preemptible nodes you use relative to standard nodes, the higher the chances are that the job won’t have enough nodes to complete the task. You can determine the best ratio of preemptible to regular nodes for a job by experimenting with different ratios and analyzing the results.
Note that SSDs are not available on preemptible worker nodes. If you use SSDs on your dedicated nodes, any preemptible worker nodes that you use will match every other aspect of the dedicated nodes, but will have no SSDs available.
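A hedged sketch of option B with the Dataproc Python client (project, region, cluster name, machine types, and worker counts are placeholders):

    from google.cloud import dataproc_v1

    REGION = "us-central1"
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{REGION}-dataproc.googleapis.com:443"}
    )

    cluster = {
        "project_id": "my-project",
        "cluster_name": "hadoop-migration",
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
            # Low-cost secondary workers; acceptable only because the jobs tolerate preemption.
            "secondary_worker_config": {"num_instances": 4, "preemptibility": "PREEMPTIBLE"},
        },
    }

    operation = client.create_cluster(
        request={"project_id": "my-project", "region": REGION, "cluster": cluster}
    )
    operation.result()  # wait for cluster creation to finish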