Topic 6 Flashcards
TerramEarth use cases
Question #: 2
Topic #: 8
The TerramEarth development team wants to create an API to meet the company’s business requirements. You want the development team to focus their development effort on business value versus creating a custom framework.
Which method should they use?
A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners
B. Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public
C. Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public
D. Use Google Container Engine with a Django Python container. Focus on an API for the public
E. Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners
https://www.examtopics.com/discussions/google/view/11085-exam-professional-cloud-architect-topic-8-question-2/
A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners
Google offers Cloud Endpoints to develop, deploy, and manage APIs on any Google Cloud backend.
https://cloud.google.com/endpoints
With Endpoints Frameworks, you don’t have to deploy a third-party web server (such as Apache Tomcat or Gunicorn) with your application. You annotate or decorate the code and deploy your application as you normally would to the App Engine standard environment.
Cloud Endpoints Frameworks for the App Engine standard environment: https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks
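To make the "annotate your code" model concrete, here is a minimal sketch of an Endpoints Frameworks service for the App Engine standard environment (Python 2.7 runtime), loosely patterned on Google's echo sample; the API name, message fields, and handler logic are hypothetical.

```python
# Minimal Endpoints Frameworks sketch (App Engine standard, Python 2.7 runtime).
# The API name "dealers", the message fields, and the handler are hypothetical.
import endpoints
from protorpc import messages, remote


class VehicleStatusRequest(messages.Message):
    vehicle_id = messages.StringField(1, required=True)


class VehicleStatusResponse(messages.Message):
    status = messages.StringField(1)


@endpoints.api(name='dealers', version='v1')
class DealerApi(remote.Service):
    """Dealer/partner-facing API; Endpoints handles routing, auth, and the OpenAPI spec."""

    @endpoints.method(VehicleStatusRequest, VehicleStatusResponse,
                      path='vehicles/status', http_method='POST',
                      name='vehicles.status')
    def vehicle_status(self, request):
        # Real business logic would go here; a canned reply keeps the sketch self-contained.
        return VehicleStatusResponse(status='OK for %s' % request.vehicle_id)


# WSGI application referenced from app.yaml.
api = endpoints.api_server([DealerApi])
```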
Question #: 3
Topic #: 8
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data.
What should you do?
A. Build or leverage an OAuth-compatible access control system
B. Build SAML 2.0 SSO compatibility into your authentication system
C. Restrict data access based on the source IP address of the partner systems
D. Create secondary credentials for each dealer that can be given to the trusted third party
https://www.examtopics.com/discussions/google/view/9449-exam-professional-cloud-architect-topic-8-question-3/
A. Build or leverage an OAuth-compatible access control system
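As a hedged illustration of the resource-server side, here is a minimal sketch that verifies a Google-signed OAuth 2.0 / OpenID Connect token with the google-auth Python library before serving vehicle data; the expected audience value is hypothetical.

```python
# Sketch: validate a Google-signed token presented by a third-party dealer tool.
# Assumes the google-auth library; EXPECTED_AUDIENCE is a hypothetical client ID.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

EXPECTED_AUDIENCE = "dealer-tools-client-id.apps.googleusercontent.com"  # hypothetical


def verify_bearer_token(authorization_header):
    """Return the decoded claims if the token is valid, else raise ValueError."""
    if not authorization_header.startswith("Bearer "):
        raise ValueError("Missing bearer token")
    token = authorization_header.split(" ", 1)[1]
    # Checks signature, expiry, issuer, and audience against Google's public keys.
    return id_token.verify_oauth2_token(
        token, google_requests.Request(), audience=EXPECTED_AUDIENCE)
```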
Question #: 4
Topic #: 8
TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20 million 600-byte records per second, roughly 40 TB per hour.
How should you design the data ingestion?
A. Vehicles write data directly to GCS
B. Vehicles write data directly to Google Cloud Pub/Sub
C. Vehicles stream data directly to Google BigQuery
D. Vehicles continue to write data using the existing system (FTP)
https://www.examtopics.com/discussions/google/view/6657-exam-professional-cloud-architect-topic-8-question-4/
B. Vehicles write data directly to Google Cloud Pub/Sub
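A hedged sketch of what direct ingestion could look like on the vehicle (or gateway) side, using the google-cloud-pubsub Python client; the project, topic, and attribute names are hypothetical.

```python
# Sketch: publish a 600-byte telemetry record to Pub/Sub.
# Assumes google-cloud-pubsub; project, topic, and attribute names are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("terramearth-prod", "vehicle-telemetry")


def publish_record(record, vehicle_id):
    # The client batches messages and retries transient failures automatically.
    future = publisher.publish(topic_path, data=record, vehicle_id=vehicle_id)
    future.result(timeout=30)  # block until the message is accepted


publish_record(b"\x00" * 600, vehicle_id="TE-000123")
```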
Question #: 5
Topic #: 8
You analyzed TerramEarth’s business requirement to reduce downtime, and found that they can achieve the majority of the time savings by reducing customers’ wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company’s processes should you recommend?
A. Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics
D. Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor
https://www.examtopics.com/discussions/google/view/8687-exam-professional-cloud-architect-topic-8-question-5/
B. Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics
C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics
C is the right choice because using cellular connectivity will greatly improve the freshness of the data used for analysis, which today is collected only when the machines are in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.
A is not correct because machine learning analysis is a good means toward the end of reducing downtime, but shuffling formats and transport doesn’t directly help at all. B is not correct because machine learning analysis is a good means toward the end of reducing downtime, and moving to streaming can improve the freshness of the information in that analysis, but changing the format doesn’t directly help at all. D is not correct because machine learning analysis is a good means toward the end of reducing downtime, but the rest of these changes don’t directly help at all.
Question #: 6
Topic #: 8
Which of TerramEarth’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
A. Opex/capex allocation, LAN changes, capacity planning
B. Capacity planning, TCO calculations, opex/capex allocation
C. Capacity planning, utilization measurement, data center expansion
D. Data center expansion, TCO calculations, utilization measurement
https://www.examtopics.com/discussions/google/view/12205-exam-professional-cloud-architect-topic-8-question-6/
B. Capacity planning, TCO calculations, opex/capex allocation
Question #: 7
Topic #: 8
To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?
A. Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket
B. Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in US, EU, and Asia. Run the ETL process using the data in the bucket
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket
https://www.examtopics.com/discussions/google/view/6485-exam-professional-cloud-architect-topic-8-question-7/
C. Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket
D. Directly transfer the files to a different Google Cloud Regional Storage bucket location in US, EU, and Asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket
Everyone voting C is missing the point: you are not serving the data, merely ingesting it.
For that, regional buckets provide better latency and bandwidth at lower cost.
Multi-regional buckets can be considered for redundancy against regional failures, but they are more costly and it is extremely unlikely for an entire region to go down; zonal replication is good enough in this case.
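To illustrate why HTTPS uploads to Cloud Storage beat restart-from-zero FTP, here is a minimal sketch using the google-cloud-storage Python client, which switches to the resumable, chunked upload protocol when a chunk size is set; the bucket and object names are hypothetical.

```python
# Sketch: upload a telemetry file to a regional Cloud Storage bucket over HTTP(S).
# Assumes google-cloud-storage; bucket and object names are hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("terramearth-telemetry-us")  # hypothetical regional bucket

blob = bucket.blob("raw/2021-06-01/TE-000123.csv")
# Setting a chunk size makes the client use the resumable upload protocol,
# so failed chunks can be retried without re-sending the whole file
# (unlike the old FTP flow that restarts from the beginning).
blob.chunk_size = 5 * 1024 * 1024  # 5 MiB; must be a multiple of 256 KiB
blob.upload_from_filename("/var/spool/telemetry/TE-000123.csv")
```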
Question #: 8
Topic #: 8
TerramEarth’s 20 million vehicles are scattered around the world. Based on the vehicle’s location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100 K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?
A. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Cloud Dataproc cluster to finish the job
https://www.examtopics.com/discussions/google/view/8248-exam-professional-cloud-architect-topic-8-question-8/
D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a region bucket and use a Cloud Dataproc cluster to finish the job
A and B say "move all the data", but the analysis is looking at breakdowns after 100K miles, so there is no point in transferring data for vehicles with less than 100K mileage. Transferring all of the data is just a waste of time and money.
One thing is certain: moving or copying data between continents costs money, so compressing the data before copying it to another region/continent makes sense.
Preprocessing also makes sense, because we probably want to filter down to the relevant subset first (remember the 100K-mile threshold).
As for the type of target bucket, multi-regional or regional: multi-regional is good for high availability and low latency at a somewhat higher cost, but the question doesn’t require either of those.
Therefore a regional bucket is the way to go, given that lower cost is always better.
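As a hedged sketch of the consolidation step in answer D, the preprocessed, compressed outputs from each region could be copied server side into a single regional bucket before the final Dataproc job; all bucket names and prefixes below are hypothetical.

```python
# Sketch: consolidate preprocessed, compressed outputs from per-region buckets
# into one regional bucket for the final Cloud Dataproc job.
# Assumes google-cloud-storage; all bucket names and prefixes are hypothetical.
from google.cloud import storage

client = storage.Client()
source_buckets = ["te-preprocessed-us", "te-preprocessed-eu", "te-preprocessed-asia"]
dest_bucket = client.bucket("te-analysis-us-central1")  # single regional bucket

for name in source_buckets:
    for blob in client.list_blobs(name, prefix="over-100k/"):
        # rewrite() copies server side in chunks, so the compressed files
        # never transit the machine running this script.
        dest = dest_bucket.blob("{}/{}".format(name, blob.name))
        token = None
        while True:
            token, _, _ = dest.rewrite(blob, token=token)
            if token is None:
                break
```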
Question #: 9
Topic #: 8
TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they want to use the data to train machine learning models. They want to store this data in the cloud while reducing costs.
What should they do?
A. Have the vehicle's computer compress the data in hourly snapshots, and store it in a Google Cloud Storage (GCS) Nearline bucket
B. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Google BigQuery
C. Push the telemetry data in real-time to a streaming dataflow job that compresses the data, and store it in Cloud Bigtable
D. Have the vehicle's computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket
https://www.examtopics.com/discussions/google/view/8249-exam-professional-cloud-architect-topic-8-question-9/
D. Have the vehicle’s computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket
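A hedged sketch of the hourly flow in answer D, using the google-cloud-storage client: create a Coldline bucket once, then compress and upload each snapshot; bucket, file, and object names are hypothetical.

```python
# Sketch: create a Coldline bucket and upload an hourly gzip-compressed snapshot.
# Assumes google-cloud-storage; bucket, file, and object names are hypothetical.
import gzip
import shutil

from google.cloud import storage

client = storage.Client()

# One-time setup: a Coldline bucket for data that will only be read next year.
bucket = client.bucket("te-training-archive")  # hypothetical
bucket.storage_class = "COLDLINE"
bucket = client.create_bucket(bucket, location="US")

# Hourly: compress the snapshot before upload to cut storage and transfer cost.
with open("/var/spool/telemetry/2021-06-01T13.csv", "rb") as src, \
        gzip.open("/tmp/2021-06-01T13.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

bucket.blob("snapshots/2021-06-01T13.csv.gz").upload_from_filename(
    "/tmp/2021-06-01T13.csv.gz")
```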
Question #: 10
Topic #: 8
Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation.
Which two architectures should you consider? (Choose two.)
A. Treat every micro service call between modules on the vehicle as untrusted.
B. Require IPv6 for connectivity to ensure a secure address space.
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot.
D. Use a functional programming language to isolate code execution cycles.
E. Use multiple connectivity subsystems for redundancy.
F. Enclose the vehicle's drive electronics in a Faraday cage to isolate chips.
https://www.examtopics.com/discussions/google/view/12751-exam-professional-cloud-architect-topic-8-question-10/
A. Treat every micro service call between modules on the vehicle as untrusted
C. Use a trusted platform module (TPM) and verify firmware and binaries on boot
B is not correct because IPv6 doesn’t have any impact on the security during vehicle operation, although it improves system scalability and simplicity.
D is not correct because merely using a functional programming language doesn’t guarantee a more secure level of execution isolation. Any impact on security from this decision would be incidental at best.
E is not correct because this improves system durability, but it doesn’t have any impact on the security during vehicle operation.
F is not correct because it doesn’t have any impact on the security during vehicle operation, although it improves system durability.
Question #: 11
Topic #: 8
Operational parameters such as oil pressure are adjustable on each of TerramEarth’s vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?
A. Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically
C. Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically
https://www.examtopics.com/discussions/google/view/8250-exam-professional-cloud-architect-topic-8-question-11/
B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically
D. Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically
Question #: 1
Topic #: 9
For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its
European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and
BigQuery. What should you do?
A. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
B. Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
D. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.
https://www.examtopics.com/discussions/google/view/6489-exam-professional-cloud-architect-topic-9-question-1/
C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.
The right answer is to enable a bucket lifecycle management rule that deletes objects older than 36 months, and to use partitioned tables in BigQuery with the partition expiration period set to 36 months.
When you create a table partitioned by ingestion time, BigQuery automatically loads data into daily, date-based partitions that reflect the data’s ingestion or arrival time.
Ref: https://cloud.google.com/bigquery/docs/partitioned-tables#ingestion_time
And Google recommends you configure the default table expiration for your datasets, configure the expiration time for your tables, and configure the partition expiration for partitioned tables.
Ref: https://cloud.google.com/bigquery/docs/best-practices-storage#use_the_expiration_settings_to_remove_unneeded_tables_and_partitions
If the partitioned table has a table expiration configured, all the partitions in it are deleted according to the table expiration settings. For our specific requirement, we could set the partition expiration to 36 months so that partitions older than 36 months (and the data within) are automatically deleted.
Ref: https://cloud.google.com/bigquery/docs/managing-partitioned-tables#partition-expiration
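A hedged sketch of both retention settings with the BigQuery and Cloud Storage Python clients; the project, dataset, table, bucket, and schema are hypothetical, and 36 months is approximated as 36 × 30 days.

```python
# Sketch: enforce the 36-month retention in both BigQuery and Cloud Storage.
# Assumes google-cloud-bigquery and google-cloud-storage; all names are hypothetical.
from google.cloud import bigquery, storage

RETENTION_DAYS = 36 * 30  # ~36 months

# BigQuery: ingestion-time partitioned table whose partitions expire automatically.
bq = bigquery.Client()
schema = [
    bigquery.SchemaField("vehicle_id", "STRING"),
    bigquery.SchemaField("payload", "STRING"),
]
table = bigquery.Table("terramearth-eu.telemetry.vehicle_events", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    expiration_ms=RETENTION_DAYS * 24 * 60 * 60 * 1000,
)
bq.create_table(table, exists_ok=True)

# Cloud Storage: lifecycle rule that deletes objects once they reach the same age.
gcs = storage.Client()
bucket = gcs.get_bucket("terramearth-eu-telemetry")  # hypothetical
bucket.add_lifecycle_delete_rule(age=RETENTION_DAYS)
bucket.patch()
```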
Question #: 2
Topic #: 9
For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.
Which two actions should you take?
A. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS lifecycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".
B. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Coldline", and Action: "Set to Nearline", and create a second GCS lifecycle rule with Age: "91", Storage Class: "Coldline", and Action: "Set to Nearline".
C. Create a Cloud Storage lifecycle rule with Age: "90", Storage Class: "Standard", and Action: "Set to Nearline", and create a second GCS lifecycle rule with Age: "91", Storage Class: "Nearline", and Action: "Set to Coldline".
D. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS lifecycle rule with Age: "365", Storage Class: "Nearline", and Action: "Delete".
https://www.examtopics.com/discussions/google/view/57128-exam-professional-cloud-architect-topic-9-question-2/
A. Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS lifecycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".
Option A is correct: the first rule moves objects from Standard to Coldline after 30 days to minimize storage cost, and the second rule deletes them at 365 days, which satisfies the one-year retention requirement.
Option B never deletes anything, so it does not implement the one-year retention, and moving objects from Coldline back to Nearline would increase cost rather than reduce it.
Option C only steps objects down to Nearline and then Coldline; without a Delete action the data is kept indefinitely, so storage cost keeps growing past one year.
Option D's second rule matches objects in the Nearline class, but after the first rule the objects are in Coldline, so the Delete action would never fire.
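For reference, the two rules from answer A look roughly like this with the google-cloud-storage Python client (the question itself uses gsutil); the bucket name is hypothetical.

```python
# Sketch: the two lifecycle rules from answer A, expressed with google-cloud-storage.
# The bucket name is hypothetical.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("terramearth-data-files")  # hypothetical

# Rule 1: after 30 days, move Standard objects to Coldline to cut storage cost.
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE", age=30, matches_storage_class=["STANDARD"])
# Rule 2: after 365 days, delete the objects (only one year of data is kept).
bucket.add_lifecycle_delete_rule(age=365)
bucket.patch()
```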
Question #: 3
Topic #: 9
For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company,
TerramEarth.
Considering the TerramEarth business and technical requirements, what should you do?
A. Replace the existing data warehouse with BigQuery. Use table partitioning.
B. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.
C. Replace the existing data warehouse with BigQuery. Use federated data sources.
D. Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine preemptible instance with 32 CPUs.
https://www.examtopics.com/discussions/google/view/7260-exam-professional-cloud-architect-topic-9-question-3/
A. Replace the existing data warehouse with BigQuery. Use table partitioning.
A is the correct answer because the question asks for a reliable, scalable solution for the data warehouse. BigQuery with partitioned tables is the managed, reliable way to achieve this.
https://cloud.google.com/solutions/bigquery-data-warehouse
BigQuery supports partitioning tables by date. You enable partitioning during the table-creation process. BigQuery creates new date-based partitions automatically, with no need for additional maintenance. In addition, you can specify an expiration time for data in the partitions.
https://cloud.google.com/solutions/bigquery-data-warehouse#partitioning_tables
Federated data sources are an option, but not a reliable one.
You can run queries on data that exists outside of BigQuery by using federated data sources, but this approach has performance implications. Use federated data sources only if the data must be maintained externally. You can also use query federation to perform ETL from an external source to BigQuery. This approach allows you to define ETL using familiar SQL syntax.
https://cloud.google.com/solutions/bigquery-data-warehouse#external_sources
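For contrast, here is a hedged sketch of a federated query with the BigQuery Python client; the Cloud Storage URI and temporary table name are hypothetical. The data stays in Cloud Storage, which is exactly why performance is weaker than native, partitioned storage.

```python
# Sketch: query CSV files in Cloud Storage as a federated (external) data source.
# Assumes google-cloud-bigquery; the URI and temporary table name are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

external_config = bigquery.ExternalConfig("CSV")
external_config.source_uris = ["gs://terramearth-telemetry-us/raw/*.csv"]
external_config.autodetect = True

job_config = bigquery.QueryJobConfig(
    table_definitions={"raw_telemetry": external_config})

# The query reads directly from Cloud Storage; nothing is loaded into BigQuery,
# which is convenient for one-off ETL but slower than native partitioned tables.
for row in client.query(
        "SELECT COUNT(*) AS n FROM raw_telemetry", job_config=job_config).result():
    print(row.n)
```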
Question #: 4
Topic #: 9
For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated daily basis while managing cost.
What should you do?
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
B. Create a Cloud Function that reads data from BigQuery and cleans it. Trigger the Cloud Function from a Compute Engine instance.
C. Create a SQL statement on the data in BigQuery, and save it as a view. Run the view daily, and save the result to a new table.
D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
https://www.examtopics.com/discussions/google/view/13467-exam-professional-cloud-architect-topic-9-question-4/
A. Set up a streaming Cloud Dataflow job, receiving data by the ingestion process. Clean the data in a Cloud Dataflow pipeline.
D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.
The question says the data is dirty, so it needs to be cleaned before it lands in BigQuery. Option D uses Dataprep only after the dirty data is already in BigQuery, while option A uses Dataflow as an ETL step to clean it first, which is what a data warehouse developer would normally do.
The community-selected answer, however, is D; one counterargument is that Cloud Dataprep is not cheap, which matters given the requirement to manage cost.
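For the Dataflow route (option A), a cleaning step might look like the following hedged Apache Beam sketch; the subscription, table, and cleaning rules are hypothetical.

```python
# Sketch: a streaming Dataflow (Apache Beam) pipeline that cleans records
# before they reach BigQuery. Subscription, table, and rules are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def clean(record):
    """Drop records without a vehicle_id and normalize the mileage field."""
    row = json.loads(record.decode("utf-8"))
    if not row.get("vehicle_id"):
        return []  # discard dirty record
    row["mileage"] = max(0, int(row.get("mileage", 0)))
    return [row]


options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromPubSub(
         subscription="projects/terramearth-prod/subscriptions/telemetry-sub")
     | "Clean" >> beam.FlatMap(clean)
     | "Write" >> beam.io.WriteToBigQuery(
         "terramearth-prod:telemetry.vehicle_events",
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```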
Question #: 5
Topic #: 9
For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you reduce the unplanned vehicle downtime in GCP?
A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
B. Use BigQuery as the data warehouse. Connect all vehicles to the network and upload gzip files to a Multi-Regional Cloud Storage bucket using gcloud. Use Google Data Studio for analysis and reporting.
C. Use Cloud Dataproc Hive as the data warehouse. Upload gzip files to a Multi-Regional Cloud Storage bucket. Upload this data into BigQuery using gcloud. Use Google Data Studio for analysis and reporting.
D. Use Cloud Dataproc Hive as the data warehouse. Directly stream data into partitioned Hive tables. Use Pig scripts to analyze data.
https://www.examtopics.com/discussions/google/view/14729-exam-professional-cloud-architect-topic-9-question-5/
A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.
This approach leverages the real-time data streaming capabilities of Cloud Pub/Sub and Cloud Dataflow, the scalability and efficiency of BigQuery for data analysis, and the powerful visualization and reporting features of Google Data Studio. This combination ensures timely insights and quick response to issues, thereby reducing unplanned vehicle downtime.
Question #: 6
Topic #: 9
For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?
A. Google Kubernetes Engine with an SSL Ingress
B. Cloud IoT Core with public/private key pairs
C. Compute Engine with project-wide SSH keys
D. Compute Engine with specific SSH keys
https://www.examtopics.com/discussions/google/view/6785-exam-professional-cloud-architect-topic-9-question-6/
B. Cloud IoT Core with public/private key pairs
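For context on the public/private key pairs: a device authenticates to Cloud IoT Core’s MQTT bridge with a JWT signed by its private key, while the matching public key is registered with the device in IoT Core. Below is a hedged sketch using PyJWT; the project ID and key path are hypothetical.

```python
# Sketch: build the JWT a device presents as its MQTT password to Cloud IoT Core.
# Assumes the PyJWT library; project ID and key path are hypothetical.
import datetime

import jwt  # PyJWT

PROJECT_ID = "terramearth-prod"               # hypothetical
PRIVATE_KEY_PATH = "/etc/te/rsa_private.pem"  # hypothetical


def create_device_jwt():
    now = datetime.datetime.utcnow()
    claims = {
        "iat": now,
        "exp": now + datetime.timedelta(minutes=60),
        "aud": PROJECT_ID,  # IoT Core expects the audience to be the project ID
    }
    with open(PRIVATE_KEY_PATH, "r") as f:
        private_key = f.read()
    return jwt.encode(claims, private_key, algorithm="RS256")
```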
Question #: 1
Topic #: 10
For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query only to accept invocations from func_display. You also want to follow Google’s recommended best practices. What should you do?
A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.
B. Make func_query 'Require authentication.' Create a unique service account and associate it to func_display. Grant the service account invoker role for func_query. Create an id token in func_display and include the token to the request when invoking func_query.
C. Make func_query 'Require authentication' and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.
D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.
https://www.examtopics.com/discussions/google/view/60524-exam-professional-cloud-architect-topic-10-question-1/
B. Make func_query ‘Require authentication.’ Create a unique service account and associate it to func_display. Grant the service account invoker role for func_query. Create an id token in func_display and include the token to the request when invoking func_query.
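A hedged sketch of the calling side in answer B: func_display fetches an ID token whose audience is func_query’s URL and sends it as a bearer token; the function URL is hypothetical.

```python
# Sketch: func_display obtains an ID token for func_query's URL and calls it.
# Assumes google-auth and requests; the function URL is hypothetical.
import requests
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

FUNC_QUERY_URL = "https://us-central1-terramearth-prod.cloudfunctions.net/func_query"


def call_func_query(payload):
    # Inside Cloud Functions this obtains a token for the attached service
    # account from the metadata server, with the audience set to func_query.
    token = id_token.fetch_id_token(google_requests.Request(), FUNC_QUERY_URL)
    return requests.post(
        FUNC_QUERY_URL,
        json=payload,
        headers={"Authorization": "Bearer {}".format(token)},
        timeout=10,
    )
```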
Question #: 2
Topic #: 10
For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a few containerized RESTful microservices.
You want to run those microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?
A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.
B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.
C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.
D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run Endpoints to its backend service.
https://www.examtopics.com/discussions/google/view/60525-exam-professional-cloud-architect-topic-10-question-2/
B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.
Cloud Run is a regional service.
To serve global users you need to configure a Global HTTP LB and NEG as the backend.
Cloud Run services are deployed into individual regions and to route your users to different regions of your service, you need to configure external HTTP(S) Load Balancing.
https://cloud.google.com/run/docs/multiple-regions
A network endpoint group (NEG) specifies a group of backend endpoints for a load balancer.
A serverless NEG is a backend that points to a Cloud Run, App Engine, or Cloud Functions service.
https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts
Question #: 3
Topic #: 10
For this question, refer to the TerramEarth case study. You are migrating a Linux-based application from your private data center to Google Cloud. The
TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do? (Choose two.)
A. Open a support case regarding the CVE and chat with the support engineer.
B. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
D. Post a question regarding the CVE in Stack Overflow to get an explanation.
E. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.
https://www.examtopics.com/discussions/google/view/60557-exam-professional-cloud-architect-topic-10-question-3/
A. Open a support case regarding the CVE and chat with the support engineer.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.
The details are available in Google Cloud Platform Security Bulletins:
https://cloud.google.com/support/bulletins
Question #: 4
Topic #: 10
For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a “Site is unavailable” page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost. What should you do?
A. Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
B. Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.
D. Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.
https://www.examtopics.com/discussions/google/view/60562-exam-professional-cloud-architect-topic-10-question-4/
C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the “Site is unavailable” page, and notify the Ops team.
Use a Cloud Monitoring uptime check to validate the application URL, and leverage Pub/Sub to trigger a Cloud Function that switches the URL and notifies the Ops team.
https://cloud.google.com/monitoring/uptime-checks?hl=en
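A hedged sketch of the Cloud Function side: a Pub/Sub-triggered function (1st-gen background signature) that reacts to the alert message. The payload fields follow the Cloud Monitoring Pub/Sub notification format as an assumption, and the URL switch and Ops notification are hypothetical placeholders.

```python
# Sketch: Pub/Sub-triggered Cloud Function (1st-gen signature) reacting to the
# uptime-check alert. The URL switch and Ops notification are hypothetical.
import base64
import json


def on_uptime_alert(event, context):
    """Background function entry point; `event` carries the Pub/Sub message."""
    message = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    incident = message.get("incident", {})  # assumed Cloud Monitoring payload shape
    if incident.get("state") == "open":
        switch_to_maintenance_page()   # hypothetical: e.g. update a DNS record
        notify_ops(incident)           # hypothetical: e.g. post to a chat webhook


def switch_to_maintenance_page():
    print("Switching URL to the 'Site is unavailable' page")


def notify_ops(incident):
    print("Notifying Ops about incident {}".format(incident.get("incident_id")))
```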
Question #: 5
Topic #: 10
For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?
A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.
B. Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and build container images for the microservices. Tag the images with a version number, and push them to Cloud Storage.
C. Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build container images for the microservices. Tag the images using the current timestamp, and push them to the Container Registry.
D. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build one container image, and tag the image with the label 'latest.' Push the image to the Container Registry.
https://www.examtopics.com/discussions/google/view/60563-exam-professional-cloud-architect-topic-10-question-5/
A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.
https://cloud.google.com/architecture/best-practices-for-building-containers#tagging_using_the_git_commit_hash
Question #: 6
Topic #: 10
For this question, refer to the TerramEarth case study. TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?
A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.
B. Configure the Storage Transfer service from Google Cloud to send the data from your data center to Cloud Storage.
C. Make sure there are no other users consuming the 1Gbps link, and use multi-thread transfer to upload the data to Cloud Storage.
D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage.
https://www.examtopics.com/discussions/google/view/60483-exam-professional-cloud-architect-topic-10-question-6/
A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to Google Cloud.
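A quick back-of-the-envelope calculation shows why the 1-Gbps link cannot meet the one-month deadline, which is what makes the Transfer Appliance the right call:

```python
# Back-of-the-envelope: time to push 1 PB over a 1-Gbps link at full utilization.
data_bits = 1e15 * 8       # 1 PB expressed in bits
link_bps = 1e9             # 1 Gbps
days = data_bits / link_bps / 86400
print("{:.0f} days".format(days))  # ~93 days, ignoring overhead and link sharing
```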