Whizlabs, Practice Questions Flashcards
Google Cloud Certified Professional Cloud Architect
You are working for a Startup company as a Solutions Architect. Recently an application was deployed to production. There is a requirement to monitor the key performance indicators like CPU, memory, and Disk IOPS for the application, and also a dashboard needs to be set up where metrics are visible to the entire team. Which service will you use?
A. Use Cloud monitoring to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
B. Use Cloud Logging to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
C. Use Third-party service from marketplace to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
D. Use Cloud Trace to monitor key performance indicators and create Dashboards with key indicators that can be used by the team
Option A is correct. Cloud Monitoring provides detailed visibility into the application by monitoring key performance indicators such as CPU, memory, and disk IOPS. You can create dashboards to visualize performance and share them with the team to give everyone detailed visibility into application performance.
Option B is incorrect because Cloud logging is a fully managed service which allows you to store, search and analyze logs
Option C is incorrect because there is no need to use a third-party service you can use Cloud monitoring for such requirements
Option D is incorrect because Cloud trace is used to detect the latency issues in your application
You are working as a Solutions Architect for a large enterprise. They are using the GKE cluster for their production workload. In the upcoming weeks, they are expecting a huge traffic increase and thus want to enable autoscaling on the GKE cluster. What is the command to enable autoscaling on the existing GKE cluster?
A. gcloud container clusters update cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
B. gcloud container clusters create cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
C. You cannot enable autoscaling on existing GKE cluster
D. gcloud container clusters update cluster-name --no-enable-autoscaling --node-pool pool-name [--zone compute-zone --project project-id]
Option A is correct. It is the right command to enable autoscaling on an existing GKE cluster: gcloud container clusters update cluster-name --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone compute-zone --node-pool demo
Option B is incorrect because it is used to create a new GKE cluster with auto-scaling enabled
Option C is incorrect because you can enable autoscaling on an existing GKE cluster
Option D is incorrect because the command will disable autoscaling on a GKE cluster
There is a requirement to make some files from the Google Cloud Storage bucket publicly available to the customers. Which of the below commands you will use to make some objects publicly available?
A. gsutil acl ch -u allUsers:R gs://new-project-bucket/example.png
B. gsutil signurl -d 10m keyfile.json gs://new-project-bucket/example.png
C. gsutil acl ch -g my-domain.org:R gs://gcs.my-domain.org
D. gsutil requesterpays get gs://new-project-bucket
Option A is correct. This is the right command to make specific objects publicly readable from a Google Cloud Storage bucket: https://cloud.google.com/storage/docs/gsutil/commands/acl
Option B is incorrect because this command is used to generate a Signed URL which is mostly used to share private content securely for a limited period of time
Option C is incorrect because it is used when you have to share objects with a particular G Suite domain
Option D is incorrect because this enables the requester pay feature on the bucket
You are working as a Solutions Architect for a Startup company that is planning to migrate an on-premise application to Google Cloud. They want to transfer a large number of files to Google Cloud Storage using the gsutil command line. How can you speed up the transfer process?
A. Use -m option with gsutil command
B. Use -o option with gsutil command
C. Use du option with gsutil command
D. Use mb option with gsutil command
Option A is correct. When you have to transfer a large number of files from on-premises to Cloud Storage using gsutil, the -m flag is the best option as it enables parallel (multi-threaded/multi-processing) copying. https://cloud.google.com/storage/docs/gsutil/commands/cp
Option B is incorrect because -o is used to set boto configuration options on the command line, for example to tune parallel composite uploads for very large files
Option C is incorrect because it is used to get object size usage
Option D is incorrect because it is used to create a bucket
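For reference, a minimal sketch of the -m flag in practice (the local directory and bucket name are illustrative placeholders):
gsutil -m cp -r ./on-prem-files gs://example-migration-bucket/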
You are working with a large enterprise as a Solutions architect which is planning to migrate its application from AWS cloud to GCP cloud. There is a requirement to copy data from the AWS S3 bucket to Google Cloud Storage using a command-line utility. How will you fulfill this requirement?
A. Add AWS credentials in the boto configuration file and use the gsutil command to copy data
B. Configure the AWS credentials in gcloud configuration and use the gsutil command to copy files
C. First, download the S3 data using the AWS command-line utility and then copy files to Google cloud storage using gsutil commands
D. Use –s3 flag with gsutil commands to supply AWS credentials while copying files to Google cloud storage
Option A is correct. You can directly use an AWS S3 bucket as the source or destination while using the gsutil command-line utility. You just have to put the AWS credentials in the credentials section of the .boto configuration file. https://cloud.google.com/storage/docs/interoperability Options B & D are incorrect because there are no such commands.
Option C could work, but adding the AWS credentials to the .boto file is the preferred and easier way.
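As a hedged illustration (the bucket names are placeholders and the keys are dummies), add the following to the [Credentials] section of ~/.boto and then run the copy:
aws_access_key_id = <YOUR_AWS_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_AWS_SECRET_ACCESS_KEY>
gsutil -m cp -r s3://example-source-bucket gs://example-destination-bucket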
A Financial Organization has been growing at a rapid rate and dealing with massive data sets has become an issue. The management has decided to move from on-premises to Google Cloud to meet the scaling demands. The data analysts are looking for services that can analyze massive amounts of data, run SQL queries, and perform data manipulation and visualization in Python. Which Google Cloud services can fulfill the requirements?
A. Use Bigquery to run the SQL queries and use Cloud Datalab for detailed data manipulation and visualization in Python.
B. Use Bigtable to run SQL queries and use Cloud Datalab for detailed data manipulation and visualization in Python.
C. Use Datastore to analyze massive data and use Dataprep for data manipulation and visualization in Python.
D. Use Cloud Spanner to analyze massive data and use Data Studio for data manipulation and visualization in python.
Option A is correct. BigQuery can analyze large amounts of data and lets you run SQL queries, while Cloud Datalab supports detailed data manipulation and visualization in Python.
Option B is incorrect. Cloud Bigtable is Google's NoSQL Big Data database service and it doesn't support SQL queries; use it when you need low latency for high write and read volumes.
Option C is incorrect. Cloud Datastore is a NoSQL document database built for automatic scaling, high performance, and ease of application development, which is not suitable for the current scenario, and Dataprep is a data service for visually exploring, cleaning, and preparing structured and unstructured datasets of any size with the ease of clicks (UI), not code.
Option D is incorrect. The workload is analytics, so BigQuery is the right choice, and Data Studio is a reporting and dashboarding tool. Reference(s): https://cloud.google.com/solutions/time-series/analyzing-financial-time-series-using-bigquery-and-cloud-datalab https://cloud.google.com/datalab/docs/ https://cloud.google.com/bigquery/
Your organization deals with a huge amount of data and lately, it has become time-consuming and complicated to handle the ever-increasing data volume that needs to be protected and classified based on data sensitivity. The management has set the objective to automate data quarantine and classification system using Google Cloud Platform services. Please select the services that would achieve the objective.
A. Cloud Storage, Cloud Function, Cloud Pub/Sub, DLP API
B. Cloud Storage, Cloud Function, VPC Service control, Cloud Pub/sub
C. Cloud Storage, Cloud Function, Cloud Armor, DLP API
D. Cloud Storage, Cloud Pub/Sub, Cloud Classifier, Cloud Function
Option A is the correct choice because the data is uploaded to Cloud Storage; we then create buckets, for example classification_bucket_1 (for sensitive information) and classification_bucket_2 (for non-sensitive information), use a Cloud Function to invoke the DLP API when files are uploaded to Cloud Storage, use a Cloud Pub/Sub topic and subscription to notify when file processing is completed, and use Cloud DLP to understand and manage sensitive data (classification).
Option B is incorrect because VPC Service Controls do not help with data classification; the better choice is the Cloud DLP API. VPC Service Controls allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to constrain data within a VPC and help mitigate data exfiltration.
Option C is Incorrect because Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google’s global infrastructure and security systems which don’t fulfill the objective set by the management.
Option D is incorrect because Cloud Classifier is a fictitious service. Using the Cloud DLP API serves the purpose of classifying data; Cloud DLP helps you better understand and manage sensitive (protected) data. The pipeline works in these steps: you upload files to Cloud Storage, a Cloud Function is invoked, the DLP API inspects and classifies the data, and the file is moved to the appropriate bucket. Read more about it here: https://cloud.google.com/solutions/automating-classification-of-data-uploaded-to-cloud-storage https://cloud.google.com/dlp/
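As a hedged sketch of wiring the trigger (the function name, runtime, and bucket are assumptions, not details from the referenced solution guide):
gcloud functions deploy classify_upload --runtime=python39 --trigger-resource=example-quarantine-bucket --trigger-event=google.storage.object.finalize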
You are working as a Solutions Architect for a large media company that is planning to migrate its on-premise data warehouse to Google Cloud BigQuery. As a part of the migration, you want to write some migration scripts to interact with BigQuery. Which Command Line utility will you use?
A. gsutil
B. bq
C. gcloud
D. kubectl
Option B is correct. bq is the command-line tool for BigQuery and can be used to perform operations against BigQuery
Option A is incorrect because it is used to interact with Google Cloud storage
Option C is incorrect because BigQuery has its own command-line utility (bq)
Option D is incorrect because kubectl is used to manage Kubernetes
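For example, a hedged sketch of running a query with bq (the project, dataset, and table names are placeholders):
bq query --use_legacy_sql=false 'SELECT COUNT(*) FROM `example-project.example_dataset.example_table`'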
You are working as a Solutions Architect for a startup company that has recently started using Google cloud for their development environment. The developers want to know if they can persist data on Cloud shell, so they can use Cloud shell for their day to day tasks. What will you suggest to them?
A. Cloud shell can persist up to 10GB data
B. Cloud Shell can persist up to 5GB data
C. Cloud shell data is ephemeral
D. You can attach an additional persistent disk to the Cloud shell
Option B is correct Cloud shell comes with 5GB of persistent disk space which is mounted to your $HOME directory where you can keep your data. This persistent disk persists between your sessions.
Option A is incorrect because Cloud shell comes with 5GB of persistent disk
Option C is incorrect because you can persist data on the Cloud shell
Option D is incorrect because you cannot attach an additional persistent disk to the cloud shell session
You are working with a startup company as Solutions Architect which is planning to use Google Cloud Storage as a backup location for its on-prem application data. There is a requirement to sync a directory from an on-premise server to Google Cloud bucket. Which gsutil command you will use to sync the data on a daily basis?
A. Use lsync option with gsutil
B. Use rsync option with gsutil
C. Use -m option with gsutil
D. Use mb option with gsutil
Option B is correct rsync option is used to sync data between buckets/directories. By using the rsync option only the changed data from the source is copied to the destination bucket https://cloud.google.com/storage/docs/gsutil/commands/rsync
Option A is incorrect because there is no option like lsync
Option C is incorrect because it is used for parallel multithreading copying
Option D is incorrect because mb option is used to create a bucket
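A minimal sketch, with illustrative paths and bucket name, of the daily sync and a cron entry to schedule it (assumes gsutil is on the server's PATH):
gsutil -m rsync -r /data/app-files gs://example-backup-bucket/app-files
0 2 * * * gsutil -m rsync -r /data/app-files gs://example-backup-bucket/app-files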
You are working as a DevOps engineer for an enterprise. Recently one of the microservices was facing intermittent database connectivity issues. This issue was rarely seen and whenever this problem occurs it triggers a few lines in the log file. There is a requirement to set up alerting for such a scenario. What will you do?
A. Use Cloud trace and setup alerting policies
B. Use Cloud logging to set up log-based metrics and set up alerting policies.
C. Manually monitor the log file
D. Use Cloud profiler to set up log-based metrics and set up alerting policies.
Option B is correct You can set up a log-based metric that is based on the entries in the log files. For example, you can count the number of occurrences of a specific line entry in the log file and create a metric based on the count. You can also set up alerting policies on the metric if the count goes beyond any threshold value. https://cloud.google.com/logging/docs/logs-based-metrics
Option A is incorrect because Cloud trace is used to detect the latency issues in your application
Option C is incorrect because you need to automate this procedure and also setup required alerting
Option D is incorrect because Cloud Profiler helps you to analyze the CPU and memory usage of your functions in the application
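A hedged example of creating such a log-based metric from the command line (the metric name and filter text are assumptions):
gcloud logging metrics create db-connection-errors --description="Intermittent DB connectivity errors" --log-filter='resource.type="gce_instance" AND textPayload:"database connection failed"'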
Your company is migrating the application from AWS to Google Cloud. There is a requirement to copy the data from the AWS S3 bucket to the Google Cloud Storage bucket. Which transfer service would you use to migrate the data to Google Cloud in the easiest way?
A. Storage Transfer Appliance
B. gsutil utility
C. Storage Transfer Service
D. S3cmd
Option C is correct Storage Transfer Service is used to quickly transfer data from any other cloud provider to Google cloud storage bucket using Console
Option B could also be used, but no specific command-line requirement is mentioned and the question asks for the easiest way
Option A is incorrect because it is used to transfer data from on-premise
Option D is incorrect because s3cmd is a third-party command-line tool for AWS S3, not a Google Cloud transfer service
You are running a web application on a Compute Engine VM that is using the LAMP stack. There is a requirement to monitor the HTTP response latency of the application, diagnose, and get notified whenever the response latency reaches a defined threshold. Which GCP service will you use?
A. Use Cloud monitoring and setup alerting policies
B. Use Cloud monitoring and setup uptime checks
C. Use Cloud Trace and setup alerting policies
D. Use Cloud Logging and setup uptime checks
Option C is correct. You can use Cloud Trace to set up and track a latency-based metric which will monitor the HTTP response latency, and set up an alerting policy on this metric which will send an alert when a certain threshold is reached. https://cloud.google.com/trace
Option B is incorrect because uptime checks are used to check system availability. Options A & D are incorrect because Cloud Trace, not Cloud Monitoring or Cloud Logging, is the service used to detect latency issues in your application
You are using gcloud command-line utility to interact with Google Cloud resources. There is a requirement to create multiple gcloud configurations for managing resources. What is the command to create a gcloud configuration?
A. gcloud config create example-config
B. gcloud config configurations activate example-config
C. gcloud configurations create example_config
D. gcloud config configurations create example-config
Option D is correct. gcloud config configurations create is the right command to create a new gcloud configuration. Options A & C are incorrect because those commands are not valid.
Option B is incorrect because it is used to activate an existing gcloud configuration. Ref URL: https://cloud.google.com/sdk/gcloud/reference/topic/configurations
You are using Cloud shell for accessing Google cloud resources and for your day to day tasks. There is a requirement to install some packages when the Cloud Shell boots. How will you fulfill this requirement?
A. Schedule a cronjob on restart
B. Add the script in /$HOME/.bashrc file
C. Add the script in /$HOME/.profile file
D. Add the script in /$HOME/.customize_environment file
Option D is correct. To install packages or run a bash script when Cloud Shell boots, you must write the script in the /$HOME/.customize_environment file. This will install the required packages, and you can view the execution logs in /var/log/customize_environment. https://cloud.google.com/shell/docs/configuring-cloud-shell#environment_customization All other options are invalid with respect to Cloud Shell
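As an illustration (the packages chosen are arbitrary), a .customize_environment script could look like this; it runs as root when Cloud Shell boots:
#!/bin/sh
apt-get update
apt-get install -y jq tmux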
You are working for a company that is using Google Cloud for its production workload. As per their new security policy, all the Admin Activity logs must be retained for at least 5 years and will be accessed once a year for auditing purposes. How will you ensure that all IAM Admin Activity logs are stored for at least 5 years while keeping cost low?
A. Create a sink to Cloud Storage bucket with Coldline as a storage class
B. Create a sink to BigQuery
C. Create a sink to Pub/Sub
D. Store it in Cloud logging itself
Option A is correct. All the Admin Activity logs are enabled by default and stored in Cloud Logging. The default retention period for Admin Activity logs is 400 days. If you want to store logs for a longer period, you must create a sink. In our case, since logs will be accessed once a year for auditing purposes, a Cloud Storage sink is the most suitable option.
Option B is incorrect because BigQuery is not a cost-effective solution
Option C is incorrect because Pub Sub is not used for long term storage
Option D is incorrect because Cloud Logging default retention period is 400 days
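A hedged sketch of creating the sink (the sink and bucket names are illustrative); note that the sink's writer identity must also be granted write access on the destination bucket:
gcloud logging sinks create admin-activity-archive storage.googleapis.com/example-audit-logs-bucket --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'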
Your company recently performed an audit on your production GCP project. The audit revealed that recently an SSH port was opened to the world on a compute engine VM. The management has requested entire details of the API call made. How will you provide detailed information?
A. Navigate to the Logs viewer section from the console, select VM Instance as a resource and search for the required entry
B. Navigate to the Stackdriver trace section from the console, select GCE Network as a resource and search for the required entry
C. Connect to the compute engine VM and check system logs for API call information
D. Navigate to the Stackdriver monitoring section from the console, select GCE Network as a resource and search for the required entry
Option A is correct. All the IAM and admin-related activity logs are stored in the Logs Viewer section of Cloud Logging. You can see the entire details of an API call made against a resource in the Logs Viewer section, including what network tags were added to the particular VM.
Option B is incorrect because Stackdriver trace is used to collect latency details from applications
Option C is incorrect because system logs will contain all logs related to the operating system only, not the Google cloud resources
Option D is incorrect because Stackdriver monitoring is used to monitor CPU, memory, disk, or any other custom metrics.
You are working as a Solutions Architect for a large Media Company. They are using BigQuery for their data warehouse purpose with multiple datasets in it. There is a requirement that a data scientist wants full access to a particular dataset only on which he can run queries against the data. How will you assign appropriate IAM permissions keeping the least privilege principle in mind?
A. Grant bigquery.dataEditor at the required dataset level and bigquery.user at the project level
B. Grant bigquery.dataEditor and bigquery.user at the project level
C. Grant bigquery.dataEditor at the project level and bigquery.user at the required dataset level
D. Grant bigquery.admin at required dataset level and bigquery.user at the project level
Option A is correct. bigquery.dataEditor on the required dataset grants write access to that particular dataset only, and bigquery.user at the project level grants the data scientist access to run query jobs in the project. https://cloud.google.com/bigquery/docs/access-control All other options are incorrect because they grant broader access than required
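A hedged illustration of the two grants (the project, dataset, and user are placeholders); dataset-level access is edited through the dataset's access list, for example with bq:
gcloud projects add-iam-policy-binding example-project --member="user:analyst@example.com" --role="roles/bigquery.user"
bq show --format=prettyjson example-project:sales_dataset > dataset.json   # add the analyst as WRITER in the "access" list
bq update --source dataset.json example-project:sales_dataset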
You are working for a large enterprise as a DevSecOps engineer. They are running several applications on compute engine VM. The database credentials required by an application are stored in the Cloud Secret Manager service. As per the best practices, what is the recommended approach for the application to authenticate with Google Secret manager service in order to obtain the credentials?
A. Ensure that the service account used by the VM’s have appropriate Cloud Secret Manager IAM roles and VM’s have proper access scopes
B. Ensure that the VM’s have full access scope to all Cloud APIs and do not have access to Cloud Secret Manager service in IAM roles
C. Generate OAuth token with appropriate IAM permissions and use it in your application
D. Create a service account and access key with appropriate IAM roles attached to access secrets and use that access key in your application
Option A is correct. In order to access Cloud services from an application running on a Compute Engine VM, you should use the service account attached to the VM. If you are using the default service account, you need to set the API access scopes and also attach the appropriate IAM roles to the service account. https://googleapis.dev/python/google-api-core/latest/auth.html https://cloud.google.com/compute/docs/access/service-accounts
Option B is incorrect because you also need to attach IAM roles to the service account along with the required Cloud API access scopes
Option C is incorrect because, as per Google's recommended best practices, you should use the service account attached to the VM rather than manually generated OAuth tokens
Option D is incorrect because exporting and embedding a service account key is discouraged; as per Google's recommended best practices, you should use the service account attached to the VM
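For illustration (the secret name, service account, and project are assumptions), granting the VM's service account access and reading a secret could look like:
gcloud secrets add-iam-policy-binding db-credentials --member="serviceAccount:app-sa@example-project.iam.gserviceaccount.com" --role="roles/secretmanager.secretAccessor"
gcloud secrets versions access latest --secret=db-credentials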
You have been hired by a large enterprise as a Solutions Architect which has several departments like HR, development, and finance. There is a requirement that they want to control IAM policies for each department separately but centrally. Which hierarchy should you use?
A. A single organization with separate folders for each department
B. A separate organization for each department
C. A single organization with a separate project for each department
D. A separate organization with multiple folders
Option A is correct. As per Google's recommended best practices, you should have multiple folders within a single organization, one for each department. Each department can have multiple teams and projects. By using folders, you can group resources for each department that share common IAM policies. For example, if you have multiple projects for the HR department and want to assign the Compute Instance Admin role to a user for each project in the HR department, you can assign the role at the HR folder level, which will grant the user access to each project within the HR folder. https://cloud.google.com/resource-manager/docs/creating-managing-folders
Option B is incorrect because you cannot manage IAM Policies centrally if you create separate Organization for each department
Option C is incorrect because each department can have multiple teams and multiple projects under it. So it will become difficult to manage IAM policy centrally for each project within the department
Option D is incorrect because you cannot manage IAM Policies centrally if you create separate Organizations for each department.
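A hedged sketch of creating a department folder and granting a role at the folder level (the display name, organization ID, folder ID, and user are placeholders):
gcloud resource-manager folders create --display-name="HR" --organization=123456789012
gcloud resource-manager folders add-iam-policy-binding 987654321098 --member="user:hr-admin@example.com" --role="roles/compute.instanceAdmin.v1"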
You are working for a Company as a Solutions architect. They want to develop a new application that will have two environments development and Production. The initial requirement is that all the resources deployed in development and Production must be able to communicate with each other using the same RFC-1918 Address space. How will you fulfill the requirement considering the least privilege principle?
A. Create a separate project for each environment and Use shared VPC
B. Create a single GCP project and single VPC for both environments
C. Create a separate project for each environment and create individual VPC in each project with VPC peering
D. Create a separate project and use direct peering
Answer: A. Shared VPC allows you to share a single VPC in one project (the host project) with other projects within an organization, called service projects. By using Shared VPC, resources in the service projects can be deployed in the shared VPC and will use the same IP range as the shared VPC. The main advantage of Shared VPC is that we can delegate administrative responsibilities, such as creating and managing resources, while using one common VPC, which allows each team to manage their own resources individually with proper access control. In our case, we will create a VPC in the production project, which will be the host project, and share it with the development project, which will be the service project. https://cloud.google.com/vpc/docs/shared-vpc
Option B is incorrect because if we use a single project and VPC for both environments, we cannot segregate access control; for example, giving someone access to create resources only in development and not production is not possible if we are using a single project and the same VPC
Option C is incorrect because we want the Same RFC-1918 address space. VPC peering is used to connect two different VPC
Option D is incorrect because direct peering is a connection between the on-prem network and Google’s edge network
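A minimal sketch of the Shared VPC setup (the project IDs are illustrative):
gcloud compute shared-vpc enable example-prod-host-project
gcloud compute shared-vpc associated-projects add example-dev-service-project --host-project example-prod-host-project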
You are working with a large finance company as a Consultant which is planning to migrate petabytes of data from the on-premise data centre to Google Cloud Storage. They have 1 Gbps network connectivity from on-premise to Google Cloud. Which option will you recommend to transfer the data?
A. Storage Transfer Service
B. Transfer Appliance
C. gsutil command-line tool
D. Transfer Service for On-premise
Answer: B. Since they have petabytes of data to transfer, Transfer Appliance is the best option. Transfer Appliance is an offline data transfer service in which data is shipped on a physical appliance that comes in two sizes: a 100 TB version and a 480 TB version.
Option A is incorrect because this service transfers data over the network and scales to the available bandwidth, and the available bandwidth here is 1 Gbps, which is too low to transfer petabytes of data.
Option C is incorrect because they have petabytes of data to transfer, and using the gsutil command-line utility would take a very long time even with good bandwidth.
Option D is incorrect because it is used when the data is in the terabyte range. Reference: https://cloud.google.com/storage-transfer/docs/on-prem-overview https://cloud.google.com/transfer-appliance/docs/4.0/overview
You are working for a large enterprise as a Solutions architect. They are running several applications on the Compute Engine in Development, Staging, and Production environments. The CTO has informed you that Development and Staging environments are not used on weekends and must be shut down on weekends for cost savings. How will you automate this procedure?
A. Apply appropriate tags on development and staging environments. Write a Cloud Function that will shut down Compute Engine VMs as per the applied tags. Write a cron job in Cloud Scheduler which will invoke the Cloud Function endpoint on weekends only.
B. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the tags. Write a Cron Job in Cloud Tasks which will invoke cloud functions endpoint on weekends only.
C. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the applied tags. Write a Cron Job in Cloud build which will invoke cloud functions endpoint on weekends only.
D. Apply appropriate tags on development and staging environments. Write a Cloud function that will shut down compute engine VM’s as per the applied tags. Write a Cron Job in Cloud Run which will invoke cloud functions endpoint on weekends only.
Answer: A
Apply tags to the development and staging Compute Engine VMs. Write a Cloud Function in any preferred language which will filter the VMs based on the applied tags and shut them down. Select HTTP as the trigger type while configuring the Cloud Function, and write a cron job in Cloud Scheduler which will trigger the HTTP endpoint only on weekends. https://cloud.google.com/scheduler/docs
Option B is incorrect because Cloud Task is used for management of a large number of distributed tasks
Option C is incorrect because Cloud Build is used to create CICD pipelines
Option D is incorrect because Cloud Run is used to run Containers where the entire infrastructure management is fully handled by GCP
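As a hedged example (the job name, schedule, and function URL are assumptions), the Cloud Scheduler job could be created to fire early every Saturday:
gcloud scheduler jobs create http stop-dev-staging-vms --schedule="0 0 * * 6" --uri="https://us-central1-example-project.cloudfunctions.net/stop-vms" --http-method=POST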
For this question, refer to the Dress4Win case study: https://cloud.google.com/certification/guides/cloud-architect/casestudy-dress4win-rev2. In the initial phase of migration, how will you isolate the development and test environments?
A. Create a separate project for testing and separate project for development
B. Create a Single VPC for all environments, separate by subnets
C. Create a VPC network for development and separate VPC network for testing
D. You cannot isolate access between different environments in Google cloud
Answer: A
As per the IAM best practices, you should create a separate project for each environment to isolate each environment. https://cloud.google.com/blog/products/gcp/iam-best-practice-guides-available-now
Option B is incorrect because as per IAM best practice you should create a separate project for each team
Option C is incorrect because you cannot isolate each environment by creating 2 VPCs in the same project. If anyone has permission to start/stop VMs, he can stop VMs in both environments if they are in the same project
Option D is incorrect because you can isolate env’s by creating a separate project for each
You are working for a company which develops online games. Recently one of their online games, which is deployed on Compute Engine, has become much more popular. As traffic increases, they are struggling to provision additional instances globally at any time of the day. How will you design the architecture to meet the demand of growing users and maintain performance globally?
A. Use Global Load balancer and Managed Instance Group
B. Use Global Load balancer and Unmanaged Instance Group
C. Use Regional Load balancer and Managed Instance Group
D. Use Regional Load balancer and Unmanaged Instance Group
Answer: A
As the game is becoming more popular globally they should use Global load balancer and Managed instance groups deployed in several regions in multiple zones. Using global load balancer will distribute the traffic to the managed instance group which is closer to the user automatically. Enable autoscaling on Managed instance groups to dynamically scale up and down as the traffic increases.
Option B is incorrect because unmanaged instance group does not support autoscaling
Option C is incorrect because Regional load balancer cannot load balance managed instance group deployed in multiple regions
Option D is incorrect because unmanaged instance group does not support autoscaling
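For illustration (the group name, zone, and thresholds are assumptions), autoscaling on a managed instance group can be configured with:
gcloud compute instance-groups managed set-autoscaling game-frontend-mig --zone=us-central1-a --min-num-replicas=3 --max-num-replicas=20 --target-cpu-utilization=0.60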
You are working as a Solutions Architect for a large financial firm. The data scientists team wants to run batch jobs on a nightly basis which will perform data analytics. These jobs can be disrupted or restarted and will use Spark and Hadoop clusters. Which GCP managed services will you use to keep analytics processing fast, easy, and more secure and cost-effective?
A. Use Cloud Dataproc with preemptible compute engine option.
B. Run Spark and Hadoop clusters on a preemptible compute engine.
C. Run Spark and Hadoop clusters on a standard compute engine.
D. Use Cloud Dataproc with standard compute engine option.
Answer: A
As they want to run data analytics jobs using Hadoop and Spark clusters, Dataproc is a good option because it is a managed service based on Hadoop and Spark which is used for ETL workloads and data analysis. https://cloud.google.com/dataproc/docs Dataproc clusters can use preemptible VM instances, which results in significant cost savings. https://cloud.google.com/dataproc/docs/concepts/compute/preemptible-vms
Option B is incorrect because they want a managed service
Option C is incorrect because they want a managed service
Option D is incorrect because they want a cost-effective solution so standard compute engine is not a good choice
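A hedged sketch of creating such a cluster with preemptible workers (the cluster name, region, and worker counts are assumptions; newer gcloud versions may name the flag --num-secondary-workers):
gcloud dataproc clusters create nightly-analytics --region=us-central1 --num-workers=2 --num-preemptible-workers=8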
You are working for a large enterprise where the management of network and security resources such as firewalls are typically managed by a dedicated Security team for the entire organization. The development teams only want flexibility to launch instances and carry out other actions related to instances in the dev project only. How will you grant respective IAM permission to the development team and security team keeping the least privilege principle in mind?
A. Compute Network Admin role for Security team at Organization level and Compute Instance Admin role for development team for dev project only
B. Compute Network Admin role for Security team at Organization level and Compute Instance Admin role for development team at organization level
C. Compute Network Admin role for Security team at Organization level and Compute Network Admin role for development team for dev project only
D. Compute Instance Admin role for Security team at Organization level and Compute Network Admin role for development team at Organization level
Answer: A
Assign the Compute Network Admin role to the Security team at the Organization level. This will grant them permissions to administer networking resources; the Network Admin role does not allow the Security team to control Compute Engine resources. By assigning this role at the Organization level, the Security team will have access to every project within the organization. Assign the Compute Instance Admin role to the Development team at a specific project level, i.e. the dev project. This will grant the dev team full access to Compute Engine resources only and read-only access to networking resources. Assigning this role at a specific project level grants them access to resources in that particular project only. https://cloud.google.com/compute/docs/access/iam https://cloud.google.com/iam/docs/resource-hierarchy-access-control
Option B is incorrect because it will grant dev-team access to all projects under the organization.
Option C is incorrect because the Network admin role will not allow dev-team to control compute resources
Option D is incorrect because Compute Instance admin role will not allow the Security team to administrate networking resources and Network Admin role will not allow the dev team to administrate compute resources
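A hedged illustration of the two grants (the organization ID, project ID, and groups are placeholders):
gcloud organizations add-iam-policy-binding 123456789012 --member="group:security-team@example.com" --role="roles/compute.networkAdmin"
gcloud projects add-iam-policy-binding example-dev-project --member="group:dev-team@example.com" --role="roles/compute.instanceAdmin.v1"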
You have been hired as a DevSecOps Engineer by an enterprise that is planning to migrate their application to Google Cloud Platform, but as per the compliance requirement they want to use their existing Active Directory domain to manage user identities. What should you suggest in this scenario?
A. Use Google Cloud Directory Sync to sync Active Directory username with Cloud Identity
B. Use Identity-Aware Proxy configured with your Active Directory Domain
C. There is no option for using Active Directory Domain. Use G-Suite for user management
D. Create an Active Directory domain controller on Compute Engine that is a replica of on-premise AD and use Google Cloud Directory Sync
Answer: A
By using Google Cloud Directory Sync you can sync Active Directory usernames with Cloud Identity. In order to sync users and groups, you need to install the GCDS agent on your AD servers. https://support.google.com/a/answer/106368?hl=en
Option B is incorrect because Identity aware proxy lets you manage access to the applications which are running on App Engine, Kubernetes engine and VM’s
Option C is incorrect because you can sync Active directory users using GCDS
Option D is incorrect because there is no need to move AD servers to compute engine, you can directly install GCDS agent on AD servers
You have been hired as Solutions Architect by a large enterprise who has recently migrated to GCP. The database warehouse team came to you as they want to know which managed service they can use for cleaning, preparing structured and unstructured data for analysis, reporting, and machine learning?
A. Cloud Dataprep
B. Cloud Dataproc
C. Cloud Dataflow
D. Cloud Datalab
Answer: A
Cloud Dataprep is a serverless service that can be used for large dataset cleaning and preparing the data for analysis and reporting. It provides a GUI for cleaning and preparing the data. https://www.youtube.com/watch?v=Q5GuTIgmt98
Option B is incorrect because it is used to run Apache spark and Hadoop clusters
Option C is incorrect because dataflow is used for real-time and batch processing of data
Option D is incorrect because Datalab is used to visualize data and build machine learning models
You are working for a large Finance company as a Solutions architect. They have multiple applications running in production. All the applications log data is stored in GCS bucket for future analysis to improve the application performance. What is the recommended approach to De-identify personally identifiable information or payment card information stored in logs?
A. Use Cloud DLP
B. Use threat detection
C. Use Web Security Scanner
D. Use Cloud Armor
Answer: A
Cloud DLP is a fully managed service used to de-identify sensitive data like credit card numbers, phone numbers, and any other PII stored in text files within Cloud Storage and BigQuery. After detecting sensitive data, the DLP API provides various options such as masking or deleting the data. https://cloud.google.com/dlp/docs/deidentify-sensitive-data
Option B is incorrect because threat detection is used to detect threats like brute force attacks from logs and report them to the Security Command Center
Option C is incorrect because Web Security Scanner is used to find vulnerabilities in your web application, not to de-identify stored data
Option D is incorrect because it is used to mitigate DDoS attack and provide WAF
You are working for a company that is planning to migrate its entire application to GCP. During the initial phase of migration, there is a requirement to set up a site-to-site VPN connection between on-prem and GCP which provides 99.99% availability on the GCP side of the connection. Which service will you use?
A. Cloud HA VPN
B. Cloud Classic VPN
C. Direct Peering
D. Configure Openswan on two compute engine instances and create two VPN tunnels
Answer: A
Cloud HA VPN provides an SLA of 99.99% service availability. https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-ha-vpn2?hl=nl
Option B is incorrect because Cloud Classic VPN provides a 99.9% availability SLA.
Option C is incorrect because Direct Peering is used to connect an on-premises location to Google's Point of Presence (PoP) locations
Option D could also work, but there would be a lot of management overhead, so it is not preferable; GCP has its own fully managed service for this, i.e. Cloud HA VPN
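As a hedged starting point (the gateway, network, and region names are assumptions), an HA VPN gateway is created with:
gcloud compute vpn-gateways create example-ha-vpn-gw --network=example-prod-vpc --region=us-central1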
You are working as a Solutions Architect for a company which is running its entire application on-premise. There is a new requirement to migrate a SQL Server Enterprise edition database, which runs in an availability group for high availability in the datacenter, to GCP. Which option will you choose that reduces management work ahead and also provides data redundancy?
A. Create a Cloud SQL server instance with high availability option enabled
B. Create a Compute instance in the different zone within a region and install SQL server with always-on availability groups for data redundancy
C. Create a Cloud SQL instance, by defaults it comes with high availability
D. Create a Compute instance in a single zone with always-on availability groups
Answer: A
Cloud SQL is a fully managed service where Google handles all the heavy lifting like patching, failover, backups, and replication. A Cloud SQL for SQL Server instance with the high availability option enabled is the best choice. When you enable the high availability (regional) option, if there is an outage, your instance fails over to another zone in the region where your instance is located. There are also several licensing options available for Cloud SQL. https://cloud.google.com/sql/docs/sqlserver/high-availability
Option B is incorrect because to reduce the management work ahead, we will be using managed service i.e Cloud SQL
Option C is incorrect because high availability is not enabled by default; you need to enable the high availability option while creating the Cloud SQL instance
Option D is incorrect because it will not provide high availability and also will not reduce management work
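A hedged sketch of creating a regional (HA) SQL Server instance; the instance name, edition, machine size, region, and password are assumptions:
gcloud sql instances create example-sqlserver --database-version=SQLSERVER_2017_ENTERPRISE --availability-type=REGIONAL --region=us-central1 --cpu=4 --memory=26GB --root-password=<PASSWORD>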
You are running Apache Kafka on a Compute Engine VM for a real-time data processing pipeline. The machine size is n1-standard-4 with a 1 TB SSD persistent disk, and as per the monitoring, you are not getting the desired disk throughput required for the job. What configuration will you change to increase disk performance?
A. Increase the machine to n1-standard-8
B. Increase the disk size to 2TB
C. Increase the machine memory
D. Change the storage type to standard persistent disk
Answer: A
Disk performance depends on its size, the instance vCPU count, and the I/O block size. In our case, we already have a large disk, i.e. 1 TB, which can support up to 480 MB/s read/write throughput. But with our machine size, i.e. n1-standard-4 (4 vCPUs), the disk is limited to roughly 240 MB/s read/write throughput. We need to increase the vCPU count to 8 or above to support the desired disk performance. For example: "consider a 1,000 GB SSD persistent disk attached to an instance with an N2 machine type and 4 vCPUs. The read limit based solely on the size of the disk is 30,000 IOPS. However, because the instance has 4 vCPUs, the read limit is restricted to 15,000 IOPS." https://cloud.google.com/compute/docs/disks/performance#size_price_performance https://cloud.google.com/compute/docs/disks/performance#machine-type-disk-limits
Option B is incorrect because we already have large disk size, the bottleneck was CPU
Option C is incorrect because RAM does not limit the disk performance
Option D would degrade performance further; see the URLs above for a comparison
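For illustration (the instance name and zone are placeholders), the VM must be stopped before its machine type can be changed:
gcloud compute instances stop kafka-node-1 --zone=us-central1-a
gcloud compute instances set-machine-type kafka-node-1 --zone=us-central1-a --machine-type=n1-standard-8
gcloud compute instances start kafka-node-1 --zone=us-central1-a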
You are working for a large enterprise as a Solutions Architect. The development team is building a new application that will be deployed on Compute Engine. How will you set compute engine VM configuration in such a way that there is no downtime when GCP performs periodic infrastructure maintenance on the compute engine?
A. Set the on-host maintenance option to Migrate VM instance
B. Set the Automatic restart option to ON
C. You need to restart VM when there is such kind of maintenance activity from GCP
D. Set the on-host maintenance option to Terminate VM instance
Answer: A
GCP performs maintenance activity on the Compute Engine infrastructure, which includes host kernel upgrades and hardware repairs or upgrades. This activity occurs about once every two weeks. You can configure a Compute Engine VM to perform live migration to another host during such maintenance activity without downtime. You just need to set the instance's On host maintenance property to Migrate VM instance, and the entire process is handled by GCP on your behalf. You can see the compute.instances.migrateOnHostMaintenance operation type in Operations Suite (formerly Stackdriver) Logging when such activity is carried out. https://cloud.google.com/compute/docs/instances/live-migration https://cloud.google.com/compute/docs/instances/setting-instance-scheduling-options
Option B is incorrect because it is used when the host machine crashes which holds your VM. If this property is enabled, whenever there is a host machine failure. Your compute engine will be automatically restarted
Option C is incorrect because there is no need to perform any kind of operation from your side
Option D is incorrect because if the property is set to Terminate VM instance, GCP will terminate your VM when there is a maintenance event.
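A hedged one-liner (the instance name and zone are assumptions) to set the on-host maintenance behaviour:
gcloud compute instances set-scheduling example-app-vm --zone=us-central1-a --maintenance-policy=MIGRATE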
You are working for a Media company as a Solutions Architect. They have a mobile application which is used by journalists to capture and upload images on a daily basis to a GCS bucket from different locations for breaking news. There is a requirement to process these images in real time to detect any offensive content; if offensive content is found, it should be blurred and re-uploaded to the bucket. Which services will you include in your architecture?
A. Cloud Functions, Cloud Vision API
B. Cloud functions, Cloud ML Engine
C. App Engine, Cloud Vision API
D. Cloud Tasks, Cloud Vision API
Answer: A
Google Cloud's Vision API is an AI service provided by GCP to detect objects in an image, detect any explicit content in images, and also extract text from images. As soon as an image is uploaded to the GCS bucket, a Cloud Function is invoked which calls the Vision API to perform offensive-content detection. If an offensive image is detected, another Cloud Function is called which blurs the offensive content using the Python Pillow library and uploads it back to the same bucket.
Option B is incorrect because Cloud ML engine is used to train machine learning models
Option C is incorrect because we will need event-based service for such kind of requirement
Option D is incorrect because Cloud Tasks is a fully managed service used to manage distributed tasks.
You have been hired by a large U.S.-based healthcare firm as a Consultant; the firm is planning to migrate its entire application and on-premise data to Google Cloud. The data includes medical records from different hospitals located in the U.S. Which regulation would you look to for guidance on complying with the relevant requirements?
A. HIPAA
B. PCI-DSS
C. GDPR
D. SOX
Answer: A
HIPAA (Health Insurance Portability and Accountability Act) is a U.S. regulation used to protect healthcare data collected by websites and applications for business purposes in the U.S. https://cloud.google.com/security/compliance/hipaa-compliance
Option B is incorrect because PCI-DSS is the Payment Card Industry Data Security Standard, which protects credit card information collected for business
Option C is incorrect because GDPR (General Data Protection Regulation) is a European regulation used to protect any personally identifiable information collected for business purposes within the Europe region
Option D is incorrect because SOX compliance is used for financial auditing purposes
There is a new requirement to Deploy a web application on Google Kubernetes Engine which will be accessed by multiple users around the world. How will you enable autoscaling on the application which will scale automatically based on the CPU Utilization?
A. Create a HorizontalPodAutoscaler with CPU as target and enable autoscaling on your GKE cluster
B. Create a HorizontalPodAutoscaler with CPU as target and enable autoscaling on your managed instance group
C. Create a Deployment with the max unavailable and max surge properties and enable autoscaling on your GKE cluster
D. Create a Deployment with the max unavailable and max surge properties and enable autoscaling on your managed instance group
Answer: A
Horizontal Pod Autoscaler is used to automatically scale the pods in a deployment based on CPU or memory utilization. The kubectl autoscale command is used to create a HorizontalPodAutoscaler: kubectl autoscale deployment example-app --max 5 --min 2 --cpu-percent 60 You can also enable autoscaling on your GKE cluster, which can add or remove nodes from the node pool based on the demands of your workloads. You can use a gcloud command to enable autoscaling on your GKE cluster: gcloud container clusters update example-cluster --enable-autoscaling --min-nodes 2 --max-nodes 6 --zone compute-zone --node-pool default-pool
Option B is incorrect because you need to enable autoscaling on the GKE cluster, not a managed instance group. Options C & D are incorrect because a Deployment is a Kubernetes object which is used to run multiple replicas of your pod and will automatically replace any failed or unresponsive pod
You are working for a company that has several applications running on a compute engine. Daily files are uploaded to the GCS bucket from these instances. These files are accessed once a month by developers for analysis. After 1 year all the files are accessed only once a year but must be retained for 5 years as per compliance. How will you configure data storage in a cost-effective way?
A. Set the default storage class of the bucket to Nearline and create a lifecycle rule to move objects older than 1 year to the Coldline storage class
B. Set the default storage class of the bucket to standard and create a lifecycle rule to move objects older than 1 year to nearline storage class
C. Set the default storage class of the bucket to standard and create a lifecycle rule to move objects older than 1 year to Coldline storage class
D. Set the default storage class of the bucket to Coldline storage.
Answer: A
Set the default class to Nearline. Nearline storage is the best choice when you access objects stored in the bucket about once a month. After one year, as the files stored in the bucket will be accessed only once a year, you should create a lifecycle rule to migrate Nearline objects to Coldline storage. https://cloud.google.com/storage/docs/lifecycle
Options B & C are incorrect because Standard storage is used for objects which are accessed very frequently
Option D is incorrect because Coldline storage is used for objects which are accessed about once a year. Note: GCP has launched a new storage class called Archive storage, which became generally available on January 08, 2020. This may be reflected in the exam. https://cloud.google.com/storage/docs/storage-classes
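A minimal sketch of the lifecycle configuration (the bucket name and file name are illustrative), applied with gsutil:
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 365, "matchesStorageClass": ["NEARLINE"]}
    }
  ]
}
gsutil lifecycle set lifecycle.json gs://example-archive-bucket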
You are working for a company that is using GCP for their production workload. One of the applications is using Cloud CDN for static content caching in front of the https load balancer. As per the cloud logging, you see lower than expected cache hit ratios. How will you increase the cache-hit ratio?
A. Use custom cache keys
B. Increase the cache expiration time
C. Use cache invalidation frequently
D. Decrease cache expiration time
Answer: A
To improve the cache hit ratio, you should reduce the cache key by removing host and protocol information. This trimmed URL is called a custom cache key. For example, https://demo.com/test/cloud.jpg and https://demo2.com/test/cloud.jpg serve the same image, i.e. cloud.jpg, but the URLs are different; by removing protocol and host information from the cache key, both are served from one cache entry. https://cloud.google.com/cdn/docs/best-practices
Option B is incorrect because cache expiration time only defines how long content is cached at the PoP location
Option C is incorrect because cache invalidation is used to clear cache entries manually
Option D is incorrect because decreasing the cache expiration time is useful when content is frequently updated, not for improving the cache hit ratio
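A hedged example of trimming the cache key on a backend service (the backend service name is an assumption):
gcloud compute backend-services update example-web-backend --global --no-cache-key-include-protocol --no-cache-key-include-host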
You are working for a large enterprise as a GCP Cloud Architect. As per the new compliance requirement, you should regularly save all your admin activity and VM system logs within your project centrally for third-party auditing, which will happen once every month. How will you achieve this requirement while keeping the cost low?
A. All admin and VM system logs are automatically collected by Stackdriver, just create sink for selected logs to GCS nearline bucket
B. Stackdriver automatically collects admin activity logs for most services. Only the Stackdriver Logging agent must be installed on each instance to collect system logs and create sink for selected logs to GCS nearline bucket
C. Stackdriver automatically collects admin activity logs for most services. Only the Stackdriver Logging agent must be installed on each instance to collect system logs and create sink for selected logs to GCS cold storage bucket
D. All admin and VM system logs are automatically collected by Stackdriver, just create sink for selected logs to GCS cold storage bucket
Answer: B
Admin activity logs are automatically collected for most of the services in GCP. For the VM system logs, you need to install the Logging agent on each VM whose logs you want to export to Stackdriver Logging. As per the compliance requirement, you must retain logs for auditing, so you should create a sink to a GCS Nearline bucket. These logs will be accessed once a month, which is why the Nearline bucket is the best storage option. https://cloud.google.com/logging/docs/agent https://cloud.google.com/logging/docs/audit
Option A is incorrect because VM system logs are not automatically collected. You need to install a stackdriver agent to get VM system logs.
Option C is incorrect because the audit will happen once a month and for that Coldline storage is not a good option
Option D is incorrect because VM system logs are not automatically collected you need to install stackdriver agent to get VM system logs and also coldline is not a right storage option
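For reference, installing the Logging agent on a Debian/Ubuntu VM typically looks like this (the installer script is Google's published repo script):
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh
sudo apt-get update && sudo apt-get install -y google-fluentd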
You have been hired as a DevOps Engineer by a large finance company. As per their regulatory compliance the CTO has informed you that any resources which will be created in Google Cloud must be created in the U.S region only and all other regions are restricted by default. How can you restrict the resources creation limited to the U.S region only?
A. Create a custom IAM policy at Organization level
B. Create an Organization Policy at Organization level
C. Create an Organization policy at individual project level
D. You cannot apply such kind of restriction in Google cloud
Answer: B
An organization policy is a configuration of restrictions. You can create an organization policy at the Organization level, which will be inherited by all resources under it, with the Google Cloud Platform constraint 'Resource Location Restriction' (gcp.resourceLocations) set to allow U.S. locations only. https://cloud.google.com/resource-manager/docs/organization-policy/overview https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
Option A is incorrect because IAM policy is attached to resources which are used to define access control
Option C is incorrect because we want to apply the restriction for all the projects under the organization, not a specific project
Option D is incorrect because we can have such kind of restriction using Organization Policy
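A hedged sketch using the gcp.resourceLocations constraint (the organization ID is a placeholder; in:us-locations is the predefined value group covering U.S. regions):
gcloud resource-manager org-policies allow gcp.resourceLocations in:us-locations --organization=123456789012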
You have been hired as a DevSecOps Engineer by a large finance company. As per their regulatory compliance the CTO has informed you that by default all VM instances which are created in the entire organization are not allowed to use external IP addresses. How can you fulfill this requirement?
A. Create a custom IAM policy at Organization level
B. Create an Organization Policy at Organization level
C. Create an Organization policy at individual project level
D. You cannot apply such kind of restriction in Google cloud
Answer: B
An organization policy is a configuration of restrictions. You can create an organization policy at the Organization level, which will be inherited by all resources under it, with the Compute Engine constraint 'Define allowed external IPs for VM instances' (compute.vmExternalIpAccess) set to deny all. https://cloud.google.com/resource-manager/docs/organization-policy/overview https://cloud.google.com/resource-manager/docs/organization-policy/org-policy-constraints
Option A is incorrect because IAM policy is attached to resources which are used to define access control
Option C is incorrect because we want to apply the restriction for all the projects under the organization, not a specific project
Option D is incorrect because we can have such kind of restriction using Organization Policy
You have been hired as a Solutions Architect by a large Finance firm. The development team is developing an application which will be hosted on Google Cloud and will access an Oracle database in the firm's own datacenter. The network engineers have determined that a link between the on-premises network and GCP will require an 8 Gbps connection and low latency, with an SLA, to meet the business requirements. Which option will you select?
A. Dedicated Interconnect
B. Partner Interconnect
C. Cloud VPN
D. Hybrid Interconnect
Answer: B
Option B is correct because Partner Interconnect is a good fit for connections below 10 Gbps and also provides an SLA https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
Option A is incorrect because Dedicated Interconnect is suitable and cost-effective at 10 Gbps and above
Option C is incorrect because Cloud VPN is not suitable for high-bandwidth connections where low latency is a key requirement
Option D is incorrect because there is no such service
You have been hired as a Solutions Architect by a large enterprise. They are planning the migration of their on-premise application to GCP. The application ingests time-series data at low latency collected from sensors from chemical plants located across different locations. They are using Cassandra clusters as database storage and RabbitMQ as a messaging service. One of the business requirements is to maximize the use of managed services while moving to GCP. Please select services as per the business requirements
A. Use Cloud Datastore and Pub/Sub
B. Use Cloud Bigtable and Pub/Sub
C. Use Cloud Bigquery and Pub/Sub
D. Use Dataproc and Pub/Sub
Answer: B
Cloud Bigtable is the best choice when you want to ingest time-series data from sensors at low latency. It is a fully managed service used for large NoSQL analytical workloads. https://cloud.google.com/bigtable As they are using RabbitMQ as a messaging service on-premises and want to move to a managed service during the migration, Pub/Sub is a good choice. Pub/Sub is a fully managed service that provides asynchronous service-to-service communication and is mostly used in event-driven architectures https://cloud.google.com/pubsub/docs/overview
Option A is incorrect because Datastore is not ideal where low-latency, high-throughput time-series ingestion is a key requirement.
Option C is incorrect because BigQuery is an SQL data warehouse for analytics, not a low-latency time-series store
Option D is incorrect because Dataproc is used to run Apache Hadoop and Spark clusters
You are working for Media Company as a Solutions Architect. There is a new requirement that the visual effects artists team requires a file share system that can be easily mounted on several Compute Engine instances for media workflow processing like video editing and video rendering which usually require common file share. Which storage solution will you use for this kind of scenario?
A. Cloud Storage
B. Cloud Filestore
C. Relational database
D. Cloud datastore
Answer: B
Cloud Filestore is fully managed network-attached storage that uses the NFS protocol, so multiple Linux instances can mount a common file share over the network. https://cloud.google.com/filestore
Option A is incorrect because Cloud Storage is object storage and cannot be mounted as a file share on Compute Engine instances
Option C is incorrect because a relational database is used to store SQL data, not shared files
Option D is incorrect because Cloud Datastore is a NoSQL database
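A rough sketch of creating and mounting a Filestore share (instance name, zone, tier, share name, and mount point are placeholder assumptions):
# Create the Filestore instance
gcloud filestore instances create media-share --zone=us-central1-c --tier=BASIC_HDD --file-share=name=vol1,capacity=1TB --network=name=default
# On each Compute Engine VM (requires an NFS client, e.g. nfs-common on Debian/Ubuntu)
sudo mkdir -p /mnt/media
sudo mount FILESTORE_IP:/vol1 /mnt/media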
You are working as a consultant for a company which has thousands of IoT devices installed in several chemical plants for monitoring humidity, temperature and electrochemical gas. There is a requirement to capture the data from this sensor in real-time, ingest it, run through a data processing pipeline and store it for analysis. SQL queries will be run against data for analysis and also there is a requirement for a data visualization tool that can analyze the data interactively. Which architecture you will suggest for the above requirements?
A. Cloud IoT core, Pub/Sub, Dataproc, Bigtable, Data Lab
B. Cloud IoT core, Pub/Sub, Dataflow, Bigquery, Data studio
C. Cloud IoT core, Pub/Sub, Dataprep, BigQuery, Data Lab
D. Cloud IoT core, Pub/Sub, Dataflow, Bigtable, Data studio
Answer: B
Cloud IoT Core, Pub/Sub, Dataflow, BigQuery, Data Studio is the correct option. Cloud IoT Core is a fully managed service that accepts data from the sensors and manages the connection with them. After the data arrives at IoT Core it is sent to Pub/Sub, which acts as an asynchronous message bus; this real-time data is then processed by Dataflow and stored in BigQuery for analysis. Since they want to run SQL queries, BigQuery is the best choice. You can use Data Studio with BigQuery as a source to create dashboards and reports for visualization as required.
Option A is incorrect because Dataproc is used to run Hadoop and Spark clusters
Option C is incorrect because Dataprep is used to cleanse and prepare data for analysis and machine learning
Option D is incorrect because we want to run SQL queries against the data, so Bigtable is not the right choice as it is a NoSQL database
You have been hired by a large enterprise as a Solutions Architect. The development team came to you with a requirement that they want a global load balancing solution that can support Non-HTTPS traffic and SSL termination at the load-balancing level. Which load balancer will you recommend?
A. HTTPS
B. SSL Proxy
C. TCP Proxy
D. Internal TCP/UDP
Answer: B
SSL Proxy load balancer is the best choice for non-HTTP(S) traffic and can also handle SSL termination. It is a global load-balancing solution provided by GCP https://cloud.google.com/load-balancing/docs/choosing-load-balancer
Option A is incorrect because the HTTPS load balancer is used for HTTP(S) traffic
Option C is incorrect because the requirement is to terminate SSL at the load-balancing level, and TCP Proxy does not support SSL termination
Option D is incorrect because the Internal TCP/UDP load balancer is used to load balance internal traffic inside a VPC
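For illustration only, a hedged sketch of wiring up an SSL Proxy load balancer with gcloud (the health-check, backend, certificate, and rule names are placeholder assumptions; attaching an existing instance group to the backend service with add-backend is assumed and omitted):
gcloud compute health-checks create tcp ssl-lb-hc --port=443
gcloud compute backend-services create ssl-lb-backend --protocol=SSL --health-checks=ssl-lb-hc --global
gcloud compute target-ssl-proxies create ssl-lb-proxy --backend-service=ssl-lb-backend --ssl-certificates=my-ssl-cert
gcloud compute forwarding-rules create ssl-lb-rule --global --target-ssl-proxy=ssl-lb-proxy --ports=443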
You are working for a company as a Solutions architect. The Development team is developing a new stateful application that will be deployed on the Google Kubernetes Engine. What type of Kubernetes resource will you create for stateful application?
A. Pods
B. StatefulSets
C. Deployments
D. DaemonSets
Answer: B
StatefulSets are used for stateful applications where you want to persist application data. When you create a StatefulSet, the replica pods are created in order and each replica pod has its own stable identity, its own PVC, and its own state https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
Option A is incorrect because a Pod is the smallest unit in Kubernetes and is usually managed by higher-level objects such as Deployments, ReplicaSets, and StatefulSets
Option C is incorrect because Deployments are mostly used for stateless applications
Option D is incorrect because DaemonSets are used when you want to run a copy of a pod on every node in the cluster
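A minimal StatefulSet sketch for GKE (names, image, and sizes are placeholder assumptions; each replica gets its own PersistentVolumeClaim from the volumeClaimTemplates section):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db            # a headless service with this name is assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
      - name: demo-db
        image: nginx:1.21         # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
EOF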
You are working for a large enterprise as a Solutions Architect. One of your applications is running on-premises. There is a requirement that an application running on Google Cloud needs to access a few APIs of the on-premises application without exposing them to the internet. Which type of topology will you implement to fulfill the requirement?
A. Meshed topology
B. Gated egress topology
C. Gated egress and ingress topology
D. Gated Ingress topology
Answer: B
This type of topology is useful when you want to expose on-premises application APIs to workloads running on Google Cloud without exposing them to the internet. Refer to the link below for the different hybrid and multi-cloud network topologies https://cloud.google.com/solutions/hybrid-and-multi-cloud-network-topologies
Option A is incorrect because a meshed topology is used to establish flat network connectivity where every system can communicate with every other system
Option C is incorrect because a gated egress and ingress topology is used when you have to expose a few APIs from on-premises to the cloud and from the cloud to on-premises in a secure way
Option D is incorrect because a gated ingress topology is used when you want to expose a few APIs from an application running on Google Cloud to on-premises in a secure way
You are working for a large enterprise as a Solutions Architect. As per their compliance requirements, all data stored in Cloud SQL, Compute Engine, and Cloud Storage must be encrypted with customer-managed encryption keys, with a rotation schedule that automatically generates new symmetric keys. Please suggest the right choice of encryption.
A. Use default encryption which is provided by Google Cloud
B. Use CMEK using Cloud KMS
C. Use CSEK
D. Use third party service from Marketplace for customer-managed-encryption
Answer: B
Use CMEK with Cloud KMS. Customer-managed encryption keys (CMEK) using Cloud KMS let you create your own encryption keys in Cloud KMS, where you can create, rotate (manually or automatically), and destroy symmetric encryption keys https://cloud.google.com/storage/docs/encryption/customer-managed-keys
Option A is incorrect because default encryption is fully managed by GCP, from creating the keys to encrypting the data, storing the keys, and rotating them; the keys are not customer-managed
Option C is incorrect because CSEK is used when there is a requirement to keep the encryption keys on-premises, and it only supports two services, i.e., Cloud Storage and Compute Engine
Option D is incorrect because you cannot use a third-party solution for encrypting GCP services
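As a hedged sketch (key ring, key, location, rotation period, and rotation start time are placeholder assumptions), a symmetric CMEK with automatic rotation could be created like this:
gcloud kms keyrings create app-keyring --location=us
gcloud kms keys create app-key --keyring=app-keyring --location=us --purpose=encryption --rotation-period=90d --next-rotation-time=2025-01-01T00:00:00Z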
You are working for a large finance company as a Solutions Architect. As per the FINRA compliance regulation, the data stored in GCS buckets must be retained for 5 years. How can you ensure that the current objects or any objects uploaded to the buckets are not deleted for at least 5 years?
A. Apply Lifecycle rules to buckets
B. Apply retention policy to buckets
C. Apply IAM policy with appropriate roles
D. Enable versioning on buckets
Answer: B
When you set a retention policy on a bucket, you cannot delete any object in that bucket until it reaches the age specified in the retention policy https://cloud.google.com/storage/docs/using-bucket-lock
Option A is incorrect because lifecycle rules automate actions such as changing storage class or deleting objects; they do not prevent deletion
Option C is incorrect because an IAM policy is used to control access management, not to enforce retention
Option D is incorrect because versioning is used to keep multiple versions of a single object, not to prevent deletion for a fixed period
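A short gsutil sketch (the bucket name is a placeholder; locking the policy is optional and irreversible):
# Require every object to be at least 5 years old before it can be deleted
gsutil retention set 5y gs://finance-records-bucket
# Optionally lock the policy permanently so it cannot be reduced or removed
gsutil retention lock gs://finance-records-bucket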
You are working for a Company as a Consultant which has recently acquired a Software Company which has their entire application on Google Cloud Platform. There is a new requirement that the application in your GCP VPC requires RFC 1918 connectivity to VPC in the acquired GCP account. How will you create connectivity?
A. Shared VPC
B. Cloud VPN
C. VPC Peering
D. Direct Peering
Answer: C
Option A is incorrect because Shared VPC is used to share a VPC from a host project with service projects within the same organization
Option B is incorrect because Cloud VPN sends traffic over the public internet and adds management overhead, whereas VPC peering keeps the traffic on Google's network
Option C is correct because VPC peering is always preferred when you want to connect two VPCs within Google Cloud, since the traffic stays inside Google's private network
Option D is incorrect because direct peering is a connection between an on-premises network and Google's edge network
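For illustration (project and network names are placeholders; peering must be created from both sides before it becomes active):
# From your project
gcloud compute networks peerings create peer-to-acquired --network=my-vpc --peer-project=acquired-project --peer-network=acquired-vpc
# From the acquired company's project
gcloud compute networks peerings create peer-to-parent --project=acquired-project --network=acquired-vpc --peer-project=my-project --peer-network=my-vpc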
You have been hired as a Consultant for a company that is planning the migration of their enterprise application to GCP. The company holds sensitive data and, as per its regulatory compliance, must generate its own encryption keys and manage them on-premises. The CTO has asked you to list the Google Cloud products that support customer-supplied encryption keys (CSEK) before they perform the migration. Please select the services which support CSEK.
A. All Google Cloud Products support CSEK
B. Compute Engine, Cloud Storage and Cloud SQL
C. Compute Engine and Cloud Storage
D. BigQuery, Cloud SQL and Datastore
Answer: C
You have been hired as a DevSecOps Engineer by a finance firm. They are developing a new application that will be used for financial transactions, thus it needs to be PCI compliant, and it will be deployed on Compute Engine. As per the security team, the infrastructure on which the application runs must be hardened with security controls to protect against rootkits and bootkits. Which Compute Engine option will you use?
A. Enable encryption on the Boot disk
B. Use Sole-Tenant VM
C. Use Shielded VM
D. Use Preemptible VM
CSEK is a feature of Google Cloud Storage and Google Compute Engine https://cloud.google.com/security/encryption-at-rest/customer-supplied-encryption-keys Options A, B & D are incorrect because only Cloud Storage and Compute Engine support CSEK.
You have been hired as Consultant by an enterprise. The company is running their production workload on Google Cloud. One of your clients requested a penetration testing report for your application and your CTO has decided to hire a Security specialist to perform penetration testing on your application, what is the procedure to conduct penetration testing on Google Cloud?
A. You need to raise a support ticket with Google cloud for permission to perform Penetration testing
B. Google Cloud does not allow to perform any kind of penetration testing
C. You do not have to notify Google when conducting a penetration test on your application
D. Raise a support ticket with Google to perform penetration testing on your behalf
Answer: C
Shielded VM is a Compute Engine option that comes with a set of security controls that help protect against rootkits and bootkits. For an application that requires a hardened OS, Shielded VM is a good option https://cloud.google.com/shielded-vm
Option A is incorrect because enabling encryption on the boot disk only encrypts the data at rest; it does not protect against rootkits and bootkits
Option B is incorrect because a sole-tenant node only provides a dedicated physical server for running your Compute Engine instances; it adds no boot-level security controls
Option D is incorrect because preemptible instances are short-lived instances that can run for a maximum of 24 hours and provide large cost savings compared to standard instances; they add no security controls
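An illustrative sketch of launching a Shielded VM (zone and image family are placeholder assumptions; the chosen image must support Shielded VM features):
gcloud compute instances create pci-app-vm --zone=us-central1-a --image-family=ubuntu-2004-lts --image-project=ubuntu-os-cloud --shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring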
For this question, refer to the TerramEarth case study: https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2 Initially, TerramEarth will be testing BigQuery service as the preferred replacement of their On-Premise data warehouse system. During the testing phase, they only want access to the most recent data on BigQuery. Any data older than 15 days must be deleted to optimize storage use. How will you fulfill this requirement?
A. Set the default table expiration to 15 days
B. Create a script using bq that removes records older than 15 days
C. Take advantage of BigQuery long-term storage
D. Make the tables Date-partitioned, and configure the partition expiration at 15 days
Answer: C
You can perform penetration testing on your application without informing Google Cloud, but you must comply with Google Cloud's Acceptable Use Policy and Terms of Service https://support.google.com/cloud/answer/6262505?hl=en.
Option A is incorrect because there is no need to raise a support ticket to conduct penetration testing on your own application
Option B is incorrect because you can perform penetration testing on GCP
Option D is incorrect because Google does not perform penetration testing on your behalf; you can perform it yourself without notifying Google Cloud
You have been working as a Solutions Architect for a company that has recently developed an online mobile game, aimed mostly at children ages 10 to 14, which will be deployed on Google Cloud in the us-west1 region. The game will collect players' personal information such as name, address, age, and hobbies. With which regulation would you advise them to comply?
A. HIPAA
B. PCI-DSS
C. GDPR
D. COPPA
Answer: D
As TerramEarth will only be testing BigQuery initially, they don't want data older than 15 days. You can partition the table by date and set the partition expiration to 15 days, which automatically deletes partitions older than 15 days and leaves only the most recent data. https://cloud.google.com/bigquery/docs/best-practices-storage
Option B is incorrect because there is no need to write a script; this can be done with BigQuery's built-in partition expiration feature
Option C is incorrect because long-term storage applies when a table has not been edited for 90 consecutive days; after that the storage price drops by about 50%, similar to Nearline pricing, but no data is deleted
Option A is incorrect because the default table expiration deletes entire tables a fixed time after they are created; it does not remove data older than 15 days from within a table. Please refer to https://cloud.google.com/bigquery/docs/managing-tables for more information
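As a hedged sketch of the partition-expiration approach described above (dataset, table, and schema file are placeholders; 15 days = 1,296,000 seconds):
# New date-partitioned table whose partitions expire after 15 days
bq mk --table --time_partitioning_type=DAY --time_partitioning_expiration=1296000 mydataset.telemetry ./schema.json
# Or update an existing partitioned table
bq update --time_partitioning_expiration=1296000 mydataset.telemetry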
You have been hired as a Solutions Architect by a large enterprise. They are planning to migrate an application running in the AWS cloud to GCP. During the initial phase of the migration, there is a requirement to create RFC 1918 connectivity with a minimum of 5 Gbps of bandwidth between the AWS VPC and the GCP VPC for secure migration. Which service will you use on the GCP side with the least management work?
A. Use a Cloud HA VPN
B. Use an OpenSwan VPN solution on the Compute engine with more CPU
C. Use VPC Peering
D. Use Cloud Partner Interconnect
Answer: D
COPPA is a U.S. regulation related to protecting the privacy of children under the age of 13 https://www.ftc.gov/tips-advice/business-center/guidance/complying-coppa-frequently-asked-questions-0
Option A is incorrect because HIPAA is related to protecting the privacy of healthcare data in the U.S.
Option B is incorrect because PCI-DSS is the Payment Card Industry Data Security Standard, which protects credit card information collected for business
Option C is incorrect because GDPR (General Data Protection Regulation) is European regulatory compliance used to protect personally identifiable information collected for business purposes within Europe
You have been hired as a Cloud Consultant for a company that is planning the migration of their entire Application and data from AWS cloud to Google Cloud Platform. During the initial phase of migration, there is a requirement to migrate data from AWS S3 buckets to GCS buckets. One of the key requirements is that any new data which gets added to S3 bucket should be copied to GCS bucket on a daily basis until the migration is completed. How will you accomplish this task?
A. Use Transfer Appliance
B. Create a Linux Compute VM on GCP and schedule a cron job which will copy data on a daily basis with proper authentication
C. Use gsutil cp cmd and run on a daily basis
D. Use GCP Storage Transfer Service
Answer: D
For multi-cloud connectivity you can use Cloud Interconnect, specifically Partner Interconnect, with partners such as Megaport https://www.megaport.com/services/google-cloud-partner-interconnect/ or Equinix https://cloud.google.com/architecture/connection-google-cloud-vpcs-to-aws-equinix-network-edge
Option A is incorrect since HA VPN cannot reliably provide the required 5 Gbps; a single VPN tunnel supports bandwidth of up to about 3 Gbps.
Option B could also be used to connect the two networks, but the requirement is a managed service with the least management work.
Option C is incorrect because VPC peering is used to connect VPCs within Google Cloud.
You are working for a company that is planning to develop a new application which will be deployed in the Frankfurt region in Europe. The company offers an online vehicle insurance service that collects user data like name, address, and vehicle-related details. Which regulation must your company comply with?
A. SOX
B. HIPAA
C. COPPA
D. GDPR
Answer: D
GCP Storage Transfer Service offers quick transfer of data from online sources like AWS S3 and Azure Blob Storage to Cloud Storage in one simple process. You can also create a schedule in the transfer service to sync data on a daily basis https://cloud.google.com/storage-transfer/docs/create-manage-transfer-console#amazon-s3
Option A is incorrect because Transfer Appliance is used to ship large amounts of on-premises data to Google Cloud, not to sync between clouds
Option B is incorrect because a cron job on a VM requires you to manage the VM and its authentication, whereas the managed Storage Transfer Service does that work for you.
Option C is incorrect because, although you could script gsutil, Storage Transfer Service is the recommended option and does all the work, including scheduling, in a single managed process.
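A hedged sketch of creating such a transfer job from the command line (bucket names are placeholders; AWS credentials and the daily schedule can be supplied through additional flags or configured in the Cloud Console):
# Creates a Storage Transfer Service job from the S3 bucket to the GCS bucket
gcloud transfer jobs create s3://legacy-s3-bucket gs://target-gcs-bucket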
For this question, refer to the MountKirk Games case study: https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames-rev2 As per the Technical requirements of MountKirk Games which Compute Option is best suitable for them?
A. A Single Compute instance with sustained discounts and instance property as Preemptible
B. A Single Compute instance with sustained discounts and instance property as non-Preemptible
C. A Managed Instance group with sustained discounts and instance property as Preemptible
D. GKE
Answer: D
GDPR(General Data Protection Regulation) is regulatory compliance in Europe which is used to protect any personally identifiable information collected for business purpose within the Europe region
Option A is incorrect because SOX compliance is used for financial auditing purpose
Option B is incorrect because HIPAA is related to protecting the privacy of healthcare data in U.S
Option C is incorrect because COPPA is related to protecting the privacy of children below 13 age in the U.S
Your team has developed an application that will be deployed on Google Kubernetes Engine. There is a requirement to persist the application data used by the Kubernetes pods. How will you persist the data beyond the lifetime of the pods?
A. Ingress
B. Deployments
C. ReplicaSets
D. PersistentVolumes
Answer: D
Google Kubernetes Engine best matches MountKirk Games' technical requirements: it provides a managed, autoscaling container platform for the game backend.
Option A is incorrect because MountKirk wants a scalable environment, so a single Compute Engine instance will not fulfill the requirement (and a preemptible instance is unsuitable for production)
Option B is incorrect because MountKirk wants a scalable environment, so a single Compute Engine instance will not fulfill the requirement
Option C is incorrect because preemptible VMs are not recommended for production workloads
You are working as a Solutions Architect for an enterprise. Your company recently developed a web app that will be deployed on App Engine. Following IAM best practices, which roles will you grant when the team lead, who is responsible for auditing App Engine code in production, requires only read-only access to the deployed source code, and the developers need to be able to release code into production?
A. roles/appengine.appAdmin, roles/appengine.appViewer
B. roles/appengine.appAdmin, roles/appengine.codeViewer
C. roles/appengine.serviceAdmin, roles/appengine.deployer
D. roles/appengine.codeViewer, roles/appengine.deployer
Answer: D
A PersistentVolume (PV) is a cluster-wide storage resource used to store data. A PersistentVolume has a lifecycle independent of any pod that uses it. When you create a PersistentVolume in GKE, a Compute Engine persistent disk is provisioned https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Option A is incorrect because an Ingress is used to expose a Kubernetes service to the public internet, not to persist data
Option B is incorrect because a Deployment is a Kubernetes object used to run multiple replicas of your pod and automatically replace any failed or unresponsive pod
Option C is incorrect because a ReplicaSet is used to manage the number of pod replicas running
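A minimal PersistentVolumeClaim sketch for GKE (the claim name and size are placeholder assumptions; GKE's default StorageClass dynamically provisions a Compute Engine persistent disk to back it):
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi
EOF
A pod would then reference the claim under spec.volumes with persistentVolumeClaim.claimName: app-data.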
You have been hired as a Cloud Consultant by a company that is already using Google Cloud for their production and staging workloads in separate GCP projects within an organization. Recently they came across a situation where an application running on a Compute Engine instance in the staging project requires read access to a private GCS bucket in the production project. According to IAM best practices, how will you grant access?
A. Create a service account in production with access keys, grant Storage object viewer role and configure the application in to use access keys
B. Create a service account in a staging project and attach the service account to the compute engine where the application is running. In production project grant staging projects service account Storage object viewer role in GCS bucket permission section.
C. Create a service account in a staging project and attach the service account to the compute engine where the application is running. In production project grant staging projects service account Storage object viewer role in project IAM section.
D. Add allUsers as a member in the permission section of GCS bucket in production project and grant Storage object viewer role.
Answer: D
The team lead is responsible for auditing App Engine code in production, so he only needs roles/appengine.codeViewer to perform his duties; this role grants read-only access to deployed source code and application configurations. Developers can be granted roles/appengine.deployer, which grants read-only access to application configuration and settings and allows them to deploy a new version of the application https://cloud.google.com/appengine/docs/admin-api/access-control#roles
Option A is incorrect because it grants read/write/modify permissions to the team lead, and appViewer would not let the developers create a new version of the application
Option B is incorrect because it would grant read/write/modify permissions to the team lead
Option C is incorrect because it would not allow the team lead to read the deployed source code
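A hedged sketch of granting those roles (the project ID and member emails are placeholders):
gcloud projects add-iam-policy-binding my-project --member=user:team-lead@example.com --role=roles/appengine.codeViewer
gcloud projects add-iam-policy-binding my-project --member=user:developer@example.com --role=roles/appengine.deployer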
You have been hired as a Security Consultant for a financial company. The company holds sensitive data, like customer account numbers and credit card information, in a GCS bucket. The CTO wants additional security to mitigate exfiltration of data by a terminated employee or by an attacker who has stolen identities. How will you mitigate this security risk by allowing access only from authorized projects?
A. Cloud Armor
B. Threat Detection
C. VPC service controls
D. DLP
Answer: B. As per IAM best practices, you should add the staging project's service account in the GCS bucket's permissions section and grant it the Storage Object Viewer role to provide cross-project access https://cloud.google.com/dataprep/docs/concepts/gcs-buckets
Option A is a possible option, but exporting service account keys and using them directly on the Compute Engine instance is not a good security practice.
Option C is incorrect because assigning the role in the IAM section of the project gives access to all buckets in that project, not just the particular bucket required.
Option D is incorrect because adding allUsers makes the bucket public and anyone can access it.
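As an illustrative sketch (the service account email and bucket name are placeholders), the cross-project grant could look like:
gsutil iam ch serviceAccount:app-sa@staging-project.iam.gserviceaccount.com:roles/storage.objectViewer gs://prod-private-bucket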
One of your clients is using customer-managed encryption. Which of the following statements are true when you apply a customer-managed encryption key to an object? [Select any 3]
A. the encryption key is used to encrypt the object’s data
B. the encryption key is used to encrypt the object’s CRC32C checksum
C. the encryption key is used to encrypt the object’s name
D. the encryption key is used to encrypt the object’s MD5 hash
Answer: C. VPC Service Controls allow you to lock down GCP resources. With VPC Service Controls you can define which projects can call your GCP APIs, allowing you to whitelist the projects you want to grant access to. This protects sensitive data from attackers or stolen identities. The most common use cases for VPC Service Controls are: mitigating threats such as data exfiltration, isolating parts of the environment by trust level, and securing access to multi-tenant services https://cloud.google.com/vpc-service-controls
Option A is incorrect because Cloud Armor is used to mitigate DDoS attacks and provides a WAF
Option B is incorrect because threat detection is used to detect threats like brute-force attacks from logs and report them to the Security Command Center
Option D is incorrect because Cloud DLP is used to detect and de-identify sensitive information like credit card numbers or any PII data
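For illustration only (the perimeter name, project number, and access policy ID are placeholder assumptions), a service perimeter protecting Cloud Storage could be sketched as:
gcloud access-context-manager perimeters create finance_perimeter --title="Finance data perimeter" --resources=projects/123456789012 --restricted-services=storage.googleapis.com --policy=ACCESS_POLICY_ID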
You have a long-running job that one of your employees has permissions to start. You don’t want that job to be terminated when the employee who last started that job leaves the company. What would be the best way to address the concern in this scenario?
A. Create many IAM users and give them the permission.
B. Create a service account. Grant the Service Account User permission to the employees who need to start the job. Also, grant "Compute Instance Admin" permission to that service account.
C. Give full permissions to the Service Account and give permission to the employee to access this service account.
D. Use Google-managed service accounts in this scenario.
Answer:
Options A, B, and D are the CORRECT choices because, when you apply a customer-managed encryption key to an object, the encryption key is used to encrypt the object's data, its CRC32C checksum, and its MD5 hash. The remaining metadata for the object, including the object's name, is encrypted using standard server-side keys. This allows you to always read and update metadata, as well as list and delete objects, provided you have permission to do so. https://cloud.google.com/storage/docs/encryption/customer-managed-keys
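As a hedged sketch (project, key ring, key, and bucket names are placeholders), a default customer-managed key can be attached to a bucket so that newly written objects are encrypted with it:
gsutil kms encryption -k projects/my-project/locations/us/keyRings/app-keyring/cryptoKeys/app-key gs://secure-bucket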
A Global Media company is configuring a Global load balancer for non-http(s) traffic. They are looking for a service with SSL offloading and as a Cloud Architect what would be your load balancing choice?
A. HTTPS load balancing
B. SSL proxy Load balancing.
C. TCP proxy Load balancing for all non-http(s) traffic
D. Network TCP/UDP load balancing
Answer:
Option B is the CORRECT choice because creating a service account for each service, with only the permissions required for that service, is the best practice; even if the employee leaves the organization, other employees can still use the service account to run the job.
Option A is INCORRECT because a service account, not a collection of individual IAM users, is the right way to give permissions to an application or VM. A service account is a special type of Google account that belongs to your application or a virtual machine (VM) instead of to an individual end user; your application assumes the identity of the service account to call Google APIs so that users aren't directly involved. With admin access, the employees can create Compute Engine instances that run the service account, connect to them, and use the service account to start the job. In a nutshell, admin access empowers them to effectively run code as the service account used to run these instances and indirectly gain access to all the resources to which the service account has access.
Option C is INCORRECT because granting the service account only the minimum set of permissions required to achieve its goal is the best practice.
Option D is INCORRECT because Google-managed service accounts are created and owned by Google. These accounts represent different Google services, and each account is automatically granted IAM roles to access your GCP project. This kind of service account is designed specifically to run internal Google processes on your behalf and is not listed in the Service Accounts section of the GCP Console. More reading at https://cloud.google.com/iam/docs/understanding-service-accounts
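A rough sketch of the approach in option B (project ID, service account name, user email, and the specific role granted to the service account are placeholder assumptions):
# Create the service account that owns the long-running job
gcloud iam service-accounts create job-runner --display-name="Long-running job runner"
# Give the service account the permissions the job needs (role shown is an assumption)
gcloud projects add-iam-policy-binding my-project --member=serviceAccount:job-runner@my-project.iam.gserviceaccount.com --role=roles/compute.instanceAdmin.v1
# Let a specific employee act as the service account to start the job
gcloud iam service-accounts add-iam-policy-binding job-runner@my-project.iam.gserviceaccount.com --member=user:employee@example.com --role=roles/iam.serviceAccountUser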
Which of the following are the best practices recommended by Google Cloud when dealing with service Accounts. Select 3 relevant options
A. Grant the service account full set of permissions
B. Do not delete service accounts that are in use by running instances on Google App Engine or Google Compute Engine
C. Grant serviceAccountUser role to all the users in the organization.
D. Use the display name of a service account to keep track of the service accounts. When you create a service account, populate its display name with the purpose of the service account.
E. Create service accounts for each service with only the permissions required for that service.
Answer:
Option B is the CORRECT choice because SSL Proxy load balancing supports SSL offloading, is global, and handles non-HTTP(S) traffic.
Option A is INCORRECT because the traffic is non-HTTP(S).
Option C is INCORRECT because TCP Proxy can handle non-HTTP(S) traffic but does not offer SSL offloading.
Option D is INCORRECT because Network TCP/UDP load balancing is regional and does not handle SSL offloading. Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load-balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL Proxy is intended for non-HTTP(S) traffic.
A large data analysis company uses BigQuery, Bigtable, Dataproc, and Cloud Storage. They use a hybrid architecture involving on-premises and Google Cloud, with Cloud VPN used to connect to the Google Cloud Platform. One of the main challenges for the organization is mitigating data exfiltration risks stemming from stolen identities, IAM policy misconfigurations, malicious insiders, and compromised virtual machines. Which Google Cloud service can they use to address the challenge?
A. Shared VPC
B. Cloud Armour
C. VPC Service Controls
D. Resource Manager
Answer:
Option B, D & E are the CORRECT choices.
Option A is INCORRECT because always grant the service account only the minimum set of permissions required to achieve their goal.
Option C is INCORRECT because always restrict who can act as service accounts. Users who are Service Account Users for a service account can indirectly access all the resources the service account has access to. Therefore, be cautious when granting the serviceAccountUser role to a user.
A power generation company is looking to use the Google cloud platform to monitor a power station. They have installed several IoT sensors in the power station like temperature sensors, smoke detectors, motion detectors, etc. Sensor data will be continuously streamed to the cloud. Those data need to be handled by different components for real-time monitoring and alerts, analysis, and performance improvement. What Google Cloud Architecture would serve their purpose?
A. Cloud IoT Core receives data from IoT devices and redirects the requests to a Cloud Pub/Sub topic. After Pub/Sub, data is retrieved by a streaming job running in Cloud Dataflow that transforms the data and sends it to BigQuery for analysis.
B. Send IoT devices data to Cloud Storage, load data from cloud storage to Big Query.
C. Cloud IoT core receives data from IoT sensors, then sends the data to cloud storage, transform the data using Cloud Dataflow and send the data to BigQuery for Analysis.
D. Cloud IoT core receives the data from IoT devices, Cloud IoT core transforms and redirects the request to Pub/Sub, use data proc to transform the data and send it to BigQuery for Analysis.
Answer:
Option C is CORRECT because VPC Service Controls create a security perimeter around data stored in API-based GCP services such as Google Cloud Storage, BigQuery, and Bigtable. This helps mitigate data exfiltration risks stemming from stolen identities, IAM policy misconfigurations, malicious insiders, and compromised virtual machines.
Option A is INCORRECT because Shared VPC allows an organization to connect resources from multiple projects to a common VPC network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks, and eligible resources from service projects can use subnets in the Shared VPC network. Here the challenge is to mitigate data exfiltration, and VPC Service Controls is the right choice.
Option B is INCORRECT because Cloud Armor is used for delivering defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google's global infrastructure and security systems.
Option D is INCORRECT because Resource Manager enables you to programmatically manage resource containers. Google Cloud Platform provides resource containers such as Organizations, Folders, and Projects that allow you to group and hierarchically organize other Cloud Platform resources. This hierarchical organization lets you easily manage common aspects of your resources such as access control and configuration settings.
Security benefits of VPC Service Controls: VPC Service Controls helps mitigate the following security risks without sacrificing the performance advantages of direct private access to GCP resources.
Access from unauthorized networks using stolen credentials: By allowing private access only from authorized VPC networks, VPC Service Controls protects against theft of OAuth credentials or service account credentials.
Data exfiltration by malicious insiders or compromised code: VPC Service Controls complements network egress controls by preventing clients within those networks from accessing the resources of Google-managed services outside the perimeter. It also prevents reading data from or copying data to a resource outside the perimeter using service operations such as copying to a public Cloud Storage bucket using the gsutil cp command or to a permanent external BigQuery table using the bq mk command. The restricted VIPs feature can be used to prevent access from a trusted network to storage services that are not integrated with VPC Service Controls.
Public exposure of private data caused by misconfigured Cloud IAM policies: VPC Service Controls provides an additional layer of security by denying access from unauthorized networks, even if the data is exposed by misconfigured Cloud IAM policies.
By assigning the Access Context Manager Policy Admin role in Cloud IAM, VPC Service Controls can be configured by a user who is not the Cloud IAM policy administrator. VPC Service Controls is configured for your GCP organization to create a broad, uniform policy that applies consistently to all protected resources within the perimeter. You retain the flexibility to process, transform, and copy data within the perimeter, and the security controls automatically apply to all new resources created within a perimeter. Read more about VPC Service Controls here: https://cloud.google.com/vpc-service-controls/docs/overview
A service perimeter creates a security boundary around GCP resources. You can configure a service perimeter to control communications from virtual machines (VMs) to a GCP service (API), and between GCP services. A service perimeter allows free communication within the perimeter but, by default, blocks all communication across the perimeter. For example: a VM within a Virtual Private Cloud (VPC) network that is part of a service perimeter can read from or write to a Cloud Storage bucket in the same perimeter, but any attempt to access the bucket from VPC networks that are not inside the perimeter is denied. A copy operation between two Cloud Storage buckets will succeed if both buckets are in the same service perimeter, but will fail if one of the buckets is outside the perimeter. A VM within a VPC network that is part of a service perimeter can privately access any Cloud Storage bucket in the same perimeter, but it will be denied access to Cloud Storage buckets that are outside the perimeter.
Your company just finished a rapid lift and shift to Google Compute Engine for your compute needs. You have another 9 months to design and deploy a more cloud-native solution. The business team is looking for services with lesser responsibility and easy manageability. Please select the order of services with lesser responsibility to more responsibility
A. GKE >Google App Engine Standard Environment >Cloud Functions >Compute Engine with containers >Compute Engine
B. Cloud Functions >Google App Engine Standard Environment>GKE >Compute Engine with containers >Compute Engine
C. Cloud Functions >GKE >Google App Engine Standard Environment >Compute Engine >Compute Engine with containers
D. Google App Engine Standard Environment >Cloud Functions>Compute Engine with containers>GKE>Compute Engine
Answer: A
Option A is CORRECT because Cloud IoT Core can accept data from IoT devices, and Cloud Pub/Sub acts as the connector service that sends the data to Cloud Dataflow for transformation. Dataflow transforms the data and sends it to BigQuery for analysis.
Option B is INCORRECT because Cloud Storage isn't the right choice for streaming data; Cloud Pub/Sub is the best choice.
Option C is INCORRECT because Cloud IoT Core can stream the data directly to Cloud Pub/Sub (use Cloud Storage for batch uploads).
Option D is INCORRECT because Dataproc is a fully managed cloud service for running Apache Spark and Apache Hadoop clusters. Sources: https://cloud.google.com/community/tutorials/cloud-iot-rtdp
One of your customers wants to redact sensitive data, such as credit card numbers and social security numbers, that appears in application logs. Please select the service that fulfils this requirement.
A. Cloud Data Loss Prevention
B. Cloud Secure
C. VPC Service control
D. Cloud Armour
Answer: B is the CORRECT choice. Cloud Functions carries the least operational responsibility, followed by App Engine, then GKE, then Compute Engine with containers, and finally Compute Engine with the most.
Your organization is developing an event-driven application in which cloud functions will access Cloud SQL for managing data. As per the security best practices, you want to store the Cloud SQL credentials securely. Where will you store the Cloud SQL credentials?
A. In the Cloud function code
B. In the Cloud function environment variable
C. In Cloud Secret Manager
D. In Cloud KMS
Answer: A
Option A is the correct choice because Cloud DLP helps you better understand and manage sensitive data. It provides fast, scalable classification and redaction for sensitive data elements like credit card numbers, names, social security numbers, US and selected international identifier numbers, phone numbers, and GCP credentials.
Option B is incorrect because there is no GCP service called Cloud Secure.
Option C is incorrect because VPC Service Controls allow users to define a security perimeter around Google Cloud Platform resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to constrain data within a VPC and help mitigate data exfiltration risk, but it doesn't help with data redaction.
Option D is incorrect because Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google's global infrastructure and security systems, but it doesn't help with data redaction. Read more about it here: https://cloud.google.com/dlp/
You have been hired as a DevSecOps engineer by a finance company. They want to upload files from an on-premises server to Google Cloud Storage, but as per their security policy the files must be encrypted on Google Cloud Storage using customer-supplied encryption keys. How will you fulfill this requirement?
A. Use –encryption_key flag with gsutil command to supply encryption key while uploading files
B. Supply the encryption key in Cloud KMS and use that key for encryption
C. Add the encryption_key option in the boto configuration file and use gsutil command to upload files
D. Configure the encryption key in gcloud configuration and use gsutil to upload files
Answer: C
Option C is correct. You should store Cloud SQL credentials in Secret Manager, where you can rotate them, create versions, and manage access to the credentials https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets
Option A is incorrect because storing the credentials in the code itself makes them accessible to anyone with access to the Cloud Function and also makes it difficult to rotate the credentials
Option B is incorrect because storing the credentials in an environment variable makes them accessible to anyone with access to the Cloud Function's configuration
Option D is incorrect because Cloud KMS is used to manage keys for encryption and decryption, not to store application credentials
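A brief sketch with placeholder names (the secret value shown is obviously an assumption):
# Create a secret and add the Cloud SQL password as a version
gcloud secrets create cloudsql-credentials --replication-policy=automatic
echo -n "my-db-password" | gcloud secrets versions add cloudsql-credentials --data-file=-
# The Cloud Function's service account can then read it at runtime
gcloud secrets versions access latest --secret=cloudsql-credentials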
To set up a virtual private network between your office network and Google Cloud Platform and have the routes automatically updated when the network topology changes, what is the minimal number of each type of component you need to implement?
A. 2 Cloud VPN Gateways and 1 Peer Gateway
B. 1 Cloud VPN Gateway, 1 Peer Gateway, and 1 Cloud Router
C. 2 Peer Gateways and 1 Cloud Router
D. 2 Cloud VPN Gateways and 1 Cloud Router
Answer: C. To use customer-supplied encryption keys with Google Cloud Storage while uploading files, you must add the encryption_key option to the [GSUtil] section of the boto configuration file, which is where all gsutil command-line configuration lives https://cloud.google.com/storage/docs/gsutil/addlhelp/UsingEncryptionKeys
Option A is incorrect because there is no such gsutil flag; you need to add the encryption_key option to the [GSUtil] section of the boto configuration file
Option B is incorrect because, as per the security policy, they don't want to store keys in Google Cloud, so CMEK is not an option.
Option D is incorrect because encryption_key must be added to the boto file, not to the gcloud configuration
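A hedged sketch of the boto-file approach (the key stays on-premises; file and bucket names are placeholders):
# Generate a 256-bit key and keep it on-premises
openssl rand -base64 32
# Add it to the [GSUtil] section of ~/.boto, e.g.:
#   [GSUtil]
#   encryption_key = <base64-encoded-256-bit-key>
# Uploads are then encrypted with the customer-supplied key
gsutil cp sensitive-report.csv gs://finance-upload-bucket/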
A Digital Media company has recently moved its infrastructure from On-premise to Google Cloud. They have deployed several instances under a Global HTTPS load balancer. A few days ago an Application and Infrastructure were subjected to DDOS attacks. Hence they are looking for a service that would provide a defence mechanism against the DDOS attacks. Please select the relevant service from below.
A. Cloud Armor
B. Cloud-Identity Aware Proxy
C. GCP Firewalls
D. IAM policies
Correct answer: B. The question describes a topology with dynamic routing. The minimal number of each type of component needed for dynamic routing is: 1 Cloud VPN gateway (on the GCP network side), 1 peer gateway (the on-premises VPN gateway speaking BGP), and 1 Cloud Router.
Your infrastructure includes two 100-TB enterprise file servers. You need to perform a one-way, one-time migration of this data to the Google Cloud securely. Only users in Germany will access this data. You want to create the most cost-effective solution. What should you do?
A. Use Transfer Appliance to transfer the offsite backup files to a Cloud Storage - Region bucket as a final destination.
B. Use Transfer Appliance to transfer the offsite backup files to a Cloud Storage - Multi-Region bucket as a final destination.
C. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage - Region bucket as a final destination.
D. Use Storage Transfer Service to transfer the offsite backup files to a Cloud Storage - Multi-Region bucket as a final destination.
Correct Answer - A
Option A is CORRECT because Cloud Armor delivers defence at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks using Google’s global infrastructure and security systems.
Option B is INCORRECT because, Cloud-Identity Aware Proxy lets you establish a central authorization layer for applications accessed by HTTPS, so you can use an application-level access control model instead of relying on network-level firewalls.
Option C is INCORRECT because GCP firewall rules don't apply to HTTP(S) load balancers, while Cloud Armor is delivered at the edge of Google's network, helping to block attacks close to their source.
Option D is INCORRECT IAM policies don’t help in mitigating DDOS attacks. Read more about Cloud Armor: https://cloud.google.com/blog/products/gcp/getting-to-know-cloud-armor-defense-at-scale-for-internet-facing-services
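An illustrative sketch of attaching a Cloud Armor policy to a load-balanced backend (policy name, IP range, and backend service name are placeholder assumptions):
gcloud compute security-policies create edge-policy --description="Edge protection"
gcloud compute security-policies rules create 1000 --security-policy=edge-policy --src-ip-ranges=198.51.100.0/24 --action=deny-403
gcloud compute backend-services update web-backend-service --security-policy=edge-policy --global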
You are working as a Google Cloud Architect for a large enterprise. They are using the GKE cluster for their production workload. There is a requirement to expose an existing deployment to the public internet using a service type of load balancer. Which command will you use to create a service type of load balancer?
A. kubectl expose deployment demo --port=80 --target-port=80 --name=example-service --type=LoadBalancer
B. kubectl expose deployment demo --type=LoadBalancer --expose 80
C. kubectl expose service demo --port=443 --target-port=80 --name=new-application
D. kubectl expose deployment demo --type=NodePort --name=example-service
Correct Answer - A
Option A is correct because you are performing a one-time (rather than an ongoing series of) data transfer from on-premises to Google Cloud Platform for users in a single region (Germany). Using a regional storage bucket will reduce cost and also conform to the requirements. Options B, C, and D are incorrect because you should not use a multi-region storage bucket for users in a single region (B, D), and Storage Transfer Service does not work for data stored on on-premises file servers (C, D). Reference: GCS regional storage for single-location access: https://cloud.google.com/storage/docs/storage-classes Google Cloud Transfer Appliance: https://cloud.google.com/transfer-appliance/
Your company plans to migrate a multi-petabyte data set to the cloud. The data set must be available 24hrs a day. Your business analysts have experience only with using an SQL interface. How should you store the data to optimize it for ease of analysis?
A. Load data into Google BigQuery.
B. Insert data into Google Cloud SQL.
C. Put flat files into Google Cloud Storage.
D. Stream data into Google Cloud Datastore.
Correct Answer - A
Option A is correct. With this command we can create a service of type LoadBalancer and expose the port on which our application is served. https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
Option B is incorrect because the --name flag is not optional and is missing in Option B.
Option C is incorrect because it will create a service of type ClusterIP.
Option D is incorrect because we need a service of type LoadBalancer that we can expose to the public internet.
One of your clients is storing highly sensitive data on Google Cloud Storage. They strictly adhere to their compliance requirements and do not want their keys stored in the cloud. Please suggest the right choice of encryption.
A. Google recommends the usage of Cloud External Key Manager (Cloud EKM)
B. All objects on Google Storage are encrypted by default hence additional encryption isn’t required
C. Give your Cloud Storage service account access to an encryption key, that service account encrypts
D. Google recommends the usage of cloud KMS for storing CMEK.
Correct Answer - A
Option A is correct. BigQuery is the only one of these Google products that supports an SQL interface and can handle petabyte-scale data.
For this question, refer to the TerramEarth case study. TerramEarth’s 20 million vehicles are scattered around the world. Based on the vehicle’s location its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US. Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100 K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?
A. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.
B. Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
C. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job.
D. Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
Correct Answer - A
Option A is the correct choice because the client doesn’t want to store the encryption keys on Google Cloud. With Cloud EKM, you can use keys that you manage within a supported external key management partner to protect data within Google Cloud.
Option B is incorrect because, even though All objects on Google Storage are encrypted by default, the client is storing sensitive data and hence default encryption isn’t the best option. https://cloud.google.com/security/encryption-at-rest/
Option C is incorrect because giving your Cloud Storage service account access to an encryption key that it uses to encrypt objects describes customer-managed encryption keys; those keys are stored in Google Cloud, hence this is not the correct choice here.
Option D is incorrect because, in a customer-managed encryption key, your encryption keys are stored within Cloud KMS. The client doesn’t want to store keys on the Cloud. Reference: https://cloud.google.com/kms/docs/ekm
You have an application server running on Compute Engine in the europe-west1-d zone. You need to ensure high availability and replicate the server to the europe-west2-c zone using the fewest steps possible. What should you do to achieve the requirement?
A. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west2-c zone. Create a new VM with that disk.
B. Create a snapshot from the disk. Create a disk from the snapshot in the europe-west1-d zone and then move the disk to europe-west2-c. Create a new VM with that disk.
C. Use “gcloud” to copy the disk to the europe-west2-c zone. Create a new VM with that disk.
D. Use “gcloud compute instances move” with parameter “–destination-zone europe-west2-c” to move the instance to the new zone.
Correct Answer - A
A (Correct answer) - Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job. Since the raw data are saved based on each vehicle's location all over the world, they are most likely scattered across many different regions and eventually need to move to a centralized location for final processing. Preprocessing and compressing the raw data in each location reduces its size and therefore saves between-region data egress cost. Dataproc is a region-specific resource, and since you want to run this job on all the data and you or your group are probably the only consumers of it, moving the data into a regional bucket in (or closest to) the Dataproc cluster's region for final analysis is the most cost-effective option. Use a regional location to optimize latency, availability, and network bandwidth for data consumers grouped in the same region. Use a multi-regional location when you want to serve content to data consumers outside of the Google network and distributed across large geographic areas, or to store frequently accessed data that needs to be geo-redundant.
B - Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job. Since the raw data are saved based on the vehicles' locations all over the world, moving them to a centralized region without preprocessing and compressing would incur additional between-region data egress cost.
C - Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Dataproc cluster to finish the job. Dataproc is a region-specific resource, and since you want to run this job on all the data and consumption occurs in a centralized location, moving the data into a multi-region bucket for the Dataproc jobs is not the most cost-effective option. A multi-regional location is for serving content to data consumers outside of the Google network and distributed across large geographic areas, or for storing frequently accessed, geo-redundant data.
D - Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job. GCS buckets are regional or multi-regional resources, not zonal.
You need to reduce the number of unplanned rollbacks of erroneous production deployments in your company’s web hosting platform. Improvement to the QA and Test processes accomplished an 80% reduction. Which additional two approaches can you take to further reduce the rollbacks? (Choose two)
A. Introduce a blue-green deployment model.
B. Fragment the monolithic platform into microservices.
C. Remove the QA environment. Start executing canary releases.
D. Remove the platform’s dependency on relational database systems.
E. Replace the platform’s relational database systems with a NoSQL database.
Correct Answer - A A is correct because this makes sure the VM gets replicated in the new zone. B is incorrect because this takes more steps than A. C is incorrect because this will generate an error because gcloud cannot copy disks. D is incorrect because the original VM will be moved, not replicated. References: https://cloud.google.com/compute/docs/instances/create-start-instance#createsnapshot
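A hedged sketch of the snapshot-based replication in option A (disk, snapshot, and instance names are placeholder assumptions):
# Snapshot the source disk in europe-west1-d
gcloud compute disks snapshot app-server-disk --zone=europe-west1-d --snapshot-names=app-server-snap
# Create a disk from the snapshot in europe-west2-c
gcloud compute disks create app-server-disk-replica --source-snapshot=app-server-snap --zone=europe-west2-c
# Boot a new VM in europe-west2-c from that disk
gcloud compute instances create app-server-replica --zone=europe-west2-c --disk=name=app-server-disk-replica,boot=yes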
You are now working for an international company that has many Kubernetes projects on various Cloud platforms. These projects involve mainly microservices web applications and are executed either in GCP or other cloud providers. They have many inter-relationships and there is the involvement of many teams related to development, staging, and production environments. Your new task is to find the best way to organize these systems. You need a solution for gaining control on application organization and networking: monitor functionalities, performances, and security in a complex environment. Which of the following services may help you?
A. Traffic Director
B. Istio on GKE
C. Apigee
D. App Engine Flexible Edition
Correct Answer - A and B
A (Correct Answer) - The blue-green model allows for extensive testing of the application in the green environment before sending traffic to it. Typically the two environments are otherwise identical, which gives the highest level of testing assurance.
B (Correct Answer) - Microservices allow for smaller, more incremental rollouts of updates (each microservice can be updated individually), which reduces the likelihood of an error in each rollout.
C is incorrect - It would remove a well-proven step from the general release strategy; a canary release platform is not a replacement for QA, it should be additive.
D is incorrect - It doesn't really help the rollout strategy; there is no inherent property of a relational database that makes it more subject to failed releases than any other type of data storage.
E is incorrect - It doesn't really help either, since NoSQL databases do not offer anything over relational databases that would help with release quality.
Your company plans to host a large donation website on the Google Cloud Platform. You anticipate a large and undetermined amount of traffic that will create many databases writes. Which managed service hosted on GCP would you suggest to ensure no drop for any write traffic to a database?
A. Cloud SQL with a bigger (more CPU, memory, and disk size) machine type, with throughput capacity matching the anticipated peak write throughput.
B. Cloud Pub/Sub for capturing the writes and draining the queue to write to the database.
C. Memcached to store the writes until the writes are committed to the database.
D. Install your MySQL database on Compute instance and enable autoscaling.
Correct Answer - A What you need is service management with capabilities for real-time monitoring, security, and telemetry data collection in a multi-cloud microservices environment. This is called a service mesh. The most popular product in this category is Istio, which collects traffic flows and telemetry data between microservices and enforces security with the help of proxies that operate without changes to application code. Traffic Director can help build a global service mesh because it is a fully managed service-mesh control plane. With Traffic Director, you can manage on-premises and multi-cloud destinations, too. B is incorrect because Istio on Google Kubernetes Engine is a GKE add-on that offers automated installation and management of the Istio service mesh, so it works only inside GCP. C is incorrect because Apigee is a powerful API management tool that is also suitable for on-premises and multi-cloud environments, but API management is for managing application APIs, while a service mesh manages service-to-service communication, security, service levels, and control: similar services with different scopes. D is incorrect because App Engine flexible environment is a PaaS for microservices applications within Google Cloud. For any further detail, see https://cloud.google.com/traffic-director/docs/overview and the guidance on choosing between service management and API management.
Your customer needs a dedicated system with MongoDB and 2 replicas. They also want maximum availability and protection against failures and interruptions caused by maintenance/updates to the instances. The database operates in only one US region and is actively queried and updated 24/7, so you cannot select a comfortable maintenance window. What do you advise?
A. Use an internal load balancing Service with a Managed Instance Group and Regional persistent disks
B. Use a 3rd party MongoDB Managed Service like MongoDBAtlas
C. Implement Live Migration and use persistent regional SSDs
D. Use internal TCP/UDP Load Balancing with local SSD disks
Correct Answer - B
A - You anticipate a "large and undetermined amount of traffic", so regardless of any provisioned IOPS there is always a risk that it will not be enough, along with potentially high unnecessary cost. B (Correct answer) - Cloud Pub/Sub for capturing the writes and draining the queue to write to the database. Cloud Pub/Sub brings the scalability, flexibility, and reliability of enterprise message-oriented middleware to the cloud. By providing many-to-many, asynchronous messaging that decouples senders and receivers, it allows for secure and highly available communication between independently written applications. Cloud Pub/Sub delivers low-latency, durable messaging that helps developers quickly integrate systems hosted on the Google Cloud Platform and externally.
C - Memcached is a cache that accelerates reads; it is not a durable buffer for database writes.
D - Install your MySQL database on a Compute Engine instance and enable autoscaling. If you roll out your own MySQL instance, you lose the advantages of managed Cloud SQL. Furthermore, it would be complicated and costly to implement horizontal autoscaling yourself, even with sharding and primary/replica setups. So, answer B is the clear winner.
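As a minimal sketch (the topic and subscription names are hypothetical), the decoupling layer from option B can be provisioned with:
gcloud pubsub topics create donation-writes
gcloud pubsub subscriptions create donation-db-writer --topic=donation-writes
A worker service then pulls from the subscription and commits the writes to the database at a rate the database can sustain.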
You are designing a relational data repository on Google Cloud to grow as needed. The data will be transactionally consistent and added from any location in the world. You want to monitor and adjust node count for input traffic, which can spike unpredictably. What should you do?
A. Use Cloud Spanner for storage. Monitor storage usage and increase node count if more than 70% utilized.
B. Use Cloud Spanner for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.
C. Use Cloud Bigtable for storage. Monitor data stored and increase node count if more than 70% utilized.
D. Use Cloud Bigtable for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.
Correct Answer - B
Option A is incorrect. A load balancer with a managed instance group provides scalability, and live migration is available as a built-in platform feature, but this is not as well-suited a solution as a managed database service.
Option B is correct. The requirement is to have a dedicated system, and MongoDB Atlas provides exactly that. MongoDB Atlas offers customers a fully managed service on Google's globally scalable and reliable infrastructure. Atlas allows you to manage your databases easily with just a few clicks in the UI or an API call, is easy to migrate to, and offers advanced features such as global clusters for low-latency read and write access anywhere in the world.
Option C is incorrect. Live migration is not something you implement; it is a built-in feature provided by Google.
Option D is incorrect. A database instance, even a NoSQL one, cannot scale in a simple way, and in case of failover it is likely to suffer inconsistencies and loss of service. In addition, local SSD disks are really fast, but they persist only until the instance is stopped or deleted, which is definitely not in line with the requirements. For any further detail, please check the following URLs: https://cloud.google.com/mongodb https://www.mongodb.com/cloud/atlas/
How are subnetworks (VPC networks) different from legacy networks?
A. They’re the same, only the branding is different.
B. Each subnetwork controls the IP address range used for instances that are allocated to that subnetwork.
C. With subnetworks IP address allocation occurs at the global network level.
D. Legacy networks are the preferred way to create networks.
Correct Answer - B
Option B is correct because the requirement is for globally scalable, transactionally consistent storage, which calls for Cloud Spanner. CPU utilization is the recommended metric for scaling, per Google best practices (see the links below). A is incorrect because you should not use storage utilization as a scaling metric. C and D are incorrect because you should not use Cloud Bigtable for this scenario: the data must be transactionally consistent and added from any location in the world. References: Cloud Spanner monitoring using the Operations Suite (formerly Stackdriver): https://cloud.google.com/spanner/docs/monitoring Best practices: https://cloud.google.com/spanner/docs/best-practice-list
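As a hedged example (the instance ID and node count are hypothetical), the node count of an existing Cloud Spanner instance can be raised when CPU utilization crosses the 70% threshold:
gcloud spanner instances update my-spanner-instance --nodes=5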
You have been hired as a DevSecOps Engineer by a large enterprise. They recently migrated their on-premise servers to GCP. There is a requirement that the instances running in the VPC should only send traffic to Active Directory Servers in the same VPC and all other outgoing traffic should be blocked. How will you create the firewall rules for this scenario?
A. Create a firewall rule that denies all egress traffic with a priority of 100. Also create a firewall rule that allows egress traffic to the Active Directory servers with a priority of 1000, and apply the rules to all instances.
B. Create a firewall rule that denies all egress traffic with a priority of 1000. Also create a firewall rule that allows egress traffic to the Active Directory servers with a priority of 100, and apply both rules to all instances.
C. Create a firewall rule that denies all ingress traffic with a priority of 100. Also create a firewall rule that allows egress traffic to the Active Directory servers with a priority of 1000, and apply the rules to all instances.
D. Create a firewall rule that denies all ingress traffic with a priority of 1000. Also create a firewall rule that allows egress traffic to the Active Directory servers with a priority of 100, and apply both rules to all instances.
Correct Answer - B Google Cloud Platform (GCP) legacy networking vs. VPC subnets:
Legacy networking: legacy networks have a single RFC 1918 range, which you specify when you create the network. The network is global in scope and spans all cloud regions. In a legacy network, instance IP addresses are not grouped by region or zone: one IP address can appear in one region and the next IP address can be in a different region. Any given range of IPs can be spread across all regions, and the IP addresses of instances created within a region are not necessarily contiguous. It is not possible to create regional subnets with a legacy network.
Subnets and IP ranges: each VPC network consists of one or more useful IP range partitions called subnetworks or subnets. Each subnet is associated with a region, and a network can contain one or more subnets in any given region. Subnets are regional resources. Each subnet must have a primary address range, which is a valid RFC 1918 CIDR block. Subnets in the same network must use unique IP ranges; subnets in different networks, even in the same project, can re-use the same IP address ranges. VPC network example: subnet3 is defined as 10.2.0.0/16 in the us-east1 region, with one VM instance in the us-east1-a zone and a second instance in the us-east1-b zone, each receiving an IP address from its available range.
Note: legacy networks are not recommended, and many newer GCP features are not supported in them. It is still possible to create legacy networks through the gcloud command-line tool and the REST API, but not through the Google Cloud Platform Console.
Reference resources: Virtual Private Cloud (VPC) Network Overview https://cloud.google.com/vpc/docs/vpc GCP legacy networking vs. VPC subnets https://cloud.google.com/vpc/docs/legacy
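For illustration (the network, subnet, region, and range values are hypothetical), a custom-mode VPC with a regional subnet is created like this:
gcloud compute networks create my-vpc --subnet-mode=custom
gcloud compute networks subnets create subnet-east --network=my-vpc --region=us-east1 --range=10.2.0.0/16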
You are working as a DevOps engineer for a large enterprise. Recently an update was deployed for an application running on a Compute Engine server, which caused a memory leak. The instance's memory filled up, leading to an outage. How will you avoid this issue in the future by setting up a proper alerting solution for the memory metric, so the SRE team gets notified in time?
A. Install the Cloud Logging agent on the VM to monitor memory usage and set up alerting policies to notify the SRE team
B. Install the monitoring agent on the VM to monitor memory usage and set up alerting policies in Cloud Operations to notify the SRE team
C. Install the Cloud Monitoring agent on the VM to monitor memory usage and set up uptime check policies to notify the SRE team
D. Memory metrics are available for a VM by default; just set up alerting policies to notify the SRE team
Correct Answer - B Since we need to allow egress traffic to the Active Directory servers only, we create an egress allow rule whose destination IP range covers the Active Directory servers and assign it a low priority number, because the lower the number, the higher the priority. The second rule denies all egress traffic with a higher priority number, i.e. 1000. https://cloud.google.com/vpc/docs/using-firewalls
Option A is incorrect because creating a deny rule for all egress traffic with priority 100 will block all traffic, including traffic to Active Directory; the lower number takes precedence. Options C and D are incorrect because the requirement is to block all outgoing traffic except traffic to the Active Directory servers, so configuring ingress rules will not work.
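As a sketch (the network name and the Active Directory subnet 10.10.0.0/24 are assumptions), the two egress rules from option B could look like:
gcloud compute firewall-rules create allow-egress-ad --network=my-vpc --direction=EGRESS --action=ALLOW --rules=tcp,udp --destination-ranges=10.10.0.0/24 --priority=100
gcloud compute firewall-rules create deny-all-egress --network=my-vpc --direction=EGRESS --action=DENY --rules=all --destination-ranges=0.0.0.0/0 --priority=1000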
You are working as a DevOps engineer for a startup company. Recently they deployed a Python-based application on Google Kubernetes Engine that is running slowly and, according to monitoring alerts, is using more infrastructure resources than expected. Which GCP service can you use to troubleshoot such an issue?
A. Cloud Monitoring
B. Cloud Trace
C. Cloud Profiler
D. Cloud Logging
Correct Answer - B We can monitor GCP services using the Operations suite from Google. However, it does not provide memory metrics for Compute Engine VMs out of the box; we have to install the monitoring agent on the VM for these additional metrics. Once the agent is configured, you can set up alerting policies in Cloud Monitoring to notify the SRE team.
Option A is incorrect because Cloud Logging is a service for storing, searching, and analyzing application and system logs; it does not monitor memory usage.
Option C is incorrect because uptime checks are used to verify system availability, not to alert on a memory metric.
Option D is incorrect because memory metrics are not available in Cloud Monitoring by default; the agent must be installed first.
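A minimal sketch of installing the agent on a Linux VM (this uses the current Ops Agent install script documented by Google; verify the URL and package for your distribution):
curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
sudo bash add-google-cloud-ops-agent-repo.sh --also-install
Once the agent reports agent.googleapis.com/memory/percent_used, an alerting policy on that metric can notify the SRE team.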
You have to deploy an update to your scalable app, which runs on managed instance groups, but you cannot allow any service disruption during the rollout. You have already tested the new configuration and you need to deploy it in the fastest and safest way. Which is the best solution?
A. Use a new Template and everything will be automatic.
B. Use a new Template, then start new instances and stop the old ones.
C. Use a new Template and ask for a Rolling update.
D. Use a new Template and ask for a Canary update.
Correct Answer - C
Option A is incorrect because it is used to monitor resource utilization or any custom metric.
Option B is incorrect because Cloud trace is used to detect the latency issues in your application.
Option C is correct because Cloud Profiler is a Google Cloud service that helps you analyze the CPU and memory usage of the functions in your application https://codelabs.developers.google.com/codelabs/cloud-stackdriver-profiler/#0
Option D is incorrect because Cloud logging is a fully managed service which allows you to store, search and analyze logs
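As a hedged illustration, Cloud Profiler only requires its API to be enabled and a lightweight profiling agent to be started inside the application:
gcloud services enable cloudprofiler.googleapis.com
The language-specific agent (for example, the Python agent shown in the codelab linked above) is then started in the application code.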
Rules must be set to allow data traffic to database servers only from application servers, in 3 different projects: A, B, and C. The resources of the 3 projects must be isolated from each other. You want to organize operations in order to create simple and intuitive standards to use, which can be repeated for other projects. In your organization, it is not necessary to provide different security for various projects. Which of the following strategies will you choose?
A. Create 2 Firewall Rules, one in ingress and one in egress, between each Database Server and App Server using the ephemeral external IP address
B. Create 1 Firewall Rule, in ingress, between each Database Server and App Server using private IP addresses
C. Configure your Servers with appropriate Network Tags (AppVM and DBVM, for example) and create 1 Firewall Rule, in ingress, between each Database Server and App Server using these Tags
D. Configure your Servers with appropriate Network Tags (AppVM and DBVM, for example) and create 2 Firewall Rules, in ingress and egress, between each Database Server and App Server using these Tags
E. Create and assign appropriate Service Accounts and rights to the VMs and create a Firewall Rule between each Database Server and App Server using source-service-accounts and target-service-accounts
Correct Answer - C A is incorrect. Instance templates are immutable, so you have to create a new instance template and update the managed instance group definition; nothing happens automatically. B is incorrect. It is not advisable to do such a manual operation; it is cumbersome and prone to errors. C is correct. With the managed instance group updater you can roll out an update automatically based on your specifications: maxSurge is the number of instances created beyond the targetSize of the group during the update, maxUnavailable sets the number of instances that may be unavailable at any time during the update, and the minimal action specifies whether the updater has to REPLACE or RESTART the instances. D is incorrect. A canary update is a partial update applied to a small number of instances in the group; the requirement is to deploy to all the VMs in the fastest and safest way. For more details, please refer to the URLs below: https://cloud.google.com/compute/docs/instance-groups/ https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
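For illustration (the group, template, and zone names are hypothetical), a rolling update with zero unavailable instances can be requested with:
gcloud compute instance-groups managed rolling-action start-update my-mig --version=template=my-new-template --max-surge=3 --max-unavailable=0 --zone=us-central1-a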
You can SSH into an instance from another instance in the same VPC by its internal IP address, but not its external IP address. What is the possible reason for this scenario?
A. The outgoing instance does not have correct permission granted to its service account.
B. The internal IP address is disabled.
C. The SSH firewall rule is restricted only to the internal VPC.
D. The receiving instance has an ephemeral address instead of a reserved address.
Correct Answer - C GCP firewall rules are stateful. When a connection is allowed through the firewall in either direction, return traffic matching this connection is also allowed; you cannot configure a firewall rule to deny associated response traffic. Return traffic must match the 5-tuple (source IP, destination IP, source port, destination port, protocol) of the accepted request traffic, but with the source and destination addresses and ports reversed, so a single ingress rule is sufficient. Options A and D are incorrect: because the rules are stateful, a second rule for the return traffic is unnecessary, and ephemeral external IP addresses change, which makes rules based on them fragile. A service account represents an identity associated with an instance, and only one service account can be associated with an instance, so service accounts would be the best option in case of strict security constraints; be careful, because you cannot mix and match service accounts and network tags in any firewall rule.
Option E is incorrect because it is not necessary to provide different security to various projects. So service accounts are not required for this requirement. For any further detail, please refer to the URLs below: https://cloud.google.com/vpc/docs/using-firewalls https://cloud.google.com/vpc/docs/firewalls#service-accounts-vs-tags https://cloud.google.com/vpc/docs/firewalls#specifications
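A minimal sketch of the tag-based rule from option C (the network name and MySQL port 3306 are assumptions):
gcloud compute firewall-rules create allow-app-to-db --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:3306 --source-tags=AppVM --target-tags=DBVM
Because VPC firewall rules are stateful, no separate egress rule is needed for the return traffic.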
You are creating a solution to remove backup files older than 90 days from your backup Cloud Storage bucket. You want to optimize ongoing Cloud Storage spending. What should you do?
A. Write a lifecycle management rule in XML and push it to the bucket with gsutil.
B. Schedule a cron script using gsutil ls -lr gs://backups/** to find and remove items older than 90 days.
C. Schedule a cron script using gsutil ls -1 gs://backups/** to find and remove items older than 90 days and schedule it with cron.
D. Write a lifecycle management rule in JSON and push it to the bucket with gsutil.
Correct Answer - C The firewall rule that allows SSH is restricted to the internal VPC. Instances can have both internal and external IP addresses. When connecting to another instance by its external address, the traffic leaves your internal network for the external Internet and comes back in to reach the instance via its external address. If the SSH rule is restricted to the local VPC ranges, this attempt is rejected because it arrives from an external source. Reference: https://cloud.google.com/vpc/docs/firewalls#firewall_rules_in
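For example (the network name and internal range are assumptions), an SSH rule restricted to internal sources looks like the following, which is why a connection arriving via the external IP is rejected:
gcloud compute firewall-rules create allow-ssh-internal --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:22 --source-ranges=10.128.0.0/9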
If external auditors need to be able to access your admin activity logs once a year for compliance, what is the best method of preserving and sharing that log data? (Choose two)
A. If they need access to multiple logs in a single bucket, and they have a GCP account, export logs to a Cloud Storage bucket for long-term retention and grant auditor accounts the Storage Object Viewer role to the bucket.
B. Create GCP accounts for the auditors and grant the Project Viewer role to view logs in Operations Suite (formerly Stackdriver) Logging.
C. If they do not need a GCP account and need to view a single date’s object, export the logs to a Cloud Storage bucket for long-term retention and generate a signed URL for temporary object-level access.
D. Export logs to Cloud Storage bucket and email a list of the logs once per year.
Correct Answer - D
Option A - Write a lifecycle management rule in XML and push it to the bucket with gsutil: you can set the lifecycle configuration for an existing bucket with a PUT API call (not the gsutil lifecycle command), and in that case you must include an XML document in the request body that contains the lifecycle configuration. https://cloud.google.com/storage/docs/xml-api/put-bucket-lifecycle#request_body_elements
Options B and C can be eliminated. They do a similar thing slightly differently: write a script that lists objects and their timestamps (gsutil ls -[l or -lr] gs://[BUCKET_NAME]/**), deletes any object older than 90 days, and is scheduled as a recurring cron job. However, gsutil ls -l/-lr does not list versioned objects; to list versioned objects you need gsutil ls -a, so with this approach versioned archives would not be deleted. There is a better, easier, and more consistent way to do this in answer D.
D (Correct answer) - Write a lifecycle management rule in JSON and push it to the bucket with gsutil. To enable lifecycle management for a bucket (https://cloud.google.com/storage/docs/managing-lifecycles): create a .json file with the lifecycle configuration rules you would like to apply, then use the lifecycle set command to apply the configuration: gsutil lifecycle set [LIFECYCLE_JSON-CONFIG_FILE] gs://[BUCKET_NAME]. The following lifecycle configuration JSON document specifies that all objects in this bucket that are more than 90 days old will be deleted automatically: { "rule": [ { "action": {"type": "Delete"}, "condition": {"age": 90} } ] }
When creating firewall rules, what forms of segmentation can narrow which resources the rule is applied to? (Choose all that apply)
A. Network range in source filters
B. Zone
C. Region
D. Network tags
Correct Answer A and C Explanation: For long-term log preservation and retention, there are three types of sink destinations you can export logs to: Cloud Storage, Cloud Pub/Sub, and BigQuery. Export logs to Cloud Storage via an export sink; Cloud Storage is a good fit for long-term log retention.
For sharing, the choice between IAM and signed URLs depends on whether the auditors need a GCP account and whether they need access to a single object or to all logs in a bucket. You could either create GCP accounts for the auditors and grant object access, or generate signed URLs, depending on whether they need to have a Google account or not.
Answer A is correct. If the auditors have GCP accounts, you can grant them roles/storage.objectViewer, which can view objects and their metadata. Note the difference between storage.objectViewer and Project Viewer. https://cloud.google.com/storage/docs/access-control/iam-roles (Cloud Storage IAM roles)
Answer C is correct: "A signed URL is associated with a bucket or object and gives time-limited read or write access to that specific resource. Anyone in possession of the URL has the access granted by the URL, regardless of whether they have a Google account." https://cloud.google.com/storage/docs/access-control/create-signed-urls-program
Answer B is incorrect: the Project Viewer role is not enough to view Admin Activity logs in Operations Suite (formerly Stackdriver) Logging. "To view the logs, you must have the IAM roles Logging/Private Logs Viewer or Project/Owner." https://cloud.google.com/logging/docs/audit/#admin-activity Note: the Admin Activity log retention period is 400 days, which meets and exceeds the required once-a-year access.
Answer D is incorrect due to this part: "email a list of the logs once per year".
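As a hedged sketch (the bucket, sink, and account names are hypothetical), the export and the two sharing options could look like:
gcloud logging sinks create audit-sink storage.googleapis.com/my-audit-logs-bucket --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'
gsutil iam ch user:auditor@example.com:objectViewer gs://my-audit-logs-bucket
gsutil signurl -d 7d keyfile.json gs://my-audit-logs-bucket/activity-2024-01-01.json
Remember to grant the sink's writer identity permission to create objects in the destination bucket.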
You are transferring a very large number of small files to Google Cloud Storage from an on-premises location. You need to speed up the transfer of your files. Assuming a fast network connection, what two actions can you do to help speed up the process? Choose the 2 correct answers:
A. Compress and combine files before transferring.
B. Use the -r option for large transfers.
C. Copy the files in bigger pieces at a time.
D. Use the -m option for multi-threading on transfers.
Correct Answer A and D Explanation: You can restrict which resources a firewall rule applies to by using network tags as the rule's target and by specifying network ranges/subnets in the source filters. In the Cloud Console, both options appear on the firewall rule creation form. Zones and regions are not part of firewall rule definitions, because VPC firewall rules are defined at the network level.
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take? Choose 2 answers
A. Load logs into Google BigQuery.
B. Import logs into Google Operations Suite (formerly Stackdriver)
C. Insert logs into Google Cloud Bigtable.
D. Load logs into Google Cloud SQL.
E. Upload log files into Google Cloud Storage.
Correct Answer A and D
B - Use the -r option for large transfers. The -R and -r options are synonymous; they cause directories, buckets, and bucket subdirectories to be copied recursively, which by itself does not speed up the transfer.
C - Copy the files in bigger pieces at a time. Not applicable to the question requirements. D (Correct answer) - Use the -m option for multi-threaded transfers. If you have a large number of files to transfer, you can use the gsutil -m option to perform a parallel (multi-threaded/multi-processing) copy: gsutil -m cp -r dir gs://my-bucket. A (Correct answer) - Compress and combine files before transferring. Compressing and combining smaller files into fewer, larger files is also a best practice for speeding up transfers because it saves network bandwidth and space in Google Cloud Storage: gsutil cp -z html -a public-read cattypes.html tabby.jpeg gs://mycats. Reference: cp - Copy files and objects https://cloud.google.com/storage/docs/gsutil/commands/cp