GCP Flashcards
Your company decided to use the Google Kubernetes Engine service with local PersistentVolumes to handle its batch processing jobs. These jobs only run overnight to process non-critical workloads and can be restarted at any time. You are tasked to
deploy the most cost-effective solution.
What should you do?
A. Create a Google Kubernetes Engine Cluster. Enable autoscaling to automatically create and delete nodes.
B. Create a Google Kubernetes Engine Cluster and enable the node
auto-provisioning feature.
C. Create a Google Kubernetes Engine Cluster and enable Vertical Pod Autoscaling using the VerticalPodAutoscaler custom resource.
D. Create a Google Kubernetes Engine Cluster. Create a node pool and select the Enable preemptible nodes checkbox.
D. Create a Google Kubernetes Engine Cluster. Create a node pool and select the Enable preemptible nodes checkbox.
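The chosen answer can be sketched with gcloud; the cluster name, pool name, zone, and node count below are illustrative placeholders, not values from the question:

```shell
# Create the cluster (names and zone are placeholders).
gcloud container clusters create batch-cluster --zone us-central1-a

# Add a node pool of preemptible VMs for the restartable overnight jobs.
gcloud container node-pools create batch-pool \
  --cluster batch-cluster \
  --zone us-central1-a \
  --preemptible \
  --num-nodes 3
```

Preemptible VMs cost a fraction of regular VMs but can be reclaimed at any time, which is acceptable here because the batch jobs are non-critical and restartable.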
Your team manager wants you to configure a group of autohealing Compute Engine instances that run on multiple zones for network load balancing. You want to accomplish this task with the least amount of steps possible. You have to ensure that all the Compute Engine instances are automatically recreated if they are unresponsive after three attempts with a 10-second interval.
What should you do?
A. Build a managed instance group. Activate the autoscaling setting.
B. Provision an HTTP load balancer that references its backend to an existing instance group. Specify a balancing mode and set the maximum RPS (requests per second) to 10.
C. Build a managed instance group. Set the Autohealing health check to healthy (HTTP).
D. Provision an HTTP load balancer that references its backend to an existing instance group. Configure the health check to healthy (HTTP).
C. Build a managed instance group. Set the Autohealing health check to healthy (HTTP).
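A sketch of the chosen setup in gcloud, assuming a pre-existing instance template named web-template (all names, the region, and the port are placeholders). The health check encodes the stated criteria: a 10-second interval and three failed attempts before a VM is marked unhealthy and recreated:

```shell
# Health check: 10-second interval; 3 consecutive failures mark the VM unhealthy.
gcloud compute health-checks create http autohealing-check \
  --check-interval 10s \
  --unhealthy-threshold 3 \
  --port 80

# Regional (multi-zone) managed instance group with autohealing enabled.
gcloud compute instance-groups managed create web-mig \
  --region us-central1 \
  --template web-template \
  --size 3 \
  --health-check autohealing-check \
  --initial-delay 300
```

Using a regional MIG spreads the instances across multiple zones, satisfying the multi-zone requirement in one step.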
Your team is developing a new application for your company. You want to use Jenkins as your CI/CD solution for your application. You want to deploy this solution as quickly as possible.
What should you do?
A. Deploy a new Kubernetes Engine cluster. Use kubectl to create deployment using Jenkins docker image.
B. Create an instance template with the Jenkins installation script as a startup script. Use the template to launch a managed instance group.
C. Go to Google Cloud Marketplace in the GCP console and search for Jenkins. Select and configure the appropriate Jenkins solution.
D. Deploy a new Compute Engine instance. Download and execute the Jenkins installer.
C. Go to Google Cloud Marketplace in the GCP console and search for Jenkins. Select and configure the appropriate Jenkins solution.
Your company has a live application deployed in a Google App Engine environment. You developed a new version of the application containing several new enhancements and you want to test it first with only 1% of users before entirely switching over to the
new version.
What should you do?
A. Use gcloud app create to deploy a new app with the --traffic-split flag to split the traffic between the current and new app.
B. Deploy a new application that includes the enhancements. Configure App Engine to split traffic between the two applications.
C. Use gcloud app deploy to deploy a new version of the app with the --traffic-split flag to split the traffic between the current and new version.
D. Deploy a new version of the app that includes the enhancements. Configure App Engine to split traffic between the current and new versions.
D. Deploy a new version of the app that includes the enhancements. Configure App Engine to split traffic between the current and new versions.
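The chosen approach can be sketched as follows; the version IDs and service name are placeholders:

```shell
# Deploy the new version without shifting any traffic to it.
gcloud app deploy --version v2 --no-promote

# Route 1% of traffic to the new version, 99% to the current one.
gcloud app services set-traffic default \
  --splits v1=0.99,v2=0.01 \
  --split-by random
```

Splitting by random (rather than ip or cookie) distributes requests across versions without pinning users to one version.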
Your team leader wants to get an email whenever a file is deleted from a Cloud Storage bucket. In relation to this, you created a program that accomplishes this requirement
and you are now ready to deploy.
What should you do?
A. Create a batch job with your code by using Cloud Dataflow. Configure the bucket as a data source.
B. Deploy your program to Google Kubernetes Engine (GKE). Configure a cron job to trigger the application using Cloud Pub/Sub.
C. Utilize App Engine and configure Cloud Scheduler to trigger the application using a Pub/Sub subscription.
D. Deploy your code to Google Cloud Functions. Set a Cloud Storage trigger when an object is deleted from your bucket.
D. Deploy your code to Google Cloud Functions. Set a Cloud Storage trigger when an object is deleted from your bucket.
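A minimal deployment sketch for the chosen answer; the function name, bucket, runtime, and entry point are assumptions for illustration:

```shell
# Deploy a function triggered whenever an object is deleted from the bucket.
gcloud functions deploy notify-on-delete \
  --runtime python310 \
  --entry-point notify \
  --trigger-resource my-bucket \
  --trigger-event google.storage.object.delete
```

The google.storage.object.delete event fires on deletions (and on overwrites when versioning is off), so the function's code only needs to send the email.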
Your company is planning to launch a web application to App Engine. It is crucial that your application can dynamically scale up and down based on the request rate. Moreover, you want to ensure that you have at least 3 unoccupied VMs at all times.
How should you configure your App Engine to support these scaling requirements?
A. Configure Basic Scaling setting with min_instances set to 3.
B. Configure Basic Scaling setting with max_instances set to 3.
C. Set Automatic Scaling settings with min_idle_instances set to 3.
D. Set Manual Scaling settings to 3 instances.
C. Set Automatic Scaling settings with min_idle_instances set to 3.
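The chosen setting lives in the service's app.yaml. A minimal sketch, assuming a standard environment service (the surrounding config is illustrative):

```shell
# Append automatic scaling settings to the service's app.yaml.
cat >> app.yaml <<'EOF'
automatic_scaling:
  min_idle_instances: 3
EOF
```

min_idle_instances keeps at least three idle (unoccupied) instances warm at all times while automatic scaling still adjusts total capacity with the request rate.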
You are in charge of maintaining your organization’s GCP infrastructure and you need to perform some significant changes. You have to find a way to share the proposed changes with your entire team before deployment. You want to follow Google’s
recommended best practices.
What should you do?
A. Create Deployment Manager templates to define the proposed changes and save them into a Cloud Storage bucket.
B. Create Deployment Manager templates to define the proposed changes and save them into Cloud Source Repositories.
C. Manually perform the changes in the development environment. Execute the gcloud compute instances list command and store the displayed output into Cloud Source Repositories.
D. Manually perform the changes in the development environment. Execute the gcloud compute instances list command and store the displayed output into a Cloud Storage bucket.
B. Create Deployment Manager templates to define the proposed changes and save them into Cloud Source Repositories.
You developed an application packaged in a container image and you are ready to deploy it on the Google Cloud Platform. You want to deploy the application to a cost-effective GCP service that provides a stable out-of-the-box HTTPS endpoint. The application only receives a few client requests per day.
What should you do?
A. Use Cloud Run to deploy the container image.
B. Use a Compute Engine instance with Cloud IAP enabled to deploy the container image.
C. Use App Engine Flexible to deploy the container image.
D. Use Google Kubernetes Engine to create a cluster with horizontal pod scaling and cluster autoscaling enabled. Deploy the container image on the infrastructure you just created.
A. Use Cloud Run to deploy the container image.
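A deployment sketch for the chosen answer; the service name, image path, and region are placeholders:

```shell
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app \
  --region asia-southeast1 \
  --platform managed \
  --allow-unauthenticated
```

Cloud Run provides a stable HTTPS URL out of the box and scales to zero between requests, so a service receiving only a few requests per day incurs minimal cost.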
You are training four newly hired junior cloud engineers in your company. Part of their training is to familiarize themselves with Cloud Spanner. You need to provide access to these four users to view and edit table information on a Cloud Spanner instance found in the test project.
What should you do?
A. Using the gcloud tool, execute the gcloud iam roles describe
roles/spanner.databaseUser command on Cloud Shell. Attach the users to the role.
B. Using the gcloud tool, execute the gcloud iam roles describe
roles/spanner.databaseUser command on Cloud Shell. Attach the users to a newly created Google group and add the group to the role.
C. Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.viewer --project my-project command on Cloud Shell. Attach the users to the role.
D. Using the gcloud tool, execute the gcloud iam roles describe roles/spanner.viewer --project my-project command on Cloud Shell. Attach the users to a newly created Google group and add the group to the role.
B. Using the gcloud tool, execute the gcloud iam roles describe
roles/spanner.databaseUser command on Cloud Shell. Attach the users to a
newly created Google group and add the group to the role.
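A sketch of the chosen approach; the project ID and group address are placeholders:

```shell
# Inspect what the predefined role grants (view and edit table data).
gcloud iam roles describe roles/spanner.databaseUser

# Bind the role to a Google group containing the four engineers.
gcloud projects add-iam-policy-binding test-project \
  --member "group:junior-cloud-engineers@example.com" \
  --role roles/spanner.databaseUser
```

Granting the role to a group rather than to four individual users follows Google's recommended practice and simplifies future membership changes.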
Your team is building a cost-effective Disaster Recovery solution for your company.
You are tasked to archive 5 TB worth of data in Cloud Storage that is only accessed quarterly.
What should you do?
A. Use the Archive Storage class to store the data.
B. Use the Coldline Storage class to store the data.
C. Use the Nearline Storage class to store the data.
D. Use the Standard Storage class to store the data.
B. Use the Coldline Storage class to store the data.
You are asked to get a list of all the enabled APIs for all of the GCP Projects on your company’s GCP account as preparation for the upcoming audit. You have been instructed to use the gcloud command-line tool to complete this task.
What should you do?
A. Use the gcloud projects get-list command to get the Project ID. Invoke the gcloud services list --project {ProjectID} command to get the list of enabled GCP APIs.
B. Use the gcloud projects list command to get the Project ID. Invoke the gcloud services list --project {ProjectID} command to get the list of enabled GCP APIs.
C. Use the gcloud projects list command to get the Project ID. Invoke the gcloud services list --available --project {ProjectID} command to get the list of enabled GCP APIs.
D. Use the gcloud projects get-list command to get the Project ID. Invoke the gcloud services list --available --project {ProjectID} command to get the list of enabled GCP APIs.
B. Use the gcloud projects list command to get the Project ID. Invoke the gcloud services list --project {ProjectID} command to get the list of enabled GCP APIs.
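The two commands can be combined into a loop over all projects, one plausible way to script the audit preparation:

```shell
# Iterate over every project in the account and list its enabled services.
for project in $(gcloud projects list --format='value(projectId)'); do
  echo "Project: ${project}"
  gcloud services list --project "${project}"
done
```

Note that without --available, gcloud services list shows only enabled APIs, which is exactly what the audit requires.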
Your team is hosting a website on Google Cloud Storage (GCS). On the website, you provided links to PDF files found on your Cloud Storage. You noticed that the browser always prompts you to save the files on your local machine when you click the links on
the website. You want the PDF files to be displayed on the browser window right away instead of prompting users to save the files locally.
What should you do?
A. Activate Cloud CDN on your website.
B. Edit the PDF objects in Cloud Storage and reconfigure their Content-Type metadata into application/pdf.
C. Activate the “Share publicly” setting on all the PDF objects in the bucket.
D. Add a new label to the GCS bucket with a key of Content-Type and value of application/pdf.
B. Edit the PDF objects in Cloud Storage and reconfigure their Content-Type metadata into application/pdf.
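The metadata change can be applied in bulk with gsutil; the bucket name and object path are placeholders:

```shell
# Set the Content-Type so browsers render the PDFs inline instead of downloading them.
gsutil setmeta -h "Content-Type:application/pdf" "gs://my-website-bucket/docs/*.pdf"
```

Browsers decide whether to display or download a file based on its Content-Type header, which Cloud Storage serves from the object's metadata.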
Your team is building an application hosted on a VM instance in Compute Engine. The application is designed to enhance and resize images. You want your application to be able to upload images on a Cloud Storage bucket. You want to do this with the least
number of steps possible without compromising security.
What should you do?
A. Create a Service Account with roles/storage.objectCreator (Storage Object Creator) role. Configure the VM instance to use the Service Account.
B. Create a Service Account with roles/storage.objectAdmin (Storage Object Admin) role. Configure the VM instance to use the Service Account.
C. Verify if the VM instance and the bucket have the same region.
D. Set the Cloud Storage bucket to public and configure the objects to have a randomized suffix in their object names.
A. Create a Service Account with roles/storage.objectCreator (Storage Object Creator) role. Configure the VM instance to use the Service Account.
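A sketch of the chosen setup; the service account name, project ID, VM name, and zone are placeholders:

```shell
# Create a dedicated service account for the application.
gcloud iam service-accounts create image-uploader

# Grant it only the ability to create objects (least privilege).
gcloud projects add-iam-policy-binding my-project \
  --member "serviceAccount:image-uploader@my-project.iam.gserviceaccount.com" \
  --role roles/storage.objectCreator

# Attach the service account when creating the VM.
gcloud compute instances create app-vm \
  --zone us-central1-a \
  --service-account image-uploader@my-project.iam.gserviceaccount.com \
  --scopes cloud-platform
```

The application then authenticates automatically via the VM's attached service account, with no key files to manage.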
Your company has decided to use Google Cloud Platform to host their applications. Your network team created a VPC on GCP and connected it to your company’s on-premises network via a secure VPN. You need to create a GCE instance to host an application. This instance should not be accessible from the public Internet.
What should you do?
A. Create the GCE instance outside the VPC.
B. Create the GCE instance with a deny-all egress firewall.
C. Create the GCE instance and enable the Private Google Access option.
D. Create the GCE instance with no External IP address.
D. Create the GCE instance with no External IP address.
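A creation sketch; the instance name, zone, and subnet are placeholders:

```shell
# --no-address omits the external IP; the VM is reachable only through
# the VPC (and the on-premises network via the VPN).
gcloud compute instances create internal-app-vm \
  --zone us-central1-a \
  --subnet my-subnet \
  --no-address
```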
Your development team wants to migrate an on-premises web application, which is hosted in multiple VMs, to the Google Cloud Platform. The new cloud infrastructure must be highly available and can scale automatically based on CPU usage. You must also be able to access the new VMs directly. You need to implement this with the least number of steps while maintaining operational efficiency.
What should you do?
A. Build an instance template on Compute Engine. Using the template, configure a managed instance group that scales vertically based on your preferred time of day.
B. Build an instance template on Compute Engine. Using the template, configure a managed instance group and enable autoscaling.
C. Deploy your application using Google Kubernetes Engine and enable horizontal pod autoscaling.
D. Research and implement third-party tools to build an automated workflow that scales the application up and down accordingly based on Cloud Monitoring CPU usage metrics.
B. Build an instance template on Compute Engine. Using the template, configure a managed instance group and enable autoscaling.
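A sketch of the chosen setup; the template name, group name, zone, machine type, and thresholds are placeholders:

```shell
gcloud compute instance-templates create web-template --machine-type e2-medium

gcloud compute instance-groups managed create web-mig \
  --zone us-central1-a --template web-template --size 2

# Scale out (up to 10 VMs) when average CPU utilization exceeds 60%.
gcloud compute instance-groups managed set-autoscaling web-mig \
  --zone us-central1-a \
  --max-num-replicas 10 \
  --target-cpu-utilization 0.6
```

A MIG keeps the VMs directly accessible (unlike pods in GKE) while still providing CPU-based autoscaling.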
Your company created a Dataproc cluster running on a Virtual Private Cloud (VPC) network within a single subnet with a CIDR range of 10.0.0.0/24. You have to deploy new VMs that can communicate with your existing cluster. However, there are neither private nor alias IP addresses available that you can use in the VPC network. You must deploy the VMs with the least possible steps.
What should you do?
A. Expand the existing subnet range to 10.0.0.0/23.
B. Set up a new Secondary CIDR Range in the VPC. Configure the VMs to use IPs from the new CIDR range.
C. Set up a new VPC network and deploy the new VMs to it. Activate VPC Peering between the new VPC network and the Dataproc cluster’s VPC network.
D. Set up a new VPC network and deploy the new VMs to it with a subnet of 10.0.1.0/24. Perform VPC Network Peering between the Dataproc VPC network and the new VPC network. Set up a custom Route exchange between these networks.
C. Set up a new VPC network and deploy the new VMs to it. Activate VPC Peering between the new VPC network and the Dataproc cluster’s VPC network.
Your team deployed a new application on a VM instance on Google Compute Engine. You are expecting large traffic in the next coming weeks as your application becomes more popular. You want to launch multiple copies of your instance to handle this
traffic. You want to follow Google’s recommended best practices.
What should you do?
A. Create a snapshot of your instance’s base VM. Use the snapshot to launch new instances.
B. Create a snapshot of your instance boot disk. Create a custom image from the snapshot. Use the custom image to launch new instances.
C. Create a snapshot of your instance’s base VM. Use the snapshot to handle the large traffic.
D. Create a snapshot of your instance boot disk. Create a custom image from the snapshot to handle the large traffic.
B. Create a snapshot of your instance boot disk. Create a custom image from the snapshot. Use the custom image to launch new instances.
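The snapshot-to-image-to-instance flow can be sketched as follows; the disk, snapshot, image, and instance names plus the zone are placeholders:

```shell
# Snapshot the boot disk of the running instance.
gcloud compute disks snapshot web-vm \
  --zone us-central1-a --snapshot-names web-base-snapshot

# Turn the snapshot into a reusable custom image.
gcloud compute images create web-base-image --source-snapshot web-base-snapshot

# Launch additional copies from the custom image.
gcloud compute instances create web-vm-2 \
  --zone us-central1-a --image web-base-image
```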
Your team wants to deploy several VMs on Compute Engine. Part of the plan is to spin up the required VMs using a dedicated YAML file to ensure that all VMs are deployed correctly and consistently. You want to follow Google’s best practices.
Which method should you choose?
A. Managed Instance Group
B. Unmanaged Instance Group
C. Deployment Manager
D. Cloud Composer
C. Deployment Manager
You have created a GCP project in the development environment to build and test various applications. Cloud SQL, Compute Engine, and Cloud Storage service are being heavily utilized by your applications and other system components. You need to set up a production environment for the company’s enterprise applications. You have to
ensure that the new production environment cannot connect or share resources with the development environment via any routes.
What should you do?
A. Create a new subnet for the production environment under the existing VPC. Verify if the necessary APIs are enabled. Ask the developer team to deploy the application in the new subnet.
B. Create a new project for the production environment. Enable APIs necessary for the application. Establish VPC Peering between the VPC on development and production environment. Ask the developer team to deploy the application in the
new project.
C. Create a new project as a host project for the Shared VPC. Attach the VPC from the development environment to the host project. Ask the developer team to deploy the application in the host project.
D. Create a new project for the production environment. Enable APIs necessary for the application. Ask the developer team to deploy the application in the new
project.
D. Create a new project for the production environment. Enable APIs necessary for the application. Ask the developer team to deploy the application in the new
project.
You are sharing a GCP project with your company’s mobile app development team. You are ready to deploy your web application, and you need to provision a Compute Engine instance. You don’t want the mobile development team accidentally deleting your instance from the project.
What should you do?
A. Provision a Preemptible VM.
B. Utilize a Shielded VM.
C. Activate the Enable Deletion Protection setting in the Compute Engine page using the Cloud Console.
D. Build an instance group.
C. Activate the Enable Deletion Protection setting in the Compute Engine page using the Cloud Console.
You have a microservice running on a Google Kubernetes Engine (GKE) cluster in the asia-southeast1 region. The GKE cluster has the autoscaler feature enabled. You realized that you need to monitor containers in your cluster. You have to deploy a monitoring pod on each node of your cluster that transmits container metrics to a third-party cloud monitoring system.
What should you do?
A. Create a Service object that references the monitoring pod.
B. Deploy the monitoring pod into your GKE cluster inside a StatefulSet object.
C. Reference the monitoring pod into your cluster in a Deployment object.
D. Deploy the monitoring pod into your cluster in a DaemonSet object.
D. Deploy the monitoring pod into your cluster in a DaemonSet object.
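A minimal DaemonSet sketch; the names and the agent image are placeholders:

```shell
# A DaemonSet schedules exactly one agent pod on every node, including
# nodes added later by the cluster autoscaler.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/monitoring-agent:latest   # placeholder image
EOF
```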
Your team recently created a new deployment that creates two replicas in a Google Kubernetes Engine (GKE) cluster configured with a single preemptible node pool. After waiting for a few minutes, you noticed that the Pod’s status is still Pending after running kubectl get pods command.
What is the most likely cause of this issue? (Note: the original question includes a screenshot of the kubectl output.)
A. The pending Pod’s resource request is too small for the single cluster node.
B. The pending Pod is stuck and can’t be scheduled to a node. There are too many Pods running in the cluster, and you don’t have enough node resources left.
C. The pending Pod was scheduled on a node that was getting preempted. You need to wait while it’s being scheduled to a new node.
D. The service account used for the node pool does not have the right permissions to pull images from Container Registry.
B. The pending Pod is stuck and can’t be scheduled to a node. There are too many Pods running in the cluster, and you don’t have enough node resources left.
Your company has deployed multiple GCP resources that span across various projects and are linked to different billing accounts. Your finance team is currently analyzing cost patterns on your company’s cloud expenditure and asked you to provide a
dashboard to visualize all the costs incurred. You want to finish the task as quickly as possible.
What should you do?
A. Export your Cloud Billing data to BigQuery. Use the Data Catalog to visualize the Cloud Billing data.
B. Export your Cloud Billing data to BigQuery. Use Google Data Studio to visualize the data.
C. Go to the Billing page in the GCP Console. Export your Cloud Billing data to a CSV file.
D. Use the GCP Pricing Calculator to analyze the cost.
B. Export your Cloud Billing data to BigQuery. Use Google Data Studio to visualize the data.
You are the head engineer of a software development organization, and you control the IAM access for everyone. You granted the Project Creator role to all engineering team users, but you don’t want them to link projects to a billing account. It is also
essential that the finance team can link projects to a billing account, but they should not have the privilege to access or perform changes on any resource in the organization.
What should you do?
A. Grant the Billing Account User role on the billing account to all of the users in the finance team.
B. Grant the Billing Account User role on the billing account to all of the users in the engineering team.
C. Grant the Billing Account User role on the billing account as well as the Project Billing Manager role on the organization to all of the users in the finance team.
D. Grant the Billing Account User role on the billing account as well as the Project Billing Manager role on the organization to all of the users in the engineering team.
C. Grant the Billing Account User role on the billing account as well as the Project Billing Manager role on the organization to all of the users in the finance team.
You are working for a finance company and are assigned to configure a relational database solution on Google Cloud Platform to support a small set of operational data in a particular geographical location. Your company requires the database to be highly
reliable and supports point-in-time recovery while minimizing operating costs.
What should you do?
A. Choose Cloud Spanner and set up your instance as multi-regional.
B. Choose Cloud SQL (MySQL) and select the create failover replicas option.
C. Choose Cloud SQL (MySQL) and verify that the enable binary logging option is selected.
D. Choose Cloud Spanner and configure your instance with 2 nodes.
C. Choose Cloud SQL (MySQL) and verify that the enable binary logging option is selected.
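An instance-creation sketch for the chosen answer; the instance name, region, and backup window are placeholders:

```shell
# Binary logging (required for point-in-time recovery) also requires
# automated backups to be enabled.
gcloud sql instances create finance-db \
  --database-version MYSQL_8_0 \
  --region asia-southeast1 \
  --backup-start-time 23:00 \
  --enable-bin-log
```

Cloud SQL keeps the data in a single region and costs far less than a Spanner deployment for a small operational dataset.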
Your company has a mission-critical application deployed on Google Compute Engine.
You want to avoid the accidental deletion of this instance.
What should you do?
A. Create a snapshot of the instance.
B. Turn on the Deletion Protection feature on the instance.
C. Add the tag DeletionProtection with the value set to Yes.
D. Deploy the application in a Managed Instance Group and add a health check to monitor the instance.
B. Turn on the Deletion Protection feature on the instance.
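The feature can be enabled on an existing instance with one command; the instance name and zone are placeholders:

```shell
gcloud compute instances update my-critical-vm \
  --zone us-central1-a \
  --deletion-protection
```

Any attempt to delete the instance then fails until the flag is cleared with --no-deletion-protection.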
You are assigned to set up a solution that stores a large amount of financial data in a cost-effective manner and archive it after 30 days. The data will only be accessed once a year for auditing purposes. As part of compliance objectives, you also have to
ensure that the data is stored in a single geographic location.
What should you do?
A. Create a Cloud Storage bucket and set its location to Multi-Regional. Configure an object lifecycle rule that transitions the bucket into Coldline Storage after 30 days.
B. Create a Cloud Storage bucket and set its location to Regional. Configure an object lifecycle rule that transitions the bucket into Coldline Storage after 30 days.
C. Create a Cloud Storage bucket and set its location to Dual-Region. Configure an object bucket lifecycle rule that transitions the bucket into Nearline Storage after 30 days.
D. Create a Cloud Storage bucket and set its location to Regional. Configure an object lifecycle rule that transitions the bucket into Nearline Storage after 30 days
B. Create a Cloud Storage bucket and set its location to Regional. Configure an object lifecycle rule that transitions the bucket into Coldline Storage after 30 days.
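A sketch of the chosen setup using gsutil; the bucket name and region are placeholders:

```shell
# Lifecycle rule: move objects to Coldline 30 days after creation.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 30}
    }
  ]
}
EOF

# A regional bucket keeps the data in a single geographic location.
gsutil mb -l asia-southeast1 -c standard gs://finance-archive-bucket
gsutil lifecycle set lifecycle.json gs://finance-archive-bucket
```

Coldline suits the once-a-year audit access pattern (90-day minimum storage duration, low storage price, higher retrieval price).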
You have three different projects for your development, staging, and production environments in your GCP account. You want to use Cloud SDK to develop a script that generates a list of all Google Compute Engine instances in your account.
What should you do?
A. Create three different configurations using the gcloud config command for your development, staging, and production environments. Use the gcloud compute instances list command to list all the compute resources for each configuration.
B. Create one configuration for your development, staging, and production environments using the gcloud config command. Use the gsutil compute instances list command to list all the compute resources in your account.
C. Use the bq compute instances list command to list all the available
compute resources in your entire account.
D. Set up three different configurations using the gsutil config command for your development, staging, and production environments. Invoke the gsutil compute instances list command to list all the compute resources for each configuration.
A. Create three different configurations using the gcloud config command for your development, staging, and production environments. Use the gcloud compute instances list command to list all the compute resources for each configuration.
You are setting up a new billing account for your team. You want to link this billing account with an existing project called proj-dev.
What should you do?
A. Confirm that you have the Billing Administrator role for the billing account. Using the Cloud Console, link the existing billing account to the proj-dev project.
B. Confirm that you have the Billing Administrator role for the billing account. Create a new project. Link the newly created project to the existing billing account.
C. Confirm that you have the Project Billing Manager role for the project. Using the Cloud Console, link the existing billing account to the proj-dev project.
D. Confirm that you have the Project Billing Manager role for the project. Create a new billing account. Update the proj-dev project to use the billing account that you just created.
D. Confirm that you have the Project Billing Manager role for the project. Create a new billing account. Update the proj-dev project to use the billing account that you just created.
You created a test project on GCP and defined the appropriate IAM roles that will be used by the users. You now need to replicate the exact same IAM roles on the production project. Your manager wants you to accomplish this task with the fewest possible steps.
What should you do?
A. Using the Cloud Shell, run the gcloud iam roles copy command and specify the production project as the destination project.
B. Utilize the CREATE ROLE functionality in the Cloud Console and select all applicable permissions.
C. Utilize the CREATE ROLE FROM SELECTION functionality found in the IAM page.
D. Using the Cloud Shell, run the gcloud iam roles copy command and specify your organization as the destination organization.
A. Using the Cloud Shell, run the gcloud iam roles copy command and specify the production project as the destination project.
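A sketch of the copy command; the project IDs and the custom role ID are placeholders:

```shell
gcloud iam roles copy \
  --source "projects/test-project/roles/customAppRole" \
  --destination customAppRole \
  --dest-project prod-project
```

One command per role replicates the definition, which is fewer steps than recreating each role and its permissions by hand.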
Your organization plans to back up 32 GB of CCTV footage stored in a single file to a Nearline Storage bucket. For this task, a 1 Gbps WAN connection has been dedicated for your exclusive use. You want to maximize your connection speed as much as possible so you can upload the file to Cloud Storage in the shortest time.
What do you think should be done to upload the file rapidly?
A. Lower down the value of the TCP window size when you upload the file to Cloud Storage.
B. Using gsutil, activate parallel composite uploads during the file transfer for faster upload.
C. Set the storage class of the bucket from Nearline to Regional.
D. Use the Cloud Storage browser in the Google Cloud Console to upload the file.
B. Using gsutil, activate parallel composite uploads during the file transfer for faster upload.
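A sketch of the upload; the file and bucket names are placeholders, and the 150 MB threshold is one commonly used value:

```shell
# Files above the threshold are split into chunks uploaded in parallel,
# then composed into a single object in the bucket.
gsutil -o 'GSUtil:parallel_composite_upload_threshold=150M' \
  cp cctv-footage.mp4 gs://backup-bucket/
```

One caveat: composite objects carry a CRC32C checksum rather than an MD5 hash, which can matter to downstream integrity checks.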
You are hosting a web application in your on-premises data center that needs to fetch files from a Cloud Storage bucket. However, your company strictly implements security policies that prohibit your bare-metal servers from having a public IP address or having any access to the Internet. You want to follow Google-recommended
practices to provide your web application the necessary access to Cloud Storage.
What should you do?
A. 1. Issue the nslookup command on your command line to get the IP address for storage.googleapis.com. 2. Discuss with the security team why you need to have a public IP address for the servers. 3. Explicitly allow egress traffic from your servers to the IP address of storage.googleapis.com.
B. 1. Migrate your on-premises server using Migrate for Compute Engine (formerly known as Velostrata). 2. Provision an internal load balancer (ILB) that uses storage.googleapis.com as a backend. 3. Set up the new instances to use the ILB as a proxy to connect to Cloud Storage.
C. 1. Create a VPN tunnel connecting to a custom-mode VPC in the Google Cloud Platform using Cloud VPN. 2. Create a Compute Engine instance and install the Squid Proxy Server. Use the custom-mode VPC as the location. 3. Configure your on-premises servers to use the new instance as a proxy to access the Cloud Storage bucket.
D. 1. Create a VPN tunnel to GCP using Cloud VPN or Cloud Interconnect. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network via the VPN tunnel. 3. Configure the DNS server in your on-premises network to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
D. 1. Create a VPN tunnel to GCP using Cloud VPN or Cloud Interconnect. 2. Use Cloud Router to create a custom route advertisement for 199.36.153.4/30. Announce that network to your on-premises network via the VPN tunnel. 3. Configure the DNS server in your on-premises network to resolve *.googleapis.com as a CNAME to restricted.googleapis.com.
You are working for a tech company that plans to deploy a web application that serves HTTPS requests. You need to build a managed instance group that scales automatically for this application. Part of the requirement is to have the capability to
recreate unhealthy virtual instances automatically.
What should you do?
A. Build an instance template and add a startup script that sends a message to a Cloud Pub/Sub topic via Cloud Function that triggers recreating the instance if it is unhealthy.
B. Configure a health check and set the Protocol settings to HTTPS. Define the appropriate health criteria. Use this health check when you create a managed instance group.
C. In the Instance Group page, create a managed instance group and select Multi-Zone instead of Single-Zone.
D. Add the health-check label with a value of https when creating an instance template.
B. Configure a health check and set the Protocol settings to HTTPS. Define the appropriate health criteria. Use this health check when you create a managed instance group.
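A sketch of the chosen configuration, assuming an existing instance template named web-template (all names, ports, and paths are placeholders):

```shell
# HTTPS health check against a health endpoint served by the application.
gcloud compute health-checks create https web-https-check \
  --port 443 --request-path /healthz

# Managed instance group that recreates VMs failing the health check.
gcloud compute instance-groups managed create web-mig \
  --zone us-central1-a --template web-template --size 3 \
  --health-check web-https-check --initial-delay 300
```

The --initial-delay gives new VMs time to boot before autohealing starts evaluating them.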
Your DevOps team plans to provision a Jenkins server for their project on the Google Cloud Platform. The server needs to be deployed quickly, so the group decided to minimize the number of steps necessary to accomplish this task.
What should you do?
A. Download the Jenkins Java WAR file and deploy it to App Engine Standard.
B. Build a new Compute Engine instance and install Jenkins through the Google Cloud Shell command-line interface.
C. Provision a Kubernetes cluster on Compute Engine and build a deployment using the Jenkins Docker image.
D. Utilize the GCP Marketplace to launch the Jenkins server.
D. Utilize the GCP Marketplace to launch the Jenkins server.
Your company is running an application in a Managed Instance Group (MIG) on Compute Engine. You noticed that your MIG fails to create new instances even though the scale-up was triggered. You want to maintain your instance count defined on the
instance template to efficiently handle the traffic.
What should you do? (Choose two.)
A. Ensure that the instance template used by the instance group is valid.
B. Ensure that the tags applied on instances are the same.
C. Ensure that disks.autoDelete property is set to False in the instance
template.
D. Ensure that snapshots from boot disks are successfully created.
E. Ensure that existing persistent disks and instances have different names.
A. Ensure that the instance template used by the instance group is valid.
E. Ensure that existing persistent disks and instances have different names.
You have an App Engine application built by your team that is running in your development environment. The application has successfully passed the necessary regression tests and you need to build a new project for your production environment.
What should you do?
A. Deploy your application again using the gcloud tool and supply the project parameter named production to create the new project.
B. Utilize the gcloud tool to build a new project named production. Deploy your team’s application to the newly created project.
C. Create a new project named production using the Cloud Console. Set up a Deployment Manager configuration file that replicates the current App Engine deployment into the newly created project.
D. Utilize the gcloud tool to build the new project named production. Copy the deployed application to the new project.
B. Utilize the gcloud tool to build a new project named production. Deploy your team’s application to the newly created project.
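A minimal sketch of the chosen approach; the project ID, region, and app.yaml path are placeholders:

```shell
# Create the new production project (project IDs must be globally unique)
gcloud projects create my-app-production --name="production"

# Point gcloud at the new project, then deploy the App Engine app to it
gcloud config set project my-app-production
gcloud app create --region=us-central
gcloud app deploy app.yaml
```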
You have two groups of Compute Engine instances deployed in separate GCP projects. Each group of instances runs on its own VPC. You need to enable network traffic between the two groups.
What should you do?
A. Check if you have the Project Administrator role for both projects. Set up a new Shared VPC host project that will automatically add all the instances from the two projects.
B. Check if you have the Project Administrator role for both projects. Build two new VPCs and deploy all the instances.
C. Confirm that both projects belong to a single Organization. Set up a new Shared VPC host project from the first project and send a request to allow the Compute Engine instances from the other project to use this Shared VPC.
D. Confirm that both projects belong to a single Organization. Set up a new VPC and add all the instances from the two projects.
C. Confirm that both projects belong to a single Organization. Set up a new Shared VPC host project from the first project and send a request to allow the Compute Engine instances from the other project to use this Shared VPC.
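On the command line, the Shared VPC setup looks roughly like this (project IDs are placeholders; this assumes you hold the Shared VPC Admin role at the Organization level):

```shell
# In the first project, enable it as the Shared VPC host
gcloud compute shared-vpc enable host-project-id

# Attach the second project as a service project of the host
gcloud compute shared-vpc associated-projects add service-project-id \
    --host-project=host-project-id
```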
Your development team deployed a CRM web application on a managed instance group (MIG) and is ready to serve customers all over the world. You continuously update your application every week, and you are preparing to deploy the new version
gradually. You need to ensure that during the deployment, the available number of instances does not decrease.
What should you do?
A. On the Cloud Console, choose the managed instance group you want to update and click Rolling Action. Set the Maximum surge to 0 and Maximum unavailable to 1.
B. On the Cloud Console, select the managed instance group you want to update and click Rolling Action. Configure the Maximum surge to 1 and Maximum unavailable to 0.
C. Build a new managed instance group using an instance template that uses your web application’s recent image version. Use a load balancer to direct traffic to the newly created instance group. Delete the old instance group once the instances on the new managed instance group are healthy.
D. Build a new instance template that contains the latest version of your application. Update the managed instance group to use this new template.
Delete the instances in the managed instance group to rebuild new instances using the new instance template.
B. On the Cloud Console, select the managed instance group you want to update and click Rolling Action. Configure the Maximum surge to 1 and Maximum unavailable to 0.
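The same rolling update can be started from the CLI; the MIG, template, and zone names below are placeholders:

```shell
# Bring up 1 extra instance at a time (surge) and never take an
# existing instance out of service (max unavailable = 0)
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-new-template \
    --max-surge=1 \
    --max-unavailable=0 \
    --zone=us-central1-a
```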
Your company wants to review the IAM users and roles assigned on a specific Google Cloud project named finance-project.
What should you do to fulfill this requirement?
A. Set up the Cloud SDK to run the gcloud iam roles list command and review the output.
B. Using the Cloud Console, navigate to the finance-project, and go to the IAM section. Under the ‘Permissions’ tab, review the Members and Roles section.
C. Using the Cloud Console, navigate to the finance-project, and go to the Roles section. From there, review the Roles and Status of the project.
D. Use the Cloud Shell to run the gcloud iam service-accounts list command and then review the output.
B. Using the Cloud Console, navigate to the finance-project, and go to the IAM section. Under the ‘Permissions’ tab, review the Members and Roles section.
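The same review can also be done from the CLI, which is handy to know even though the question asks for the Console path:

```shell
# List every member/role binding on the project, one row per member
gcloud projects get-iam-policy finance-project \
    --flatten="bindings[].members" \
    --format="table(bindings.role, bindings.members)"
```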
You are a team leader for a project that builds a microservice application on a Google Kubernetes (GKE) cluster. You need to ensure that this GKE cluster is up-to-date and always supports a stable version of Kubernetes.
What should you do?
A. In the Cloud Console, activate the Node Auto-Repair feature for your Google Kubernetes Engine cluster.
B. In the Cloud Console, activate the Node Auto-Upgrades configuration for your Google Kubernetes Engine cluster.
C. Explicitly define the latest available cluster version for your Google Kubernetes Engine when creating the cluster.
D. When choosing a node image for the GKE cluster on Cloud Console, select the default value of “Container-Optimized OS (cos)”.
B. In the Cloud Console, activate the Node Auto-Upgrades configuration for your Google Kubernetes Engine cluster.
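The equivalent CLI sketch for an existing node pool (cluster, pool, and zone names are placeholders):

```shell
# Enable node auto-upgrade so nodes track a supported, stable GKE version
gcloud container node-pools update default-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --enable-autoupgrade
```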
Your company strictly observes the best practice of giving least-privilege access to control the GCP projects and other resources. Your Site Reliability Engineers (SRE) team recently opened a support case to Google Cloud Support. The SREs should be able to grant permission requests from the Google Cloud Support team while working through the case. You want to follow Google-recommended practices.
What should you do?
A. Create a Google group named sre-group. Use the predefined
roles/accessapproval role and assign it to the newly created group.
B. Use the predefined roles/iam.organizationRoleAdmin role and assign it to the accounts of your SREs.
C. Use the predefined roles/iam.roleAdmin role and assign it to the accounts of your SREs.
D. Create a Google group named sre-group. Use the predefined
roles/iam.roleAdmin role and assign it to the newly created group.
A. Create a Google group named sre-group. Use the predefined
roles/accessapproval role and assign it to the newly created group.
Your company conducts a quarterly security audit as part of its effort to comply with government requirements. You are assigned to provide IAM access to some external auditors on your company’s BigQuery audit logs. You want to follow Google-recommended practices.
What should you do?
A. Create two new custom IAM roles. Add the auditors’ group to the new custom roles.
B. Attach the auditors’ accounts to the logging.viewer and bigquery.dataViewer predefined IAM roles.
C. Create two new custom IAM roles. Add the auditor user accounts to the new custom roles.
D. Create a new Google group for the auditors. Attach the logging.viewer and bigquery.dataViewer predefined IAM roles to the newly created group.
D. Create a new Google group for the auditors. Attach the logging.viewer and bigquery.dataViewer predefined IAM roles to the newly created group.
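Binding roles to a group rather than individual users can be sketched like this (project ID and group address are placeholders):

```shell
# Grant both predefined roles to the auditors' Google group, not to individuals
gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/logging.viewer"

gcloud projects add-iam-policy-binding my-project \
    --member="group:auditors@example.com" \
    --role="roles/bigquery.dataViewer"
```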
You created a Dockerfile, and you plan to deploy it on Google Kubernetes Engine (GKE).
What should you do?
A. Build a docker image using the Dockerfile and upload it to Cloud Storage. Then, create a Deployment YAML file to point to the image you uploaded on Cloud Storage. Utilize the kubectl command to create the deployment using the YAML file.
B. Run kubectl app deploy dockerfilename on Cloud Console.
C. Build a docker image using the Dockerfile and upload it to the Google Container Registry (GCR). Create a Deployment YAML file to point to the image you just uploaded on the Container Registry. Utilize the kubectl command to create the deployment using the YAML file.
D. Run gcloud app deploy dockerfilename on Cloud Console.
C. Build a docker image using the Dockerfile and upload it to the Google Container Registry (GCR). Create a Deployment YAML file to point to the image you just uploaded on the Container Registry. Utilize the kubectl command to create the deployment using the YAML file.
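A condensed sketch of the workflow in the correct option; project, image, and file names are placeholders:

```shell
# Build and push the image to Container Registry
docker build -t gcr.io/my-project/my-app:v1 .
docker push gcr.io/my-project/my-app:v1

# Apply a Deployment YAML whose spec.containers[].image points at the pushed tag
kubectl apply -f deployment.yaml
```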
Your company has an application hosted on a VM instance in Google Compute Engine. This application is configured to persist its system logs on the disk. You want to stream the application logs to troubleshoot a user-reported issue.
What should you do?
A. Connect to the instance using the interactive serial console and download the application logs.
B. Configure the Cloud Logging Agent on the VM instance to collect the logs. Navigate to Cloud Logging in the GCP console to view the logs.
C. In the GCP Console, go to Cloud Logging and view the application logs.
D. Configure a custom script that copies application logs to a Cloud Storage bucket.
B. Configure the Cloud Logging Agent on the VM instance to collect the logs. Navigate to Cloud Logging in the GCP console to view the logs.
Your company is reviewing its GCP expenses in order to determine ways to reduce its monthly expenditure. You are tasked to decommission all resources on one particular
GCP project that is used in the previous testing activities, and you need to do this with the fewest possible steps. You want to follow Google-recommended practices.
What should you do?
A. 1. Confirm that you have the Organization Administrator IAM role for this project. 2. Select the project in the GCP console, find the resources, and delete them.
B. 1. Confirm that you have the Organization Administrator IAM role for this project. 2. Select the project in the GCP console, go to Admin > Settings, click Shut down, and enter the Project ID to confirm the deletion.
C. 1. Confirm that you have the Project Owner IAM role for this project. 2. Select the project in the GCP console, find the resources, and delete them.
D. 1. Confirm that you have the Project Owner IAM role for this project. 2. Select the project in the GCP console, go to Admin > Settings, click Shut down, and enter the Project ID to confirm the deletion.
D. 1. Confirm that you have the Project Owner IAM role for this project. 2. Select the project in the GCP console, go to Admin > Settings, click Shut down, and enter the Project ID to confirm the deletion.
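The Console shutdown has a one-line CLI equivalent (project ID is a placeholder):

```shell
# Shuts down the project in one step; resources stop and the project is
# permanently deleted after a ~30-day pending-deletion window
gcloud projects delete my-test-project
```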
You are developing an application that stores and processes files from thousands of producers. Data security and expiration of obsolete data are your top priorities in building the application. Moreover, the application has to:
Provide producers write permissions to data for 30 minutes only.
Delete files that are stored for over 45 days
Restrict producers from reading files they don’t own.
The development timeline for the application is short, and you need to ensure that the solution has a low maintenance overhead.
Which strategies should you implement to satisfy the requirements? (Choose two.)
A. Create an object lifecycle configuration to delete Cloud Storage objects after 45 days of storage.
B. Generate signed URLs to give limited-time access for producers to store objects.
C. Set up an SFTP server on a Compute Engine instance and create user accounts for each producer.
D. Deploy a Cloud function that triggers a countdown timer of 45 days and deletes the expired objects.
E. Create a script written in Python that loops through all objects inside a Cloud Storage bucket and deletes objects that are 45 days old.
A. Create an object lifecycle configuration to delete Cloud Storage objects after 45 days of storage.
B. Generate signed URLs to give limited-time access for producers to store objects.
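Both winning strategies can be sketched with gsutil; the bucket, object, and key-file names are placeholders, and signurl assumes you have a service-account private key:

```shell
# 45-day deletion rule, applied bucket-wide
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 45}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-bucket

# Signed URL letting a producer PUT one object, valid for 30 minutes
gsutil signurl -d 30m -m PUT key.json gs://my-bucket/producer-object
```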
You are currently investigating an issue that requires you to access and analyze the audit logs of several GCP projects. You need to run custom queries against these logs for the past 60 days in the easiest way possible. You want to follow Google-recommended best practices.
What should you do?
A. In the Google Cloud Console, export the audit logs from Cloud Logging and select Cloud Storage as the Sink destination. Create a bucket lifecycle rule to remove objects after 60 days.
B. Export the audit logs from Cloud Logging and select a BigQuery dataset as the Sink destination. Configure the table expiration to 60 days.
C. Go to Cloud Logging and select all projects in the search filter.
D. Configure a Cloud Function that will export all the logs from Cloud Logging to a Compute Engine instance. Delete the Compute Engine instance after 60 days.
B. Export the audit logs from Cloud Logging and select a BigQuery dataset as the Sink destination. Configure the table expiration to 60 days.
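A sketch of the sink plus expiration setup; project, dataset, and sink names are placeholders, and the sink's writer identity still needs BigQuery Data Editor on the dataset:

```shell
# Route audit log entries into a BigQuery dataset
gcloud logging sinks create audit-sink \
    bigquery.googleapis.com/projects/my-project/datasets/audit_logs \
    --log-filter='logName:"cloudaudit.googleapis.com"'

# Expire tables in the dataset after 60 days (5,184,000 seconds)
bq update --default_table_expiration 5184000 my-project:audit_logs
```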
Your company has hundreds of user identities in Microsoft Active Directory. Your company needs to retain Active Directory as the source of truth for user identities and authorization, and requires full control over the employees’ Google accounts for all Google services as well as your Google Cloud Platform (GCP) organization.
What should you do?
A. Require each employee to set up a Google account using the self-signup process. Mandate each employee to use their corporate email address and password.
B. Export the company’s users from the Microsoft Active Directory as a CSV file.
Import them into Google Cloud Identity via the Admin Console.
C. Utilize Google Cloud Directory Sync (GCDS) to synchronize users into Google Cloud Identity.
D. Write a custom script using the Cloud Identity APIs to synchronize users to Cloud Identity.
C. Utilize Google Cloud Directory Sync (GCDS) to synchronize users into Google Cloud Identity.
Your company is in the process of merging with another company that also uses GCP as its cloud infrastructure. Both companies manage hundreds of GCP projects and have their own billing accounts. Your company’s finance officer asked you to
consolidate the costs for both GCP Organizations into a single invoice and submit it by tomorrow.
What should you do?
A. Attach your Organization’s billing account to the projects of the other Organization.
B. Open a support case to Google to migrate the projects of the other company into your Organization. Link your billing account to your Organization.
C. Configure a third GCP Organization linked to a new billing account. Migrate the projects of both Organizations into the newly created Organization by creating a support case to Google. Configure the projects to use the newly created billing account.
D. Create a BigQuery dataset and configure both Organizations to export their billing data into the same dataset.
A. Attach your Organization’s billing account to the projects of the other Organization.
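Re-linking a project to your billing account is one command per project; the project ID and billing-account ID below are placeholders, and this assumes you are a Billing Account User on the target account:

```shell
# Link a project from the other Organization to your billing account
gcloud beta billing projects link their-project-id \
    --billing-account=000000-AAAAAA-BBBBBB
```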
You have a Google Cloud Platform (GCP) project in your organization that is used for managing confidential files and documents. There is a need to delegate the management of buckets and files in Cloud Storage to your co-workers. You want to follow Google-recommended practices.
Which of the following IAM roles should you grant to your co-workers?
A. Storage Object Creator
B. Storage Object Admin
C. Storage Admin
D. Project Editor
C. Storage Admin
In your organization, employees pay for their Google Cloud Platform projects using their personal credit cards, which will be refunded by the finance team at the end of each month. Your management team decided to centralize all projects under a new single billing account.
What should you do?
A. In the GCP Console, navigate to the Resource Manager section and move all projects to the root Organization.
B. Using the GCP Console, create a new billing account and set up a payment method. Afterward, associate all of the projects in this newly created billing account.
C. Create a support ticket with Google Support and be ready for their call when they ask to share the corporate credit card details over the phone.
D. Send an email to cloud-billing@google.com detailing your bank account information. Afterward, request a corporate billing account for your organization.
A. In the GCP Console, navigate to the Resource Manager section and move all projects to the root Organization.
Your mobile app development company uses G Suite to run your regular daily communication and team collaboration. You need to give some of these G Suite users access to a newly created GCP project.
What should you do?
A. Create a Google group called gcp-console-users@tutorialsdojo.com. Wait for Google Cloud to automatically grant the permissions needed to access the project once users join the newly created group.
B. Generate a CSV file that contains a list of users. Utilize the gcloud tool to convert the CSV into Google Cloud accounts.
C. Go to the IAM page and grant the G Suite email addresses with appropriate IAM roles to access the project.
D. Activate the Cloud Identity API in the GCP Console for your domain.
C. Go to the IAM page and grant the G Suite email addresses with appropriate IAM roles to access the project.
You are using Cloud SDK to interact with Google Cloud services. You have two GCP accounts and you need to create new Compute Engine instances on each account using the command-line interface. The first account runs on the us-west1 region and zone while the other runs on us-central1.
What should you do?
A. Set up two configurations and activate both of them using the gcloud config configurations activate [CONFIG_NAME] command. Launch the Compute Engine instances for both the accounts simultaneously using the gcloud compute instances start command.
B. Set up two configurations with the appropriate properties by running the gcloud config configurations command. Issue the gcloud compute instances start command to create the instances.
C. Set up two configurations with the appropriate properties by running the gcloud config configurations command. Issue the gcloud config configurations activate [CONFIG_NAME] command to switch accounts when running the necessary commands to create the Compute Engine instances.
D. Set up two configurations and activate both of them using the gcloud config configurations activate [CONFIG_NAME] command. Create the instances for both the accounts at the same time using the gcloud config list command.
C. Set up two configurations with the appropriate properties by running the gcloud config configurations command. Issue the gcloud config configurations activate [CONFIG_NAME] command to switch accounts when running the necessary commands to create the Compute Engine instances.
You are working for a startup that wants to track the operational costs of its cloud resources. The startup has three separate projects on the Google Cloud Platform. You need to analyze your cost estimates on a daily and monthly basis as well as by service
type across all projects for the next six months. You also want to use standard query syntax for cost analysis.
What should you do?
A. Enable billing data export on your Cloud Billing Account. Export your billing to a Cloud Storage bucket and import it into Cloud Bigtable to conduct the analysis.
B. Enable billing data export on your Cloud Billing Account. Export your billing report to a BigQuery dataset and write SQL queries for analysis.
C. Enable billing data export on your Cloud Billing Account. Export your billing report to a Cloud Storage bucket and import it into Google Sheets to conduct the analysis.
D. Enable billing data export on your Cloud Billing Account. Export your billing transactions to a JSON file, and produce a summary report using a desktop tool.
B. Enable billing data export on your Cloud Billing Account. Export your billing report to a BigQuery dataset and write SQL queries for analysis.
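Once the export is flowing, the analysis is plain SQL. A sketch of a daily per-service cost query; the dataset name and the `XXXXXX` billing-account suffix in the table name are placeholders for your configured export:

```shell
bq query --use_legacy_sql=false '
SELECT
  project.id             AS project_id,
  service.description    AS service,
  DATE(usage_start_time) AS usage_day,
  ROUND(SUM(cost), 2)    AS daily_cost
FROM `my-project.billing_export.gcp_billing_export_v1_XXXXXX`
GROUP BY project_id, service, usage_day
ORDER BY usage_day'
```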
Your company’s finance team needs to back up data on a Cloud Storage bucket for disaster recovery purposes. You want to comply with Google’s recommended practices in implementing the solution for this task.
Which storage class do you think would be the best option?
A. Coldline Storage
B. Archive Storage
C. Multi-Regional Storage
D. Nearline Storage
B. Archive Storage
A company hires you to set up its test and production VMs on Google Compute Engine. You have to ensure that all the production virtual machines are located on a separate subnet from the test workloads. Moreover, you need to configure the VMs in such a
way that they can communicate using Internal IP addresses in a VPC without the need to create additional custom routes.
How should you set up your VPC to comply with these requirements?
A. Set up a custom mode VPC configured with 2 subnets on different regions. Configure the subnets to have different CIDR ranges.
B. Set up 2 custom mode VPCs, with a single subnet on each one. Create each subnet in the same region and with the same CIDR range.
C. Set up 2 custom mode VPCs, each with a single subnet and similar CIDR ranges. Create each subnet in a different region.
D. Set up a custom mode VPC configured with 2 subnets on the same region. Configure the subnets with a similar CIDR range.
A. Set up a custom mode VPC configured with 2 subnets on different regions. Configure the subnets to have different CIDR ranges.
Your team is maintaining an application that receives SSL/TLS-encrypted traffic on port 443. Your customers from various parts of the globe are reporting latency issues when accessing your application.
What should you do?
A. Use an External HTTP(S) Load Balancer in front of your application.
B. Use an SSL Proxy Load Balancer in front of your application.
C. Use a TCP Proxy in front of your application.
D. Use an Internal HTTP(S) Load Balancer in front of your application.
B. Use an SSL Proxy Load Balancer in front of your application.
It’s the end of the quarter and you are required to generate a report for data found in your BigQuery dataset. You want to execute a query in BigQuery, but you suspect it will return a large chunk of records. You need to find out how much your query would cost
before running it, especially since you are using on-demand pricing.
What should you do?
A. Switch to Flat-Rate pricing and run the query. Once done, change it back to on-demand pricing to avoid any additional cost.
B. Execute a SELECT COUNT (*) query against your BigQuery dataset to get an idea of the total number of records your query will look through. Convert the total number of records to dollars using the Pricing Calculator.
C. Use Cloud Shell to execute a dry run query to determine the number of bytes read for the query. Utilize the Pricing Calculator to convert that bytes estimate to dollars.
D. Utilize Cloud Shell to execute a dry run query to determine the number of bytes returned by your query. Utilize the Pricing Calculator to convert that bytes
estimate to dollars.
C. Use Cloud Shell to execute a dry run query to determine the number of bytes read for the query. Utilize the Pricing Calculator to convert that bytes estimate to dollars.
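A dry run is a single flag on the bq tool; the table and query below are placeholders:

```shell
# --dry_run validates the query and reports the bytes it would read,
# without actually running it or incurring any cost
bq query --use_legacy_sql=false --dry_run \
  'SELECT name, total FROM `my-project.sales.q4_orders`'
```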
Your company runs hundreds of projects on the Google Cloud Platform. You are tasked to store the company’s audit log files for three years for compliance purposes. You need to implement a solution to store these audit logs in a cost-effective manner.
What should you do?
A. Develop a custom script written in Python that utilizes the Logging API to duplicate the logs generated by Operations Suite to BigQuery.
B. Create a Cloud Storage bucket using a Coldline storage class. Then on the Logs Router, create a sink. Choose Cloud Storage as a sink service and select the bucket you previously created.
C. On the Logs Router, create a sink with Cloud BigQuery as a destination to save audit logs.
D. Configure all resources to be a publisher on a Cloud Pub/Sub topic and publish all the message logs received from the topic to Cloud SQL to store the logs.
B. Create a Cloud Storage bucket using a Coldline storage class. Then on the Logs Router, create a sink. Choose Cloud Storage as a sink service and select the bucket you previously created.
You deploy a web application running on a Compute Engine instance in the asia-northeast1-a zone. You want to eliminate the risk of possible downtime due to the failure of a single Compute Engine zone while minimizing costs.
What should you do?
A. Deploy another instance in asia-northeast1-b. Balance the load in
asia-northeast1-a, and asia-northeast1-b using an Internal Load Balancer (ILB).
B. Deploy multiple instances on asia-northeast1-a, asia-northeast1-b, and asia-northeast1-c. Balance the load across all zones using an Internal Load Balancer (ILB).
C. Create an instance template and deploy a managed instance group in a single zone. Configure a health check to monitor the instances.
D. Create a snapshot schedule for your instance. Set up a Cloud Monitoring Alert to monitor the instance. Restore the instance using the snapshot when the instance goes down.
A. Deploy another instance in asia-northeast1-b. Balance the load in
asia-northeast1-a, and asia-northeast1-b using an Internal Load Balancer (ILB).
Your company stores all of its container images on Google Container Registry in a project called td-devops. The development team created a Google Kubernetes Engine (GKE) cluster on a separate project and needs to download container images from the
td-devops project.
What should you do to ensure that Kubernetes can download the images from Container Registry securely?
A. In the Google Cloud Storage, configure the ACLs on each container image stored and provide read-write access to the service account used by the GKE nodes.
B. Generate a P12 key for a new service account. Use the generated key as an imagePullSecrets in Kubernetes to access the private registry.
C. Upon creating the GKE cluster, set the Access Scopes setting under Node Security to Allow Full Access to all Cloud APIs.
D. In the td-devops project, assign the Storage Object Viewer IAM role to the service account used by the GKE nodes.
D. In the td-devops project, assign the Storage Object Viewer IAM role to the service account used by the GKE nodes.
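The binding in the correct option is one command; the node service-account address is a placeholder:

```shell
# In td-devops, let the GKE nodes' service account read GCR image layers,
# which Container Registry stores in a Cloud Storage bucket
gcloud projects add-iam-policy-binding td-devops \
    --member="serviceAccount:gke-nodes@gke-project.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```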
You have designed a cloud solution that uses a wide variety of Google Cloud Platform Services. Your company agreed to use these cloud services but asked you to provide an estimated cost of running this cloud solution. You need to submit an estimate to properly forecast future expenses.
What should you do?
A. Deploy the solution on Google Cloud Platform. Leave the solution running for a week. Go to the GCP console and navigate to the Billing Report page. Multiply the 1-week cost by four to determine the monthly costs.
B. Provide a list of GCP services of your cloud solution and check its pricing details on the GCP products pricing page. Create a Google Sheet with a monthly estimate of GCP services cost.
C. Provide a list of GCP services of your cloud solution. Submit an email to GCP support with your GCP services list and ask them to estimate the monthly cost.
D. Provide a list of GCP services of your cloud solution. Use the GCP Pricing Calculator and input the necessary details to get an estimated monthly cost for each GCP product.
D. Provide a list of GCP services of your cloud solution. Use the GCP Pricing Calculator and input the necessary details to get an estimated monthly cost for each GCP product.
You built an application and deployed it to the Google Cloud Platform. This application needs to connect to a licensing server that you plan to host on Compute Engine. You configure the application to connect to the licensing server on the 10.146.0.17 IP
address. You intend to keep this setting intact to avoid manually reconfiguring the application.
What should you do?
A. Start the licensing server with an automatically generated ephemeral IP address. Afterward, promote it to a static external IP address set to 10.146.0.17.
B. Do not assign an IP while creating the licensing server on Compute Engine to automatically get an ephemeral internal IP address.
C. Using the Cloud Console, create a Compute Engine instance. Configure the External IP as a static IP address and set it to 10.146.0.17.
D. Using the Cloud Console, create a Compute Engine instance. Configure the Primary internal IP as a static internal IP address and set it to 10.146.0.17.
D. Using the Cloud Console, create a Compute Engine instance. Configure the Primary internal IP as a static internal IP address and set it to 10.146.0.17.
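The CLI equivalent, assuming the address falls in the subnet's range (region, zone, and subnet are placeholders):

```shell
# Reserve the specific internal address
gcloud compute addresses create licensing-server-ip \
    --region=asia-northeast1 \
    --subnet=default \
    --addresses=10.146.0.17

# Create the instance with that address as its primary internal IP
gcloud compute instances create licensing-server \
    --zone=asia-northeast1-a \
    --private-network-ip=10.146.0.17
```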
You have been assigned to launch three new Compute Engine instances in your test environment in GCP. These servers should accept incoming TCP traffic on port 8080 and can be managed using RDP. You want to follow Google-recommended best
practices in configuring an instance firewall.
What should you do?
A. Create an egress firewall rule using gcloud compute firewall-rules create command and specify the network tags and ports.
B. Create a network tag for the three instances. Create an ingress firewall rule that allows TCP traffic in ports 8080 and 3389 then specify the instance’s network tag as target tags.
C. Add a network tag for the three instances. Create an ingress firewall rule that allows UDP traffic in ports 8080 and 636 then specify the instance’s network tag as target tags.
D. Create a firewall rule to allow incoming TCP traffic in ports 8080 and 3389 then leave the firewall target to default.
B. Create a network tag for the three instances. Create an ingress firewall rule that allows TCP traffic in ports 8080 and 3389, then specify the instance’s network tag as target tags.
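A sketch of the tag-scoped rule; the rule, tag, instance, and zone names are placeholders:

```shell
# Ingress rule for app traffic (8080) and RDP (3389), scoped by network tag
gcloud compute firewall-rules create allow-app-rdp \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080,tcp:3389 \
    --target-tags=test-servers

# Tag each instance so the rule applies to it (repeat per instance)
gcloud compute instances add-tags test-server-1 \
    --tags=test-servers --zone=us-central1-a
```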
You plan to implement new changes to a previous production deployment using the Google Cloud Deployment Manager. You want to achieve this without any resource downtime during the deployment.
What command should you utilize to accomplish this?
A. gcloud deployment-manager deployments update --config {deployment-config-path}
B. gcloud deployment-manager deployments create --properties {deployment-config-path}
C. gcloud deployment-manager resources describe {resource-name} --deployment {deployment-name}
D. gcloud deployment-manager resources list --deployment {deployment-name}
A. gcloud deployment-manager deployments update --config {deployment-config-path}
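In practice you can stage the update safely before committing it; deployment and config names below are placeholders:

```shell
# Preview the changes first, then commit the previewed update in place
gcloud deployment-manager deployments update my-deployment \
    --config config.yaml \
    --preview
gcloud deployment-manager deployments update my-deployment
```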
Your company has a 5 TB file in Parquet format stored in Google Cloud Storage bucket. A team of analysts, who are only proficient in SQL, needs to temporarily access these files to run ad-hoc queries. You need a cost-effective solution to fulfill their request as soon as possible.
What should you do?
A. Create external tables in BigQuery. Use the Cloud Storage URL as a data source.
B. Import the data to Memorystore to provide quick access to Parquet data in the Cloud Storage bucket.
C. Load the data in BigTable. Give the analysts the necessary IAM roles to run SQL queries.
D. Load the data in a new BigQuery table. Use the bq load command, specify PARQUET using the --source_format flag, and include a Cloud Storage URI.
A. Create external tables in BigQuery. Use the Cloud Storage URL as a data source.
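A sketch of an external table over the Parquet files; the bucket path, dataset, and table names are placeholders:

```shell
# Build an external table definition over the Parquet files in Cloud Storage
bq mkdef --source_format=PARQUET "gs://my-bucket/data/*.parquet" > table_def.json

# Create the external table; queries read directly from Cloud Storage,
# so nothing is loaded or duplicated into BigQuery storage
bq mk --external_table_definition=table_def.json analytics.parquet_files
```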
Your company wants to set up a new Virtual Private Cloud (VPC) behind a firewall to secure the data egress. You have to filter the traffic flowing out of the VPC. You need to configure the VPC to have the least possible number of open egress ports.
What should you do?
A. Create a firewall rule that blocks all egress traffic with a high-priority number of 200. Create another firewall rule that allows egress traffic for specific ports needed with a low-priority number of 65534.
B. Create a firewall rule that allows inbound traffic to specific ports needed and set its priority to 1000. Remove both the implied allow egress rule and the implied deny ingress rule.
C. Create a firewall rule that blocks all egress traffic with a low-priority number of 65534. Create another firewall rule that allows egress traffic for specific ports needed with a high-priority number set to 200.
D. Create a firewall rule that blocks all egress traffic and allows specific ports with the same priority number.
C. Create a firewall rule that blocks all egress traffic with a low-priority number of 65534. Create another firewall rule that allows egress traffic for specific ports needed with a high-priority number set to 200.
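The winning combination can be sketched as two rules; network and rule names are placeholders (remember that in GCP a lower priority number wins):

```shell
# Catch-all deny at the lowest priority (65534 = evaluated last)
gcloud compute firewall-rules create deny-all-egress \
    --network=my-vpc --direction=EGRESS --action=DENY \
    --rules=all --destination-ranges=0.0.0.0/0 --priority=65534

# Higher-priority (lower number) allow for the ports you actually need
gcloud compute firewall-rules create allow-egress-https \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:443 --destination-ranges=0.0.0.0/0 --priority=200
```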
You have installed the gcloud command-line interface (CLI) on your Windows machine and have successfully authenticated it with your corporate Google Account. You are working on a project in which resources are mostly deployed in the asia-southeast1-a zone. You want to deploy instances there, but you don't want to define the zone every time you run a gcloud command.
What should you do?
A. On your CLI, set the asia-southeast1-a as the default compute zone by using the gcloud config set zone ZONE command.
B. On your Windows machine, go to the C:\Windows\System32\drivers\etc directory. Open your host file and add this line: asia-southeast1-a compute/zone.
C. On your CLI, set the default compute zone by running the gcloud init command.
D. On your CLI, set asia-southeast1-a as the default compute zone by using the gcloud config set compute/zone ZONE command.
D. On your CLI, set asia-southeast1-a as the default compute zone by using the gcloud config set compute/zone ZONE command.
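Example, using the zone from the scenario (setting the matching region as well is optional but convenient):

```shell
# Set asia-southeast1-a as the default compute zone so gcloud commands
# no longer need an explicit --zone flag.
gcloud config set compute/zone asia-southeast1-a
gcloud config set compute/region asia-southeast1

# Verify the active configuration.
gcloud config get-value compute/zone
```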
Your team is testing a new application hosted on a general-purpose Compute Engine instance that uses Zonal SSD Persistent Disk and Google Cloud Storage (GCS) to process and store data. Upon testing, you found out that the application encounters
excessive disk read throttling. You have to provide the maximum disk throughput to improve performance in a cost-effective manner.
What should you do?
A. Create a disk partition on the Zonal SSD Persistent Disk.
B. Increase the number of CPU cores of the instance.
C. Use a Local SSD instead of Zonal SSD Persistent Disk.
D. Use a Regional SSD Persistent Disk instead of Zonal SSD Persistent Disk.
C. Use a Local SSD instead of Zonal SSD Persistent Disk.
You have a technical report stored in an object in Google Cloud Storage (GCS) that needs to be evaluated by an external auditing firm. The report contains sensitive information, so you decided to limit the object’s access to four hours only. The auditing
firm does not own a Google account where you can delegate the necessary privileges to access the object. You must implement a secure approach to do this task and have it done with the fewest possible steps.
What should you do?
A. Set up the storage bucket to host a static website and submit the object’s URL to the auditing firm. Manually delete the object from the Cloud Storage bucket after four hours.
B. Provision a new bucket dedicated for the auditing firm. Move the object to the new bucket. Create an object lifecycle policy to remove the object after four hours.
C. Generate a signed URL and specify the expiration to four hours. Share the signed URL with the auditing firm.
D. Edit the object’s permission to allow allUsers access. Add an object lifecycle policy to delete the object after four hours.
C. Generate a signed URL and specify the expiration to four hours. Share the signed URL with the auditing firm.
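A signed URL grants time-limited access without requiring a Google account. A minimal sketch, assuming a service-account key file key.json and an object gs://reports/audit.pdf (both hypothetical names):

```shell
# Generate a V4 signed URL valid for four hours and share the
# resulting URL with the auditing firm.
gsutil signurl -d 4h key.json gs://reports/audit.pdf
```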
A new auditor joins your organization and you need to add him to your team’s Google Cloud project. The auditor needs to have read access permissions but should be restricted from modifying resources in the project.
How should you grant the necessary permissions to the new auditor?
A. Build a custom IAM role with view-only project permissions and attach it to the user’s account.
B. Build a custom IAM role with view-only service permissions and attach it to the user’s account.
C. Use the built-in IAM project Viewer role to grant the required permissions. Attach this role to the user’s account.
D. Use the built-in existing IAM service Viewer role to grant the required permissions. Attach this role to the user’s account.
C. Use the built-in IAM project Viewer role to grant the required permissions. Attach this role to the user’s account.
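Granting the built-in Viewer role can be sketched as follows (the project ID and auditor address are placeholders):

```shell
# Attach the read-only project Viewer role to the auditor's account.
gcloud projects add-iam-policy-binding my-project \
    --member="user:auditor@example.com" \
    --role="roles/viewer"
```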
Your company just deployed a major version release of its web application to Google App Engine. A few hours later, users started reporting a critical issue with the latest release. You decided to quickly revert back to the previous version of the application
while your team is investigating the issue.
What should you do?
A. Deploy the working version of your web app as a separate application. Go to App Engine settings and configure the application to route 100% of the traffic to the original version.
B. Use the Cloud Console to go to the App Engine Versions page. Choose the previous web application version to split the traffic between the current and previous versions.
C. On the Cloud Shell, execute the command gcloud components restore.
D. Use the Cloud Console to go to the App Engine Versions page. Reroute 100% of the traffic to the previous working version of the application.
D. Use the Cloud Console to go to the App Engine Versions page. Reroute 100% of the traffic to the previous working version of the application.
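The same rollback can be done from the CLI. A sketch, assuming the previous working version of the default service is named v1 (a placeholder):

```shell
# Route 100% of traffic back to the previous version.
gcloud app services set-traffic default --splits=v1=1
```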
A company has an application that uses Cloud Spanner as its backend database. After a few months of monitoring your Cloud Spanner resource, you noticed that the incoming traffic of the application has a predictable pattern. You need to set up
automatic scaling that will scale up or scale down your Spanner nodes based on the incoming traffic.
What should you do?
A. Set up an alerting policy on Cloud Monitoring that sends an email alert to on-call Site Reliability Engineers (SRE) when the Cloud Spanner CPU metric exceeds the desired threshold. The SREs shall scale the resources up or down appropriately.
B. Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.
C. Set up an alerting policy on Cloud Monitoring that sends an email alert to Google Cloud Support email when the Cloud Spanner CPU metric exceeds the desired threshold. The Google Support team shall scale the resources up or down appropriately.
D. Build a cron job that executes based on a schedule to review Cloud Monitoring metrics, and then resize the Spanner resources appropriately.
B. Set up an alerting policy on Cloud Monitoring that sends an alert to a webhook when the Cloud Spanner CPU metric is over or under your desired threshold. Create a Cloud Function that listens to this HTTP webhook and resizes Spanner resources appropriately.
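The resize step that the Cloud Function performs boils down to a Spanner Admin call. Its CLI equivalent can be sketched as (instance name and node count are placeholders):

```shell
# Scale the Spanner instance up or down by changing its node count.
gcloud spanner instances update my-instance --nodes=5
```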
You just finished building an application and you deployed it on a Google Kubernetes Engine (GKE) cluster in a custom-mode VPC in the us-west1 region. The application exposes a TCP endpoint backed with several replicas of the application. You are running another Compute Engine instance located in the same region as your
cluster, but in a different custom-mode VPC called td-compute-network. The CIDR ranges of the two VPCs do not overlap. You have to establish a connection between your Compute Engine instance and the application on GKE. You want to reduce the amount of work required to accomplish the task. What should
you do?
A.
1. Provision a Service of type LoadBalancer that uses the application’s Pods as its backend.
2. Set the externalTrafficPolicy value to Cluster in the Service configuration file.
3. Configure the Compute Engine instance to use the IP address of the load balancer that you just created.
B.
1. Provision a Service of type LoadBalancer that uses the application’s Pods as its backend.
2. Apply a Cloud Armor security policy to the load balancer to whitelist the internal IP addresses of the instances found in the managed instance group.
3. Connect the two VPCs using VPC Peering.
4. Configure the Compute Engine instance to use the IP address of the load balancer that you just created.
C.
1. Provision a Service of type NodePort that uses the application’s Pods as its backend.
2. Build a new Compute Engine instance named proxy with two network interfaces, each assigned to a VPC.
3. Manage iptables rules on the new instance to forward traffic coming from td-compute-network to the GKE nodes.
4. Set up your Compute Engine instance to use the address of proxy in td-compute-network as its endpoint.
D.
1. Provision a Service of type LoadBalancer that uses the application’s Pods as its backend.
2. Set the annotation cloud.google.com/load-balancer-type: "Internal" in the Service’s metadata.
3. Connect the two VPCs using VPC Peering.
4. Configure the Compute Engine instance to use the IP address of the load balancer that you just created.
D.
1. Provision a Service of type LoadBalancer that uses the application’s Pods as its backend.
2. Set the annotation cloud.google.com/load-balancer-type: "Internal" in the Service’s metadata.
3. Connect the two VPCs using VPC Peering.
4. Configure the Compute Engine instance to use the IP address of the load balancer that you just created.
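Step 2 of the correct answer, the internal load balancer Service, can be sketched like this, assuming the application’s Pods carry the label app=my-app and listen on TCP port 8080 (placeholder values):

```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app-ilb
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
EOF
```

The annotation makes GKE provision an internal TCP load balancer, whose private IP is reachable from the peered td-compute-network VPC.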
You are working as a Cloud Security Officer in your company. You are asked to log all read requests and activities on your Cloud Storage bucket where you store all of the company’s sensitive data. You need to enable this feature as soon as possible because
this is also a compliance requirement that will be checked on the next audit.
What should you do?
A. Enable Data Access audit logs for Cloud Storage.
B. Enable the Identity-Aware Proxy feature on the Cloud Storage bucket.
C. Enable Certificate Authority (CA) Service on the bucket.
D. Enable Object Versioning on the bucket.
A. Enable Data Access audit logs for Cloud Storage
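Data Access audit logs are enabled through the auditConfigs section of the project’s IAM policy. A sketch, with my-project as a placeholder project ID:

```shell
# Download the current IAM policy.
gcloud projects get-iam-policy my-project --format=json > policy.json

# Edit policy.json and add an auditConfigs entry such as:
#   "auditConfigs": [{
#     "service": "storage.googleapis.com",
#     "auditLogConfigs": [{"logType": "ADMIN_READ"},
#                         {"logType": "DATA_READ"},
#                         {"logType": "DATA_WRITE"}]}]

# Upload the modified policy to enable the logs.
gcloud projects set-iam-policy my-project policy.json
```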
A senior developer in your company is assigned to manage and create service accounts for your company’s Google Cloud projects. You have to make sure that the assigned personnel is granted the least permissions to manage the projects.
What should you do?
A. Grant the roles/iam.roleAdmin role to the senior developer’s account.
B. Grant the roles/iam.serviceAccountUser role to the senior developer’s account.
C. Grant the roles/iam.serviceAccountAdmin role to the senior developer’s account.
D. Grant the roles/iam.serviceAccountKeyAdmin role to the senior developer’s account.
C. Grant the roles/iam.serviceAccountAdmin role to the senior developer’s account.
Your company is having its yearly audit. You need to grant access to a group of auditors who want to view the folders and project hierarchy on your company’s GCP account. You want to follow Google-recommended best practices.
What should you do?
A. Grant roles/browser role to the auditors individually.
B. Create a group for the auditors. Grant roles/viewer role to the group.
C. Create a group for the auditors. Grant roles/browser role to the group.
D. Grant roles/viewer role to the auditors individually.
C. Create a group for the auditors. Grant roles/browser role to the group.
All employees in your organization have a Google account. Your operations team
needs to manage over a hundred Compute Engine instances. The members of this team must be provided only with administrative access to the VM instances. Moreover, the security team wants to audit instance logins and ensure that the provision of credentials is operationally efficient.
What should you do?
A. Create a new SSH key pair. Issue the private key to each member of the operations team. Configure the public key as a project-wide public SSH key in your project. Lastly, allow project-wide public SSH keys on each instance.
B. Require each member of the team to generate a new SSH key pair. Have them send their public key to you. Utilize a configuration management tool to deploy those SSH keys on each instance.
C. Create a new SSH key pair. Issue the private key to each member of the team. Configure the public key in the metadata of each instance.
D. Require each member of the team to generate a new SSH key pair and to add the public key to their respective Google account. Then grant the compute.osAdminLogin role to the corresponding Google group of the operations team.
D. Require each member of the team to generate a new SSH key pair and to add the public key to their respective Google account. Then grant the compute.osAdminLogin role to the corresponding Google group of the operations team.
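A sketch of the OS Login flow from the correct answer; the key path, group address, and project ID are placeholders:

```shell
# Each team member generates a key pair and registers the public key
# with their own Google account via OS Login.
ssh-keygen -t rsa -f ~/.ssh/gcp-oslogin
gcloud compute os-login ssh-keys add --key-file=~/.ssh/gcp-oslogin.pub

# Then grant the admin login role once, to the team's Google group.
gcloud projects add-iam-policy-binding my-project \
    --member="group:ops-team@example.com" \
    --role="roles/compute.osAdminLogin"
```

With OS Login, logins are tied to Google identities, which gives the security team auditable access records without per-instance key management.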
Your company is having its yearly business audit. Your external editor needs to review the Data Access and Access Transparency audit logs of your Google Cloud Platform account. Your company also wants to keep a copy of these logs as a reference for the
next audit. You want to follow Google-recommended practices on granting Cloud IAM roles.
What should you do?
A. Grant the external auditor a custom role that has logging.logs.list and logging.logServices.list permissions. Create a log sink and export the logs to BigQuery.
B. Grant the external auditor the Project Viewer IAM role. Create a log sink and export the logs to BigQuery.
C. Grant the external auditor the roles/logging.viewer IAM role. Create a log sink and export the logs to Cloud Storage.
D. Grant the external auditor the roles/logging.privateLogViewer IAM role. Create a log sink and export the logs to Cloud Storage.
D. Grant the external auditor the roles/logging.privateLogViewer IAM role. Create a log sink and export the logs to Cloud Storage.
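Both steps can be sketched with gcloud; the auditor address, project ID, bucket name, and filter are placeholders:

```shell
# Grant access to Data Access and Access Transparency logs.
gcloud projects add-iam-policy-binding my-project \
    --member="user:auditor@example.com" \
    --role="roles/logging.privateLogViewer"

# Export a copy of the audit logs to a Cloud Storage bucket.
gcloud logging sinks create audit-log-sink \
    storage.googleapis.com/my-audit-logs-bucket \
    --log-filter='logName:"cloudaudit.googleapis.com"'
```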
You are running a group of Compute Engine instances on the Google Cloud Platform. You want to set up the necessary permissions to allow all of your instances to write data into a specific Cloud Storage bucket. You want to follow Google-recommended practices.
What should you do?
A. Using the GCP Console, create a service account with an IAM role of storage.objectCreator. Use it for your GCE instances to get write
permissions on the bucket.
B. Create an authentication request from your application to access Google API with https://www.googleapis.com/auth/compute as an access scope.
C. Create an authentication request from your application to access the Google API with https://www.googleapis.com/auth/devstorage.read_only as an
access scope.
D. Using the GCP Console, create a service account with an IAM role of storage.objectAdmin. Use it for your GCE instances to get write permissions on the bucket.
A. Using the GCP Console, create a service account with an IAM role of storage.objectCreator. Use it for your GCE instances to get write
permissions on the bucket.
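A sketch of the correct answer, scoping the write-only role to the bucket itself (service-account, project, and bucket names are placeholders):

```shell
# Create a dedicated service account for the instances.
gcloud iam service-accounts create bucket-writer \
    --display-name="Bucket writer"

# Grant it objectCreator on the target bucket only, following the
# principle of least privilege.
gsutil iam ch \
    serviceAccount:bucket-writer@my-project.iam.gserviceaccount.com:roles/storage.objectCreator \
    gs://my-bucket
```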
Your company just started using Google Cloud Platform to host their application. You are tasked to ensure that the finance department can only view the billing reports of all of the company’s GCP projects. You want to follow Google’s recommended best practices.
What should you do?
A. Create a group for the finance department. Grant the roles/billing.viewer role to the finance group.
B. Grant the roles/billing.user role to finance users individually.
C. Create a group for the finance department. Grant the roles/billing.user role to the finance group.
D. Grant the roles/billing.viewer role to finance users individually.
A. Create a group for the finance department. Grant the roles/billing.viewer role to the finance group.
Your Data Analytics team is requesting access to datasets found in BigQuery. You need to ensure that the team is only able to perform read operations on the datasets in BigQuery but they should be restricted from deleting them. You want to utilize a Google-recommended solution that follows best practices.
What should you do as an administrator?
A. Create a Google group and build a custom role with delete permissions removed. Grant the newly created custom role to the group.
B. Build a custom role and attach it to the accounts of the Data Analytics users.
C. Attach the roles/bigquery.dataEditor role to the user accounts of the Data Analytics team.
D. Attach the roles/bigquery.user role to the user accounts of the Data Analytics team.
D. Attach the roles/bigquery.user role to the user accounts of the Data Analytics team.
You developed a decoupled application that is set to be deployed on a Kubernetes cluster on Google Kubernetes Engine (GKE). You need to be able to run on high IOPS for the application’s high-performance computing and you also need to use disk snapshots as part of your disaster recovery strategy. You used the GCP Pricing
Calculator to generate a cost estimate and entered some information regarding your cluster, such as the number of nodes, average days, and average hours.
What should you do next?
A. Request for quotation from the GCP Cloud Support Team.
B. Tick the add GPUs option. Check the option to add the cost estimate for GKE cluster management.
C. Enter the number of Local SSDs you want to use. Check the option to add the cost estimate for GKE cluster management.
D. Enter the number of Local SSDs you want to use. Fill out Persistent Disk storage and snapshot storage fields.
D. Enter the number of Local SSDs you want to use. Fill out Persistent Disk storage and snapshot storage fields.
You are working on a web application that uses Cloud Datastore as a backend. You want to test the application and the Cloud Datastore integration locally using an Ubuntu machine that has Google Cloud SDK installed.
What should you do?
A. Install the Datastore emulator using apt-get install
cloud-datastore-emulator command.
B. Use the gcloud datastore export command to export all Datastore entities and save them in the Ubuntu machine.
C. Create a VM instance that uses Ubuntu in Google Compute Engine. Attach a Service Account with the necessary permissions to access Datastore.
D. Install the Google Cloud SDK on the Ubuntu Machine. Install the Datastore emulator using the gcloud components install command.
D. Install the Google Cloud SDK on the Ubuntu Machine. Install the Datastore emulator using the gcloud components install command.
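The emulator workflow from the correct answer can be sketched as:

```shell
# Install and start the Datastore emulator through the Cloud SDK.
gcloud components install cloud-datastore-emulator
gcloud beta emulators datastore start

# In another terminal, export the environment variables that point the
# application at the local emulator instead of the live service.
$(gcloud beta emulators datastore env-init)
```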
Your company regularly executes a batch job process hosted in an on-premises server which takes around 33 hours in total to complete. The batch job consists of smaller tasks that can be performed offline and can be restarted in case of process
interruption. You are assigned to migrate this workload to the Google Cloud Platform and implement a cost-effective solution.
What should you do?
A. Build an instance template configured to launch a Preemptible VM. Provision a managed instance group (MIG) from the template you just created. Adjust the Target CPU Utilization setting.
B. Move your workload to a Compute Engine instance. Start and stop the instance in the event of failure.
C. Use Google Kubernetes Engine (GKE) to build Preemptible nodes.
D. Use Compute Engine Preemptible VMs for your workload.
A. Build an instance template configured to launch a Preemptible VM. Provision a managed instance group (MIG) from the template you just created. Adjust the Target CPU Utilization setting.
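A sketch of option A; the machine type, zone, and autoscaling limits are placeholder values:

```shell
# Instance template that launches preemptible VMs.
gcloud compute instance-templates create batch-template \
    --machine-type=e2-standard-4 --preemptible

# Managed instance group built from the template.
gcloud compute instance-groups managed create batch-mig \
    --template=batch-template --size=1 --zone=us-central1-a

# Autoscaling driven by target CPU utilization.
gcloud compute instance-groups managed set-autoscaling batch-mig \
    --zone=us-central1-a --min-num-replicas=1 --max-num-replicas=10 \
    --target-cpu-utilization=0.75
```

Preemptible VMs cost far less than standard VMs, and the MIG recreates any instances that Compute Engine preempts, which suits restartable batch tasks.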
Your company has several applications that use Compute Engine and Cloud Storage services in GCP. You were assigned to set up a budget alert for the total Cloud Storage service cost incurred in all of your GCP projects. All of these projects are using the same billing account. You want to follow Google-recommended best practices.
What should you do?
A. Ensure that you are the Billing Account Administrator. Select the billing account and create the budget alert for each of the projects.
B. Ensure that you are the Billing Account User. Select the billing account and
create a budget. Select all projects and the Cloud Storage service as the budget scope and finally, create the budget alert.
C. Ensure that you are the Billing Account Administrator. Select the billing account and create a budget. Select all projects and the Cloud Storage service as the budget scope and finally, create the budget alert.
D. Ensure that you are the Billing Account User. Select the billing account and create a budget. Set the budget scope as default then create the budget alert.
C. Ensure that you are the Billing Account Administrator. Select the billing account and create a budget. Select all projects and the Cloud Storage service as the budget scope and finally, create the budget alert.
You are running VMs that are currently reaching the maximum capacity on your on-premises data center. You decided to extend your data center infrastructure to Google Cloud to accommodate new workloads. You have to ensure that the VMs that you provisioned in GCP can communicate directly with on-premises resources via a private IP range.
What should you do?
A. Set up Cloud VPN between your on-premises network and a VPC network through an IPsec VPN connection.
B. Build a custom-mode VPC. Set up VPC Network Peering between your on-premises network and your newly created VPC to establish a connection through a private IP range.
C. Create a VPC on Google Cloud and configure it as a host for a Shared VPC.
D. Provision virtual machines on your on-premises and Google Cloud VPC networks that will serve as bastion hosts. Configure the VMs as proxy servers using public IP addresses.
A. Set up Cloud VPN between your on-premises network and a VPC network through an IPsec VPN connection.
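A partial sketch of a Classic VPN setup; the VPC name, region, peer IP, and shared secret are placeholders, and a full setup also requires a reserved static IP, ESP/UDP forwarding rules, and routes to the on-premises ranges:

```shell
# Gateway on the Google Cloud side of the IPsec connection.
gcloud compute target-vpn-gateways create on-prem-gateway \
    --network=my-vpc --region=us-central1

# IPsec tunnel to the on-premises VPN device.
gcloud compute vpn-tunnels create on-prem-tunnel \
    --region=us-central1 --target-vpn-gateway=on-prem-gateway \
    --peer-address=203.0.113.10 --shared-secret=example-secret \
    --ike-version=2
```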