ACE-T3 Flashcards

1
Q
You want to find a list of regions and the prebuilt images offered by Google Compute Engine. Which commands should you execute to retrieve this information?
A - gcloud regions list.
      gcloud images list.
B - gcloud compute regions list
      gcloud images list.
C - gcloud regions list
     gcloud compute images list.
D - gcloud compute regions list
     gcloud compute images list.
A

D

Both commands correctly retrieve the regions and prebuilt images offered by Google Compute Engine.

Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list

Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list

2
Q

Your company has an App Engine application that needs to store stateful data in a proper storage service. Your data is non-relational data. You do not expect the database size to grow beyond 10 GB and you need to have the ability to scale down to zero to avoid unnecessary costs. Which storage service should you use?

A - Cloud Dataproc
B - Cloud Datastore.
C - Cloud Bigtable.
D - Cloud SQL.

A

B

Cloud Datastore is a highly-scalable NoSQL database. Cloud Datastore scales seamlessly and automatically with your data, allowing applications to maintain high performance as they receive more traffic; automatically scales back when the traffic reduces.

3
Q

You work for a leading retail platform that enables its retailers to sell their items to over 200 million users worldwide. You persist all analytics data captured during user navigation to BigQuery. A business analyst wants to run a query to identify products that were popular with buyers in the recent thanksgiving sale. The analyst understands the query needs to iterate through billions of rows to fetch the required information but is not sure of the costs involved in the on-demand pricing model, and has asked you to help estimate the query cost. What should you do?

A - Run the query using bq with the --dry_run flag to estimate the number of bytes returned by the query. Make use of the pricing calculator to estimate the query cost.
B - Run the query using bq with the --dry_run flag to estimate the number of bytes read by the query. Make use of the pricing calculator to estimate the query cost.
C - Execute the query using bq to estimate the number of rows returned by the query. Make use of the pricing calculator to estimate the query cost.
D - Switch to BigQuery flat-rate pricing. Coordinate with the analyst to run the query while on flat-rate pricing and switch back to on-demand pricing.

A

B
BigQuery pricing is based on the number of bytes processed/read. Under on-demand pricing, BigQuery charges for queries by using one metric: the number of bytes processed (also referred to as bytes read). You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Google Drive, or Cloud Bigtable. On-demand pricing is based solely on usage.

Ref: https://cloud.google.com/bigquery/pricing
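
For reference, the dry run can be done from the CLI roughly as follows; the dataset, table and column names below are placeholders, not anything given in the question:

bq query --use_legacy_sql=false --dry_run 'SELECT product_id, COUNT(*) AS purchases FROM analytics.user_events WHERE event = "purchase" GROUP BY product_id'

The command reports how many bytes the query would process without actually running it; that estimate can then be fed into the pricing calculator.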

4
Q

A recent reorganization in your company has seen the creation of a new data custodian team – responsible for managing data in all storage locations. Your production GCP project uses buckets in Cloud Storage, and you need to delegate control to the new team to manage objects and buckets in your GCP project. What role should you grant them?

A - Grant the data custodian team Project Editor IAM role.
B - Grant the data custodian team Storage Object Admin IAM role.
C - Grant the data custodian team Storage Admin IAM role.
D - Grant the data custodian team Project Editor IAM role.

A

C

Grant the data custodian team Storage Admin IAM role.

This role grants full control of buckets and objects. When applied to an individual bucket, control applies only to the specified bucket and objects within the bucket.

Ref: https://cloud.google.com/iam/docs/understanding-roles#storage-roles

5
Q

Your company produces documentary videos for a reputed television channel and stores its videos in Google Cloud Storage for long term archival. Videos older than 90 days are accessed only in exceptional circumstances and videos older than one year are no longer needed. How should you optimise the storage to reduce costs?

A - Use a Cloud Function to rewrite the storage class to Coldline for objects older than 90 days. Use another Cloud Function to delete objects older than 365 days from Coldline Storage Class.
B - Use a Cloud Function to rewrite the storage class to Coldline for objects older than 90 days. Use another Cloud Function to delete objects older than 275 days from Coldline Storage Class.
C - Configure a lifecycle rule to transition objects older than 90 days to Coldline Storage Class. Configure another lifecycle rule to delete objects older than 275 days from Coldline Storage Class.
D - Configure a lifecycle rule to transition objects older than 90 days to Coldline Storage Class. Configure another lifecycle rule to delete objects older than 365 days from Coldline Storage Class.
A
D
Object Lifecycle Management does not rewrite an object when changing its storage class. When an object is transitioned to Nearline Storage, Coldline Storage, or Archive Storage using the SetStorageClass feature, any subsequent early deletion and associated charges are based on the original creation time of the object, regardless of when the storage class changed.
6
Q

Your company has migrated most of the data center VMs to Google Compute Engine. The remaining VMs in the data center host legacy applications that are due to be decommissioned soon and your company has decided to retain them in the datacenter. Due to a change in the business operational model, you need to introduce changes to one of the legacy applications to read files from Google Cloud Storage. However, your data center does not have access to the internet and your company doesn’t want to invest in setting up internet access as the data center is due to be turned off soon. Your data center has a partner interconnect to GCP. You wish to route traffic from your datacenter to Google Storage through partner interconnect. What should you do?

A - In the following example, the on-premises network is connected to a VPC network through a Cloud VPN tunnel. Traffic from on-premises hosts to Google APIs travels through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent through a route that uses the default internet gateway as its next hop. The next hop allows traffic to leave the VPC network and be delivered to restricted.googleapis.com (199.36.153.4/30).

A

A (there is only one answer)

  1. In the on-premises DNS configuration, map *.googleapis.com to restricted.googleapis.com, which resolves to 199.36.153.4/30.
  2. Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the Cloud VPN tunnel.
  3. Add a custom static route to the VPC network to direct traffic with the destination 199.36.153.4/30 to the default internet gateway (a sketch of this command follows the list).
  4. Create a Cloud DNS managed private zone for *.googleapis.com that maps to 199.36.153.4/30 and authorize the zone for use by the VPC network.
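
As an illustration of step 3, the static route could be created roughly as follows; the network name my-vpc is a placeholder:

gcloud compute routes create restricted-google-apis --network=my-vpc --destination-range=199.36.153.4/30 --next-hop-gateway=default-internet-gateway
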
7
Q

Your company is migrating all applications from the on-premises data centre to Google Cloud, and one of the applications is dependent on Websockets protocol and session affinity. You want to ensure this application can be migrated to Google Cloud platform and continue serving requests without issues. What should you do?

A - Discuss load balancer options with the relevant teams.

A

A (There is only one answer)

Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend. Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support

The load balancer also supports session affinity.

Ref: https://cloud.google.com/load-balancing/docs/backend-service#session_affinity

So the next step is to discuss load balancer options with the relevant teams.

We don’t need to convert WebSocket code to use HTTP streaming or Redesign the application, as WebSocket support and session affinity are offered by Google HTTP(S) Load Balancing. Reviewing the design is a good idea, but it has nothing to do with WebSockets.

8
Q

The deployment team currently spends a lot of time creating and configuring VMs in Google Cloud Console, and feel they could be more productive and consistent if the same can be automated using Infrastructure as Code. You want to help them identify a suitable service. What should you recommend?

A - Managed Instance Group (MIG).
B - Deployment Manager
C - Cloud Build
D - Unmanaged Instance Group.

A

B

Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using YAML. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load-balanced, auto-scaled instance group. You can deploy many resources at one time, in parallel. Using Deployment Manager, you can apply a Python/Jinja2 template to create a MIG/auto-scaling policy that dynamically provisions VMs. Our other requirement of a “dedicated configuration file” is also met. Using Deployment Manager for provisioning results in a repeatable deployment process: by creating configuration files that define the resources, the process of creating those resources can be repeated over and over with consistent results. Google recommends we script our infrastructure and deploy using Deployment Manager. A minimal example configuration is sketched after the reference link below.

Ref: https://cloud.google.com/deployment-manager
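
A minimal sketch of a Deployment Manager configuration that creates a single VM; the resource name, zone, machine type and image are placeholder assumptions:

resources:
- name: web-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - deviceName: boot
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default

It would then be deployed with: gcloud deployment-manager deployments create web-deployment --config vm.yaml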

9
Q

An intern joined your team recently and needs access to Google Compute Engine in your sandbox project to explore various settings and spin up compute instances to test features. You have been asked to facilitate this. How should you give your intern access to compute engine without giving more permissions than is necessary?

A - Grant Compute Engine Admin Role for sandbox project.
B - Create a shared VPC to enable the intern access Compute resources.
C - Grant Project Editor IAM role for sandbox project.
D - Grant Compute Engine Instance Admin Role for the sandbox project.

A

D
Compute Engine Instance Admin Role grants full control of Compute Engine instances, instance groups, disks, snapshots, and images. It also provides read access to all Compute Engine networking resources. This provides just the required permissions to the intern.

Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin

10
Q

You deployed your application to a default node pool on the GKE cluster and you want to configure cluster autoscaling for this GKE cluster. For your application to be profitable, you must limit the number of Kubernetes nodes to 10. You want to start small and scale up as traffic increases and scale down when the traffic goes down. What should you do?

A - To enable autoscaling, add a tag to the instances in the cluster by running the command gcloud compute instances add-tags [INSTANCE] --tags=enable-autoscaling,min-nodes=1,max-nodes=10.
B - Create a new GKE cluster by running the command gcloud container clusters create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10. Redeploy your application.
C - Update existing GKE cluster to enable autoscaling by running the command gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
D - Set up a Stackdriver alert to detect slowness in the application. When the alert is triggered, increase nodes in the cluster by running the command gcloud container clusters resize [CLUSTER_NAME] --size .

A

C
The command:
# gcloud container clusters update

updates an existing GKE cluster. The --enable-autoscaling flag enables autoscaling, and the parameters --min-nodes=1 --max-nodes=10 define the minimum and maximum number of nodes in the node pool. This enables cluster autoscaling, which scales the nodes up and down automatically between 1 and 10 nodes in the node pool.
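
Put together, the full command might look like this; the cluster name, node pool and zone are placeholders:

gcloud container clusters update ace-cluster --enable-autoscaling --min-nodes=1 --max-nodes=10 --node-pool=default-pool --zone=us-central1-a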

11
Q

You have a web application deployed as a managed instance group. You noticed some of the compute instances are running low on memory. You suspect this is due to JVM memory leak and you want to restart the compute instances to reclaim the leaked memory. Your web application is currently serving live web traffic. You want to ensure that the available capacity does not go below 80% at any time during the restarts and you want to do this at the earliest. What would you do?

A - Perform a rolling-action reboot with max-surge set to 20%.
B - Perform a rolling-action restart with max-unavailable set to 20%.
C - Perform a rolling-action replace with max-unavailable set to 20%.
D - Stop instances in the managed instance group (MIG) one at a time and rely on autohealing to bring them back up.

A

B
This option achieves the outcome in the most optimal manner. The restart action restarts instances in a managed instance group. By performing a rolling restart with max-unavailable set to 20%, the rolling update restarts instances while ensuring there is at least 80% available capacity. The rolling update carries on restarting all the remaining instances until all instances in the MIG have been restarted.

Ref: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instance-groups/managed/rolling-action/restart
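
Following the wording of the chosen option, the command would look roughly like the sketch below; the instance group name and region are placeholders, and rolling-action restart has historically lived on the alpha/beta track, so verify flag availability in your gcloud release:

gcloud alpha compute instance-groups managed rolling-action restart my-web-mig --max-unavailable=20% --region=us-central1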

12
Q

Your company has many Citrix services deployed in the on-premises datacenter, and they all connect to the Citrix Licensing Server on 10.10.10.10 in the same data centre. Your company wants to migrate the Citrix Licensing Server and all Citrix services to Google Cloud Platform. You want to minimize changes while ensuring the services can continue to connect to the Citrix licensing server. How should you do this in Google Cloud?

A - Deploy the Citrix Licensing Server on a Google Compute Engine instance with an ephemeral IP address. Once the server is responding to requests, promote the ephemeral IP address to a static internal IP address.
B - Deploy the Citrix Licensing Server on a Google Compute Engine instance and set its ephemeral IP address to 10.10.10.10.
C - Use gcloud compute addresses create to reserve 10.10.10.10 as a static internal IP and assign it to the Citrix Licensing Server VM Instance.
D - Use gcloud compute addresses create to reserve 10.10.10.10 as a static external IP and assign it to the Citrix Licensing Server VM Instance.

A

C
This option lets us reserve IP 10.10.10.10 as a static internal IP address because it falls within the standard IP Address range as defined by IETF (Ref: https://tools.ietf.org/html/rfc1918). 10.0.0.0/8 is one of the allowed ranges, so all IP Addresses from 10.0.0.0 to 10.255.255.255 belong to this internal IP range. Since we can now reserve this IP Address as a static internal IP address, it can be assigned to the licensing server in the VPC so that the application can reach the licensing server.
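
A sketch of the reservation command; the address name, region and subnet are placeholders:

gcloud compute addresses create citrix-license-ip --region=us-central1 --subnet=my-subnet --addresses=10.10.10.10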

13
Q

You want to use Google Cloud Storage to host a static website on www.example.com for your staff. You created a bucket example-static-website and uploaded index.html and css files to it. You turned on static website hosting on the bucket and set up a CNAME record on www.example.com to point to c.storage.googleapis.com. You access the static website by navigating to www.example.com in the browser but your index page is not displayed. What should you do?
A - In example.com zone, modify the CNAME record to c.storage.googleapis.com/example-static-website.
B - Delete the existing bucket, create a new bucket with the name www.example.com and upload the html/css files.
C - In example.com zone, delete the existing CNAME record and set up an A record instead to point to c.storage.googleapis.com.
D - Reload the Cloud Storage static website server to load the objects.

A

B
We need to create a bucket whose name matches the CNAME you created for your domain. For example, if you added a CNAME record pointing www.example.com to c.storage.googleapis.com., then create a bucket with the name “www.example.com”. A CNAME record is a type of DNS record. It directs traffic that requests a URL from your domain to the resources you want to serve, in this case, objects in your Cloud Storage buckets. For www.example.com, the CNAME record might contain the following information:

NAME TYPE DATA
www.example.com CNAME c.storage.googleapis.com.
Ref: https://cloud.google.com/storage/docs/hosting-static-website
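
A sketch of the corrective steps with gsutil; note that creating a domain-named bucket requires the domain to be verified, and the file names here are just those mentioned in the question:

gsutil mb gs://www.example.com
gsutil cp index.html styles.css gs://www.example.com
gsutil web set -m index.html gs://www.example.com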

14
Q

Your company hosts a number of applications in Google Cloud and requires that log messages from all applications be archived for 10 years to comply with local regulatory requirements. Which approach should you use?

A - Create a Stackdriver account and a Stackdriver group in one of the production GCP projects. Add all other projects as members of the group. Configure a monitoring dashboard in the Stackdriver account
B - Create a Stackdriver account in each project and configure all accounts to use the same service account. Create a monitoring dashboard in one of the projects.
C-1. Enable Stackdriver Logging API
C-2. Configure web applications to send logs to Stackdriver
C-3. Export logs to Google Cloud Storage
D - Set up a shared VPC across all production GCP projects and configure Cloud Monitoring dashboard on one of the projects.

A

C

C-1. Enable Stackdriver Logging API
C-2. Configure web applications to send logs to Stackdriver
C-3. Export logs to Google Cloud Storage

15
Q

Your company has deployed several production applications across many Google Cloud Projects. Your operations team requires a consolidated monitoring dashboard for all the projects. What should you do?

A - Create a single Stackdriver workspace and link all production GCP projects to it. Configure a monitoring dashboard in the Stackdriver account.
B - Create a Stackdriver account in each project and configure all accounts to use the same service account. Create a monitoring dashboard in one of the projects.
C - Set up a shared VPC across all production GCP projects and configure Cloud Monitoring dashboard on one of the projects.
D - Create a Stackdriver account and a Stackdriver group in one of the production GCP projects. Add all other projects as members of the group. Configure a monitoring dashboard in the Stackdriver account.

A

A
You can monitor resources of different projects in a single Stackdriver account by creating a Stackdriver workspace. A Stackdriver workspace is a tool for monitoring resources contained in one or more Google Cloud projects or AWS accounts. Each Workspace can have between 1 and 100 monitored projects, including Google Cloud projects and AWS accounts. A Workspace accesses metric data from its monitored projects, but the metric data and log entries remain in the individual projects. Ref: https://cloud.google.com/monitoring/workspaces

16
Q

You want to list all the internal and external IP addresses of all compute instances. Which of the commands below should you run to retrieve this information?

A - gcloud compute instances list-ip.
B - gcloud compute networks list-ip.
C - gcloud compute instances list.
D - gcloud compute networks list.

A

C
gcloud compute instances list - lists Google Compute Engine instances. The output includes internal as well as external IP addresses.

Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list

17
Q

A mission-critical application running in Google Cloud Platform requires an urgent update to fix a security issue without any downtime. How should you do this in CLI using deployment manager?

A - Use gcloud deployment-manager deployments update and point to the deployment config file.
B - Use gcloud deployment-manager deployments create and point to the deployment config file.
C - Use gcloud deployment-manager resources update and point to the deployment config file.
D - Use gcloud deployment-manager resources create and point to the deployment config file.

A

A
gcloud deployment-manager deployments update - updates a deployment based on a provided config file and fits our requirement.

Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/update
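
For example (deployment and config file names are placeholders):

gcloud deployment-manager deployments update prod-app-deployment --config deployment-config.yaml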

18
Q

Your organization is planning to deploy a Python web application to Google Cloud. The web application uses a custom Linux distribution and you want to minimize rework. The web application underpins an important website that is accessible to the customers globally. You have been asked to design a solution that scales to meet demand. What would you recommend to fulfill this requirement? (Select Two)

A - Network Load Balancer.
B - HTTP(S) Load Balancer.
C - App Engine Standard environment.
D - Managed Instance Group on Compute Engine.

A

B and D

HTTP(S) Load Balancing is a global service (when the Premium Network Service Tier is used). We can create backend services in more than one region and have them all serviced by the same global load balancer

Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An autohealing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the auto-healer determines that an application isn’t responding, the managed instance group automatically recreates that instance.

19
Q

Your team uses Splunk for centralized logging and you have a number of reports and dashboards based on the logs in Splunk. You want to install Splunk forwarder on all nodes of your new Kubernetes Engine Autoscaled Cluster. The Splunk forwarder forwards the logs to a centralized Splunk Server. You want to minimize operational overhead. What is the best way to install Splunk Forwarder on all nodes in the cluster?

A - Use Deployment Manager to orchestrate the deployment of forwarder agents on all nodes.
B - Include the forwarder agent in a DaemonSet deployment.
C - Include the forwarder agent in a StatefulSet deployment.
D - SSH to each node and run a script to install the forwarder agent.

A

B

In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes. So by configuring the pod to use Splunk forwarder agent image and with some minimal configuration (e.g. identifying which logs need to be forwarded), you can automate the installation and configuration of Splunk forwarder agent on each GKE cluster node.

Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
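
A minimal DaemonSet sketch; the forwarder image name and the mounted log path are assumptions rather than anything specified by the question:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: forwarder
        image: splunk/universalforwarder:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

It would be applied with kubectl apply -f splunk-daemonset.yaml, and the DaemonSet controller then schedules one forwarder Pod on every node, including nodes added later by the cluster autoscaler.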

20
Q

You plan to deploy an application on an autoscaled managed instances group. The application uses a tomcat server and runs on port 8080. You want to access the application on https://www.example.com. You want to follow Google recommended practices. What services would you use?
A - Google Domains
B - Cloud DNS
C - Google Domains, Cloud DNS, HTTP(S) Load Balancer
D - HTTP(S) Load Balancer

A

C

To serve traffic on https://www.example.com, we have to first own the domain example.com. We can use Google Domains service to register a domain.

Ref: https://domains.google/

Once we own example.com domain, we need to create a zone www.example.com. We can use Cloud DNS, which is a scalable, reliable, and managed authoritative Domain Name System (DNS) to create a DNS zone.

Ref: https://cloud.google.com/dns

Once the www.example.com zone is set up, we need to create a DNS (A) record to point to the public IP of the Load Balancer. This is also carried out in Cloud DNS.

Finally, we need a load balancer to front the autoscaled managed instances group. Google recommends we use HTTP(S) Load Balancer for this requirement as “SSL Proxy Load Balancing is intended for non-HTTP(S) traffic. For HTTP(S) traffic, we recommend that you use HTTP(S) Load Balancing.”

Ref: https://cloud.google.com/load-balancing/docs/ssl

So the correct answer is Google Domains, Cloud DNS, HTTP(S) Load Balancer
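
Once the load balancer has its global IP, the A record can be created in Cloud DNS; the zone name and IP below are placeholders, and the record-sets create form requires a reasonably recent gcloud version:

gcloud dns record-sets create www.example.com. --zone=example-com-zone --type=A --ttl=300 --rrdatas=203.0.113.10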

21
Q

You developed an enhancement to a production application deployed in App Engine Standard service. Unit testing and user acceptance testing has succeeded, and you deployed the new version to production. Users have started complaining of slow performance after the recent update, and you need to revert to the previous version immediately. How can you do this?

A - Execute gcloud app restore to rollback to the previous version.
B -In the App Engine Console, identify the App Engine application versions and make the previous version the default to route all traffic to it.
C -Deploy the previous version as a new App Engine Application and use traffic splitting feature to send all traffic to the new application.
D -In the App Engine Console, identify the App Engine application and select Revert.

A

B

You can roll back to a previous version in the app engine GCP console. Go back to the list of versions and check the box next to the version that you want to receive all traffic and click the MAKE DEFAULT button located above the list. Traffic immediately switches over to the selected version.

Ref: https://cloud.google.com/community/tutorials/how-to-roll-your-app-engine-managed-vms-app-back-to-a-previous-version-part-1
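
The same rollback can be done from the CLI; the service name (default) and version ID below are placeholders:

gcloud app services set-traffic default --splits previous-version=1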

22
Q

Your company has three GCP projects – for development, test and production environments. The budgeting team in the finance department needs to know the cost estimates for the next financial year to include it in the budget. They have years of experience using SQL and need to group costs by parameters such as duration (day/week/month/quarter), service type, region, etc. How can you enable this?

Requirements

  1. use query syntax
  2. need the billing data of all three projects

A - Export billing data to a Google Cloud Storage bucket. Trigger a Cloud Function that reads the data and inserts into Cloud BigTable. Ask the budgeting team to run queries against BigTable to analyze current costs and estimate future costs.
B - Export billing data to a Google Cloud Storage bucket. Manually copy the data from Cloud Storage bucket to a Google sheet. Ask the budgeting team to apply formulas in the Google sheet to analyze current costs and estimate future costs.
C - Export billing data to a BigQuery dataset. Ask the budgeting team to run queries against BigQuery to analyze current costs and estimate future costs.
D -Download the costs as CSV file from the Cost Table page. Ask the budgeting team to open this file Microsoft Excel and apply formulas to analyze current costs and estimate future costs.

A

C

Export billing data to a BigQuery dataset. Ask the budgeting team to run queries against BigQuery to analyze current costs and estimate future costs.

You can export billing information from multiple projects into a BigQuery dataset. Unlike the export to Cloud Storage bucket, export to BigQuery dataset includes all information making it easy and straightforward to construct queries in BigQuery to estimate the cost. BigQuery supports Standard SQL so you can join tables and group by fields (labels in this case) as needed.
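
A sketch of the kind of query the budgeting team could run; the dataset and export table names are placeholders for the table that Billing export creates:

SELECT service.description AS service,
       FORMAT_DATE('%Y-%m', DATE(usage_start_time)) AS month,
       SUM(cost) AS total_cost
FROM `billing_dataset.gcp_billing_export_v1_XXXXXX`
GROUP BY service, month
ORDER BY month, total_cost DESC;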

23
Q

You transitioned an application to your operations team. The lead operations engineer has asked you to help understand what this lifecycle management rule does. What should your response be?
A - The lifecycle rule archives current (live) objects older than 60 days and transitions Multi-regional objects older than 365 days to Nearline storage class.
B -The lifecycle rule transitions Multi-regional objects older than 365 days to Nearline storage class.
C -The lifecycle rule deletes non-current (archived) objects older than 60 days and transitions Multi-regional objects older than 365 days to Nearline storage class.
D -The lifecycle rule deletes current (live) objects older than 60 days and transitions Multi-regional objects older than 365 days to Nearline storage class.

A

C

The first part of the rule: The action has "type":"Delete" which means we want to Delete. "isLive":false condition means we are looking for objects that are not Live, i.e. objects that are archived. Together, it means we want to delete archived objects older than 60 days. Note that if an object is deleted, it cannot be undeleted. Take care in setting up your lifecycle rules so that you do not cause more data to be deleted than you intend.

Ref: https://cloud.google.com/storage/docs/managing-lifecycles

The second part of the rule: The action indicates we want to set storage class to Nearline. The condition is satisfied if the existing storage class is multi-regional, and the age of the object is 365 days or over. Together it means we want to set the storage class to Nearline if existing storage class is multi-regional and the age of the object is 365 days or over.
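
Based on that description, the rule in question presumably looks something like the following JSON (reconstructed here for illustration, since the original rule is not shown in the card):

{
  "lifecycle": {
    "rule": [
      {
        "action": {"type": "Delete"},
        "condition": {"age": 60, "isLive": false}
      },
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 365, "matchesStorageClass": ["MULTI_REGIONAL"]}
      }
    ]
  }
}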

24
Q

You are hosting a new application on https://www.my-new-gcp-ace-website.com. The static content of the application is served from /static path and is hosted in a Cloud Storage bucket. The dynamic content is served from /dynamic path and is hosted on a fleet of compute engine instances belonging to a Managed Instance Group. How can you configure a single GCP Load Balancer to serve content from both paths?

A - Create a CNAME DNS record on www.my-new-gcp-ace-website.com to point to storage.googleapis.com. Configure an HTTP(s) Load Balancer for the Managed Instance Group (MIG)

B - Use HAProxy Alpine Docker images to deploy to GKE cluster. Configure HAProxy to route /dynamic/ to the Managed Instance Group (MIG) and /static/ to GCS bucket. Create a service of type LoadBalancer.

C - Configure an HTTP(s) Load Balancer for the Managed Instance Group (MIG). Configure the necessary TXT DNS records on www.my-new-gcp-ace-website.com to route requests on /dynamic/ to the Managed Instance Group (MIG) and /static/ to GCS bucket.

D - Configure an HTTP(s) Load Balancer and configure it to route requests on /dynamic/ to the Managed Instance Group (MIG) and /static/ to GCS bucket. Create a DNS A record on www.my-new-gcp-ace-website.com to point to the address of LoadBalancer.

A

D

Since we need to send requests to multiple backends, Cloud DNS alone can't help us. We need Cloud HTTPS Load Balancer: its URL maps (a fancy name for path-based routing) distribute traffic to backends based on the path information.
Traffic received by Cloud HTTPS Load Balancer can be configured to send all requests on /dynamic path to the MIG group; and requests on /static/ path to the bucket.
The Load Balancer has a public IP address. But we want to instead access on www.my-new-gcp-ace-website.com, so we configure this as an A Record in our DNS provider.

25
Q

Your company wants to migrate a mission-critical application to Google Cloud Platform. The application is currently hosted in your on-premises data centre and runs off several VMs. Your migration manager has suggested a “lift and shift” to Google Compute Engine Virtual Machines and has asked you to ensure the application scales quickly, automatically and efficiently based on the CPU utilization. You want to follow Google recommended practices. What should you do?

A - Deploy the application to Google Compute Engine Managed Instance Group (MIG) with autoscaling enabled based on CPU utilization.
B -Deploy the application to Google Compute Engine Managed Instance Group (MIG). Deploy a Cloud Function to look up CPU utilization in Cloud Monitoring every minute and scale up or scale down the MIG group as needed.
C -Deploy the application to Google Compute Engine Managed Instance Group (MIG) with time-based autoscaling based on last months’ traffic patterns.
D -Deploy the application to GKE cluster with Horizontal Pod Autoscaling (HPA) enabled based on CPU utilization.

A

A

Managed instance groups offer autoscaling capabilities that let you automatically add or delete instances from a managed instance group based on increases or decreases in load (CPU Utilization in this case). Autoscaling helps your apps gracefully handle traffic increases and reduce costs when the need for resources is lower. You define the autoscaling policy, and the autoscaler performs automatic scaling based on the measured load (CPU Utilization in this case). Autoscaling works by adding more instances to your instance group when there is more load (upscaling), and deleting instances when the need for instances is lowered (downscaling).
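
For illustration, autoscaling on CPU utilization could be enabled on the MIG roughly as follows; the group name, region and thresholds are placeholders:

gcloud compute instance-groups managed set-autoscaling app-mig --region=us-central1 --min-num-replicas=2 --max-num-replicas=20 --target-cpu-utilization=0.65 --cool-down-period=90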

26
Q

Your company owns a mobile game that is popular with users all over the world. The mobile game backend uses Cloud Spanner to store user state. An overnight job exports user state to a Cloud Storage bucket. Your operations team needs access to monitor the spanner instance but not have the permissions to view or edit user data. What IAM role should you grant the operations team?

A - Grant the operations team roles/spanner.database.reader IAM role.
B - Grant the operations team roles/spanner.database.user IAM role.
C - Grant the operations team roles/monitoring.viewer IAM role.
D - Grant the operations team roles/stackdriver.accounts.viewer IAM role.

A

C

roles/monitoring.viewer provides read-only access to get and list information about all monitoring data and configurations. This role provides monitoring access and fits our requirements.

27
Q

The storage costs for your application logs have far exceeded the project budget. The logs are currently being retained indefinitely in the Cloud Storage bucket myapp-gcp-ace-logs. You have been asked to remove logs older than 90 days from your Cloud Storage bucket. You want to optimize ongoing Cloud Storage spend. What should you do?

A - You write a lifecycle management rule in XML and push it to the bucket with gsutil lifecycle set config-xml-file.
B - Write a script that runs gsutil ls -lr gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Repeat this process every morning.
C - Write a script that runs gsutil ls -l gs://myapp-gcp-ace-logs/** to find and remove items older than 90 days. Schedule the script with cron.
D - Write a lifecycle management rule in JSON and push it to the bucket with gsutil lifecycle set config-json-file.

A

D

You can assign a lifecycle management configuration to a bucket. The configuration contains a set of rules which apply to current and future objects in the bucket. When an object meets the criteria of one of the rules, Cloud Storage automatically performs a specified action on the object. One of the supported actions is to Delete objects. You can set up a lifecycle management to delete objects older than 90 days. “gsutil lifecycle set” enables you to set the lifecycle configuration on the bucket based on the configuration file. JSON is the only supported type for the configuration file. The config-json-file specified on the command line should be a path to a local file containing the lifecycle configuration JSON document.
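
A minimal sketch of such a configuration file and the command to apply it:

{
  "lifecycle": {
    "rule": [
      { "action": {"type": "Delete"}, "condition": {"age": 90} }
    ]
  }
}

gsutil lifecycle set lifecycle.json gs://myapp-gcp-ace-logs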

28
Q

An engineer from your team accidentally deployed several new versions of NodeJS application on Google App Engine Standard. You are concerned the new versions are serving traffic. You have been asked to produce a list of all the versions of the application that are receiving traffic as well the percent traffic split between them. What should you do?

A - gcloud app versions list --hide-no-traffic.
B - gcloud app versions list --show-traffic.
C - gcloud app versions list --traffic.
D - gcloud app versions list.

A

A

This command correctly lists just the versions that are receiving traffic by hiding versions that do not receive traffic. This is the only command that fits our requirements.

29
Q

You deployed a workload to your GKE cluster by running the command kubectl apply -f app.yaml. You also enabled a LoadBalancer service to expose the deployment by running kubectl apply -f service.yaml. Your pods are struggling due to increased load so you decided to enable horizontal pod autoscaler by running kubectl autoscale deployment [YOUR DEPLOYMENT] --cpu-percent=50 --min=1 --max=10. You noticed the autoscaler has launched several new pods but the new pods have failed with the message "Insufficient cpu". What should you do to resolve this issue?

A - Use “kubectl container clusters resize” to add more nodes to the node pool.
B - Use “gcloud container clusters resize” to add more nodes to the node pool.
C - Edit the managed instance group of the cluster and enable autoscaling.
D - Edit the managed instance group of the cluster and increase the number of VMs by 1.

A

B

Your pods are failing with “Insufficient cpu”. This is because the existing nodes in the node pool are maxed out, therefore, you need to add more nodes to your node pool. For such scenarios, enabling cluster autoscaling is ideal, however, this is not in any of the answer options. In the absence of cluster autoscaling, the next best approach is to add more nodes to the cluster manually. This is achieved by running the command gcloud container clusters resize which resizes an existing cluster for running containers.
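
For example (the cluster name, node pool, zone and size are placeholders):

gcloud container clusters resize my-cluster --node-pool=default-pool --num-nodes=5 --zone=us-central1-a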

30
Q

The application development team at your company wants to use the biggest CIDR range possible for a VPC and has asked for your suggestion. Your operations team is averse to using any beta features. What should you suggest?
Use 10.0.0.0/8 CIDR range.

Only 1 Answer

A

Use 10.0.0.0/8 CIDR range is the right answer.

The private network range is defined by IETF (Ref: https://tools.ietf.org/html/rfc1918) and adhered to by all cloud providers. The supported internal IP Address ranges are

  1. 24-bit block 10.0.0.0/8 (16777216 IP Addresses)
  2. 20-bit block 172.16.0.0/12 (1048576 IP Addresses)
  3. 16-bit block 192.168.0.0/16 (65536 IP Addresses)

10.0.0.0/8 gives you the most extensive range - 16777216 IP Addresses.
31
Q

You have a Cloud Function that is triggered every night by Cloud Scheduler. The Cloud Function creates a snapshot of VMs running in all projects in the department. Your team created a new project ptech-vm, and you now need to provide IAM access to the service account used by the Cloud Function to let it create snapshots of VMs in the new ptech-vm project. You want to follow Google recommended practices. What should you do?

A - Grant Compute Storage Admin IAM role on the ptech-vm project to the service account used by the Cloud Function.
B - Set the scope of the service account to Read/Write when provisioning compute engine instances in the ptech-vm project.
C - Use gcloud to generate a JSON key for the existing service account used by the Cloud Function. Register the JSON key as SSH key on all VM instances in the ptech-vm project.
D - Use gcloud to generate a JSON key for the existing service account used by the Cloud Function. Add a metadata tag to all compute engine instances in the ptech-vm project with key: service-account and value: .

A

A

The Compute Storage Admin role provides permissions to create, modify, and delete disks, images, and snapshots. If the service account used by the Cloud Function is granted the Compute Storage Admin IAM role in the ptech-vm project, it can take snapshots and carry out other activities as defined by the role.
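
A sketch of the grant; the service account email is a placeholder:

gcloud projects add-iam-policy-binding ptech-vm --member="serviceAccount:snapshot-function@host-project.iam.gserviceaccount.com" --role="roles/compute.storageAdmin"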

32
Q

You are the operations manager at your company, and you have been requested to provide administrative access to the virtual machines in the development GCP project to all members of the development team. There are over a hundred VM instances, and everyone at your company has a Google account. How can you simplify this access request while ensuring you can audit logins if needed?

A - Run a script to generate SSH key pairs for all developers. Send an email to each developer with their private key attached. Update all VM instances in the development to add all the public keys. Have the developers present their private key to SSH to the instances.
B - Run a script to generate SSH key pairs for all developers. Send an email to each developer with their private key attached. Add public keys to project-wide public SSH keys in your GCP project and configure all VM instances in the project to allow project-wide SSH keys.
C - Share a script with the developers and ask them to run it to generate a new SSH key pair. Have them email their public key to you and run a script to add all the public keys to all instances in the project.
D - Share a script with the developers and ask them to run it to generate a new SSH key pair. Have the developers add their public key to their Google Account. Ask the security administrator to grant compute.osAdminLogin role to the developers’ Google group.

A

D

By letting users manage their own SSH key pair (and its rotation, etc.), you delegate the operational burden of managing SSH keys to the individual users. Secondly, granting the compute.osAdminLogin role grants the group administrator permissions (as opposed to granting compute.osLogin, which does not grant administrator permissions). Finally, managing provisioning and de-provisioning is as simple as adding or removing the user from the group.
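
For illustration, assuming the developers are in a Google group and OS Login is enabled project-wide (the group and project names are placeholders):

gcloud compute project-info add-metadata --project=dev-project --metadata=enable-oslogin=TRUE
gcloud projects add-iam-policy-binding dev-project --member="group:developers@example.com" --role="roles/compute.osAdminLogin"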

33
Q

Your operations team have deployed an update to a production application running in Google Cloud App Engine Standard service. The deployment was successful, but your operations are unable to find this deployment in the production GCP project. What should you do?

A - Review the project settings in the App Engine deployment YAML file.
B - Review the properties of the active gcloud configurations by executing gcloud config list.
C - Review the project settings in the App Engine application configuration files.
D - Review the project settings in the Deployment Manager console.

A

B

If the deployment was successful, but it did not deploy to the intended project, the application would have been deployed to a different project. In the same gcloud shell, you can identify the current properties of the configuration by executing gcloud config list. The output returns config properties such as project, account, etc., as well as app-specific properties such as app/promote_by_default, app/stop_previous_version.

34
Q

You created a Kubernetes deployment by running kubectl run nginx --image=nginx --labels="app=prod". Your Kubernetes cluster is also used by a number of other deployments. How can you find the identifier of the pods for this nginx deployment?

A - #kubectl get pods -l "app=prod".
B - #gcloud list gke-deployments --filter={ pod }.
C - #gcloud get pods --selector="app=prod".
D - #kubectl get deployments --output=pods.

A

A

kubectl get pods -l "app=prod".

This command correctly lists pods that have the label app=prod. When creating the deployment, we used the label app=prod so listing pods that have this label retrieve the pods belonging to nginx deployments. You can list pods by using Kubernetes CLI - kubectl get pods.

Ref: https://kubernetes.io/docs/tasks/access-application-cluster/list-all-running-container-images/

35
Q

You want to deploy a python application to an autoscaled managed instance group on Compute Engine. You want to use GCP deployment manager to do this. What is the fastest way to get the application onto the instances without introducing undue complexity?

A - Once the instance starts up, connect over SSH and install the application.
B - Include a startup script to bootstrap the python application when creating instance template by running:
#gcloud compute instance-templates create app-template --metadata-from-file startup-script-url=/scripts/install_app.sh.
C - Include a startup script to bootstrap the python application when creating instance template by running:
#gcloud compute instance-templates create app-template --startup-script=/scripts/install_app.sh.
D - Include a startup script to bootstrap the python application when creating instance template by running:
#gcloud compute instance-templates create app-template --metadata-from-file startup-script=/scripts/install_app.sh.

A


D

This command correctly provides the startup script using the --metadata-from-file flag and providing a valid startup-script value. When creating compute engine instances (or instance templates), the startup script can be provided through a special metadata key called startup-script, which specifies a script that will be executed by the instances once they start running.

For convenience, --metadata-from-file can be used to pull the value from a file.

36
Q

Your company owns a web application that lets users post travel stories. You began noticing errors in logs for a specific Deployment. The deployment is responsible for translating a post from one language to another. You’ve narrowed the issue down to a specific container named “msg-translator-22” that is throwing the errors. You are unable to reproduce the error in any other environment, and none of the other containers serving the deployment have this issue. You would like to connect to this container to figure out the root cause. What steps would allow you to run commands against the msg-translator-22?

A - Use the kubectl exec -it msg-translator-22 -- /bin/bash command to run a shell on that container.
B - Use the kubectl exec -it -- /bin/bash command to run a shell on that container.
C - Use the kubectl run msg-translator-22 /bin/bash command to run a shell on that container.
D - Use the kubectl run command to run a shell on that container.

A

A

kubectl exec is used to execute a command in a container. We pass the container name msg-translator-22 so kubectl exec knows which container to connect to. And we pass the command /bin/bash to it, so it starts a shell on the container and we can then run custom commands and identify the root cause of the issue.

37
Q

You work for a big multinational financial company that has several hundreds of Google Cloud Projects for various development, test and production workloads. Financial regulations require your company to store all audit files for three years. What should you do to implement a log retention solution while minimizing storage cost?

A - Export audit logs from Cloud Logging to BigQuery via an export sink.
B - Export audit logs from Cloud Logging to Coldline Storage bucket via an export sink.
C - Write a script that exports audit logs from Cloud Logging to BigQuery. Use Cloud Scheduler to trigger the script every hour.
D - Export audit logs from Cloud Logging to Cloud Pub/Sub via an export sink. Configure a Cloud Dataflow pipeline to process these messages and store them in Cloud SQL for MySQL.

A

B

Coldline Storage is the perfect service to store audit logs from all the projects and is very cost-efficient as well. Coldline Storage is a very-low-cost, highly durable storage service for storing infrequently accessed data. Coldline Storage is a better choice than Standard Storage or Nearline Storage in scenarios where slightly lower availability, a 90-day minimum storage duration, and higher costs for data access are acceptable trade-offs for lowered at-rest storage costs. Coldline Storage is ideal for data you plan to read or modify at most once a quarter.
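
A sketch of the export sink; the bucket name and filter are placeholders, the Coldline bucket must already exist, and the sink's writer identity needs write access to it:

gcloud logging sinks create audit-archive-sink storage.googleapis.com/audit-logs-coldline --log-filter='logName:"cloudaudit.googleapis.com"'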

38
Q

You have developed an enhancement for a photo compression application running on the App Engine Standard service in Google Cloud Platform, and you want to canary test this enhancement on a small percentage of live users. How can you do this?

A - Deploy the enhancement as a new App Engine Application in the existing GCP project. Configure the network load balancer to route 99% of the requests to the old (existing) App Engine Application and 1% to the new App Engine Application.
B - Deploy the enhancement as a new App Engine Application in the existing GCP project. Make use of App Engine native routing to have the old App Engine application proxy 1% of the requests to the new App Engine application.
C - Use gcloud app deploy to deploy the enhancement as a new version in the existing application and use --splits flag to split the traffic between the old version and the new version. Assign a weight of 1 to the new version and 99 to the old version.
D - Use gcloud app deploy to deploy the enhancement as a new version in the existing application with --migrate flag.

A

C

You can use traffic splitting to specify a percentage distribution of traffic across two or more of the versions within a service. Splitting traffic allows you to conduct A/B testing between your versions and provides control over the pace when rolling out features.

For this scenario, we can split the traffic as shown below, sending 1% to v2 and 99% to v1
by executing the command
#gcloud app services set-traffic service1 --splits v2=1,v1=99

39
Q

Your company recently migrated all infrastructure to Google Cloud Platform (GCP) and you want to use Google Cloud Build to build all container images. You want to store the build logs in Google Cloud Storage. You also have a requirement to push the images to Google Container Registry. You wrote a cloud build YAML configuration file with the following contents:
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/[PROJECT_ID]/[IMAGE_NAME]', '.']
images: ['gcr.io/[PROJECT_ID]/[IMAGE_NAME]']

A - Execute gcloud builds push --config=[CONFIG_FILE_PATH] [SOURCE].
B - Execute gcloud builds submit --config=[CONFIG_FILE_PATH] [SOURCE].
C - Execute gcloud builds submit --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE].
D - Execute gcloud builds run --config=[CONFIG_FILE_PATH] --gcs-log-dir=[GCS_LOG_DIR] [SOURCE].

A

C

This command correctly builds the container image, pushes the image to GCR (Google Container Registry) and uploads the build logs to Google Cloud Storage. The --config flag specifies the YAML or JSON file to use as the build configuration file.

--gcs-log-dir specifies the directory in Google Cloud Storage to hold build logs.

[SOURCE] is the location of the source to build. The location can be a directory on a local disk or a gzipped archive file (.tar.gz) in Google Cloud Storage.

40
Q

You deployed a java application in a single Google Cloud Compute Engine VM. During peak usage, the application CPU is maxed out and results in stuck threads which ultimately make the system unresponsive, and requires a reboot. Your operations team want to receive an email alert when the CPU utilization is greater than 95% for more than 10 minutes so they can manually change the instance type to another instance that offers more CPU. What should you do?

A - Only One Answer

A

We want to use Google services. That eliminates the two options where we write a script: why write a script when there is a Google service that does precisely that, with minimal configuration?

Cloud logging does not log CPU usage. (Cloud monitoring does that) So that rules out the other option.

Ref: https://cloud.google.com/logging/

Link the GCP project to a Cloud Monitoring workspace. Configure an Alerting policy based on CPU utilization in Cloud Monitoring and trigger an email notification when the utilization exceeds the threshold. is the right answer.

A Workspace is a tool for monitoring resources contained in one or more Google Cloud projects or AWS accounts. In our case, we create a Stackdriver workspace and link our project to this workspace.

Ref: https://cloud.google.com/monitoring/workspaces

Cloud monitoring captures the CPU usage. By default, the Monitoring agent collects disk, CPU, network, and process metrics. You can also have the agent send custom metrics to Cloud monitoring.

Ref: https://cloud.google.com/monitoring/

You can then set up an alerting policy that triggers when CPU utilization exceeds 95% for more than 10 minutes.

41
Q

Create a new service account, grant it the least viable privileges to the required services, generate and download a JSON key. Use the JSON key to authenticate inside the application.

A - Create a new service account, with editor permissions, generate and download a key. Use the key to authenticate inside the application.
B - Use the default service account for Compute Engine, which already has the required permissions.
C - Use the default service account for App Engine, which already has the required permissions.
D - Create a new service account, grant it the least viable privileges to the required services, generate and download a JSON key. Use the JSON key to authenticate inside the application.

A

D

The Compute Engine default service account is created with the Cloud IAM project editor role

The project editor role includes all viewer permissions, plus permissions for actions that modify state, such as changing existing resources. Using a service account that is over-privileged falls foul of the principle of least privilege.

42
Q

Your organization processes a very high volume of timestamped IoT data. The total volume can be several petabytes. The data needs to be written and changed at a high speed. You want to use the most performant storage option for your data. Which product should you use?

A - BigQuery
B - Cloud Datastore
C - Cloud Bigtable
D - Cloud Storage

A

C

Our requirement is to write/update a very high volume of data at a high speed. Performance is our primary concern, not cost.

Cloud Bigtable is Google's flagship product for ingesting and analyzing large volumes of time-series data from sensors in real time, matching the high speeds of IoT data to track normal and abnormal behavior.

43
Q

You work for a multinational consumer credit reporting company that collects and aggregates financial information and provides a credit report for over 100 million individuals and businesses. The company wants to trial a new application for a small geography and requires a relational database for storing important user information. Your company places a high value on reliability and requires point-in-time recovery while minimizing operational cost. What should you do?

A - Store the data in a 2-node Cloud Spanner instance
B - Store the data in a multi-regional Cloud Spanner instance.
C - Store the data in Highly Available Cloud SQL for MySQL instance.
D - Store the data in Cloud SQL for MySQL instance. Ensure Binary Logging on the Cloud SQL instance.

A

D
Cloud Spanner is a massively scalable, fully managed, relational database service for regional and global application data. Cloud Spanner is expensive compared to Cloud SQL. We don't require more than “one geographic location”, and we want to be cost-effective, so Cloud Spanner doesn't fit these requirements. Furthermore, Cloud Spanner does not offer a “Point in time” recovery feature.

Cloud SQL can easily handle small sets of relational data and is cost-effective compared to Cloud Spanner. A Highly Available Cloud SQL instance on its own, however, does not enable point-in-time recovery; enabling binary logging on the Cloud SQL instance is what makes point-in-time recovery possible, which is why option D is correct.
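
For illustration, binary logging (and the automated backups it requires) can be enabled roughly like this; the instance name and backup window are placeholders:

gcloud sql instances patch trial-app-db --backup-start-time=23:00 --enable-bin-log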

44
Q

Your security team has asked you to disable all but essential communication between the tiers. Your application requires instances in the Web tier to communicate with the instances in App tier on port 80, and the instances in App tier to communicate with the instances in DB tier on port 3306. How should you design the firewall rules?

A - 1. Create an ingress firewall rule that allows traffic on port 80 from all instances with serviceAccount_subnet1 to all instances with serviceAccount_subnet2.
A - 2. Create an ingress firewall rule that allows traffic on port 3306 from all instances with serviceAccount_subnet2 to all instances with serviceAccount_subnet3.

A

A

A - 1. Create an ingress firewall rule that allows traffic on port 80 from all instances with serviceAccount_subnet1 to all instances with serviceAccount_subnet2.

A - 2. Create an ingress firewall rule that allows traffic on port 3306 from all instances with serviceAccount_subnet2 to all instances with serviceAccount_subnet3.
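
A sketch of the two rules using service accounts; the network, project and service-account names below are placeholders:

gcloud compute firewall-rules create allow-web-to-app --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:80 --source-service-accounts=serviceAccount_subnet1@my-project.iam.gserviceaccount.com --target-service-accounts=serviceAccount_subnet2@my-project.iam.gserviceaccount.com

gcloud compute firewall-rules create allow-app-to-db --network=my-vpc --direction=INGRESS --action=ALLOW --rules=tcp:3306 --source-service-accounts=serviceAccount_subnet2@my-project.iam.gserviceaccount.com --target-service-accounts=serviceAccount_subnet3@my-project.iam.gserviceaccount.com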

45
Q

You have files in a Cloud Storage bucket that you need to share with your suppliers. You want to restrict the time that the files are available to your suppliers to 1 hour. You want to follow Google recommended practices. What should you do?

A - Create a JSON key for the Default Compute Engine Service Account. Execute the command gsutil signurl -t 60m gs:///.
B - Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -p 60m gs:///.
C - Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -m 1h gs:///*.
D - Create a service account with just the permissions to access files in the bucket. Create a JSON key for the service account. Execute the command gsutil signurl -d 1h gs:///**.

A

D

Note: all of these command strings are incomplete as written; gsutil signurl also requires the path to the service account's JSON key file and the bucket/object names.

This command correctly specifies the duration that the signed url should be valid for by using the -d flag. The default is 1 hour so omitting the -d flag would have also resulted in the same outcome. Times may be specified with no suffix (default hours), or with s = seconds, m = minutes, h = hours, d = days. The max duration allowed is 7d.

Ref: https://cloud.google.com/storage/docs/gsutil/commands/signurl
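
A complete invocation might look like the following; the key file, bucket and object names are placeholders:

gsutil signurl -d 1h /path/to/supplier-sa-key.json gs://supplier-share-bucket/report.csv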

46
Q

Your company has multiple GCP projects in several regions, and your operations team have created numerous gcloud configurations for most common operational needs. They have asked your help to retrieve an inactive gcloud configuration and the GKE clusters that use it, using the least number of steps. What command should you execute to retrieve this information?

A - Execute kubectl config get-contexts.
B - Execute kubectl config use-context, then kubectl config view.
C - Multiregional Cloud Storage bucket.
D - Execute gcloud config configurations describe.

A

A

kubectl config get-contexts displays a list of contexts as well as the clusters that use them.

47
Q
You are designing an application that lets users upload and share photos. You expect your application to grow really fast and you are targeting a worldwide audience. You want to delete uploaded photos after 30 days. You want to minimize costs while ensuring your application is highly available. Which GCP storage solution should you choose?
A - Cloud Datastore database.
B - Multiregional Cloud Storage bucket.
C - Cloud Filestore.
D - Persistent SSD on VM instances.
A

B

Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. We don’t need to set up auto-scaling ourselves. Cloud Storage autoscaling is managed by GCP. Cloud Storage is an object store so it is suitable for storing photos. Cloud Storage allows world-wide storage and retrieval so cater well to our worldwide audience. Cloud storage provides us lifecycle rules that can be configured to automatically delete objects older than 30 days. This also fits our requirements. Finally, Google Cloud Storage offers several storage classes such as Nearline Storage ($0.01 per GB per Month) Coldline Storage ($0.007 per GB per Month) and Archive Storage ($0.004 per GB per month) which are significantly cheaper than any of the options above.

48
Q

You are migrating a Python application from your on-premises data centre to Google Cloud. You want to deploy the application Google App Engine, and you modified the python application to use Cloud Pub/Sub instead of RabbitMQ. The application uses a specific service account which has the necessary permissions to publish and subscribe on Cloud Pub/Sub; however, the operations team have not enabled the Cloud Pub/Sub API yet. What should you do?

A - Grant roles/pubsub.admin IAM role to the service account and modify the application code to enable the API before publishing or subscribing.
B - Configure the App Engine Application in GCP Console to use the specific Service Account with the necessary IAM permissions and rely on the automatic enablement of the Cloud Pub/Sub API on the first request to publish or subscribe
C - Use deployment manager to configure the App Engine Application to use the specific Service Account with the necessary IAM permissions and rely on the automatic enablement of the Cloud Pub/Sub API on the first request to publish or subscribe.
D - Navigate to the APIs & Services section in GCP console and enable Cloud Pub/Sub API.

A

D

For most operational use cases, the simplest way to enable and disable services is to use the Google Cloud Console. You can create scripts; you can also use the gcloud command-line interface. If you need to program against the Service Usage API, we recommend that you use one of our provided client libraries.

Secondly, after you create an App Engine application, the App Engine default service account is created and used as the identity of the App Engine service. The App Engine default service account is associated with your Cloud project and executes tasks on behalf of your apps running in App Engine. By default, the App Engine default service account has the Editor role in the project, so this already has the permissions to push/pull/receive messages from Cloud Pub/Sub.
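
The same can also be done from the CLI; the project ID is a placeholder:

gcloud services enable pubsub.googleapis.com --project=my-gcp-project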

49
Q

Your company plans to store sensitive PII data in a cloud storage bucket. Your compliance department has asked you to ensure the objects in this bucket are encrypted by customer-managed encryption keys. What should you do?

A - In the bucket advanced settings, select Customer-supplied key and then select a Cloud KMS encryption key.
B - In the bucket advanced settings, select Google-managed key and then select a Cloud KMS encryption key.
C - In the bucket advanced settings, select Customer-managed key and then select a Cloud KMS encryption key.
D - Recreate the bucket to use a Customer-managed key. Encryption can only be specified at the time of bucket creation.

A

C

This option correctly selects the Customer-managed key and then the Cloud KMS key to use, which satisfies our requirement.

Ref: https://cloud.google.com/storage/docs/encryption/using-customer-managed-keys#add-default-key
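
As a CLI alternative to the console setting, the default CMEK on a bucket can be set with gsutil; the key resource name and bucket are placeholders:

gsutil kms encryption -k projects/my-project/locations/us/keyRings/pii-keyring/cryptoKeys/pii-key gs://pii-data-bucket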