ACE-T3 Flashcards
You want to find a list of regions and the prebuilt images offered by Google Compute Engine. Which commands should you execute to retrieve this information?
A - gcloud regions list. gcloud images list.
B - gcloud compute regions list. gcloud images list.
C - gcloud regions list. gcloud compute images list.
D - gcloud compute regions list. gcloud compute images list.
D
Both commands correctly retrieve the regions and prebuilt images offered by Google Compute Engine.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/regions/list
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/images/list
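For reference, both commands can be run as-is from Cloud Shell or any machine with the Cloud SDK installed:
# List the regions available to the project
gcloud compute regions list
# List the prebuilt (public) images offered by Compute Engine
gcloud compute images list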
Your company has an App Engine application that needs to store stateful data in a proper storage service. Your data is non-relational data. You do not expect the database size to grow beyond 10 GB and you need to have the ability to scale down to zero to avoid unnecessary costs. Which storage service should you use?
A - Cloud Dataproc
B - Cloud Datastore.
C - Cloud Bigtable.
D - Cloud SQL.
B
Cloud Datastore is a highly scalable NoSQL database. It scales seamlessly and automatically with your data, allowing applications to maintain high performance as they receive more traffic, and it automatically scales back down when traffic decreases.
You work for a leading retail platform that enables its retailers to sell their items to over 200 million users worldwide. You persist all analytics data captured during user navigation to BigQuery. A business analyst wants to run a query to identify products that were popular with buyers in the recent thanksgiving sale. The analyst understands the query needs to iterate through billions of rows to fetch the required information but is not sure of the costs involved in the on-demand pricing model, and has asked you to help estimate the query cost. What should you do?
A - Run the query using bq with the --dry_run flag to estimate the number of bytes returned by the query. Make use of the pricing calculator to estimate the query cost.
B - Run the query using bq with the --dry_run flag to estimate the number of bytes read by the query. Make use of the pricing calculator to estimate the query cost.
C - Execute the query using bq to estimate the number of rows returned by the query. Make use of the pricing calculator to estimate the query cost.
D - Switch to BigQuery flat-rate pricing. Coordinate with the analyst to run the query while on flat-rate pricing and switch back to on-demand pricing.
B
BigQuery pricing is based on the number of bytes processed/read. Under on-demand pricing, BigQuery charges for queries by using one metric: the number of bytes processed (also referred to as bytes read). You are charged for the number of bytes processed whether the data is stored in BigQuery or in an external data source such as Cloud Storage, Google Drive, or Cloud Bigtable. On-demand pricing is based solely on usage.
Ref: https://cloud.google.com/bigquery/pricing
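A minimal sketch of the dry-run estimate, assuming a hypothetical table sales.navigation_events; the query itself is only illustrative:
# Validates the query and reports the bytes it would process, without running it or incurring query charges
bq query --use_legacy_sql=false --dry_run 'SELECT product_id, COUNT(*) AS purchases FROM sales.navigation_events WHERE event_type = "purchase" GROUP BY product_id'
The reported byte count can then be fed into the pricing calculator (or multiplied by the on-demand per-TB rate) to estimate the query cost.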
A recent reorganization in your company has seen the creation of a new data custodian team – responsible for managing data in all storage locations. Your production GCP project uses buckets in Cloud Storage, and you need to delegate control to the new team to manage objects and buckets in your GCP project. What role should you grant them?
A - Grant the data custodian team Project Editor IAM role.
B - Grant the data custodian team Storage Object Admin IAM role.
C - Grant the data custodian team Storage Admin IAM role.
D - Grant the data custodian team Project Editor IAM role.
C
Grant the data custodian team Storage Admin IAM role.
This role grants full control of buckets and objects. When applied to an individual bucket, control applies only to the specified bucket and objects within the bucket.
Ref: https://cloud.google.com/iam/docs/understanding-roles#storage-roles
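A minimal sketch of the grant, assuming a hypothetical group data-custodians@example.com and project my-prod-project:
# Grant Storage Admin at the project level so the team can manage all buckets and objects in the project
gcloud projects add-iam-policy-binding my-prod-project --member="group:data-custodians@example.com" --role="roles/storage.admin"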
Your company produces documentary videos for a reputed television channel and stores its videos in Google Cloud Storage for long term archival. Videos older than 90 days are accessed only in exceptional circumstances and videos older than one year are no longer needed. How should you optimise the storage to reduce costs?
A - Use a Cloud Function to rewrite the storage class to Coldline for objects older than 90 days. Use another Cloud Function to delete objects older than 365 days from Coldline Storage Class.
B - Use a Cloud Function to rewrite the storage class to Coldline for objects older than 90 days. Use another Cloud Function to delete objects older than 275 days from Coldline Storage Class.
C - Configure a lifecycle rule to transition objects older than 90 days to Coldline Storage Class. Configure another lifecycle rule to delete objects older than 275 days from Coldline Storage Class.
D - Configure a lifecycle rule to transition objects older than 90 days to Coldline Storage Class. Configure another lifecycle rule to delete objects older than 365 days from Coldline Storage Class.
D
Object Lifecycle Management does not rewrite an object when changing its storage class, and the lifecycle Age condition is always measured from the object's original creation time, so the delete rule should use 365 days, not 275. When an object is transitioned to Nearline Storage, Coldline Storage, or Archive Storage using the SetStorageClass feature, any subsequent early deletion and associated charges are based on the original creation time of the object, regardless of when the storage class changed.
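A minimal sketch of the two lifecycle rules, assuming a hypothetical bucket gs://documentary-archive and a local file named lifecycle.json:
# lifecycle.json: transition objects to Coldline at 90 days and delete them at 365 days (age is measured from creation time)
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "COLDLINE"}, "condition": {"age": 90}},
    {"action": {"type": "Delete"}, "condition": {"age": 365}}
  ]
}
# Apply the lifecycle configuration to the bucket
gsutil lifecycle set lifecycle.json gs://documentary-archive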
Your company has migrated most of the data center VMs to Google Compute Engine. The remaining VMs in the data center host legacy applications that are due to be decommissioned soon, and your company has decided to retain them in the data center. Due to a change in the business operational model, you need to modify one of the legacy applications to read files from Google Cloud Storage. However, your data center does not have access to the internet, and your company doesn't want to invest in setting up internet access as the data center is due to be turned off soon. Your data center has a Partner Interconnect to GCP. You want to route traffic from your data center to Google Cloud Storage through the Partner Interconnect. What should you do?
A - In the following example, the on-premises network is connected to a VPC network through a Cloud VPN tunnel. Traffic from on-premises hosts to Google APIs travels through the tunnel to the VPC network. After traffic reaches the VPC network, it is sent through a route that uses the default internet gateway as its next hop. The next hop allows traffic to leave the VPC network and be delivered to restricted.googleapis.com (199.36.153.4/30).
A (there is only one answer)
- In the on-premises DNS configuration, map *.googleapis.com to restricted.googleapis.com, which resolves to 199.36.153.4/30.
- Configure Cloud Router to advertise the 199.36.153.4/30 IP address range through the Cloud VPN tunnel.
- Add a custom static route to the VPC network to direct traffic with the destination 199.36.153.4/30 to the default internet gateway.
- Create a Cloud DNS managed private zone for *.googleapis.com that maps to 199.36.153.4/30, and authorize the zone for use by the VPC network (a command-level sketch follows below).
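A minimal command-level sketch of those steps, assuming hypothetical names my-vpc, my-router and region us-central1; the on-premises DNS change itself happens outside Google Cloud, and the same routing and advertising steps apply whether the connection is Cloud VPN or Partner Interconnect:
# Custom static route sending the restricted Google APIs range to the default internet gateway
gcloud compute routes create restricted-apis --network=my-vpc --destination-range=199.36.153.4/30 --next-hop-gateway=default-internet-gateway
# Advertise the restricted range to on-premises via Cloud Router custom route advertisements
gcloud compute routers update my-router --region=us-central1 --advertisement-mode=CUSTOM --set-advertisement-groups=all_subnets --set-advertisement-ranges=199.36.153.4/30
# Private Cloud DNS zone so that *.googleapis.com resolves to the restricted range inside the VPC
gcloud dns managed-zones create googleapis --description="Restricted Google APIs" --dns-name=googleapis.com. --visibility=private --networks=my-vpc
gcloud dns record-sets create restricted.googleapis.com. --zone=googleapis --type=A --ttl=300 --rrdatas=199.36.153.4,199.36.153.5,199.36.153.6,199.36.153.7
gcloud dns record-sets create "*.googleapis.com." --zone=googleapis --type=CNAME --ttl=300 --rrdatas=restricted.googleapis.com.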
Your company is migrating all applications from the on-premises data centre to Google Cloud, and one of the applications is dependent on Websockets protocol and session affinity. You want to ensure this application can be migrated to Google Cloud platform and continue serving requests without issues. What should you do?
A - Discuss load balancer options with the relevant teams.
A (There is only one answer)
Google HTTP(S) Load Balancing has native support for the WebSocket protocol when you use HTTP or HTTPS, not HTTP/2, as the protocol to the backend. Ref: https://cloud.google.com/load-balancing/docs/https#websocket_proxy_support
The load balancer also supports session affinity.
Ref: https://cloud.google.com/load-balancing/docs/backend-service#session_affinity
So the next possible step is to discuss load balancer options with the relevant teams.
We don't need to convert the WebSocket code to use HTTP streaming or redesign the application, as WebSocket support and session affinity are offered by Google HTTP(S) Load Balancing. Reviewing the design is a good idea, but it has nothing to do with WebSockets.
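As a small illustration of the session affinity point, it is a per-backend-service setting on the load balancer; assuming a hypothetical backend service web-backend:
# Enable cookie-based session affinity on a global HTTP(S) load balancer backend service
gcloud compute backend-services update web-backend --global --session-affinity=GENERATED_COOKIE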
The deployment team currently spends a lot of time creating and configuring VMs in Google Cloud Console, and feel they could be more productive and consistent if the same can be automated using Infrastructure as Code. You want to help them identify a suitable service. What should you recommend?
A - Managed Instance Group (MIG).
B - Deployment Manager
C - Cloud Build
D - Unmanaged Instance Group.
B
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using YAML. You can also use Python or Jinja2 templates to parameterize the configuration and reuse common deployment paradigms, such as a load-balanced, auto-scaled instance group, and you can deploy many resources at one time, in parallel. Because the resources are defined in configuration files, the deployment process is repeatable: the same resources can be created over and over with consistent results. Google recommends scripting your infrastructure and deploying it with Deployment Manager.
Ref: https://cloud.google.com/deployment-manager
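A minimal sketch of such a configuration, assuming a hypothetical file vm.yaml that declares a single Compute Engine instance (field values would be adjusted for the target project):
# vm.yaml
resources:
- name: web-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-medium
    disks:
    - boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default

# Create (or later update) the deployment from the config file
gcloud deployment-manager deployments create web-deployment --config vm.yaml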
An intern joined your team recently and needs access to Google Compute Engine in your sandbox project to explore various settings and spin up compute instances to test features. You have been asked to facilitate this. How should you give your intern access to compute engine without giving more permissions than is necessary?
A - Grant Compute Engine Admin Role for sandbox project.
B - Create a shared VPC to enable the intern access Compute resources.
C - Grant Project Editor IAM role for sandbox project.
D - Grant Compute Engine Instance Admin Role for the sandbox project.
D
Compute Engine Instance Admin Role grants full control of Compute Engine instances, instance groups, disks, snapshots, and images. It also provides read access to all Compute Engine networking resources. This provides just the required permissions to the intern.
Ref: https://cloud.google.com/compute/docs/access/iam#compute.storageAdmin
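To check exactly which permissions the role carries and to grant it, assuming a hypothetical account intern@example.com and project sandbox-project:
# Inspect the permissions included in the Compute Instance Admin (v1) role
gcloud iam roles describe roles/compute.instanceAdmin.v1
# Grant the role on the sandbox project only
gcloud projects add-iam-policy-binding sandbox-project --member="user:intern@example.com" --role="roles/compute.instanceAdmin.v1"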
You deployed your application to a default node pool on the GKE cluster and you want to configure cluster autoscaling for this GKE cluster. For your application to be profitable, you must limit the number of Kubernetes nodes to 10. You want to start small and scale up as traffic increases and scale down when the traffic goes down. What should you do?
A - To enable autoscaling, add a tag to the instances in the cluster by running the command gcloud compute instances add-tags [INSTANCE] --tags=enable-autoscaling,min-nodes=1,max-nodes=10.
B - Create a new GKE cluster by running the command gcloud container clusters create [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10. Redeploy your application.
C - Update the existing GKE cluster to enable autoscaling by running the command gcloud container clusters update [CLUSTER_NAME] --enable-autoscaling --min-nodes=1 --max-nodes=10.
D - Set up a Stackdriver alert to detect slowness in the application. When the alert is triggered, increase nodes in the cluster by running the command gcloud container clusters resize [CLUSTER_NAME] --size .
C
The command gcloud container clusters update updates an existing GKE cluster. The flag --enable-autoscaling enables cluster autoscaling, and the parameters --min-nodes=1 and --max-nodes=10 define the minimum and maximum number of nodes in the node pool. The cluster autoscaler then automatically scales the node pool up and down between 1 and 10 nodes.
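A concrete form of the command, assuming a hypothetical cluster my-cluster with its default node pool in zone us-central1-a:
# Enable cluster autoscaling on the existing node pool, bounded between 1 and 10 nodes
gcloud container clusters update my-cluster --zone=us-central1-a --node-pool=default-pool --enable-autoscaling --min-nodes=1 --max-nodes=10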
You have a web application deployed as a managed instance group. You noticed some of the compute instances are running low on memory. You suspect this is due to JVM memory leak and you want to restart the compute instances to reclaim the leaked memory. Your web application is currently serving live web traffic. You want to ensure that the available capacity does not go below 80% at any time during the restarts and you want to do this at the earliest. What would you do?
A - Perform a rolling-action reboot with max-surge set to 20%.
B - Perform a rolling-action restart with max-unavailable set to 20%.
C - Perform a rolling-action replace with max-unavailable set to 20%.
D - Stop instances in the managed instance group (MIG) one at a time and rely on autohealing to bring them back up.
B
This option achieves the outcome in the most efficient manner. The restart action restarts instances in a managed instance group. By performing a rolling restart with max-unavailable set to 20%, the rolling update restarts instances while ensuring at least 80% of the capacity remains available. The rolling update continues restarting instances until all instances in the MIG have been restarted.
Ref: https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instance-groups/managed/rolling-action/restart
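A sketch of the command, assuming a hypothetical MIG named web-mig in zone us-central1-a; depending on the SDK version the rolling-action restart command may sit under the alpha or beta component, as in the reference above:
# Restart all instances in the MIG while keeping at least 80% of the capacity serving traffic
gcloud beta compute instance-groups managed rolling-action restart web-mig --zone=us-central1-a --max-unavailable=20%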
Your company has many Citrix services deployed in the on-premises datacenter, and they all connect to the Citrix Licensing Server on 10.10.10.10 in the same data centre. Your company wants to migrate the Citrix Licensing Server and all Citrix services to Google Cloud Platform. You want to minimize changes while ensuring the services can continue to connect to the Citrix licensing server. How should you do this in Google Cloud?
A - Deploy the Citrix Licensing Server on a Google Compute Engine instance with an ephemeral IP address. Once the server is responding to requests, promote the ephemeral IP address to a static internal IP address.
B - Deploy the Citrix Licensing Server on a Google Compute Engine instance and set its ephemeral IP address to 10.10.10.10.
C - Use gcloud compute addresses create to reserve 10.10.10.10 as a static internal IP and assign it to the Citrix Licensing Server VM Instance.
D - Use gcloud compute addresses create to reserve 10.10.10.10 as a static external IP and assign it to the Citrix Licensing Server VM Instance.
C
This option lets us reserve 10.10.10.10 as a static internal IP address because it falls within the private address ranges defined by RFC 1918 (Ref: https://tools.ietf.org/html/rfc1918). 10.0.0.0/8 is one of those ranges, so all IP addresses from 10.0.0.0 to 10.255.255.255 belong to this internal IP range. Since we can reserve this IP address as a static internal IP address, it can be assigned to the licensing server in the VPC so that the Citrix services can continue to reach the licensing server at 10.10.10.10.
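A minimal sketch, assuming a hypothetical subnet licensing-subnet in us-central1 whose range includes 10.10.10.10:
# Reserve 10.10.10.10 as a static internal address in the subnet
gcloud compute addresses create citrix-license-ip --region=us-central1 --subnet=licensing-subnet --addresses=10.10.10.10
# Assign the reserved address to the licensing server VM at creation time
gcloud compute instances create citrix-license-server --zone=us-central1-a --subnet=licensing-subnet --private-network-ip=10.10.10.10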
You want to use Google Cloud Storage to host a static website on www.example.com for your staff. You created a bucket example-static-website and uploaded index.html and css files to it. You turned on static website hosting on the bucket and set up a CNAME record on www.example.com to point to c.storage.googleapis.com. You access the static website by navigating to www.example.com in the browser but your index page is not displayed. What should you do?
A - In example.com zone, modify the CNAME record to c.storage.googleapis.com/example-static-website.
B - Delete the existing bucket, create a new bucket with the name www.example.com and upload the html/css files.
C - In example.com zone, delete the existing CNAME record and set up an A record instead to point to c.storage.googleapis.com.
D - Reload the Cloud Storage static website server to load the objects.
B
We need to create a bucket whose name matches the CNAME created for the domain. For example, if you added a CNAME record pointing www.example.com to c.storage.googleapis.com., then create a bucket with the name "www.example.com". A CNAME record is a type of DNS record. It directs traffic that requests a URL from your domain to the resources you want to serve, in this case, objects in your Cloud Storage buckets. For www.example.com, the CNAME record might contain the following information:
NAME TYPE DATA
www.example.com CNAME c.storage.googleapis.com.
Ref: https://cloud.google.com/storage/docs/hosting-static-website
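A minimal sketch with gsutil, assuming the domain www.example.com has already been verified for the account and the hypothetical files index.html, styles.css and 404.html exist locally:
# Create a bucket whose name matches the CNAME host
gsutil mb gs://www.example.com
# Upload the site content and make it publicly readable
gsutil cp index.html styles.css 404.html gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com
# Set the index (main) page and the error page for website serving
gsutil web set -m index.html -e 404.html gs://www.example.com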
Your company hosts a number of applications in Google Cloud and requires that log messages from all applications be archived for 10 years to comply with local regulatory requirements. Which approach should you use?
A - Create a Stackdriver account and a Stackdriver group in one of the production GCP projects. Add all other projects as members of the group. Configure a monitoring dashboard in the Stackdriver account
B - Create a Stackdriver account in each project and configure all accounts to use the same service account. Create a monitoring dashboard in one of the projects.
C-1. Enable Stackdriver Logging API
C-2. Configure web applications to send logs to Stackdriver
C-3. Export logs to Google Cloud Storage
D - Set up a shared VPC across all production GCP projects and configure Cloud Monitoring dashboard on one of the projects.
C
C-1. Enable Stackdriver Logging API
C-2. Configure web applications to send logs to Stackdriver
C-3. Export logs to Google Cloud Storage
Cloud Logging (Stackdriver) retains log entries only for a limited period, so exporting them through a sink to a Cloud Storage bucket (ideally in a cold storage class, with a lifecycle or retention policy) is the standard way to archive logs for 10 years; a command-level sketch follows below.
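A minimal sketch of the export step, assuming a hypothetical bucket logs-archive-bucket:
# Create a sink that exports the project's log entries to a Cloud Storage bucket
gcloud logging sinks create archive-sink storage.googleapis.com/logs-archive-bucket
# Grant the sink's writer identity (shown by 'gcloud logging sinks describe archive-sink') permission to write objects
gsutil iam ch serviceAccount:SINK_WRITER_IDENTITY:objectCreator gs://logs-archive-bucket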
Your company has deployed several production applications across many Google Cloud Projects. Your operations team requires a consolidated monitoring dashboard for all the projects. What should you do?
A - Create a single Stackdriver workspace and link all production GCP projects to it. Configure a monitoring dashboard in the Stackdriver account.
B - Create a Stackdriver account in each project and configure all accounts to use the same service account. Create a monitoring dashboard in one of the projects.
C - Set up a shared VPC across all production GCP projects and configure Cloud Monitoring dashboard on one of the projects.
D - Create a Stackdriver account and a Stackdriver group in one of the production GCP projects. Add all other projects as members of the group. Configure a monitoring dashboard in the Stackdriver account.
A
You can monitor resources of different projects in a single Stackdriver account by creating a Stackdriver workspace. A Stackdriver workspace is a tool for monitoring resources contained in one or more Google Cloud projects or AWS accounts. Each Workspace can have between 1 and 100 monitored projects, including Google Cloud projects and AWS accounts. A Workspace accesses metric data from its monitored projects, but the metric data and log entries remain in the individual projects. Ref: https://cloud.google.com/monitoring/workspaces
You want to list all the internal and external IP addresses of all compute instances. Which of the commands below should you run to retrieve this information?
A - gcloud compute instances list-ip.
B - gcloud compute networks list-ip.
C - gcloud compute instances list.
D - gcloud compute networks list.
C
gcloud compute instances list - lists Google Compute Engine instances. The output includes internal as well as external IP addresses.
Ref: https://cloud.google.com/sdk/gcloud/reference/compute/instances/list
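The default output already shows both internal and external IPs; a trimmed view can be produced with a format expression, for example:
# Show only the name, internal IP(s) and external IP(s) of each instance
gcloud compute instances list --format="table(name, networkInterfaces[].networkIP, networkInterfaces[].accessConfigs[].natIP)"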
A mission-critical application running in Google Cloud Platform requires an urgent update to fix a security issue without any downtime. How should you do this in CLI using deployment manager?
A - Use gcloud deployment-manager deployments update and point to the deployment config file.
B - Use gcloud deployment-manager deployments create and point to the deployment config file.
C - Use gcloud deployment-manager resources update and point to the deployment config file.
D - Use gcloud deployment-manager resources create and point to the deployment config file.
A
gcloud deployment-manager deployments update - updates a deployment based on a provided config file and fits our requirement.
Ref: https://cloud.google.com/sdk/gcloud/reference/deployment-manager/deployments/update
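A minimal sketch, assuming the security fix is captured in a hypothetical config file app.yaml for an existing deployment named prod-app:
# Apply the updated configuration to the existing deployment in place
gcloud deployment-manager deployments update prod-app --config app.yaml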
Your organization is planning to deploy a Python web application to Google Cloud. The web application uses a custom Linux distribution and you want to minimize rework. The web application underpins an important website that is accessible to the customers globally. You have been asked to design a solution that scales to meet demand. What would you recommend to fulfill this requirement? (Select Two)
A - Network Load Balancer.
B - HTTP(S) Load Balancer.
C - App Engine Standard environment.
D - Managed Instance Group on Compute Engine.
B and D
HTTP(S) Load Balancing is a global service (when the Premium Network Service Tier is used). We can create backend services in more than one region and have them all served by the same global load balancer.
Managed instance groups (MIGs) maintain the high availability of your applications by proactively keeping your virtual machine (VM) instances available. An autohealing policy on the MIG relies on an application-based health check to verify that an application is responding as expected. If the auto-healer determines that an application isn’t responding, the managed instance group automatically recreates that instance.
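A partial sketch of the Compute Engine side, assuming a hypothetical instance template web-template built from the custom Linux image and a health endpoint at /healthz; the backend service, URL map and forwarding rule of the HTTP(S) load balancer are configured separately on top of this MIG:
# Health check used for autohealing (and reusable by the load balancer backend)
gcloud compute health-checks create http web-hc --port=80 --request-path=/healthz
# Regional MIG from the custom-image template, with autohealing and autoscaling
gcloud compute instance-groups managed create web-mig --region=us-central1 --template=web-template --size=2 --health-check=web-hc --initial-delay=120
gcloud compute instance-groups managed set-autoscaling web-mig --region=us-central1 --min-num-replicas=2 --max-num-replicas=10 --target-cpu-utilization=0.6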
Your team uses Splunk for centralized logging and you have a number of reports and dashboards based on the logs in Splunk. You want to install Splunk forwarder on all nodes of your new Kubernetes Engine Autoscaled Cluster. The Splunk forwarder forwards the logs to a centralized Splunk Server. You want to minimize operational overhead. What is the best way to install Splunk Forwarder on all nodes in the cluster?
A - Use Deployment Manager to orchestrate the deployment of forwarder agents on all nodes.
B - Include the forwarder agent in a DaemonSet deployment.
C - Include the forwarder agent in a StatefulSet deployment.
D - SSH to each node and run a script to install the forwarder agent.
B
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes. So by configuring the Pod to use the Splunk forwarder agent image, with some minimal configuration (e.g. identifying which logs need to be forwarded), you can automate the installation and configuration of the Splunk forwarder agent on each GKE cluster node.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
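A minimal sketch of such a DaemonSet, assuming a hypothetical forwarder image splunk/universalforwarder; in a real setup the Splunk server address, credentials and log paths would come from a ConfigMap/Secret:
# One forwarder Pod per node, including nodes added later by the cluster autoscaler
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: forwarder
        image: splunk/universalforwarder:latest
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
EOF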