Case Study 2 Flashcards

1
Q

Company overview
TerramEarth manufactures heavy equipment for the mining and agricultural industries. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

Solution concept
There are 2 million TerramEarth vehicles in operation currently, and we see 20% yearly growth. Vehicles collect telemetry data from many sensors during operation. A small subset of critical data is transmitted from the vehicles in real time to facilitate fleet management. The rest of the sensor data is collected, compressed, and uploaded daily when the vehicles return to home base. Each vehicle usually generates 200 to 500 megabytes of data per day.

Existing technical environment
TerramEarth’s vehicle data aggregation and analysis infrastructure resides in Google Cloud and serves clients from all around the world. A growing amount of sensor data is captured from their two main manufacturing plants and sent to private data centers that contain their legacy inventory and logistics management systems. The private data centers have multiple network interconnects configured to Google Cloud. The web frontend for dealers and customers is running in Google Cloud and allows access to stock management and analytics.

Business requirements
• Predict and detect vehicle malfunction and rapidly ship parts to dealerships for just-in-time repair where possible
• Decrease cloud operational costs and adapt to seasonality
• Increase speed and reliability of development workflow
• Allow remote developers to be productive without compromising code or data security
• Create a flexible and scalable platform for developers to create custom API services for dealers and partners
Technical requirements
• Create a new abstraction layer for HTTP API access to their legacy systems to enable a gradual move into the cloud without disrupting operations
• Modernize all CI/CD pipelines to allow developers to deploy container-based workloads in highly scalable environments
• Allow developers to run experiments without compromising security and governance requirements
• Create a self-service portal for internal and partner developers to create new projects, request resources for data analytics jobs, and centrally manage access to the API endpoints
• Use cloud-native solutions for keys and secrets management and optimize for identity-based access
• Improve and standardize tools necessary for application and network monitoring and troubleshooting

Executive statement
Our competitive advantage has always been our focus on the customer, with our ability to provide excellent customer service and minimize vehicle downtimes. After moving multiple systems into Google Cloud, we are seeking new ways to provide best-in-class online fleet management services to our customers and improve operations of our dealerships. Our 5-year strategic plan is to create a partner ecosystem of new products by enabling access to our data, increasing autonomous operation capabilities of our vehicles, and creating a path to move the remaining legacy systems to the cloud.

A


2
Q

1/6
For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function func_query. You want func_query to accept invocations only from func_display. You also want to follow Google’s recommended best practices. What should you do?

A. Create a token and pass it in as an environment variable to func_display. When invoking func_query, include the token in the request. Pass the same token to func_query and reject the invocation if the tokens are different.

B. Make func_query ‘Require authentication.’ Create a unique service account and associate it with func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query.

C. Make func_query ‘Require authentication’ and only accept internal traffic. Create those two functions in the same VPC. Create an ingress firewall rule for func_query to only allow traffic from func_display.

D. Create those two functions in the same project and VPC. Make func_query only accept internal traffic. Create an ingress firewall rule for func_query to only allow traffic from func_display. Also, make sure both functions use the same service account.

A

B. Make func_query ‘Require authentication.’ Create a unique service account and associate it with func_display. Grant the service account the invoker role for func_query. Create an ID token in func_display and include the token in the request when invoking func_query.
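For context, a minimal Python sketch of what option B looks like inside func_display. The target URL and payload are hypothetical placeholders; the metadata-server identity endpoint is how a Cloud Function obtains an ID token for its runtime service account.

```python
import urllib.request

# Hypothetical URL of the deployed func_query; substitute your own.
TARGET_URL = "https://REGION-PROJECT.cloudfunctions.net/func_query"

def call_func_query(payload: bytes) -> bytes:
    # Inside Cloud Functions, the metadata server mints an ID token for the
    # function's runtime service account, scoped to the given audience.
    token_request = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/"
        "instance/service-accounts/default/identity?audience=" + TARGET_URL,
        headers={"Metadata-Flavor": "Google"},
    )
    id_token = urllib.request.urlopen(token_request).read().decode()

    # Include the ID token as a Bearer token; func_query ("Require
    # authentication") verifies it and checks the caller's invoker role.
    request = urllib.request.Request(
        TARGET_URL,
        data=payload,  # sending a body makes this a POST
        headers={"Authorization": "Bearer " + id_token},
    )
    return urllib.request.urlopen(request).read()
```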

3
Q

2/6
For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a few containerized RESTful microservices. You want to run those
microservices on Cloud Run. You also want to make sure the services are highly available with low latency to your customers. What should you do?

A. Deploy Cloud Run services to multiple availability zones. Create Cloud Endpoints that point to the services. Create a global HTTP(S) Load Balancing instance and attach the Cloud Endpoints to its backend.

B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

C. Deploy Cloud Run services to multiple regions. In Cloud DNS, create a latency-based DNS name that points to the services.

D. Deploy Cloud Run services to multiple availability zones. Create a TCP/IP global load balancer. Add the Cloud Run endpoints to its backend service.

A

B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing instance.

“Cloud Run is a regional service.
To serve global users, you need to configure a global HTTP(S) Load Balancer with a serverless NEG as the backend.
Cloud Run services are deployed into individual regions, and to route your users to different regions of your service, you need to configure external HTTP(S) Load Balancing.”
https://cloud.google.com/run/docs/multiple-regions

A network endpoint group (NEG) specifies a group of backend endpoints for a load balancer.
A serverless NEG is a backend that points to a Cloud Run, App Engine, or Cloud Functions service.
https://cloud.google.com/load-balancing/docs/negs/serverless-neg-concepts
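As a rough sketch of answer B using the google-cloud-compute client library (project, region, and service names are hypothetical; the URL map and forwarding rule steps of the load balancer are omitted):

```python
from google.cloud import compute_v1

PROJECT = "my-project"   # hypothetical
REGION = "us-central1"   # repeat these steps for each serving region

# 1. Create a serverless NEG pointing at the Cloud Run service in this region.
neg = compute_v1.NetworkEndpointGroup(
    name=f"run-neg-{REGION}",
    network_endpoint_type="SERVERLESS",
    cloud_run=compute_v1.NetworkEndpointGroupCloudRun(service="my-run-service"),
)
op = compute_v1.RegionNetworkEndpointGroupsClient().insert(
    project=PROJECT, region=REGION, network_endpoint_group_resource=neg
)
op.result()  # wait for the regional NEG to be created

# 2. Attach the NEGs (one per region) to a single global backend service; the
#    global external HTTP(S) load balancer then routes users by proximity.
backend_service = compute_v1.BackendService(
    name="run-backend",
    load_balancing_scheme="EXTERNAL_MANAGED",
    protocol="HTTPS",
    backends=[
        compute_v1.Backend(
            group=f"projects/{PROJECT}/regions/{REGION}"
                  f"/networkEndpointGroups/run-neg-{REGION}"
        )
    ],
)
compute_v1.BackendServicesClient().insert(
    project=PROJECT, backend_service_resource=backend_service
).result()
```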

4
Q

3/6
For this question, refer to the TerramEarth case study. You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do? (Choose two.)

A. Open a support case regarding the CVE and chat with the support engineer.

B. Read the CVEs from the Google Cloud Status Dashboard to understand the impact.

C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

D. Post a question regarding the CVE on Stack Overflow to get an explanation.

E. Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

A

A, C

A. Open a support case regarding the CVE and chat with the support engineer.

C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

5
Q

4/6
For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a “Site is unavailable” page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution for minimum cost. What should you do?

A. Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the “Site is unavailable” page, and notify the Ops team.

B. Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the “Site is unavailable” page, and notify the Ops team.

C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the “Site is unavailable” page and notify the Ops team.

D. Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the “Site is unavailable” page, and notify the Ops team.

A

C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the “Site is unavailable” page and notify the Ops team.

“Use a Cloud Monitoring uptime check to validate the application URL, and leverage Pub/Sub to trigger a Cloud Function that switches the URL and notifies the Ops team.”
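A sketch of the Cloud Function side of answer C, assuming the Cloud Monitoring alerting policy attached to the uptime check publishes incidents to the Pub/Sub topic the function subscribes to; switch_to_unavailable_page and notify_ops are hypothetical helpers whose implementation depends on how the URL is served and how Ops prefers to be paged.

```python
import base64
import json

def handle_uptime_failure(event, context):
    """Background Cloud Function triggered by a Pub/Sub message from a
    Cloud Monitoring alerting policy attached to the uptime check."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    incident = payload.get("incident", {})

    # Only react when the incident opens (i.e., the uptime check is failing).
    if incident.get("state") == "open":
        switch_to_unavailable_page()  # hypothetical: repoint the URL, e.g. via Cloud DNS
        notify_ops(incident)          # hypothetical: message or page the Ops team

def switch_to_unavailable_page():
    ...  # hypothetical helper: swap the URL to the "Site is unavailable" page

def notify_ops(incident):
    ...  # hypothetical helper: e.g. POST the incident summary to a chat webhook
```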

6
Q

5/6
For this question, refer to the TerramEarth case study. You are building a microservice-based application for TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to build the application continuously and store the build artifacts. What should you do?

A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.

B. Configure a trigger in Cloud Build for new source changes. The trigger invokes build jobs and builds container images for the microservices. Tag the images with a version number, and push them to Cloud Storage.

C. Create a Scheduler job to check the repo every minute. For any new change, invoke Cloud Build to build container images for the microservices. Tag the images using the current timestamp, and push them to the Container Registry.

D. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build one container image, and tag the image with the label ‘latest’. Push the image to the Container Registry.

A

A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for each microservice, and tag them using the code commit hash. Push the images to the Container Registry.

https://cloud.google.com/architecture/best-practices-for-building-containers#tagging_using_the_git_commit_hash
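A minimal cloudbuild.yaml illustrating answer A for one microservice (my-service is a hypothetical name; $PROJECT_ID and $COMMIT_SHA are substitutions Cloud Build provides to triggered builds):

```yaml
steps:
  # Build the microservice image, tagged with the triggering commit's hash.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA', '.']

# Listing the image here makes Cloud Build push it to Container Registry
# after the build succeeds.
images:
  - 'gcr.io/$PROJECT_ID/my-service:$COMMIT_SHA'
```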

7
Q

6/6
For this question, refer to the TerramEarth case study. TerramEarth has about 1 petabyte (PB) of vehicle testing data in a private data center. You want to move the data to Cloud Storage for your machine learning team. Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data in a month. What should you do?

A. Request Transfer Appliances from Google Cloud, export the data to the appliances, and return the appliances to Google Cloud.

B. Configure the Storage Transfer Service from Google Cloud to send the data from your data center to Cloud Storage.

C. Make sure there are no other users consuming the 1-Gbps link, and use multi-threaded transfer to upload the data to Cloud Storage.

D. Export files to an encrypted USB device, send the device to Google Cloud, and request an import of the data to Cloud Storage.

A

A. Request Transfer Appliances from Google Cloud, export the data to the appliances, and return the appliances to Google Cloud.

“Transfer Service for on-premises data is a software service that enables you to transfer large amounts of data from your data center to a Cloud Storage bucket. It is well suited for customers who are moving billions of files and hundreds of terabytes of data in a single transfer. It can scale to network connections in the tens of Gbps.

Transfer Appliance is best for 10 TB or more, or when it would take more than a week to upload your data.”
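The arithmetic behind the answer: even with the 1-Gbps link fully dedicated to the transfer, 1 PB does not fit in the one-month window, which is what pushes this case to a Transfer Appliance. A quick back-of-the-envelope check in Python:

```python
# Time to move 1 PB over a fully dedicated 1 Gbps link (ideal conditions).
data_bits = 1e15 * 8       # 1 PB = 10^15 bytes = 8 * 10^15 bits
link_bps = 1e9             # 1 Gbps
days = data_bits / link_bps / 86400
print(f"{days:.0f} days")  # ~93 days, roughly three months -- misses the deadline
```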
