Case Study Practice Flashcards

1
Q

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR’s use of
Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A

A. Verify EHR’s product usage against the list of compliant products on the Google Cloud compliance page.
B. Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

2
Q

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for
securely deploying workloads to Google Cloud. You also need to ensure that only verified containers are deployed
using Google Cloud services. What should you do? (Choose two.)

A

A. Enable Binary Authorization on GKE, and sign containers as part of a CI/CD pipeline.

D. Configure Container Registry to use vulnerability scanning to confirm that there are no vulnerabilities before
deploying the workload.

3
Q

You need to upgrade the EHR connection to comply with their requirements. The new connection design must
support business-critical needs and meet the same network and security policy requirements. What should you
do?

A

A. Add a new Dedicated Interconnect connection.

4
Q

For this question, refer to the EHR Healthcare case study. You need to define the technical architecture for hybrid
connectivity between EHR’s on-premises systems and Google Cloud. You want to follow Google’s recommended
practices for production-level applications. Considering the EHR Healthcare business and technical requirements,
what should you do?

A

D. Configure two Dedicated Interconnect connections in one metro (City) and two connections in another metro,
and make sure the Interconnect connections are placed in different metro zones.

5
Q

For this question, refer to the EHR Healthcare case study. You are a developer on the EHR customer portal team.
Your team recently migrated the customer portal application to Google Cloud. The load has increased on the
application servers, and now the application is logging many timeout errors. You recently incorporated Pub/Sub
into the application architecture, and the application is not logging any Pub/Sub publishing errors. You want to
improve publishing latency.
What should you do?

A

C. Turn off Pub/Sub message batching.
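Why this answer: a batched message waits until the batch fills or a latency timer expires, so batching trades publish latency for throughput. A toy sketch of that trade-off (illustrative only; the real knob is the Pub/Sub client library's batch settings, and all names below are hypothetical):

```python
# Toy model of publisher-side batching; NOT the google-cloud-pubsub API.
class ToyBatcher:
    def __init__(self, max_messages, max_latency_ms):
        self.max_messages = max_messages
        self.max_latency_ms = max_latency_ms
        self.pending = []
        self.flushed = []  # list of (batch, added_latency_ms)

    def publish(self, msg):
        self.pending.append(msg)
        if len(self.pending) >= self.max_messages:
            # Batch is full: send immediately, no added latency.
            self.flushed.append((self.pending, 0))
            self.pending = []

    def timer_fired(self):
        # max_latency elapsed before the batch filled: flush what we have.
        if self.pending:
            self.flushed.append((self.pending, self.max_latency_ms))
            self.pending = []

# Default-style settings: a lone message sits waiting for peers.
batched = ToyBatcher(max_messages=100, max_latency_ms=10)
batched.publish("m1")
batched.timer_fired()   # m1 waited the full 10 ms before being sent

# "Batching off" (max_messages=1): every publish flushes at once.
unbatched = ToyBatcher(max_messages=1, max_latency_ms=10)
unbatched.publish("m1")  # flushed immediately, 0 ms added latency
```

Under load, each message no longer waits for a batch to fill, which is exactly the publishing-latency improvement the question asks for.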

6
Q

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses
on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put
external IP addresses on backend Compute Engine instances and that external IP addresses can only be
configured on frontend Compute Engine instances. What should you do?

A

A. Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend
Compute Engine instances.
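Why this answer: the `constraints/compute.vmExternalIpAccess` list constraint can allow external IPs only on an explicit set of instances. A sketch of a policy file for `gcloud resource-manager org-policies set-policy` (project, zone, and instance names are hypothetical):

```yaml
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/ehr-prod/zones/us-central1-a/instances/frontend-1
    - projects/ehr-prod/zones/us-central1-a/instances/frontend-2
```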

7
Q

For this question, refer to the EHR Healthcare case study. You are responsible for designing the Google Cloud
network architecture for Google Kubernetes
Engine. You want to follow Google best practices. Considering the EHR Healthcare business and technical
requirements, what should you do to reduce the attack surface?

A

A. Use a private cluster with a private endpoint with master authorized networks configured.

8
Q

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a
payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and
season ticket holders. You need to implement a custom card tokenization service that meets the following
requirements:
* It must provide low latency at minimal cost.
* It must be able to identify duplicate credit cards and must not store plaintext card numbers.
* It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?

A

B. Encrypt the card data with a deterministic algorithm stored in Firestore using Datastore mode.
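Why this answer: deterministic encryption maps the same plaintext to the same ciphertext, so duplicate cards are detectable without ever storing the plaintext PAN, and Datastore-mode Firestore gives cheap, low-latency key lookups. The sketch below uses HMAC-SHA256 purely to illustrate the duplicate-detection property (a real vault would use a reversible deterministic cipher, e.g. a KMS-managed AES-SIV key rotated annually; the key material here is fake):

```python
import hashlib
import hmac

def tokenize(card_number: str, key: bytes) -> str:
    # Same card number + same key -> same token, so duplicates are
    # detectable without storing the plaintext card number.
    return hmac.new(key, card_number.encode(), hashlib.sha256).hexdigest()

key_2024 = b"annual-key-rotated-every-year"  # hypothetical key material

t1 = tokenize("4111111111111111", key_2024)
t2 = tokenize("4111111111111111", key_2024)
t3 = tokenize("5500005555555559", key_2024)

assert t1 == t2   # duplicate card -> duplicate token
assert t1 != t3   # different card -> different token
```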

9
Q

For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional
racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience,
HRL has partnered with the Content Delivery Network provider, Fastly. HRL needs to allow traffic coming from all
of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the
HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through
the External HTTP(S) load balancer. Which command should you use?

A
10
Q

For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a
new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called
Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is
released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

A

C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

11
Q

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy
from their ML prediction models. They want you to use Google’s AI Platform so HRL can understand and interpret
the predictions. What should you do?

A

A. Use Explainable AI

12
Q

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective
approach for storing their race data such as telemetry. They want to keep all historical records, train models using
only the previous season’s data, and plan for data growth in terms of volume and information collected. You need
to propose a data solution. Considering HRL business requirements and the goals expressed by CEO S. Hawke,
what should you do?

A

C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

13
Q

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud
infrastructure noted that an exceptionally high number of Compute Engine instances are allocated to video
encoding and transcoding. You suspect that these virtual machines are zombie machines that were not deleted
after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

A

C. Use the gcloud recommender command to list the idle virtual machine instances.
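For reference, a hedged sketch of the command (project and zone are placeholders):

```shell
gcloud recommender recommendations list \
    --project=hrl-media-prod \
    --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender
```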

14
Q

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their
existing backends on the other platforms?

A

A. Tests should scale well beyond the prior approaches

15
Q

Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough
testing process for new versions of the backend before they are released to the public. You want the testing
environment to scale in an economical way. How should you design the process?

A

A. Create a scalable environment in GCP for simulating production load

16
Q

Mountkirk Games wants to set up a continuous delivery pipeline. Their architecture includes many small services
that they want to be able to update and roll back quickly. Mountkirk Games has the following requirements:
✑ Services are deployed redundantly across multiple regions in the US and Europe
✑ Only frontend services are exposed on the public internet
✑ They can provide a single frontend IP for their fleet of services
✑ Deployment artifacts are immutable
Which set of products should they use?

A

C. Google Container Registry, Google Container Engine, Google HTTP(S) Load Balancer

17
Q

Mountkirk Games’ gaming servers are not automatically scaling properly. Last month, they rolled out a new
feature, which suddenly became very popular. A record number of users are trying to use the service, but many of
them are getting 503 errors and very slow response times. What should they investigate first?

A

B. Verify that the project quota hasn’t been exceeded

18
Q

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application
environments. Developers and testers can access each other’s environments and resources, but they cannot
access staging or production resources. The staging environment needs access to some services from production.
What should you do to isolate development environments from staging and production?

A

D. Create one project for development, a second for staging and a third for production

19
Q

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet
their technical requirements.
Which combination of Google technologies will meet all of their requirements?

A

B. Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

20
Q

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current
analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.
Which two steps should be part of their migration plan? (Choose two.)

A

A. Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.
B. Write a schema migration plan to denormalize data for better performance in BigQuery.

21
Q

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical
architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games
business and technical requirements, what should you do?

A

D. Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible
Compute Engine instances.

22
Q

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the
future in order to take advantage of cloud and technology improvements as they become available. Which two
steps should they take? (Choose two.)

A

A. Store as much analytics and game activity data as financially feasible today so it can be used to train
machine learning models to predict user behavior in the future.
B. Begin packaging their game backend artifacts in container images and running them on Google Kubernetes
Engine to improve the ability to scale up or down based on game activity.

23
Q

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test
the analytics platform’s resilience to changes in mobile network latency. What should you do?

A

A. Deploy failure injection software to the game analytics platform that can inject additional latency to mobile
client analytics traffic.

24
Q

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical
architecture for the database workloads for your company, Mountkirk Games. Considering the business and
technical requirements, what should you do?

A

D. Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for
historical data queries

25
Q

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk’s
technical requirement for storing game activity in a time series database service?

A

A. Cloud Bigtable

26
Q

For this question, refer to the Mountkirk Games case study. You are in charge of the new Game Backend Platform
architecture. The game communicates with the backend over a REST API.
You want to follow Google-recommended practices. How should you design the backend?

A

C. Create an instance template for the backend. For every region, deploy it on a multi-zone managed instance
group. Use an L7 load balancer

27
Q

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games’ new Google Cloud solution. The
batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract
transform load (ETL) tool. What should you do?

A

B. Use gsutil to batch copy the files in parallel.

28
Q

You are implementing Firestore for Mountkirk Games. Mountkirk Games wants to give a new game programmatic
access to a legacy game’s Firestore database.
Access should be as restricted as possible. What should you do?

A

C. Create a service account (SA) in the legacy game’s Google Cloud project, add this SA in the new game’s IAM
page, and then give it the Firebase Admin role in both projects

29
Q

Mountkirk Games wants to limit the physical location of resources to their operating Google Cloud regions. What
should you do?

A

A. Configure an organizational policy which constrains where resources can be deployed.
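Why this answer: the `constraints/gcp.resourceLocations` list constraint restricts where resources may be created. A sketch of the policy file (the value group shown is one plausible choice):

```yaml
constraint: constraints/gcp.resourceLocations
listPolicy:
  allowedValues:
    - in:us-locations
```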

30
Q

You need to implement a network ingress for a new game that meets the defined business and technical
requirements. Mountkirk Games wants each regional game instance to be located in multiple Google Cloud
regions. What should you do?

A

D. Configure Ingress for Anthos with a global load balancer and Google Kubernetes Engine.

31
Q

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud.
You want to streamline the process and follow
Google-recommended practices. What should you do?

A

A. Configure Workload Identity and service accounts to be used by the application platform.

32
Q

TerramEarth’s CTO wants to use the raw data from connected vehicles to help identify approximately when a vehicle in the field will have a catastrophic failure.
You want to allow analysts to centrally query the vehicle data.
Which architecture should you recommend?

A

(The answer options for this question are architecture diagrams and are not reproduced here.)

33
Q

The TerramEarth development team wants to create an API to meet the company’s business requirements. You
want the development team to focus their development effort on business value versus creating a custom
framework.
Which method should they use?

A

A. Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners

34
Q

TerramEarth plans to connect all 20 million vehicles in the field to the cloud. This increases the volume to 20
million 600-byte records per second, about 40 TB an hour.
How should you design the data ingestion?

A

B. Vehicles write data directly to Google Cloud Pub/Sub
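Sanity-checking the stated volume with quick arithmetic:

```python
records_per_sec = 20_000_000
bytes_per_record = 600

gb_per_sec = records_per_sec * bytes_per_record / 1e9       # ingest rate
bytes_per_hour = records_per_sec * bytes_per_record * 3600
tib_per_hour = bytes_per_hour / 2**40                       # binary TB (TiB)

assert gb_per_sec == 12.0          # 12 GB/s sustained
assert 39 <= tib_per_hour <= 40    # ~40 "TB" an hour, as the question states
```

Pub/Sub is the globally distributed, autoscaling ingestion point suited to that rate; per-vehicle batch uploads or self-managed brokers would not keep up as cleanly.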

35
Q

You analyzed TerramEarth’s business requirement to reduce downtime, and found that they can achieve most of
the time savings by reducing customers’ wait time for parts. You decided to focus on reducing the 3-week
aggregate reporting time.
Which modifications to the company’s processes should you recommend?

A

C. Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine
learning analysis of metrics

36
Q

Which of TerramEarth’s legacy enterprise processes will experience significant change as a result of increased
Google Cloud Platform adoption?

A

B. Capacity planning, TCO calculations, opex/capex allocation

37
Q

TerramEarth’s 20 million vehicles are scattered around the world. Based on the vehicle’s location, its telemetry
data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run
a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run
this job on all the data.
What is the most cost-effective way to run this job?

A

D. Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional
bucket and use a Cloud Dataproc cluster to finish the job

38
Q

TerramEarth has equipped all connected trucks with servers and sensors to collect telemetry data. Next year they
want to use the data to train machine learning models. They want to store this data in the cloud while reducing
costs.
What should they do?

A

D. Have the vehicle’s computer compress the data in hourly snapshots, and store it in a GCS Coldline bucket

39
Q

Operational parameters such as oil pressure are adjustable on each of TerramEarth’s vehicles to increase their
efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency
of all 20 million cellular and unconnected vehicles in the field.
How can you accomplish this goal?

A

B. Capture all operating data, train machine learning models that identify ideal operations, and run locally to
make operational adjustments automatically

40
Q

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation,
TerramEarth is required to delete data generated from its
European customers after a period of 36 months when it contains personal data. In the new architecture, this data
will be stored in both Cloud Storage and
BigQuery. What should you do?

A

C. Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36
months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age
condition of 36 months.
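The two expiry settings use different units, which is easy to get wrong. A quick sketch of the math (36 months is approximated as 3 × 365 days here; a real retention policy should be confirmed with compliance):

```python
# Cloud Storage lifecycle "Age" condition is expressed in days.
age_days = 3 * 365

# BigQuery partition expiration (e.g. bq --time_partitioning_expiration)
# is expressed in seconds.
partition_expiration_s = age_days * 86400

assert age_days == 1095
assert partition_expiration_s == 94_608_000
```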

41
Q

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud
Storage. You need to configure a Cloud Storage lifecycle rule to store one year of data and minimize file storage
cost. Which two actions should you take?

A

A. Create a Cloud Storage lifecycle rule with Age: 30, Storage Class: Standard, and Action: Set to Coldline, and
create a second GCS life-cycle rule with Age: 365, Storage Class: Coldline, and Action: Delete.
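Why this answer: the two rules translate into a single lifecycle configuration that `gsutil lifecycle set rules.json gs://bucket` would consume. A sketch of that JSON, built in Python (bucket name and file name are hypothetical; the structure follows the documented lifecycle schema):

```python
import json

rules = {
    "lifecycle": {
        "rule": [
            {   # After 30 days, demote Standard objects to Coldline.
                "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
                "condition": {"age": 30, "matchesStorageClass": ["STANDARD"]},
            },
            {   # After 1 year, delete Coldline objects.
                "action": {"type": "Delete"},
                "condition": {"age": 365, "matchesStorageClass": ["COLDLINE"]},
            },
        ]
    }
}

print(json.dumps(rules, indent=2))
```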

42
Q

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for
the data warehouse for your company,
TerramEarth.
Considering the TerramEarth business and technical requirements, what should you do?

A

A. Replace the existing data warehouse with BigQuery. Use table partitioning.

43
Q

For this question, refer to the TerramEarth case study. A new architecture that writes all incoming data to
BigQuery has been introduced. You notice that the data is dirty, and want to ensure data quality on an automated
daily basis while managing cost.
What should you do?

A

D. Use Cloud Dataprep and configure the BigQuery tables as the source. Schedule a daily job to clean the data.

44
Q

For this question, refer to the TerramEarth case study. Considering the technical requirements, how should you
reduce the unplanned vehicle downtime in GCP?

A

A. Use BigQuery as the data warehouse. Connect all vehicles to the network and stream data into BigQuery
using Cloud Pub/Sub and Cloud Dataflow. Use Google Data Studio for analysis and reporting.

45
Q

For this question, refer to the TerramEarth case study. You are asked to design a new architecture for the ingestion
of the data of the 200,000 vehicles that are connected to a cellular network. You want to follow Google-recommended practices.
Considering the technical requirements, which components should you use for the ingestion of the data?

A

B. Cloud IoT Core with public/private key pairs

46
Q

For this question, refer to the TerramEarth case study. You start to build a new application that uses a few Cloud
Functions for the backend. One use case requires a Cloud Function func_display to invoke another Cloud Function
func_query. You want func_query only to accept invocations from func_display. You also want to follow Google’s
recommended best practices. What should you do?

A

B. Make func_query ‘Require authentication.’ Create a unique service account and associate it with func_display.
Grant the service account the invoker role for func_query. Create an ID token in func_display and include the
token in the request when invoking func_query.
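Why this answer: on Google infrastructure, the metadata server mints an ID token for a given audience, and func_query verifies the bearer token and the caller's invoker role. A minimal sketch of the two pieces (the func_query URL is hypothetical; the metadata endpoint and `Metadata-Flavor` header are the documented mechanism):

```python
import urllib.parse

METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/service-accounts/default/identity")

def identity_token_request(audience: str):
    # GET this URL (with the header below) from inside func_display to mint
    # an ID token whose audience is func_query's HTTPS URL.
    url = METADATA_URL + "?" + urllib.parse.urlencode({"audience": audience})
    return url, {"Metadata-Flavor": "Google"}

def invoke_headers(id_token: str):
    # func_query ("Require authentication") validates this bearer token and
    # checks the calling service account has the invoker role.
    return {"Authorization": f"Bearer {id_token}"}

url, hdrs = identity_token_request(
    "https://region-project.cloudfunctions.net/func_query")  # hypothetical URL
assert "audience=" in url
assert hdrs == {"Metadata-Flavor": "Google"}
```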

47
Q

For this question, refer to the TerramEarth case study. You have broken down a legacy monolithic application into a
few containerized RESTful microservices.
You want to run those microservices on Cloud Run. You also want to make sure the services are highly available
with low latency to your customers. What should you do?

A

B. Deploy Cloud Run services to multiple regions. Create serverless network endpoint groups pointing to the
services. Add the serverless NEGs to a backend service that is used by a global HTTP(S) Load Balancing
instance.
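Why this answer: serverless network endpoint groups let one global external HTTP(S) load balancer route to the nearest healthy Cloud Run region. A CLI sketch for one region (service, backend, and region names are placeholders; repeat per region):

```shell
gcloud compute network-endpoint-groups create api-neg-us \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-run-service=api

gcloud compute backend-services add-backend api-backend \
    --global \
    --network-endpoint-group=api-neg-us \
    --network-endpoint-group-region=us-central1
```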

48
Q

For this question, refer to the TerramEarth case study. You are migrating a Linux-based application from your
private data center to Google Cloud. The
TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and
Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration.
What should you do? (Choose two.)

A

A. Open a support case regarding the CVE and chat with the support engineer.
C. Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

49
Q

For this question, refer to the TerramEarth case study. TerramEarth has a legacy web application that you cannot
migrate to cloud. However, you still want to build a cloud-native way to monitor the application. If the application
goes down, you want the URL to point to a “Site is unavailable” page as soon as possible. You also want your Ops
team to receive a notification for the issue. You need to build a reliable solution for minimum cost. What should you
do?

A

C. Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a
Pub/Sub queue that triggers a Cloud Function to switch the URL to the “Site is unavailable” page, and notify the
Ops team.
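Why this pattern works: the alert arrives in Pub/Sub as a base64-encoded JSON message, and the Cloud Function decodes it and decides what to do. A toy handler (field names mimic, but greatly simplify, Cloud Monitoring's incident payload; the "switch URL" step is represented by a return value):

```python
import base64
import json

def on_uptime_alert(event, context=None):
    # Pub/Sub delivers the message body base64-encoded under "data".
    payload = json.loads(base64.b64decode(event["data"]).decode())
    if payload.get("incident", {}).get("state") == "open":
        # In a real function: repoint the URL to the static
        # "Site is unavailable" page and notify the Ops channel.
        return "switch-to-maintenance-page"
    return "no-op"

msg = {"data": base64.b64encode(
    json.dumps({"incident": {"state": "open"}}).encode())}
assert on_uptime_alert(msg) == "switch-to-maintenance-page"
```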

50
Q

For this question, refer to the TerramEarth case study. You are building a microservice-based application for
TerramEarth. The application is based on Docker containers. You want to follow Google-recommended practices to
build the application continuously and store the build artifacts. What should you do?

A

A. Configure a trigger in Cloud Build for new source changes. Invoke Cloud Build to build container images for
each microservice, and tag them using the code commit hash. Push the images to the Container Registry.
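Why this answer: a trigger-driven Cloud Build substitutes `$COMMIT_SHA` automatically, giving immutable, traceable image tags. A `cloudbuild.yaml` sketch (the image path and service name are hypothetical):

```yaml
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/vehicle-api:$COMMIT_SHA', '.']
images:
  - 'gcr.io/$PROJECT_ID/vehicle-api:$COMMIT_SHA'
```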

51
Q

For this question, refer to the TerramEarth case study. TerramEarth has about 1 petabyte (PB) of vehicle testing
data in a private data center. You want to move the data to Cloud Storage for your machine learning team.
Currently, a 1-Gbps interconnect link is available for you. The machine learning team wants to start using the data
in a month. What should you do?

A

A. Request Transfer Appliances from Google Cloud, export the data to appliances, and return the appliances to
Google Cloud.
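Why this answer: even at 100% utilization, a 1-Gbps link cannot move 1 PB within the one-month deadline, so a Transfer Appliance is the practical option. The arithmetic:

```python
petabyte_bits = 1e15 * 8   # 1 PB of data, in bits
link_bps = 1e9             # 1 Gbps interconnect, ideal sustained throughput

transfer_days = petabyte_bits / link_bps / 86400

# ~92.6 days at perfect utilization -- roughly three months, not one.
assert 92 < transfer_days < 93
```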