Case Study 3 Flashcards

1
Q

Company overview
Helicopter Racing League (HRL) is a global sports league for competitive helicopter racing. Each year HRL holds the world championship and several regional league competitions where teams compete to earn a spot in the world championship. HRL offers a paid service to stream the races all over the world with live telemetry and predictions throughout each race.

Solution concept
HRL wants to migrate their existing service to a new platform to expand their use of managed AI and ML services to facilitate race predictions. Additionally, as new fans engage with the sport, particularly in emerging regions, they want to move the serving of their content, both real-time and recorded, closer to their users.

Existing technical environment
HRL is a public cloud-first company; the core of their mission-critical applications runs on their current public cloud provider. Video recording and editing is performed at the race tracks, and the content is encoded and transcoded, where needed, in the cloud. Enterprise-grade connectivity and local compute are provided by truck-mounted mobile data centers. Their race prediction services are hosted exclusively on their existing public cloud provider. Their existing technical environment is as follows:
- Existing content is stored in an object storage service on their existing public cloud provider.
- Video encoding and transcoding is performed on VMs created for each job.
- Race predictions are performed using TensorFlow running on VMs in the current public cloud provider.

Business requirements
HRL's owners want to expand their predictive capabilities and reduce latency for their viewers in emerging markets. Their requirements are:
- Support ability to expose the predictive models to partners.
- Increase predictive capabilities during and before races:
  - Race results
  - Mechanical failures
  - Crowd sentiment
- Increase telemetry and create additional insights.
- Measure fan engagement with new predictions.
- Enhance global availability and quality of the broadcasts.
- Increase the number of concurrent viewers.
- Minimize operational complexity.
- Ensure compliance with regulations.
- Create a merchandising revenue stream.

Technical requirements
- Maintain or increase prediction throughput and accuracy.
- Reduce viewer latency.
- Increase transcoding performance.
- Create real-time analytics of viewer consumption patterns and engagement.
- Create a data mart to enable processing of large volumes of race data.

Executive statement
Our CEO, S. Hawke, wants to bring high-adrenaline racing to fans all around the world. We listen to our fans, and they want enhanced video streams that include predictions of events within the race (e.g., overtaking). Our current platform allows us to predict race outcomes but lacks the facility to support real-time predictions during races and the capacity to process season-long results.

A

.

2
Q

1/6
For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:
• It must provide low latency at minimal cost.
• It must be able to identify duplicate credit cards and must not store plaintext card numbers.
• It should support annual key rotation.
Which storage approach should you adopt for your tokenization service?

A. Store the card data in Secret Manager after running a query to identify duplicates.

B. Encrypt the card data with a deterministic algorithm and store it in Firestore using Datastore mode.

C. Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D. Use column-level encryption to store the data in Cloud SQL.

A

B. Encrypt the card data with a deterministic algorithm and store it in Firestore using Datastore mode.

https://cloud.google.com/community/tutorials/pci-tokenizer

Deterministic output means that a given set of inputs (card number, expiration, and userID) will always generate the same token. This is useful if you want to rely on the token value to deduplicate your token stores. You can simply match a newly generated token to your existing catalog of tokens to determine whether the card has been previously stored. Depending on your application architecture, this can be a very useful feature. However, this could also be accomplished using a salted hash of the input values.

https://cloud.google.com/architecture/tokenizing-sensitive-cardholder-data-for-pci-dss
Firestore is the next major version of Datastore. Firestore can run in Datastore mode, which uses the same API as Datastore and scales to millions of writes per second.
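
To make the deduplication property concrete, here is a minimal sketch of the salted-hash variant mentioned above, using a keyed HMAC from the command line. The key value, card number, and field layout are illustrative assumptions, not part of the case study; in practice the key would be held in Secret Manager or KMS and rotated annually.

# Deterministic token: the same inputs and key always produce the same output.
# TOKEN_KEY is a placeholder for a managed key that is rotated yearly.
TOKEN_KEY="replace-with-managed-key"

# First time the card is seen: store the resulting hex token, never the plaintext card number.
echo -n "4111111111111111|2027-08|user-42" | openssl dgst -sha256 -hmac "$TOKEN_KEY"

# The same card submitted later yields an identical token, so it can be matched
# against the existing token catalog to flag the duplicate.
echo -n "4111111111111111|2027-08|user-42" | openssl dgst -sha256 -hmac "$TOKEN_KEY"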

3
Q

2/6
For this question, refer to the Helicopter Racing League (HRL) case study. Recently HRL started a new regional racing league in Cape Town, South Africa. In an effort to give customers in Cape Town a better user experience, HRL has partnered with the Content Delivery Network provider Fastly. HRL needs to allow traffic coming from all of the Fastly IP address ranges into their Virtual Private Cloud network (VPC network). You are a member of the HRL security team and you need to configure the update that will allow only the Fastly IP address ranges through the External HTTP(S) load balancer. Which command should you use?

A. gcloud compute security-policies rules update 1000 \
     --security-policy from-fastly \
     --src-ip-ranges * \
     --action "allow"

B. gcloud compute firewall-rules update sourceiplist-fastly \
     --priority 1000 \
     --allow tcp:443

C. gcloud compute firewall-rules update hlr-policy \
     --priority 1000 \
     --target-tags=sourceiplist-fastly \
     --allow tcp:443

D. gcloud compute security-policies rules update 1000 \
     --security-policy hlr-policy \
     --expression "evaluatePreconfiguredExpr('sourceiplist-fastly')" \
     --action "allow"

A

A. gcloud compute security-policies rules update 1000 \
     --security-policy from-fastly \
     --src-ip-ranges * \
     --action "allow"
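
For context, the rule above only takes effect once the Cloud Armor security policy exists and is attached to the load balancer's backend service. A rough sketch of those surrounding steps follows; the policy description and backend service name are placeholders, not values from the case study.

# One-time setup (hypothetical names): create the policy and attach it to the
# backend service behind the External HTTP(S) load balancer.
gcloud compute security-policies create from-fastly \
    --description "Cloud Armor policy for Fastly CDN source ranges"

gcloud compute backend-services update hrl-streaming-backend \
    --security-policy from-fastly \
    --global
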
4
Q

3/6
For this question, refer to the Helicopter Racing League (HRL) case study. The HRL development team releases a new version of their predictive capability application every Tuesday evening at 3 a.m. UTC to a repository. The security team at HRL has developed an in-house penetration test Cloud Function called Airwolf. The security team wants to run Airwolf against the predictive capability application as soon as it is released every Tuesday. You need to set up Airwolf to run at the recurring weekly cadence. What should you do?

A. Set up Cloud Tasks and a Cloud Storage bucket that triggers a Cloud Function.

B. Set up a Cloud Logging sink and a Cloud Storage bucket that triggers a Cloud Function.

C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

D. Set up Identity and Access Management (IAM) and Confidential Computing to trigger a Cloud Function.

A

C. Configure the deployment job to notify a Pub/Sub queue that triggers a Cloud Function.

"Answer C looks right. Triggering a Cloud Function from Pub/Sub is the relevant pattern, and Cloud Storage doesn't make sense here. It would have been more straightforward if option C had mentioned Cloud Scheduler instead of the deployment job, but a bit of research on deployment jobs points to cron jobs, which fits perfectly."

https://cloud.google.com/appengine/docs/flexible/nodejs/scheduling-jobs-with-cron-yaml
https://cloud.google.com/scheduler/docs/tut-pub-sub
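
A minimal sketch of the chosen pattern follows; the topic name, function name, runtime, and entry point are illustrative assumptions rather than details from the case study.

# One-time setup: a release topic and the Airwolf function subscribed to it.
gcloud pubsub topics create predictive-app-released

gcloud functions deploy airwolf \
    --runtime python311 \
    --entry-point run_pentest \
    --trigger-topic predictive-app-released

# Added as the final step of the Tuesday 3 a.m. UTC deployment job, so the
# penetration test starts as soon as the release lands:
gcloud pubsub topics publish predictive-app-released \
    --message "predictive capability app released"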

5
Q

4/6
For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?

A. Use Explainable AI.
B. Use Vision AI.
C. Use Google Cloud's operations suite.
D. Use Jupyter Notebooks.

A

A. Use Explainable AI.

“AI Explanations helps you understand your model’s outputs for classification and regression tasks. Whenever you request a prediction on AI Platform, AI Explanations tells you how much each feature in the data contributed to the predicted result. You can then use this information to verify that the model is behaving as expected, recognize bias in your models, and get ideas for ways to improve your model and your training data.”

https://cloud.google.com/ai-platform/prediction/docs/ai-explanations/overview

6
Q

5/6
For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective approach for storing their race data such as telemetry. They want to keep all historical records, train models using only the previous season's data, and plan for data growth in terms of volume and information collected. You need to propose a data solution. Considering HRL's business requirements and the goals expressed by CEO S. Hawke, what should you do?

A. Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B. Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D. Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.

A

C. Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

https://cloud.google.com/architecture/mobile-gaming-analysis-telemetry
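
As a rough sketch of what partitioning race data by season could look like, the following uses BigQuery integer-range partitioning; the dataset, table, columns, and season range are illustrative assumptions, not from the case study.

# Hypothetical telemetry table partitioned on an integer season column.
bq query --use_legacy_sql=false '
CREATE TABLE racing.telemetry (
  season        INT64,
  race_id       STRING,
  helicopter_id STRING,
  recorded_at   TIMESTAMP,
  speed_kmh     FLOAT64,
  altitude_m    FLOAT64
)
PARTITION BY RANGE_BUCKET(season, GENERATE_ARRAY(2010, 2040, 1))'

# Training on only the previous season then scans a single partition:
bq query --use_legacy_sql=false \
    'SELECT * FROM racing.telemetry WHERE season = 2023'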

7
Q

6/6
For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances allocated to video encoding and transcoding. You suspect that these virtual machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

A. Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B. Use the gcloud compute instances list command to list the virtual machine instances that have the idle: true label set.

C. Use the gcloud recommender command to list the idle virtual machine instances.

D. From the Google Cloud Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.

A

C. Use the gcloud recommender command to list the idle virtual machine instances.

https://cloud.google.com/compute/docs/instances/viewing-and-applying-idle-vm-recommendations
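
A hedged sketch of what that looks like in practice (project ID and zone are placeholders; the recommender is queried per zone, as described on the page above):

# List idle-VM recommendations for one zone of a hypothetical project.
gcloud recommender recommendations list \
    --project=hrl-prod \
    --location=us-central1-a \
    --recommender=google.compute.instance.IdleResourceRecommender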
