Machine Learning in the Enterprise Flashcards

1
Q

What do you call the process of developing a pipeline for model (re)training?

A

Model operationalization

2
Q

What are the different types of inference?

A

Online prediction - API for real-time prediction
Streaming prediction - near real-time event-based predictions
Batch prediction - offline prediction in batches
Embedded prediction - prediction on different devices like mobile phone

3
Q

What is a Data Catalog?

A

Data Catalog is a metadata management service that attaches additional metadata to data coming from various sources such as BigQuery, Cloud Storage, Dataplex, etc. It tags the data, making it discoverable and understandable to everyone.

4
Q

What is Dataplex?

A

Dataplex enables you to centrally manage data coming from various sources such as data lakes, data warehouses, and data marts. It consists of Lakes, Zones, and Assets. By logically organizing data into Lakes and Zones, you can control who has access to which data and make it easily discoverable. It also provides data lineage, automatic data quality checks, and automatic metadata extraction.

5
Q

What is Analytics Hub?

A

It is a central location where you can publish your data and subscribe to data from other publishers, easily ingesting it into your project. Publishers pay for the storage of the data, while subscribers pay for the analytics workloads.

6
Q

What are the different data preprocessing options on GCP?

A

BigQuery - recommended for structured data: perform transformations on the data and store the results in a "clean" dataset
Dataflow - recommended for unstructured data
Dataproc - for customers who already have their data preprocessing pipelines in Spark and Hadoop
TensorFlow Extended - if you are already using it for your model pipelines

7
Q

What is Dataprep?

A

A UI on top of Dataflow. It provides visualizations, quality checks, statistics, and easy-to-use data transformations. Its most important feature is recipes, which chain together different types of transformations (predefined or your own).

8
Q

What are some optimal and maximal values for batch size?

A

40-100 is a good optimal range, with 500 as a practical maximum

9
Q

Is it better to start with a small or larger batch size?

A

Smaller

10
Q

What is model parallelism?

A

When your model is too big to fit on one device, you split it across devices, e.g. by layers, so that portions of the deep neural network run on different nodes of the cluster.

11
Q

How can you run your Python training application in a custom job, what options do you have for running your code?

A

You can use a pre-built container provided by Google for various frameworks, or supply a custom container image.

12
Q

How to work with large datasets that can not fit into memory during training?

A

Common approaches are to stream the data or to load it in batches.
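For illustration, loading in batches can be done with a generator that reads a fixed number of records at a time instead of materializing the whole dataset in memory. A minimal plain-Python sketch (the function name, file path, and batch size are made up for the example):

```python
def iter_batches(path, batch_size=1000):
    """Yield lists of at most `batch_size` records without loading the whole file."""
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # flush the final, possibly smaller batch
        yield batch

# Usage: process a large file one batch at a time, e.g.
# for batch in iter_batches("training_data.csv"):
#     train_step(batch)
```

Frameworks offer the same idea out of the box (e.g. `tf.data` datasets stream and batch records for you), so in practice you would reach for those rather than hand-rolling it.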

13
Q

What is the required thing to do when the custom job is completed with model training?

A

You need to store the model in Cloud Storage. After training completes, the VM is shut down, and if you haven't saved the model for later use it will be lost.

14
Q

Can you use model artifacts stored in GCS directly in Vertex AI for prediction?

A

No, you first have to import it into the Vertex AI Model Registry.

15
Q

What are the different ways of packaging your code for custom training?

A
  • Store your code in a single Python file and use it in combination with a pre-built container (good for prototyping)
  • Package your code as a Python source distribution with a pre-built container
  • Create a custom Docker image and store it in Artifact Registry
16
Q

What is Cloud Storage FUSE?

A

A tool that mounts GCS buckets so your code can access files/folders directly without downloading them. Buckets are available under the “/gcs/” root folder and can be treated like a regular file system.
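The mapping from a GCS URI to its FUSE path is mechanical; a small hypothetical helper (not part of any GCP SDK) illustrates it:

```python
def gcs_uri_to_fuse_path(uri: str) -> str:
    """Translate a gs:// URI to the path where Cloud Storage FUSE exposes it."""
    if not uri.startswith("gs://"):
        raise ValueError("expected a gs:// URI")
    # gs://my-bucket/data/train.csv -> /gcs/my-bucket/data/train.csv
    return "/gcs/" + uri[len("gs://"):]
```

With such a mapping, code written against ordinary file paths works unchanged against bucket contents.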

17
Q

What are the different ways of loading data for custom training?

A
  • Load data from Cloud Storage using Cloud Storage FUSE
  • Mount a NFS share
  • Use managed datasets
18
Q

If you need to access other Google resources from a custom training job, what kind of authentication is the recommended way to access them?

A

The recommended way is to use ADC (Application Default Credentials). Vertex AI automatically configures the Custom Code Service Agent with predefined permissions, but if you need a different set of permissions you can attach a custom service account.

19
Q

What do you need to do to read files from GCS in Vertex AI custom jobs?

A

Nothing, they are automatically accessible on the “/gcs/” path, as Cloud Storage FUSE is automatically integrated.

20
Q

What will happen if a VM is shutdown/restarted during model training? How to approach this problem?

A

If the VM restarts during model training, training progress is lost and training starts again from the beginning. To mitigate this, a rule of thumb is: if your training lasts more than 4 hours, make sure to:
1. Store intermediate training checkpoints to GCS
2. When training starts, first check whether a checkpoint already exists and resume from it
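The two steps above can be sketched as follows; a plain directory stands in for the mounted GCS path, and all names are illustrative:

```python
import json
import os

def train(checkpoint_dir, total_steps=100):
    """Resume from the last saved checkpoint if one exists, else start fresh."""
    ckpt_file = os.path.join(checkpoint_dir, "checkpoint.json")
    start_step = 0
    if os.path.exists(ckpt_file):          # step 2: check for existing progress
        with open(ckpt_file) as f:
            start_step = json.load(f)["step"]
    for step in range(start_step, total_steps):
        # ... one training step would run here ...
        if step % 10 == 0:                 # step 1: periodically persist progress
            with open(ckpt_file, "w") as f:
                json.dump({"step": step}, f)
    return start_step  # returned only so the resume behaviour is observable
```

In real jobs the checkpoint would live under `/gcs/<bucket>/...` and hold model weights and optimizer state (e.g. via your framework's checkpoint utilities), not just a step counter.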

21
Q

How can you create a custom container image for training?

A

You can use the autopackaging feature, which in a single command builds a Docker image, pushes it to the container registry, and creates a CustomJob resource based on that image. This does not work for TrainingPipeline and HyperparameterTuningJob. Alternatively, you can write a Dockerfile and build the image manually.

22
Q

What is the difference between Custom Job and Training Pipeline?

A

A CustomJob is a single execution of your custom training code, while a Training Pipeline orchestrates multiple steps: Custom Jobs, hyperparameter tuning jobs, uploading the resulting model to the Model Registry, etc.

23
Q

What type of search algorithms is Vertex AI Vizier supporting?

A

Grid - searches every combination of hyperparameters
Random - tries random combinations of parameters
Bayesian - decides which parameters to try next based on the results of previous iterations (default)
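To make the grid vs. random distinction concrete, here is a plain-Python sketch (the parameter names and values are invented; Vizier itself is configured declaratively, not like this):

```python
import itertools
import random

param_space = {"lr": [0.001, 0.01, 0.1], "batch_size": [32, 64, 128]}

# Grid search: enumerate every combination (3 x 3 = 9 trials).
grid_trials = [dict(zip(param_space, combo))
               for combo in itertools.product(*param_space.values())]

# Random search: sample a fixed number of random combinations,
# independent of the grid size.
def random_trials(n, seed=0):
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in param_space.items()} for _ in range(n)]
```

Bayesian optimization differs from both in that each new trial is chosen using the measured results of previous trials, which is why it is the default.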

24
Q

What is the baseline for skew and drift in model monitoring?

A

Skew - the statistical distribution of feature values in the training data
Drift - the statistical distribution of feature values seen in the recent past

25
Q

What kind of models is Vertex AI model monitoring supporting?

A

AutoML tabular and custom tabular models

26
Q

What are the things you can monitor OOTB in Vertex AI Monitoring?

A

Skew and drift detection for prediction requests and feature attributions.

27
Q

What are the necessary steps to enable monitoring?

A
  1. Deploy the model to a Vertex AI Endpoint
  2. Configure a model monitoring specification
  3. Attach the monitoring specification to the Vertex AI Endpoint
  4. Upload an input schema for parsing, or let it be generated automatically
  5. For feature skew, upload the training data so the feature distributions can be generated automatically
  6. For feature attributions, upload the corresponding Vertex AI Explainability configuration
28
Q

Explain in detail how model monitoring works in GCP.

A

When model monitoring is enabled on a Vertex AI Endpoint and an input schema is configured, prediction logs are stored in a BigQuery table that is used for skew and drift detection.

You have to configure monitoring intervals (how often to check for skew and drift) and provide a sampling rate (what fraction of prediction requests should be logged and used for monitoring - a value between 0 and 1, where 0.5 means 50% of incoming prediction requests are randomly sampled).
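The sampling rate described above is effectively an independent coin flip per request; a minimal plain-Python sketch (the function name is made up):

```python
import random

def sample_requests(requests, rate, seed=0):
    """Keep each prediction request with probability `rate` (0.0-1.0)."""
    rng = random.Random(seed)
    return [r for r in requests if rng.random() < rate]
```

With `rate=0.5` roughly half of the requests are kept, not exactly half, since each request is sampled independently.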

For skew detection, you have to upload a training data sample against which the serving data distribution will be compared (not needed for AutoML).

For drift detection on feature attributions, you have to provide a Vertex AI Explainability configuration (unless using AutoML).

When skew or drift is detected based on a certain threshold that you configure, an alert is sent to the configured email(s).

29
Q

What is the purpose of input schema in model monitoring?

A

For monitoring to be able to parse prediction features, there has to be an input schema. For AutoML models it is created automatically; for custom models, monitoring tries to build it from the first 1000 prediction requests, or you can upload it manually. It works best with standard key-value pairs.

30
Q

What type of data/models is model monitoring supporting?

A

Only AutoML tabular and custom-trained tabular models

31
Q

Should you monitor both training-serving skew and prediction drift at the same time?

A

It is possible, but you should prioritize. If training data is available to you, you should monitor training-serving skew. If not available, you should monitor prediction drift.

32
Q

Which type of features are supported in model monitoring?

A

Numerical and categorical features - monitors changes in the distribution of features

33
Q

How are feature distributions calculated in model monitoring?

A

Categorical features: by the percentage/number of occurrences of each value. Numerical features: values are first organized into bins, then treated the same way as categorical values.
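Both cases can be sketched in plain Python (the bin edges and data are made up for illustration):

```python
from collections import Counter

def categorical_distribution(values):
    """Share of occurrences per category."""
    counts = Counter(values)
    total = len(values)
    return {k: c / total for k, c in counts.items()}

def numerical_distribution(values, edges):
    """Bucket numeric values into bins, then treat the bins as categories."""
    def bin_of(x):
        for i, edge in enumerate(edges):
            if x < edge:
                return i
        return len(edges)
    return categorical_distribution([bin_of(v) for v in values])
```

Skew/drift detection then compares two such distributions (e.g. training vs. serving) with a distance measure and alerts when the distance crosses the configured threshold.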

34
Q

What happens if you don’t specify a target column in your training data in model monitoring?

A

It will assume the last column in the training data is the target.

35
Q

What happens if you don’t provide input schema from model monitoring?

A

It will try to automatically detect the schema from the first 1000 prediction requests; monitoring stays in a pending state until the schema is created.

36
Q

Is it always recommended to use Vertex AI Pipelines if you need to orchestrate a few steps in ML workflow?

A

Not always. If you are using TensorFlow and processing terabytes of structured or unstructured data, it is recommended to use TFX instead.

37
Q

Explain the process of Vertex AI Pipeline creation

A
  1. Define components (functions) and their inputs and outputs that represent the pipeline steps
  2. Chain them together into a pipeline, defining which component's output feeds which component's input
  3. Compile the pipeline, which produces a .json or .yaml file representing it
  4. Run the pipeline by referencing that file
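Conceptually, steps 1-2 look like the following plain-Python sketch (this mimics the idea only; it is not the actual Kubeflow Pipelines SDK, and all names are invented):

```python
# Step 1: components are plain functions with declared inputs and outputs
# (in the real SDK they would be decorated, e.g. with @dsl.component).
def preprocess(raw_path: str) -> str:
    return raw_path + ".clean"

def train_model(clean_path: str) -> str:
    return clean_path + ".model"

# Step 2: chain them - one component's output is the next one's input.
def pipeline(raw_path: str) -> str:
    clean = preprocess(raw_path)
    return train_model(clean)

# Steps 3-4 in the real service: compile the pipeline definition to a
# .json/.yaml file and submit that file to Vertex AI as a pipeline run.
```

In the real SDK the chaining is declarative: the compiler reads which outputs feed which inputs and records that DAG in the compiled file, rather than executing the functions directly.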
38
Q

What options are available to create components in Vertex AI Pipelines?

A

You can create a custom component or use prebuilt Google components.

39
Q

What is the benefit of using Vertex AI Pipelines compared to Kubeflow pipelines?

A

Because it is a managed service, you don’t have to maintain a Kubernetes cluster.

40
Q

What kind of data is Vertex Explainable AI supporting?

A

Tabular and image data

41
Q

Is it possible to have lower latency online predictions if your application resides within the local network?

A

Yes, you can use a private endpoint with VPC where the communication will not leave the private network (go to the internet).

42
Q

What are the options to pass the data for batch prediction?

A

You can use data from a data lake/warehouse or from the Vertex AI Feature Store.