Deploy and retrain a model (10–15%) Flashcards

1
Q

What things does MLflow track?

A

Everything about an experiment run, divided into parameters, metrics, and artifacts

2
Q

If I want to use MLflow on Azure, which packages should I install?

A

pip install mlflow azureml-mlflow

3
Q

How do you use MLflow from a local device?

A

Get the MLflow tracking URI from the workspace overview page, then call mlflow.set_tracking_uri("MLFLOW-TRACKING-URI")

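A minimal sketch of pointing a local MLflow installation at a workspace; the subscription, resource group, and workspace names are placeholders, and it assumes the mlflow, azureml-mlflow, and azure-ai-ml packages are installed and that the workspace object exposes its tracking URI:

import mlflow
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION-ID>",
    resource_group_name="<RESOURCE-GROUP>",
    workspace_name="<WORKSPACE-NAME>",
)

# Hand the workspace's MLflow tracking URI to the local MLflow installation.
tracking_uri = ml_client.workspaces.get("<WORKSPACE-NAME>").mlflow_tracking_uri
mlflow.set_tracking_uri(tracking_uri)
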
4
Q

How do you start a run in MLflow?

A

It acts like a wrapper (a context manager): import mlflow, select the experiment with mlflow.set_experiment(experiment_name=<experiment-name>), then wrap your code in a with mlflow.start_run(): block (see the sketch below).

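A minimal sketch, assuming MLflow is already connected to the workspace; the experiment name and logged parameter are placeholders:

import mlflow

mlflow.set_experiment(experiment_name="my-experiment")

# start_run() is a context manager; everything logged inside the block
# belongs to this run, and the run is closed automatically on exit.
with mlflow.start_run():
    mlflow.log_param("example_param", 0.1)
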
5
Q

How do you use autolog in MLflow?

A

import mlflow
from xgboost import XGBClassifier

with mlflow.start_run():
    # autolog() records XGBoost parameters, metrics, and the trained model automatically
    mlflow.xgboost.autolog()

    model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
    model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
6
Q

What are some common functions for custom logging in MLflow?

A

log_param(): logs a single key-value parameter
log_metric(): logs a single key-value metric
log_artifact(): logs a file, like an image
log_model(): logs a model as an MLflow model

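A minimal sketch showing all four calls; the training data (X_train, y_train) and the confusion_matrix.png file are assumed to exist already:

import mlflow
from sklearn.linear_model import LogisticRegression

with mlflow.start_run():
    model = LogisticRegression(C=0.1).fit(X_train, y_train)

    mlflow.log_param("regularization", 0.1)                              # single key-value parameter
    mlflow.log_metric("train_accuracy", model.score(X_train, y_train))   # single key-value metric
    mlflow.log_artifact("confusion_matrix.png")                          # a file, e.g. a saved plot
    mlflow.sklearn.log_model(model, "model")                             # the model in MLflow format
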
7
Q

How do you use custom logging in MLflow?

A

import mlflow
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

with mlflow.start_run():
    model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
    model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
    y_pred = model.predict(X_test)

    # Compute the metric yourself, then log it to the active run
    accuracy = accuracy_score(y_test, y_pred)
    mlflow.log_metric("accuracy", accuracy)
8
Q

What is an endpoint?

A

An HTTPS endpoint to which you can send data and which will return a response

9
Q

What are the two types of online endpoints within Azure Machine Learning?

A

There are managed online endpoints and Kubernetes online endpoints

10
Q

What are managed online endpoints?

A

Azure Machine Learning manages all the underlying infrastructure

11
Q

What are Kubernetes online endpoints?

A

Users manage the Kubernetes cluster that provides the necessary infrastructure

12
Q

What four things do you need to deploy a model to a managed online endpoint?

A

Model assets, scoring script, environment, compute configuration

13
Q

What is automatically generated when you deploy MLflow models to an online endpoint?

A

The scoring script and environment are automatically generated

14
Q

What is blue/green deployment?

A

A controlled rollout similar to A/B testing: route, for example, 90% of the traffic to the proven (blue) deployment and 10% to the new (green) deployment, then watch how the new one performs before shifting more traffic to it.

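A minimal sketch of splitting traffic, assuming an authenticated MLClient (ml_client) and an existing endpoint with deployments named "blue" and "green" (placeholder names):

# Route 90% of requests to the proven deployment and 10% to the new one.
endpoint = ml_client.online_endpoints.get(name="my-endpoint")
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
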
15
Q

What do you use to create an online endpoint?

A

You use ManagedOnlineEndpoint with the name and auth_mode parameters

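A minimal sketch, assuming an authenticated MLClient (ml_client); the endpoint name and description are placeholders, and the name must be unique within the Azure region:

from azure.ai.ml.entities import ManagedOnlineEndpoint

endpoint = ManagedOnlineEndpoint(
    name="my-endpoint",
    description="Online endpoint for real-time predictions",
    auth_mode="key",   # key-based authentication; "aml_token" is the alternative
)

ml_client.online_endpoints.begin_create_or_update(endpoint).result()
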
16
Q

What do you need to deploy an MLflow model?

A

You need either local model files or a model registered in Azure Machine Learning. You also need the instance_type and the instance_count

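A minimal sketch of registering local MLflow model files, assuming an authenticated MLClient (ml_client); the path and model name are placeholders:

from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

model = ml_client.models.create_or_update(
    Model(
        name="my-mlflow-model",
        path="./model",                 # local folder containing the MLflow model files
        type=AssetTypes.MLFLOW_MODEL,
    )
)
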
17
Q

How can you test an endpoint in the Azure Machine Learning studio?

A

Go to Endpoints, select the endpoint, and open the Test tab. Enter your data and review the returned result.

18
Q

How can you test an endpoint with the Python SDK?

A

Send your data through ml_client.online_endpoints.invoke(), passing the name of the endpoint, the name of the deployment, and the JSON file to be sent.

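A minimal sketch, assuming an authenticated MLClient (ml_client); the endpoint name, deployment name, and JSON file are placeholders:

response = ml_client.online_endpoints.invoke(
    endpoint_name="my-endpoint",
    deployment_name="blue",
    request_file="sample-data.json",   # JSON payload to score
)
print(response)
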
19
Q

What do you need to create a deployment on the endpoint?

A

Use ManagedOnlineDeployment with a name, the name of the endpoint, the model, the instance_type (VM size), and the instance_count (number of instances)

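A minimal sketch, assuming an authenticated MLClient (ml_client), a registered MLflow model object (model), and an existing endpoint; names and VM size are placeholders:

from azure.ai.ml.entities import ManagedOnlineDeployment

blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-endpoint",
    model=model,                      # MLflow model, so no scoring script or environment is needed
    instance_type="Standard_DS2_v2",  # VM size
    instance_count=1,                 # number of instances
)

ml_client.online_deployments.begin_create_or_update(blue_deployment).result()
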
20
Q

What functions do you need to create a scoring script?

A

init(), which runs once when the deployment starts and typically loads the model, and run(), which runs for every scoring request

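A minimal sketch of a scoring script (score.py); the model file name and input format are assumptions:

import os
import json
import joblib

def init():
    # Runs once when the deployment starts: load the model into memory.
    global model
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "model.pkl")
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for every scoring request: parse the input and return predictions.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return predictions.tolist()
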
21
Q

What is special about a batch endpoint?

A

You can trigger it from Azure Synapse Analytics or Azure Databricks. You can also integrate it into an existing pipeline

22
Q

What class is used to create a batch endpoint?

A

BatchEndpoint(name, description)

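A minimal sketch, assuming an authenticated MLClient (ml_client); the endpoint name is a placeholder and must be unique within the Azure region:

from azure.ai.ml.entities import BatchEndpoint

endpoint = BatchEndpoint(
    name="my-batch-endpoint",
    description="Batch endpoint for scoring new data",
)

ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
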
23
Q

How do you deploy an MLflow model to a batch endpoint?

A

Use BatchDeployment with extra specification parameters such as instance_count, max_concurrency_per_instance, mini_batch_size, output_action, output_file_name

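A minimal sketch, assuming an authenticated MLClient (ml_client), a registered MLflow model (model), an existing batch endpoint, and a compute cluster named "aml-cluster" (all placeholders):

from azure.ai.ml.entities import BatchDeployment
from azure.ai.ml.constants import BatchDeploymentOutputAction

deployment = BatchDeployment(
    name="classifier-mlflow",
    endpoint_name="my-batch-endpoint",
    model=model,
    compute="aml-cluster",
    instance_count=2,                                       # nodes used for scoring
    max_concurrency_per_instance=2,                         # parallel scoring runs per node
    mini_batch_size=10,                                     # files handed to each scoring call
    output_action=BatchDeploymentOutputAction.APPEND_ROW,   # append all predictions to one file
    output_file_name="predictions.csv",
)

ml_client.batch_deployments.begin_create_or_update(deployment).result()
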
24
Q

How do you invoke a batch endpoint?

A

You need an Input specifying the path and the asset type, then pass it to ml_client.batch_endpoints.invoke()
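
A minimal sketch, assuming an authenticated MLClient (ml_client) and a registered data asset named new-data (both placeholders); depending on your azure-ai-ml version, the invoke parameter may be named input or inputs:

from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

input = Input(type=AssetTypes.URI_FOLDER, path="azureml:new-data:1")

# Invoking the endpoint starts a batch scoring job on the deployment's compute.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="my-batch-endpoint",
    input=input,
)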