Deploying machine learning models with Azure Machine Learning Flashcards

1
Q

What does inferencing mean in machine learning?

A

In machine learning, inferencing refers to the use of a trained model to predict labels for new data on which the model has not been trained. Often, the model is deployed as part of a service that enables applications to request immediate, or real-time, predictions for individual or small numbers of data observations.

2
Q

What does Azure Machine Learning use as its deployment mechanism?

A

Azure Machine Learning uses containers as its deployment mechanism, packaging the model and the code to use it as an image that can be deployed to a container in your chosen compute target.

3
Q

What steps do you have to go through to deploy a real-time inferencing service?

A
  1. Register a trained model
  2. Define an inference configuration
  3. Define a deployment configuration
  4. Deploy the model
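
Using the Azure Machine Learning SDK for Python (azureml-core), the four steps can be sketched roughly as below. This assumes an existing workspace with a local config.json; the names (classifier, model.pkl, score.py, environment.yml, my-service) are illustrative, not prescribed:

```python
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()  # assumes a config.json for an existing workspace

# 1. Register a trained model
model = Model.register(workspace=ws, model_name='classifier',
                       model_path='model.pkl')

# 2. Define an inference configuration (entry script + environment)
env = Environment.from_conda_specification(name='service-env',
                                           file_path='environment.yml')
inference_config = InferenceConfig(entry_script='score.py', environment=env)

# 3. Define a deployment configuration (here: Azure Container Instances)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# 4. Deploy the model as a real-time service
service = Model.deploy(workspace=ws, name='my-service', models=[model],
                       inference_config=inference_config,
                       deployment_config=deployment_config)
service.wait_for_deployment(show_output=True)
```

Running this requires an Azure subscription and workspace, so treat it as a template rather than a standalone script.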
4
Q

Steps to deploy a real-time inferencing service:
Define an Inference Configuration

When the model is deployed, it is a service that consists of two things, namely:

A
  1. A script to load the model and return predictions for submitted data
  2. An environment in which the script will be run

You must therefore define a script and an environment for the service.
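
With the SDK, these two pieces map to an entry script file and an Environment object, combined in an InferenceConfig. A minimal sketch, assuming you have created the (illustrative) files score.py and environment.yml:

```python
from azureml.core import Environment
from azureml.core.model import InferenceConfig

# environment.yml is a hypothetical conda spec listing the packages the
# entry script needs (e.g. scikit-learn plus azureml-defaults)
env = Environment.from_conda_specification(name='service-env',
                                           file_path='environment.yml')

# score.py is the entry (scoring) script described in the next card
inference_config = InferenceConfig(entry_script='score.py', environment=env)
```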

5
Q

For an Inference Configuration you must create an entry script (sometimes referred to as the scoring script) for the service as a Python file. It must include two functions. Which ones?

A
  • init(): called when the service is initialized
  • run(raw_data): called when new data is submitted to the service

Typically, you use the init function to load the model from the model registry, and the run function to generate predictions from the input data.
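
A minimal entry-script sketch, assuming a scikit-learn model serialized with joblib as model.pkl (the file name and model type are illustrative; Azure ML sets the AZUREML_MODEL_DIR variable to the folder holding the registered model files):

```python
import json
import os

import joblib
import numpy as np

model = None

def init():
    # Called once when the service starts: load the registered model
    global model
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR', '.'), 'model.pkl')
    model = joblib.load(model_path)

def run(raw_data):
    # Called for each request: parse the JSON payload, predict, return JSON
    data = np.array(json.loads(raw_data)['data'])
    predictions = model.predict(data)
    return json.dumps(predictions.tolist())
```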

6
Q

For an Inference Configuration you must create an entry script and an environment. What is done afterwards?

A

Defining a Deployment Configuration
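
The deployment configuration describes the compute target that will host the service. A sketch of two common choices (parameter values are illustrative):

```python
from azureml.core.webservice import AciWebservice, AksWebservice

# Azure Container Instances: lightweight, good for dev/test
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Azure Kubernetes Service: scalable, production-grade, supports auth
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1,
                                                auth_enabled=True)
```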

7
Q

What can you use for testing within the real-time inferencing service?

A

For testing, you can use the Azure Machine Learning SDK to call the web service through the run method of a Webservice object that references the deployed service.
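
A short sketch of such a test call, assuming service is the Webservice object returned by Model.deploy (the feature values are illustrative):

```python
import json

# Two hypothetical observations to score
x_new = [[0.1, 2.3, 4.1], [0.2, 1.8, 3.9]]
input_json = json.dumps({'data': x_new})

# run() sends the JSON to the service's entry script and returns its response
response = service.run(input_data=input_json)
predictions = json.loads(response)
```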
