Path4.Mod1.b - Training Models with Scripts - Specifying an Environment for a Command Job Flashcards
When specifying an Environment for your Command job, you can use any one of the following:
- Workspace Environments: Pre-built Environments that come with every Azure ML Workspace
- Docker Environments: Existing Docker Images that already contain packages needed to run your script
- Custom Environments (conda.yml): Environments you build yourself by listing the packages your script needs in a conda specification file
Describe the Workspace Environment
Workspaces come with prebuilt Environments. To use one for a job, set the environment
parameter to your desired Environment’s name as a string value.
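For example, a command job can reference a prebuilt Workspace Environment simply by passing its name as a string. A minimal sketch, assuming a curated environment name, a compute target called aml-cluster, and a train.py script under ./src (all placeholders to swap for values in your own Workspace):

from azure.ai.ml import command

# Reference a prebuilt Workspace Environment by "name@latest" (or "name:version")
job = command(
    code="./src",
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="aml-cluster",
    display_name="train-with-curated-env",
)
ml_client.create_or_update(job)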
Code to get all Workspace Environments
To see a list of those Environments:
envs = ml_client.environments.list()
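To inspect what came back, you can iterate over that list and print each Environment's name:

# Print the name of every Environment registered in the Workspace
for env in envs:
    print(env.name)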
Describe the Docker Environment
You can create an Environment instance using a Docker image containing pre-installed packages/dependencies. Ideal when the built-in ones don’t have what you need.
Describe code for creating a Docker Environment in your Workspace
from azure.ai.ml.entities import Environment

# Create an Environment using a public Docker image URI
env_docker_image = Environment(
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04",
    name="docker-image-example_env",
    description="Environment created from a Docker image.",
)
ml_client.environments.create_or_update(env_docker_image)
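After registration, you can confirm the Environment exists by retrieving it from the Workspace. A small sketch; the label argument used here is an assumption, and you could pass an explicit version instead:

# Retrieve the registered Environment to confirm it was created
env = ml_client.environments.get(name="docker-image-example_env", label="latest")
print(env.name, env.image)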
Describe why a Docker Environment may not be suitable for your needs and the alternative
The Docker image might lack dependencies that your training script or pipeline needs. In that case, you'd need to create a custom Environment.
Describe Custom Environments and how they are defined
A Custom Environment can be built by specifying a conda.yml file, which lists everything required to run your pipeline. An example from our Jupyter Notebook (written to disk with the %%writefile magic command):
%%writefile src/conda-env.yml
name: basic-env-cpu
channels:
  - conda-forge
dependencies:
  - python=3.10
  - scikit-learn
  - pandas
  - numpy
  - matplotlib
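To turn that conda specification into a registered Environment, it can be paired with a base Docker image via the conda_file parameter. A minimal sketch, assuming the file above was written to src/conda-env.yml and that the environment name shown here is a placeholder:

from azure.ai.ml.entities import Environment

# Build a custom Environment from a base Docker image plus the conda file above
env_docker_conda = Environment(
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04",
    conda_file="./src/conda-env.yml",
    name="docker-image-plus-conda-example",
    description="Environment created from a Docker image plus a conda specification.",
)
ml_client.environments.create_or_update(env_docker_conda)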