Path6.Mod2.c - Deploy and Consume Models - Invoke and Troubleshoot Batch Endpoints, Debug Pipelines Flashcards
Added learning: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipeline-failure?view=azureml-api-2
Example code for invoking a Batch Endpoint for a Pipeline Job
A Pipeline Job that expects an Input instance parameter pointing to the dataset you want to score:
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Named inputs are used when the endpoint serves a Pipeline Component deployment
input = Input(type=AssetTypes.URI_FOLDER, path="azureml:new-data:1")
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={
        "input_dataset": input,
    },
)
Example code for invoking a Batch Endpoint that serves a Model deployment
If your endpoint serves a Model deployment, use the short form that supports a single input:
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Model deployments accept a single unnamed input
input = Input(type=AssetTypes.URI_FOLDER, path="azureml:new-data:1")
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    input=input,
)
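Either form returns a job object. A minimal follow-up sketch (assuming the ml_client and job variables from the snippets above) for watching the invoked job run:

# The invoke call returns the created batch job; note its name for the Studio
print(f"Started batch job: {job.name}")

# Stream the job's logs to the console until it completes
ml_client.jobs.stream(job.name)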
Where to troubleshoot the Batch Pipeline Job
Since Batch endpoints generally run pipeline jobs, you end up with several Child Jobs in addition to the main pipeline Job. Check each individual Child Job's details > Outputs + Logs tab and its individual log files, as well as the Outputs + Logs tab of the main job:
jer jprogov jres
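A hedged sketch of listing those Child Jobs with the Python SDK instead of clicking through the Studio (assumes the ml_client and job variables from the invoke snippets above):

# Enumerate the child jobs that belong to the pipeline job the endpoint created
for child in ml_client.jobs.list(parent_job_name=job.name):
    print(child.name, child.status)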
The three log files used for troubleshooting Batch Pipeline Jobs
The two Scoring script functions that have their errors logged.
- job_error.txt: Summarizes errors from your script.
- job_progress_overview.txt: High-level information about the number of mini-batches processed so far.
- job_result.txt: Errors from calling the init() and run() functions in the Scoring Script.
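For context, a minimal scoring-script sketch showing where the init() and run() functions sit in a typical file-based Model deployment; the model loading and CSV parsing here are illustrative assumptions, not a prescribed implementation:

import glob
import os

import joblib
import pandas as pd

def init():
    # Called once per worker before any mini-batch is processed
    global model
    model_dir = os.environ["AZUREML_MODEL_DIR"]
    model_path = glob.glob(os.path.join(model_dir, "**", "*.pkl"), recursive=True)[0]
    model = joblib.load(model_path)

def run(mini_batch):
    # Called once per mini-batch; mini_batch is a list of file paths to score
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        results.append(f"{os.path.basename(file_path)}: {len(predictions)} rows scored")
    # Must return one element per input file in the mini-batch
    return results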
u_l,s_l
Where to find the code logs
Outputs + Logs > user_logs folder > std_log.txt
s_l
Where to find Azure ML general logs
Outputs + Logs > system_logs folder
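If you'd rather pull those same log folders down locally than browse them in the Studio, a sketch using the SDK (ml_client and the job name are assumed from the earlier cards):

import glob

# Download the job's outputs and logs to a local folder
ml_client.jobs.download(name=job.name, download_path="./job_logs", all=True)

# Find std_log.txt wherever it lands in the downloaded folder structure
for path in glob.glob("./job_logs/**/std_log.txt", recursive=True):
    print(path)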
Where to view the status of your Job
Under the Jobs and Experiments views (the Authoring pages don't show Job status)
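And if you just want the status from code rather than the Studio views, a small sketch (same assumed ml_client and job as above):

# Retrieve the job and print its current status (e.g. Running, Completed, Failed)
current = ml_client.jobs.get(job.name)
print(current.status)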