Amazon SageMaker | Model Hosting Flashcards
What algorithms does Amazon SageMaker use to generate models?
Model Hosting
Amazon SageMaker | Machine Learning
Amazon SageMaker includes built-in algorithms for linear regression, logistic regression, k-means clustering, principal component analysis, factorization machines, neural topic modeling, latent Dirichlet allocation, gradient boosted trees, sequence2sequence, time series forecasting, word2vec, and image classification. Amazon SageMaker also provides optimized Apache MXNet and TensorFlow containers. In addition, Amazon SageMaker supports custom training algorithms that you provide through a Docker image adhering to the documented specification.
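To make the mechanics concrete, here is a minimal sketch of training one of the built-in algorithms (k-means) with the SageMaker Python SDK. The IAM role and S3 paths are placeholders, not values from this FAQ:

    # Minimal sketch: train SageMaker's built-in k-means algorithm.
    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

    # Resolve the Region-specific container image for the built-in algorithm.
    image = image_uris.retrieve("kmeans", session.boto_region_name)

    estimator = Estimator(
        image_uri=image,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://my-bucket/kmeans-output/",  # placeholder bucket
        sagemaker_session=session,
    )
    estimator.set_hyperparameters(k=10, feature_dim=784)
    estimator.fit({"train": "s3://my-bucket/kmeans-train/"})  # placeholder data

A custom algorithm follows the same pattern, except image_uri points at your own Docker image in Amazon ECR.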
Can I access the infrastructure that Amazon SageMaker runs on?
Model Hosting
Amazon SageMaker | Machine Learning
No. Amazon SageMaker operates the compute infrastructure on your behalf, allowing it to perform health checks, apply security patches, and do other routine maintenance. Alternatively, you can take the model artifacts produced by training and deploy them with your own inference code in your own hosting environment.
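If you do want to host elsewhere, the training output is simply a gzipped tarball in S3. A sketch of retrieving it with boto3, using placeholder bucket and key names:

    # Sketch: pull trained model artifacts out of S3 so they can be served
    # with your own inference code outside SageMaker.
    import tarfile
    import boto3

    s3 = boto3.client("s3")
    s3.download_file("my-bucket", "kmeans-output/model.tar.gz", "model.tar.gz")

    # SageMaker packages model artifacts as a gzipped tarball.
    with tarfile.open("model.tar.gz") as tar:
        tar.extractall(path="model/")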
How do I scale the size and performance of an Amazon SageMaker model once in production?
Model Hosting
Amazon SageMaker | Machine Learning
Amazon SageMaker hosting automatically scales to the performance your application needs using Application Auto Scaling. In addition, you can manually change the number and type of instances without incurring downtime by modifying the endpoint configuration.
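As an illustration, here is a sketch of both paths with boto3: a target-tracking scaling policy on a production variant, then a manual resize by pointing the endpoint at a new configuration. The endpoint, variant, and configuration names are placeholders:

    # Sketch: automatic scaling via Application Auto Scaling.
    import boto3

    aas = boto3.client("application-autoscaling")
    resource_id = "endpoint/my-endpoint/variant/AllTraffic"  # placeholder names

    aas.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=1,
        MaxCapacity=4,
    )
    aas.put_scaling_policy(
        PolicyName="invocations-target-tracking",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 1000.0,  # invocations per instance per minute
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
            },
        },
    )

    # Manual resize without downtime: switch to a new endpoint configuration
    # (created beforehand with the desired instance count and type).
    sm = boto3.client("sagemaker")
    sm.update_endpoint(EndpointName="my-endpoint", EndpointConfigName="my-config-v2")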
How do I monitor my Amazon SageMaker production environment?
Model Hosting
Amazon SageMaker | Machine Learning
Amazon SageMaker emits performance metrics to Amazon CloudWatch so you can track metrics, set alarms, and automatically react to changes in production traffic. In addition, Amazon SageMaker writes logs to Amazon CloudWatch Logs so you can monitor and troubleshoot your production environment.
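For example, a sketch of a CloudWatch alarm on the ModelLatency metric that endpoints publish under the AWS/SageMaker namespace. The endpoint and variant names are placeholders, and note that ModelLatency is reported in microseconds:

    # Sketch: alarm when average model latency on a variant exceeds 500 ms.
    import boto3

    cw = boto3.client("cloudwatch")
    cw.put_metric_alarm(
        AlarmName="my-endpoint-high-latency",  # placeholder
        Namespace="AWS/SageMaker",
        MetricName="ModelLatency",
        Dimensions=[
            {"Name": "EndpointName", "Value": "my-endpoint"},
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=500000.0,  # ModelLatency is in microseconds
        ComparisonOperator="GreaterThanThreshold",
    )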
What kinds of models can be hosted with Amazon SageMaker?
Model Hosting
Amazon SageMaker | Machine Learning
Amazon SageMaker can host any model that adheres to the documented specification for inference Docker images. This includes models created from Amazon SageMaker model artifacts and inference code.
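The core of that specification is an HTTP contract: the container answers GET /ping health checks and POST /invocations inference requests on port 8080. A minimal sketch using Flask (one possible implementation, with stubbed scoring logic):

    # Sketch of the HTTP contract a custom inference container must satisfy.
    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.route("/ping", methods=["GET"])
    def ping():
        # Return 200 when the container is healthy and ready to serve.
        return Response(status=200)

    @app.route("/invocations", methods=["POST"])
    def invoke():
        payload = request.get_data()  # raw request body
        # Placeholder: run your model on the payload here.
        prediction = b"0.0"
        return Response(prediction, status=200, mimetype="text/plain")

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)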