Amazon Machine Learning | Generating Predictions Flashcards
Does Amazon Machine Learning need to make a permanent copy of my data to create machine learning models?
Generating Predictions
Amazon Machine Learning | Machine Learning
No. Amazon Machine Learning needs only read access to your data to find and extract the patterns within it and store them in ML models. ML models are not copies of your data. When accessing data stored in Amazon Redshift or Amazon RDS, Amazon Machine Learning exports the query results to an S3 location of your choice and then reads those results from S3. You retain full ownership of this temporary data copy and can remove it after the Amazon Machine Learning operation completes.
Once my model is ready, how do I get predictions for my applications?
You can use Amazon Machine Learning to retrieve predictions in two ways: using the batch API or real-time API. The batch API is used to request predictions for a large number of input data records—it works offline, and returns all the predictions at once. The real-time API is used to request predictions for individual input data records, and returns the predictions immediately. The real-time API can be used at high throughput, generating multiple predictions at the same time in response to parallel requests.
Any ML model built with Amazon Machine Learning can be used through either the batch API or real-time API—the choice is yours, and depends only on your application’s requirements. You typically use the batch API for applications that operate on bulk data records, and the real-time API for interactive web, mobile and desktop applications.
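As a concrete sketch, a single real-time prediction can be requested with the AWS SDK for Python (boto3) via the `machinelearning` client's `predict` call. The model ID, endpoint URL, and record fields below are hypothetical placeholders; note that the Predict API expects every record value as a string, so a small helper normalizes the input first:

```python
# Sketch of a real-time prediction call via boto3 (the AWS SDK for Python).
# The model ID, endpoint URL, and record fields are hypothetical placeholders.

def to_record(fields):
    """The Predict API expects Record as a map of string -> string,
    so coerce every value (numbers, booleans, etc.) to a string."""
    return {str(k): str(v) for k, v in fields.items()}

def predict_realtime(ml_model_id, endpoint_url, fields):
    """Request one prediction from a real-time endpoint."""
    import boto3  # imported here so the sketch loads without the SDK installed
    client = boto3.client("machinelearning")
    response = client.predict(
        MLModelId=ml_model_id,
        Record=to_record(fields),
        PredictEndpoint=endpoint_url,
    )
    return response["Prediction"]

# Example record; values are coerced to the strings the API requires.
record = to_record({"age": 42, "plan": "basic", "active": True})
```

The lazy `boto3` import is a deliberate choice so the normalization helper can be reused and tested without AWS credentials on hand.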
How fast can the Amazon Machine Learning real-time API generate predictions?
Most real-time prediction requests return a response within 100 ms, making them fast enough for interactive web, mobile, or desktop applications. The exact time it takes for the real-time API to generate a prediction varies depending on the size of the input data record and the complexity of the data processing “recipe” associated with the ML model that is generating the predictions.
How many concurrent real-time API requests does Amazon Machine Learning support?
Each ML model that is enabled for real-time predictions is assigned an endpoint URL. By default, you can request up to 200 transactions per second (TPS) from any real-time prediction endpoint. Contact customer support if this limit is not sufficient for your application’s needs.
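A client that fans out parallel requests may want to stay under the endpoint's TPS cap on its own side rather than rely on server throttling. The token-bucket sketch below uses only the standard library and treats the 200 TPS default as a configurable cap; it is a minimal illustration, not part of the Amazon Machine Learning API:

```python
import threading
import time

class TokenBucket:
    """Minimal token-bucket rate limiter to keep request volume under an
    endpoint's TPS cap (200 TPS by default for real-time endpoints)."""

    def __init__(self, rate_per_sec=200, capacity=200):
        self.rate = float(rate_per_sec)      # tokens refilled per second
        self.capacity = float(capacity)      # burst ceiling
        self.tokens = float(capacity)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill proportionally to elapsed time, up to capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)

bucket = TokenBucket(rate_per_sec=200)
# Before each real-time prediction request, call: bucket.acquire()
```

Calling `acquire()` before each prediction request then smooths bursts from parallel workers to the configured rate.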
How quickly can Amazon Machine Learning return batch predictions?
The batch prediction API is fast and efficient. The time it takes to return the batch prediction results depends on several factors, including (a) the size of the input data, (b) the complexity of the data processing “recipe” associated with the ML model that is generating the predictions, and (c) the number of other batch jobs (data processing, model training, evaluation, and other batch processing requests) simultaneously running in your account. By default, Amazon Machine Learning executes up to five batch jobs simultaneously. Contact customer support if this limit is not sufficient for your application’s needs.
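Starting a batch job corresponds to the `create_batch_prediction` call in boto3, which writes its results to an S3 location you specify. In the sketch below, the job ID, model ID, datasource ID, and bucket name are hypothetical placeholders, and a small stdlib helper checks the output URI shape before the call:

```python
from urllib.parse import urlparse

def validate_s3_uri(uri):
    """The batch API writes results to S3, so OutputUri must be an
    s3:// URI with a bucket name. Returns the URI if it looks valid."""
    parsed = urlparse(uri)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError("OutputUri must look like s3://bucket/prefix/")
    return uri

def start_batch_prediction(batch_id, ml_model_id, datasource_id, output_uri):
    """Kick off an offline batch prediction job; results land in S3."""
    import boto3  # imported lazily so the sketch runs without the SDK
    client = boto3.client("machinelearning")
    return client.create_batch_prediction(
        BatchPredictionId=batch_id,
        BatchPredictionName="example-batch",  # hypothetical job name
        MLModelId=ml_model_id,
        BatchPredictionDataSourceId=datasource_id,
        OutputUri=validate_s3_uri(output_uri),
    )
```

Because the job runs offline, the call returns immediately; you would poll the batch prediction's status (or check the S3 location) to know when all results are available.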