Testing Flashcards
3 Software testing types:
- Unit tests - test the functionality of a single piece of code (like a function); see the sketch after this list
- Integration tests - test how 2 or more units work together
- End to end tests - tests the entire system
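A minimal pytest-style sketch of the smallest level, a unit test; the add_tax function and its values are made up for illustration:

```python
import pytest

def add_tax(price: float, rate: float = 0.1) -> float:
    """Return the price with tax added."""
    return price * (1 + rate)

def test_add_tax_applies_default_rate():
    # A unit test exercises a single function in isolation.
    assert add_tax(100.0) == pytest.approx(110.0)

def test_add_tax_with_zero_rate_returns_price():
    assert add_tax(50.0, rate=0.0) == 50.0
```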
Best practices for testing (5)
- Automate your testing
- Make sure tests are reliable, fast, and go through code review like the rest of the code (buggy tests are the worst thing)
- Enforce that tests must pass before merging
- When you find production bugs convert them to tests
- Follow the testing pyramid
What is the testing pyramid?
Write more unit tests than integration tests, and more integration tests than end-to-end tests
Roughly 70/20/10
Unit tests are faster, more reliable, and better at isolating failures.
Solitary testing
The unit under test doesn’t rely on data from other units; you make up the data it needs and test it in isolation.
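A quick sketch of a solitary test: the orders data that would normally come from another unit (e.g. a database) is simply fabricated inside the test. The function and data are hypothetical:

```python
def average_order_value(orders: list[dict]) -> float:
    """Compute the mean of the 'total' field across orders."""
    return sum(o["total"] for o in orders) / len(orders)

def test_average_order_value_with_made_up_data():
    # No database, no other unit: the input data is made up right here.
    fake_orders = [{"total": 10.0}, {"total": 30.0}]
    assert average_order_value(fake_orders) == 20.0
```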
Test coverage
Shows how many lines of code are executed by the tests
Good for finding areas that are not tested.
But it can be misleading because it says nothing about the quality of the tests, which is what we really care about.
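A sketch of why coverage can mislead: this test executes every line of a made-up discount() function, so a coverage tool reports 100%, yet it asserts nothing and would pass even if the logic were wrong:

```python
def discount(price: float, is_member: bool) -> float:
    if is_member:
        return price * 0.9
    return price

def test_discount_covers_lines_but_checks_nothing():
    # Both branches are executed, so line coverage is 100%...
    discount(100.0, is_member=True)
    discount(100.0, is_member=False)
    # ...but nothing is asserted, so the test can't catch a wrong result.
```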
Test driven development: 3 laws
1) You may not write any production code until you have written a failing unit test
2) You may not write more of a unit test than is sufficient to fail, and not compiling counts as failing
3) You may not write more production code than is sufficient to pass the currently failing unit test.
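A tiny illustration of the cycle with a hypothetical slugify() function; in real TDD the failing test is written and run first, then the minimal production code, shown together in one file here for brevity:

```python
# Step 1 (first law): write a failing test before any production code.
def test_slugify_replaces_spaces_with_dashes():
    assert slugify("hello world") == "hello-world"

# Step 2 (third law): write only enough production code to make that test pass.
def slugify(text: str) -> str:
    return text.lower().replace(" ", "-")
```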
Testing in production, why and how?
Why - most bugs will inevitably slip past pre-release testing, so you might as well build a system that surfaces errors quickly and clearly so you can fix them once the code is live.
How:
- Canary deployments - roll the new version out to a small percentage of users (1%…) so not everyone is exposed to a bug (see the sketch after this list)
- A/B testing - for more statistical rigor when you know which metric you care about
- Real user monitoring - observe actual user behavior
- Exploratory testing - not scripted in advance
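A rough sketch of one common way canary routing is done: hash the user id into a bucket so a stable ~1% of users get the new version. The percentage and handler names are assumptions, not from the source:

```python
import hashlib

CANARY_PERCENT = 1  # roll the new version out to roughly 1% of users

def bucket(user_id: str) -> int:
    """Deterministically map a user id to a bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def use_canary(user_id: str) -> bool:
    # The same user always lands in the same bucket, so their experience is stable.
    return bucket(user_id) < CANARY_PERCENT

# e.g. handler = new_model_handler if use_canary(user_id) else old_model_handler
```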
CI/CD
Tests run automatically as a cloud job (usually via a SaaS) every time code is pushed.
The easiest free option is GitHub Actions.
Testing only machine learning model and not the system is not enough, why?
The model itself is just a small piece of the system, which includes:
Training system → model → prediction system → serving system → production data → labeling system → storage and preprocessing system → and back to the start
So each one of these steps should be tested and monitored
Infrastructure tests - unit tests for the training code. Goal and how?
Goal: avoid bugs in the training pipeline
How:
- Unit test like any other code
- Add single-batch/single-epoch tests that check performance after a short run on a tiny dataset (see the sketch after this list)
- Run frequently during development
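A hedged PyTorch sketch of a single-batch test: train a tiny made-up model on one small batch for a few steps and assert that the loss goes down, which catches wiring bugs in the training loop. Model, data, and step count are all illustrative:

```python
import torch
from torch import nn

def test_model_can_fit_a_single_batch():
    torch.manual_seed(0)
    # Tiny made-up dataset: 16 examples, 4 features, binary labels.
    x = torch.randn(16, 4)
    y = torch.randint(0, 2, (16,)).float()

    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()

    def step() -> float:
        optimizer.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    first_loss = step()
    for _ in range(50):
        last_loss = step()

    # If the training loop is wired up correctly, loss on this one batch should drop.
    assert last_loss < first_loss
```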
Integration test - test the step between the data system and the training system
Goal: make sure training is reproducible
How:
- Take a slice of the dataset and run a full training run :)
- Check that the resulting performance stays consistent with previous runs (see the sketch after this list)
- Consider pulling a sliding window of data (e.g. the data from the last week)
- Run it periodically
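A hedged scikit-learn sketch of such a reproducibility check: train on a fixed slice with fixed seeds and assert the metric stays within a tolerance of a reference value recorded from a known-good run (dataset, reference number, and tolerance are made up):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Reference accuracy recorded from a previous known-good run (made-up value here).
EXPECTED_ACCURACY = 0.95
TOLERANCE = 0.02

def test_training_is_reproducible():
    # Fixed slice of data + fixed seeds so the run is repeatable.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = LogisticRegression(max_iter=5000, random_state=42)
    model.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))
    # If performance drifts outside the tolerance, something in the data
    # or training pipeline has changed.
    assert abs(accuracy - EXPECTED_ACCURACY) <= TOLERANCE
```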
Functionality tests - unit test for the prediction code.
What is the goal and how do you achieve it?
Goal: avoid bugs in the code that mess up the predictions
How:
- Unit test the code like any other
- Load a pre-trained model and test its predictions on a few examples (see the sketch after this list)
- Run frequently during development
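A hedged sketch of a functionality test: load a saved model artifact (hypothetical joblib path) and sanity-check its predictions on a few hand-picked examples. The examples and label set are placeholders:

```python
import joblib
import numpy as np

# Hypothetical path to the committed/downloaded model artifact.
MODEL_PATH = "artifacts/model.joblib"

def test_prediction_code_on_known_examples():
    model = joblib.load(MODEL_PATH)

    # A few hand-picked examples with known expected behavior (placeholders).
    examples = np.array([
        [5.1, 3.5, 1.4, 0.2],
        [6.7, 3.0, 5.2, 2.3],
    ])
    predictions = model.predict(examples)

    # Basic functionality checks: right shape, valid label set.
    assert predictions.shape == (2,)
    assert set(predictions).issubset({0, 1, 2})
```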
Evaluation tests, goal and how?
Goal:
make sure a new model is ready to go into production
How:
- evaluate your model on all of the metrics, datasets, and slices that you care about
- compare the new model to the old one and your baselines (see the sketch after this list)
- understand the model's performance envelope (how does it perform on different groups? What types of data cause the model to perform poorly?)
- run every time you have a new candidate model
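A hedged sketch of the comparison step: score the candidate and the current model on every slice you care about and fail if the candidate regresses on any of them. The slice structure, metric, and tolerance are assumptions:

```python
from sklearn.metrics import f1_score

def evaluate_on_slices(model, slices: dict) -> dict:
    """slices maps slice name -> (X, y); returns macro-F1 per slice."""
    return {name: f1_score(y, model.predict(X), average="macro")
            for name, (X, y) in slices.items()}

def check_candidate_beats_current(candidate, current, slices, tolerance=0.01):
    """Raise if the candidate regresses on any slice we care about."""
    candidate_scores = evaluate_on_slices(candidate, slices)
    current_scores = evaluate_on_slices(current, slices)
    for name in slices:
        assert candidate_scores[name] >= current_scores[name] - tolerance, (
            f"Regression on slice '{name}': "
            f"{candidate_scores[name]:.3f} vs {current_scores[name]:.3f}"
        )
```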
Robustness metrics (part of evaluation testing) goal and 4 tests:
Goal:
Understand the performance envelope, i.e. where you would expect the model to fail
- Feature importance (if a strong feature takes values you don't expect, the model will not work well)
- Sensitivity to staleness - train the model on old data, then test it on a moving window with time moving forward, and plot the results to see how fast performance declines. How long does it take the model to become stale? (see the sketch after this list)
- Sensitivity to drift - if possible, measure the sensitivity to different types of drift, so you can check whether they are happening in production
- Correlation between model performance and business metrics - how did it impact the business metrics that we care about?
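A hedged sketch of the staleness check: with a model already trained on old data, evaluate it on successive later time windows and watch how fast the metric declines. Column names, window size, and metric are made up:

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def staleness_curve(model, df: pd.DataFrame, feature_cols, label_col="label",
                    time_col="timestamp", freq="W"):
    """Evaluate a model trained on old data on successive later time windows.

    Returns a Series of AUC per window; a steep drop shows how quickly
    the model goes stale. Assumes time_col is a datetime column.
    """
    scores = {}
    for window_start, window in df.groupby(pd.Grouper(key=time_col, freq=freq)):
        if window.empty or window[label_col].nunique() < 2:
            continue  # skip empty or single-class windows
        probs = model.predict_proba(window[feature_cols])[:, 1]
        scores[window_start] = roc_auc_score(window[label_col], probs)
    return pd.Series(scores).sort_index()

# Example: staleness_curve(model, holdout_df, feature_cols).plot()
```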
Shadow tests - the step from the prediction system to the serving system. Goals (3) and how (3)?
Goal:
1) Detect production bugs before they hit users
2) Detect inconsistency between the offline and online models
3) Detect issues that appear on production data
How:
1) Run the new model in the production system alongside the old model, but don't return its predictions to users
2) Save the production data and run the offline model on it
3) Inspect the prediction distributions for old vs new and offline vs online (see the sketch below)
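A hedged sketch of step 3: compare the logged prediction distributions (old vs. new, or offline vs. online), for example with a two-sample KS test. The logging format and threshold are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_prediction_distributions(old_preds, new_preds, alpha=0.01):
    """Flag a suspiciously large shift between two sets of logged predictions."""
    old_preds = np.asarray(old_preds)
    new_preds = np.asarray(new_preds)

    statistic, p_value = ks_2samp(old_preds, new_preds)
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.3g}")
    print(f"mean shift: {new_preds.mean() - old_preds.mean():+.4f}")

    # A tiny p-value means the shadow model's predictions are distributed
    # differently from the old model's; investigate before promoting it.
    return p_value >= alpha
```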