Testing Flashcards

1
Q

3 Software testing types:

A
  1. Unit tests - test the functionality of a single piece of code (like a function) (see the sketch below)
  2. Integration tests - test how 2 or more units work together
  3. End to end tests - tests the entire system
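A minimal pytest sketch of the first level; the `add` function and file name are hypothetical stand-ins for real production code:

```python
# test_add.py -- hypothetical function and file names, for illustration only
import pytest

def add(a, b):
    """Toy production function under test."""
    return a + b

def test_add_returns_sum():
    # Unit test: exercises one function in isolation
    assert add(2, 3) == 5

def test_add_rejects_none():
    # Unit test for error handling
    with pytest.raises(TypeError):
        add(2, None)
```
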
2
Q

Best practices for testing (5)

A
  1. Automate your testing
  2. Make sure the tests are reliable, fast, and go through code review like the rest of the code. (Buggy tests are the worst thing.)
  3. Enforce that tests must pass before merging
  4. When you find production bugs, convert them into tests
  5. Follow the testing pyramid
3
Q

What is the testing pyramid?

A

Write more unit tests than integration tests, and more integration tests than end-to-end tests, roughly a 70/20/10 split.

Unit tests are faster, more reliable, and better at isolating failures.

4
Q

Solitary testing

A

A solitary unit test doesn’t rely on real data from other units; you make up (mock) that data and test the unit in isolation.

5
Q

Test coverage

A

Shows how many lines of code are exercised by tests.
Good for finding areas that are not tested.
But it can be misleading because it says nothing about the quality of the tests, which is what we really care about.
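A minimal pytest sketch of that caveat, using a hypothetical `discount` function: the test below gives full line coverage while asserting nothing, so a bug would still slip through.

```python
def discount(price, pct):
    """Toy function: apply a percentage discount."""
    return price * (1 - pct / 100)

def test_discount_covered_but_unchecked():
    # Every line of discount() is executed, so coverage reports it as tested,
    # yet there is no assertion -- a wrong formula would still pass this test.
    discount(100, 10)
```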

6
Q

Test driven development: 3 laws

A

1) You may not write any production code until you have written a failing unit test.
2) You may not write more of a unit test than is sufficient to fail, and not compiling counts as failing.
3) You may not write more production code than is sufficient to pass the currently failing unit test.
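A minimal sketch of one red/green cycle in pytest, with a hypothetical `slugify` function:

```python
# Law 1 (red): the failing unit test is written first. Before slugify existed,
# running this test failed with a NameError, which counts as failing (law 2).
def test_slugify_replaces_spaces_with_dashes():
    assert slugify("hello world") == "hello-world"

# Law 3 (green): write only enough production code to make that one test pass.
def slugify(text):
    return text.replace(" ", "-")
```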

7
Q

Testing in production, why and how?

A

Why - most bugs will only show up in production anyway; it’s inevitable. So you might as well build a system that surfaces errors quickly and clearly so you can fix them once the code is out.

How:

  1. Canary deployments - roll the change out to a small percentage of users (e.g. 1%) so not everyone is exposed to a bug
  2. A/B testing - for more statistically rigorous tests when you know which metric you care about
  3. Real user monitoring - track the actual behavior of real users
  4. Exploratory testing - manual testing that is not scripted in advance
8
Q

CI/CD

A

Continuous integration / continuous delivery: tests run automatically as a cloud job, usually via a SaaS service, whenever code is pushed.

A free and easy option is GitHub Actions.

9
Q

Testing only the machine learning model and not the whole system is not enough. Why?

A

The model itself is just a small piece of the system, which includes:
training system → model → prediction system → serving system → production data → labeling system → storage and preprocessing system → and back to the start.

So each one of these components should be tested and monitored.

10
Q

Infrastructure tests - unit tests for the training code. Goal and how?

A

Goal: avoid bugs in the training pipeline

How:

  1. Unit test the training code like any other code
  2. Add single-batch / single-epoch tests that check performance after a short run on a tiny dataset (see the sketch below)
  3. Run them frequently during development
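A minimal sketch of a single-batch test, assuming PyTorch and using a toy linear model as a stand-in for the real training code:

```python
import torch
import torch.nn as nn

def test_loss_decreases_on_tiny_batch():
    torch.manual_seed(0)
    model = nn.Linear(10, 2)                       # stand-in for your real model
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(8, 10)                         # one tiny synthetic batch
    y = torch.randint(0, 2, (8,))

    def step():
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    first = step()
    last = first
    for _ in range(20):
        last = step()
    assert last < first, "training loop failed to reduce the loss on a tiny batch"
```
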
11
Q

Integration test - test the step between the data system and the training system

A

Goal: make sure training is reproducible

How:

Take a slice of the dataset and run a full training run.
Then check that performance remains consistent with previous runs.
Consider pulling a sliding window of data (e.g. the last week).
Run it periodically.
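A minimal runnable sketch of the idea, using scikit-learn stand-ins for the real data and training systems; the reference accuracy is an assumed value from a previous blessed run:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

EXPECTED_ACCURACY = 0.95   # assumed reference from a previous blessed run
TOLERANCE = 0.05

def test_training_is_reproducible():
    X, y = load_iris(return_X_y=True)                 # stand-in for a data slice
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=0           # fixed split for reproducibility
    )
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    accuracy = model.score(X_te, y_te)
    assert abs(accuracy - EXPECTED_ACCURACY) <= TOLERANCE, (
        f"accuracy {accuracy:.3f} drifted from reference {EXPECTED_ACCURACY:.3f}"
    )
```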

12
Q

Functionality tests - unit tests for the prediction code.

What is the goal and how do you achieve it?

A

Goal: avoid bugs in the prediction code that would mess up predictions

How:

  1. Unit test the prediction code like any other code
  2. Load a pretrained model and test its predictions on a few examples (see the sketch below)
  3. Run them frequently during development
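A minimal sketch, assuming an sklearn-style classifier saved with joblib; the fixture path and example inputs are hypothetical:

```python
import joblib
import pytest

@pytest.fixture(scope="module")
def model():
    # Hypothetical path: a small pretrained artifact kept with the test fixtures
    return joblib.load("tests/fixtures/model-small.joblib")

def test_predictions_on_known_examples(model):
    examples = [[0.1, 0.2, 0.3, 0.4],          # hand-picked inputs with known behavior
                [1.0, 0.0, 0.5, 0.2]]
    probs = model.predict_proba(examples)
    assert probs.shape == (2, 2)                # two examples, two classes
    assert ((probs >= 0) & (probs <= 1)).all()  # sane probability outputs
```
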
13
Q

Evaluation tests, goal and how?

A

Goal:
make sure a new model is ready to go into production

How:

  • Evaluate your model on all of the metrics, datasets, and slices that you care about
  • Compare the new model to the old model and to your baselines
  • Understand the model's performance envelope (how does it perform on different groups? what types of data cause it to perform poorly?)
  • Run every time you have a new candidate model (see the sketch below)
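A minimal sketch of the slice-level comparison, assuming a pandas DataFrame with hypothetical `label`, `new_pred`, `old_pred`, and `slice` columns:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

def evaluate_candidate(df: pd.DataFrame, min_accuracy: float = 0.90) -> None:
    for slice_name, group in df.groupby("slice"):
        new_acc = accuracy_score(group["label"], group["new_pred"])
        old_acc = accuracy_score(group["label"], group["old_pred"])
        # Fail the candidate if any slice is below the absolute bar...
        assert new_acc >= min_accuracy, f"{slice_name}: {new_acc:.3f} below bar"
        # ...or if it regresses materially against the current production model.
        assert new_acc >= old_acc - 0.01, f"{slice_name}: regressed vs old model"
```
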
14
Q

Robustness metrics (part of evaluation testing) goal and 4 tests:

A

Goal:
Understand the performance envelope, i.e. where would you expect the model to fail?

  • Feature importance - if the value of an important feature is not what you expect, the model will not work well
  • Sensitivity to staleness - train the model on older data, then evaluate it on a window that slides forward in time and plot the results to see how fast performance declines, i.e. how long it takes the model to become stale (see the sketch below)
  • Sensitivity to drift - if possible, measure sensitivity to different types of drift, so you can check whether they are happening in production
  • Correlation between model performance and business metrics - how does model performance affect the business metrics we care about?
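A minimal sketch of the staleness check, assuming a pandas DataFrame of newer data with a `timestamp` column and a hypothetical `evaluate_on(model, df)` helper:

```python
import pandas as pd

def staleness_curve(frozen_model, df: pd.DataFrame, freq: str = "W") -> pd.Series:
    """Score a model trained on old data against successive windows of newer data."""
    scores = {}
    for window_start, window in df.groupby(pd.Grouper(key="timestamp", freq=freq)):
        if len(window) == 0:
            continue
        scores[window_start] = evaluate_on(frozen_model, window)  # hypothetical helper
    # Plot this series over time to see how quickly performance decays,
    # i.e. how long the model takes to go stale.
    return pd.Series(scores)
```
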
15
Q

Shadow tests - from the prediction system to the serving system. Goals (3) and how (3)?

A

Goal:

1) Detect production bugs before they hit users
2) Detect inconsistency between the offline and online models
3) Detect issues that appear on production data

How:
1) Run the new model in the production system alongside the old model, but don’t return its predictions to users

2) Save the production data and run the offline model on it
3) Inspect the prediction distributions: old vs. new and offline vs. online
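A minimal sketch of step 3, comparing the logged score distributions with a two-sample KS test from SciPy; the score arrays are assumed to have been saved by the shadow deployment:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_prediction_drift(live_scores: np.ndarray,
                           shadow_scores: np.ndarray,
                           alpha: float = 0.01) -> None:
    stat, p_value = ks_2samp(live_scores, shadow_scores)
    # A very small p-value means the two score distributions differ more than
    # chance would explain -- investigate before promoting the shadow model.
    assert p_value > alpha, (
        f"prediction distributions diverge (KS={stat:.3f}, p={p_value:.4f})"
    )
```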

16
Q

A/B testing - goal and how

A

Goal:
Test how the rest of the system reacts: how will users and the business metrics respond?

How:
Start by “canarying” the model on a tiny fraction of traffic.

Consider using a more statistically principled split.

Compare the two cohorts (see the sketch below).
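A minimal sketch of comparing the two cohorts on a conversion-style metric with a two-proportion z-test from statsmodels; the counts are made-up numbers:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical logged results: (conversions, users) for the control cohort
# (old model) and the treatment cohort (new model).
conversions = [530, 585]
users = [10_000, 10_000]

stat, p_value = proportions_ztest(count=conversions, nobs=users)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Cohorts differ significantly -- check the direction and business impact.")
else:
    print("No significant difference detected at this sample size.")
```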

17
Q

Labeling tests - goal and how (5)

A

Goal:
Catch poor-quality labels before they corrupt your model

How

1) Train and certify the labelers
2) Aggregate labels of multiple labelers
3) Assign labelers a trust score based on how often they are wrong
4) Manually spot check the labels from your labeling service
5) Run a previously trained model over the new labels and inspect the biggest disagreements (see the sketch below)
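A minimal sketch of steps 2 and 4, aggregating labels by majority vote and flagging low-agreement examples for manual spot checks; the column names are assumed:

```python
import pandas as pd

def aggregate_labels(labels: pd.DataFrame, min_agreement: float = 0.7) -> pd.DataFrame:
    """`labels` has one row per (example_id, labeler_id, label)."""
    grouped = labels.groupby("example_id")["label"]
    agg = pd.DataFrame({
        "label": grouped.agg(lambda s: s.mode().iloc[0]),                     # majority vote
        "agreement": grouped.agg(lambda s: s.value_counts().iloc[0] / len(s)),
    })
    # Low-agreement examples go to a human for a manual spot check.
    agg["needs_review"] = agg["agreement"] < min_agreement
    return agg
```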

18
Q

Expectation tests - unit tests for data

Goal and how

A

Goal:
Catch data quality issues before they reach your pipeline

How:
Define rules about the properties of each of your data tables at each stage of your data cleaning and preprocessing pipeline.
Run them whenever you run batch data pipeline jobs (see the sketch below).
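A minimal sketch of expectation tests as plain assertions on a pandas table; the table, columns, and bounds are hypothetical (dedicated tools such as Great Expectations do the same thing at scale):

```python
import pandas as pd

KNOWN_COUNTRIES = {"US", "CA", "GB", "DE"}   # hypothetical allow-list

def check_cleaned_users_table(df: pd.DataFrame) -> None:
    assert len(df) > 0, "table is empty"
    assert df["user_id"].is_unique, "duplicate user_id values"
    assert df["age"].between(0, 120).all(), "age outside expected range"
    assert df["signup_date"].notna().all(), "missing signup_date values"
    assert set(df["country"]).issubset(KNOWN_COUNTRIES), "unexpected country code"
```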

19
Q

Build up the testing gradually; start here:

A
  1. Infrastructure tests (the training code, etc.)
  2. Evaluation tests (e.g. performance on slices)
  3. Expectation tests