Calibration Flashcards

1
Q

What is the mathematical expression for the model?

A

y = f(x; theta)

2
Q

What is the mathematical expression for the data?

A

{(x_i, yhat_i)}: the observation points x_i paired with the observed values yhat_i

3
Q

What is the mathematical expression for the misfit?

A

S(theta) = sum over i of (f(x_i; theta) - yhat_i)^2, the sum of squared differences between the model and the data
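A minimal Python sketch of this misfit, assuming a generic model function; the straight-line model and the example numbers below are purely illustrative, not from the course material:

import numpy as np

def misfit(theta, x_obs, y_obs, model):
    # S(theta): sum of squared differences between model predictions and data
    residuals = model(x_obs, theta) - y_obs
    return np.sum(residuals ** 2)

# Illustrative usage with a hypothetical straight-line model y = a + b*x
model = lambda x, theta: theta[0] + theta[1] * x
x_obs = np.array([0.0, 1.0, 2.0])
y_obs = np.array([0.1, 1.9, 4.2])
print(misfit(np.array([0.0, 2.0]), x_obs, y_obs, model))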

4
Q

Explain ad-hoc calibration.

A

Manual parameter selection to obtain a good fit. It can be inefficient and often requires retracing steps. It is heavily reliant on the starting point (expert knowledge), and the qualitative judgement of 'goodness of fit' differs from person to person.

5
Q

Explain parameter space

A

All possible parameter combinations represented as an N-dimensional space. A misfit can be calculated at each point in this space, creating a surface whose minimum we can search for.

6
Q

What is parameter sensitivity?

A

The derivative of the model output with respect to the parameters (how much the model response changes when a parameter is perturbed)
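A hedged sketch of estimating parameter sensitivities numerically by finite differences; the function name, interface, and step size are assumptions for illustration:

import numpy as np

def sensitivities(model, x, theta, eps=1e-6):
    # Approximate d f / d theta_j at each point x using forward differences
    base = np.asarray(model(x, theta))
    J = np.empty((base.size, theta.size))
    for j in range(theta.size):
        theta_step = theta.copy()
        theta_step[j] += eps
        J[:, j] = (np.asarray(model(x, theta_step)) - base) / eps
    return J  # one column per parameter, one row per observation point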

7
Q

Explain the steps of gradient descent

A

Find the minimum of the objective function by travelling 'downhill' (a minimal code sketch follows this list).
1. Make an initial guess at theta
2. Compute the downhill travel direction
3. Travel downhill to obtain a new estimate
4. Repeat steps 2-3 until there is little change
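A minimal sketch of these steps in Python, assuming the misfit S takes a parameter vector; the step size, tolerance, and iteration cap are illustrative choices:

import numpy as np

def numerical_grad(S, theta, eps=1e-6):
    # Forward-difference approximation of dS/dtheta
    g = np.empty_like(theta)
    for j in range(theta.size):
        theta_step = theta.copy()
        theta_step[j] += eps
        g[j] = (S(theta_step) - S(theta)) / eps
    return g

def gradient_descent(S, theta0, step=0.01, tol=1e-8, max_iter=1000):
    theta = np.asarray(theta0, dtype=float)          # 1. initial guess at theta
    for _ in range(max_iter):
        direction = -numerical_grad(S, theta)        # 2. downhill travel direction
        new_theta = theta + step * direction         # 3. travel downhill for a new estimate
        if abs(S(theta) - S(new_theta)) < tol:       # 4. stop when there is little change
            return new_theta
        theta = new_theta
    return theta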

8
Q

What are the calibration gotchas?

A

A model with the wrong physics can still be calibrated to fit the data if it has many parameters. If a parameter is spatially varying, a model can be well calibrated in some places and poorly in others. Multiple parameter sets can fit the data equally well, sometimes requiring regularisation. Sometimes the data are only sufficient to calibrate a combination of parameters (a superparameter) rather than each parameter individually.

9
Q

What does the misfit function measure and how is it constructed?

A

It quantifies the difference between the model and the data. It is constructed as the sum of squared differences between the data and the model predictions evaluated at the same points in space/time.

10
Q

Why does changing a model parameter change the misfit?

A

Changing a model parameter changes the model output. Because the misfit is a function of both the data and the model output, the misfit changes too.

11
Q

What are the advantages of ad-hoc calibration?

A

You learn something about the model after each iteration, and it is cost-efficient. It also draws on expert knowledge to choose a good starting point.

12
Q

Gradient descent is a method to find better fitting models, but is it guaranteed to find the best fitting model?

A

No. It is possible to converge to a local minimum instead of the global minimum, in which case the result would not be the best-fitting model.

13
Q

Describe how the objective function surface and parameter space are related to each other.

A

Parameter space is the set of all possible parameter combinations, viewed as an N-dimensional space. The objective function surface is created by evaluating the objective function for every parameter set in that space; the resulting values form a surface laid over the parameter space, and its minimum marks the best-fitting parameters.
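A small sketch, assuming a two-parameter model, of building the objective function surface by evaluating the misfit over a grid of parameter space; the grid values would be chosen by the modeller:

import numpy as np

def objective_surface(S, a_values, b_values):
    # Evaluate the objective function S([a, b]) at every point of a 2-D parameter grid
    surface = np.empty((len(a_values), len(b_values)))
    for i, a in enumerate(a_values):
        for j, b in enumerate(b_values):
            surface[i, j] = S(np.array([a, b]))
    return surface  # the lowest point on this surface marks the best-fitting parameter set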

14
Q

How does the concept of diminishing returns apply to model calibration?

A

With iterative processes such as gradient descent, it is worth setting a threshold on the improvement in the objective function; otherwise the calibration process can continue indefinitely with little return for the extra effort.

15
Q

Why do models become harder to calibrate as they have more parameters?

A

With many parameters it is possible to calibrate a model even when its physics are wrong. The number of parameter sensitivities that must be computed and searched over also increases.

16
Q

Why is model runtime a factor in calibration?

A

Resource allocation, computational complexity, and diminishing returns: calibration requires many model runs, so a long runtime multiplies the cost of every iteration.

17
Q

Is it possible for two different models to have the same misfit? How should we decide which is better?

A

Yes. Use regularisation to choose the better one: add a penalty term to the misfit that penalises non-smoothness (a sketch follows below).
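A hedged sketch of a regularised misfit in Python, assuming theta is a spatially ordered parameter field; the weight lam and the difference-based penalty are illustrative choices:

import numpy as np

def regularised_misfit(theta, x_obs, y_obs, model, lam=1.0):
    # Data term: sum of squared differences between model and observations
    residuals = model(x_obs, theta) - y_obs
    data_term = np.sum(residuals ** 2)
    # Penalty term: penalise non-smoothness via differences between neighbouring parameters
    smoothness_penalty = np.sum(np.diff(theta) ** 2)
    return data_term + lam * smoothness_penalty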