Calibration Flashcards
What is the mathematical expression for the model?
y = f(x; theta)
What is the mathematical expression for the data?
{x_i, yhat_i}
What is the mathematical expression for the misfit?
S(theta) = sum over i of (yhat_i - f(x_i; theta))^2, the sum of squared differences between the model predictions and the data.
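As a minimal sketch of the misfit, assuming a toy linear model (the `model`, `misfit`, and parameter values below are illustrative, not from the flashcards):

```python
import numpy as np

# Hypothetical linear model y = f(x; theta) with theta = (slope, intercept).
def model(x, theta):
    return theta[0] * x + theta[1]

# Misfit S(theta): sum of squared differences between observations and model.
def misfit(theta, x, y_obs):
    return np.sum((y_obs - model(x, theta)) ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0])
y_obs = 2.0 * x + 1.0                 # data generated with theta = (2, 1)
print(misfit((2.0, 1.0), x, y_obs))   # exact parameters -> misfit 0.0
print(misfit((1.0, 1.0), x, y_obs))   # wrong slope -> positive misfit
```

A perfect fit gives a misfit of zero; any parameter error shows up as a positive value.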
Explain ad-hoc calibration.
Manual parameter selection to obtain a good fit. It can be inefficient, and we may have to retrace our steps. It is heavily reliant on the starting point (expert knowledge), and a qualitative 'goodness of fit' is judged differently by everyone.
Explain parameter space.
All possible parameter values viewed as an N-dimensional space. A misfit can be calculated at each point, creating a surface whose minimum we can search for.
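This surface can be traced out directly for a small problem by evaluating the misfit over a grid of candidate parameters (the one-parameter `model` and grid values below are illustrative assumptions):

```python
import numpy as np

# Hypothetical one-parameter model for illustration.
def model(x, theta):
    return theta * x

def misfit(theta, x, y_obs):
    return np.sum((y_obs - model(x, theta)) ** 2)

x = np.array([1.0, 2.0, 3.0])
y_obs = 2.5 * x                        # data consistent with theta = 2.5

# Evaluate the misfit at each point of a 1-D parameter grid; the values
# trace out the misfit "surface" over parameter space.
grid = np.linspace(0.0, 5.0, 101)
surface = np.array([misfit(t, x, y_obs) for t in grid])
best = grid[np.argmin(surface)]        # grid point with the lowest misfit
print(best)
```

In higher dimensions this brute-force grid becomes expensive quickly, which is why gradient-based searches are used instead.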
What is parameter sensitivity?
The derivative of the model with respect to the parameters
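When the derivative is not available analytically, it can be approximated numerically. A sketch using a central finite difference on an assumed exponential-decay model (the model and step size `h` are illustrative choices):

```python
import numpy as np

# Hypothetical exponential-decay model with a single rate parameter theta.
def model(x, theta):
    return np.exp(-theta * x)

# Parameter sensitivity: derivative of the model output with respect to
# theta, approximated by a central finite difference.
def sensitivity(x, theta, h=1e-6):
    return (model(x, theta + h) - model(x, theta - h)) / (2 * h)

x = np.array([0.0, 1.0, 2.0])
approx = sensitivity(x, 0.5)
exact = -x * np.exp(-0.5 * x)   # analytic derivative d/dtheta of exp(-theta x)
print(np.max(np.abs(approx - exact)))   # small approximation error
```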
Explain the steps of gradient descent
Find the minimum objective function by travelling ‘downhill’.
1. Initial guess at theta
2. Compute downhill travel direction
3. Travel downhill for new estimate
4. Repeat 2-3 until the estimate changes little
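The steps above can be sketched for a one-parameter sum-of-squares misfit (the model, step size, and stopping tolerance are illustrative assumptions):

```python
import numpy as np

def model(x, theta):
    return theta * x              # hypothetical one-parameter model

def misfit(theta, x, y_obs):
    return np.sum((y_obs - model(x, theta)) ** 2)

def grad(theta, x, y_obs):
    # Analytic gradient dS/dtheta of the sum-of-squares misfit.
    return -2.0 * np.sum(x * (y_obs - model(x, theta)))

x = np.array([1.0, 2.0, 3.0])
y_obs = 2.0 * x                   # data consistent with theta = 2

theta = 0.0                       # step 1: initial guess
step = 0.01
for _ in range(1000):             # step 4: repeat until little change
    g = grad(theta, x, y_obs)     # step 2: downhill direction is -g
    new_theta = theta - step * g  # step 3: travel downhill
    if abs(new_theta - theta) < 1e-10:
        break
    theta = new_theta
print(theta)                      # approaches 2.0
```

The stopping test on `abs(new_theta - theta)` is the "until little change" criterion; in practice one can also stop on the change in the misfit itself.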
What are the calibration gotchas?
A model with many parameters can be calibrated to fit the data even when its physics is wrong. If a parameter varies spatially, a model can be well calibrated in some places and poorly in others. Multiple parameter sets can fit the data equally well, sometimes requiring regularization. Sometimes the data is only sufficient to calibrate a combination of parameters (a superparameter).
What does the misfit function measure and how is it constructed?
It quantifies the difference between the model and the data. It is constructed as the sum of squared differences between the data and the model, evaluated at the same points in space/time.
Why does changing a model parameter change the misfit?
Changing a model parameter changes the model's predictions. Because the misfit is a function of both the model and the data, the misfit changes as well.
What are the advantages of ad-hoc calibration
You learn something about the model after each iteration, and it is cost-efficient. It does, however, require expert knowledge to choose a starting point.
Gradient descent is a method to find better fitting models, but is it guaranteed to find the best fitting model?
No. It can converge to a local minimum rather than the global minimum, in which case it has not found the best-fitting model.
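A small demonstration on an assumed two-minimum objective (the function, step size, and starting points are illustrative): depending on where the descent starts, it lands in either the shallow local minimum or the deeper global one.

```python
# Hypothetical objective with two minima: a local one near t = +1
# and a deeper (global) one near t = -1.
def objective(t):
    return (t**2 - 1.0)**2 + 0.3 * t

def gradient(t):
    return 4.0 * t * (t**2 - 1.0) + 0.3

def descend(t, step=0.01, iters=5000):
    # Plain gradient descent from a given starting point.
    for _ in range(iters):
        t = t - step * gradient(t)
    return t

from_right = descend(1.5)    # gets stuck in the local minimum (t > 0)
from_left = descend(-1.5)    # reaches the deeper global minimum (t < 0)
print(from_right, from_left)
```

Both runs follow the gradient faithfully; only the starting point decides which minimum is found, which is why multi-start strategies are common.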
Describe how the objective function surface and parameter space are related to each other.
Parameter space is all possible parameter values viewed as an N-dimensional space. The objective function surface is created by evaluating the objective function at each point in parameter space; the resulting values form a surface laid over that space.
How does the concept of diminishing returns apply to model calibration?
With iterative processes such as gradient descent, each iteration tends to improve the fit less than the last, so it is worth setting a convergence threshold on the objective function; otherwise the calibration can continue indefinitely for little return.
Why do models become harder to calibrate as they have more parameters?
With many parameters, a model can be calibrated to fit the data even with the wrong physics, and multiple parameter sets may fit the data equally well, so the calibrated parameters become non-unique. The parameter space also grows with each added parameter, making the search for a minimum harder.