Reader Flashcards
Fuzzy Logic model
Model with descriptions on the basis of fuzzy logic; values between yes and no (maybe)
Heuristic method
Method to reach an objective that is not precisely known, in an explorative and continuously evaluating manner
Jacobian matrix
Matrix of partial derivatives of the individual residuals with respect to the model parameters.
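As a sketch, the Jacobian can be approximated by finite differences; the toy linear residual function below is hypothetical, for illustration only:

```python
import numpy as np

def jacobian(residuals, params, eps=1e-6):
    """Finite-difference Jacobian: J[i, j] = d r_i / d p_j.

    residuals: function mapping a parameter vector to a residual vector
    (model result minus measurement at each observation point)."""
    p = np.asarray(params, dtype=float)
    r0 = np.asarray(residuals(p))
    J = np.zeros((r0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = eps                           # perturb one parameter
        J[:, j] = (residuals(p + dp) - r0) / eps
    return J

# Toy model: residuals linear in the parameters, so J equals A exactly.
A = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
r = lambda p: A @ p - np.array([1.0, 1.0, 1.0])
J = jacobian(r, [0.5, 0.5])
```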
Auxiliary variable
Variable whose value does not depend on its value at a previous value of the independent variable
Residual
difference between the model results and field measurements
Soft-hybrid model
data oriented model in which physical concepts are included
Dynamic model
describes changes over time
Objective
- Domain&location
- Reason for model
- Questions to be answered
- Scenarios
Quality req. regarding:
- Answers for questions
- Analyses to be carried out
- Model itself
- Calibration, and when/if it is necessary
Which data is needed to build the model?
- Schematization data
- Input data
- Parameters
- Data for scenarios
Which data is needed to analyse the model?
- Measurements
- Knowledge on parameters
- Statistical distr.
About the data
- What data is available
- Where can it be found
- Is it digital
- Approx. values
- How to deal with serious outliers/missing values
- Quality
- Who’s responsible
System definition:
- Sum up relevant parts
- Describe mutual relationships (processes)
- Describe relationships between system components and the environment (everything not part of the system)
Conceptual model:
- Describe structure:
a. Inputs, state, other variables
- Type of model
- Relations between vars
- Assumptions
- Verify
Choosing existing model:
- Available hardware, OS, expertise, time
- Modeller's preferences
- Client's wishes/requirements
- Available licensing
Discretizations
0/1/2/3D
Numerical or analytical?
A numerical solution is always available, but costs more computing time.
Verify
- Check prescriptions
- Dimensions/unit analysis
- Run a sample model
- Check spatial schematization
Analyze, global
o Run with standard input (known output)
o Global behaviour
o Verification of mass balance
o Robustness test
Analyze, sensitivity analysis
o Analytical
o Individual
o Classical (linearized)
o Response surface method
o Monte Carlo
o Regionalized sensitivity
Steps in Analyze
- Global
- Sensitivity analysis
- Identification
- Calibration
- Uncertainty
- validation
Fully curvilinear grid
Advantages:
* Allow cell stretching along the river main channel
Disadvantages:
* High resolution in sharp inner bends
* Not possible to locally refine or coarsen the grid
* Staircase representation along closed boundaries
Usage:
* River-dominated areas (min. 8 grid cells in cross direction)
Triangular grid
Advantages:
*Easy to generate
*Flexible in shape and resolution
Disadvantages:
* Not possible to stretch the grid cell in the flow direction resulting in:
-Low orthogonality
- Small time step
Usage:
* Wind-dominated areas, unless the wind has a dominant direction
- Avoid a large number of transitions
Orthogonality, grids
Orthogonal grids:
* Result in higher model accuracy
* Reduce computational time
Orthogonality, criteria
* The corners of two adjacent grid cells are situated on a common circle
* The line segment that connects the circumcenters of two adjacent cells (flowlink) intersects orthogonally with the interface between them (netlink)
True definition orthogonality
The sine of the angle between a flowlink and a netlink; perfect when the angle is 90 degrees.
- Strive for an angle between 82 and 90 degrees
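A sketch of this check, assuming the flowlink and netlink are each given as a pair of 2D points (toy coordinates, not from a real grid):

```python
import numpy as np

def orthogonality_angle(circumcenter_a, circumcenter_b, node_1, node_2):
    """Angle (degrees) between the flowlink (segment connecting the
    circumcenters of two adjacent cells) and the netlink (the shared
    interface between node_1 and node_2). 90 degrees is perfect."""
    flow = np.asarray(circumcenter_b, float) - np.asarray(circumcenter_a, float)
    net = np.asarray(node_2, float) - np.asarray(node_1, float)
    cos = abs(flow @ net) / (np.linalg.norm(flow) * np.linalg.norm(net))
    return np.degrees(np.arccos(np.clip(cos, 0.0, 1.0)))

# Perfectly orthogonal pair: horizontal flowlink, vertical netlink.
angle = orthogonality_angle((0, 0), (1, 0), (0.5, -1), (0.5, 1))
```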
Smoothness
Measured by ratio of neighbouring grid cell dimensions (Surface area). Should be less than 1.1 in area of interest and may be up to 1.4 further away.
Aspect ratio
Measured by the ratio of grid cell dimensions in X and Y direction. Should be in
the range of 0.5 – 2. In case of one-directional flow phenomena, larger values
can be accepted in that direction (up to 5).
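The smoothness and aspect-ratio checks above can be sketched as (toy values, for illustration only):

```python
def smoothness(area_a, area_b):
    """Ratio of neighbouring grid cell areas (always >= 1); should stay
    below 1.1 in the area of interest, up to 1.4 further away."""
    return max(area_a, area_b) / min(area_a, area_b)

def aspect_ratio(dx, dy):
    """Ratio of grid cell dimensions in x and y direction; should lie
    in the range 0.5-2 (up to 5 along one-directional flow)."""
    return dx / dy

s = smoothness(100.0, 105.0)   # 1.05: acceptable in the area of interest
ar = aspect_ratio(20.0, 15.0)  # ~1.33: within the 0.5-2 range
```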
Stability
For 1D: the Courant number C = u·Δt/Δx should stay below the scheme's stability limit (typically C ≤ 1 for explicit schemes)
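A minimal sketch of the 1D Courant check, assuming the standard definition C = u·Δt/Δx (toy numbers; for shallow-water waves the celerity u + √(gh) may be the relevant speed):

```python
def courant_number(velocity, dt, dx):
    """1D Courant number C = u * dt / dx; explicit schemes typically
    require C <= 1 for stability."""
    return velocity * dt / dx

# Toy numbers: 2 m/s flow, 30 s time step, 100 m grid cell.
C = courant_number(velocity=2.0, dt=30.0, dx=100.0)
stable = C <= 1.0
```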
Grid effects, bathymetry accuracy
Largest effect
Resolution determines discharge capacity
Locations of grid lines determines discharge partitioning
Grid effects, numerical friction
Coarsening causes dampening of the discharge wave, same result as increasing bed friction
Grid effects, numerical viscosity
Grid along the main flow has the lowest numerical viscosity. High numerical viscosity has same effect as increasing bed friction
Calibration
Involves minimizing the error between prediction and observation by altering model parameters (1995)
Validation
Involves verifying whether the calibrated model parameters also produce minimal error between prediction and observation in different scenarios
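A minimal calibration sketch: a hypothetical toy roughness-to-water-level relation stands in for a real model run (assumption, not from the reader), and the parameter is adapted until the RMSE against the "measurement" is minimal:

```python
import numpy as np

def rmse(predicted, observed):
    """Root-mean-square error between model results and measurements."""
    return np.sqrt(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))

# Hypothetical toy relation between Manning roughness n and water level,
# standing in for a real hydraulic model run (illustration only).
def model(n):
    return 2.0 + 15.0 * n

observed = np.array([2.45])                  # "measured" water level
candidates = np.linspace(0.01, 0.06, 51)     # candidate roughness values
errors = [rmse(model(n), observed) for n in candidates]
best_n = candidates[int(np.argmin(errors))]  # parameter minimizing the error
```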
One-at-a-time
Testing one variable at a time; disadvantage: interactions between parameter changes are not observed
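A sketch of one-at-a-time testing on a hypothetical toy model; note that the interaction term contributes nothing to either OAT effect at the base point, which illustrates the disadvantage:

```python
import numpy as np

def oat_sensitivity(f, base, delta=0.1):
    """One-at-a-time: perturb each parameter separately and record the
    change in output; interactions between parameters stay hidden."""
    base = np.asarray(base, float)
    y0 = f(base)
    effects = []
    for i in range(base.size):
        x = base.copy()
        x[i] += delta           # perturb only parameter i
        effects.append(f(x) - y0)
    return effects

# Toy model with an interaction term that OAT cannot see at base (0, 0).
f = lambda x: x[0] + 2 * x[1] + 10 * x[0] * x[1]
effects = oat_sensitivity(f, [0.0, 0.0], delta=0.1)
```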
Latin Hypercube Sampling
Each parameter range is split into intervals, one sample per interval; the samples are combined in random order across parameters (e.g. 12 numbers in random order over 9 columns), so interactions between parameters can be examined
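A minimal Latin hypercube sketch on the unit square (generic implementation; not tied to the 12-by-9 lecture example):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1): each dimension is split into
    n_samples equal intervals, one point is drawn per interval, and the
    intervals are combined in random order across dimensions."""
    rng = np.random.default_rng(rng)
    u = rng.random((n_samples, n_dims))  # position within each interval
    perms = np.stack([rng.permutation(n_samples) for _ in range(n_dims)],
                     axis=1)             # random interval order per dimension
    return (perms + u) / n_samples

X = latin_hypercube(10, 2, rng=42)
```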
Stratified sampling
Interpolate between points and finally take the midpoints, going from 8 to 16 points:
1. 8 random points
2. 8 random points plus 2 random points in subintervals
3. 16 points, random within the subintervals
4. 16 points, each in the middle of a subinterval
5. 17 points, on all corners of the subintervals
Calibration for rivers: most uncertain points
- Summer bed roughness / main channel friction
- used to compensate for errors in input data, model set-up and grid generated effects
- parameter adapted until model results are close to measurements
Location dependency
*Multiple roughness trajectories along the longitudinal direction
* Roughness trajectory determined by locations of observation points
* Uniform roughness per trajectory
Problematic in case of modelling the river’s morphology
discharge dependency
Calibration performed for various discharge levels
* Result: discharge-dependent summer bed roughness
Still fixed values per discharge → memory of the system not included
Sensitivity analysis
The study of how uncertainty in the model output can be apportioned to different sources of uncertainty in the model input
Calibration procedure Dutch river models
Location dependent: measurement stations
Discharge: based on return period
* Period of relatively constant discharge used to calibrate
* Interpolation between the discharge ranges
* Constant calibration factor outside the discharge ranges
Uncertainty analysis
The study of assessing the uncertainty in the model output
Why sensitivity
- Uncover technical errors in the model
- Establish priorities for research
- Simplify models
Local sensitivity
Focuses on how a small perturbation near an input-space value x0 = (x1, ..., xn) influences the value of Y = f(x0).
Global sensitivity
Focuses on the variance of model output Y, more precisely on how the input variability influences the output variance. It enables us to determine which parts of the output variance are due to the different inputs.
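A crude sketch of a variance-based first-order index: bin one input, average Y per bin, and compare the variance of those conditional means to the total output variance (toy linear model; this binning estimator is an assumption, not from the reader):

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Crude first-order sensitivity estimate: Var_x(E[Y|X]) / Var(Y),
    computed by binning the input and averaging Y per bin."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 20000)
x2 = rng.uniform(-1, 1, 20000)
y = 4.0 * x1 + 1.0 * x2        # x1 should dominate the output variance
s1 = first_order_index(x1, y)  # expected near 16/17
s2 = first_order_index(x2, y)  # expected near 1/17
```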
Which input distribution for sensitivity analysis?
Unknown? → uniform
Known? → normal
SA of single parameter?
+Easy to generate
-Clusters and gaps may exist
-N must be large enough to overcome this problem
-Uncertainty in the computed mean and variance of model output Y shrinks as N increases
Stratified over random sampling
*Overcomes the problem of cluster and gaps
*A smaller N can be used compared to random sampling to reach convergence in model output
Scatterplots as sensitivity analysis method
+Simple and informative way
- Challenging in situations with many input factors: how to rank the factors rapidly and automatically without having to look at many separate scatterplots
Linear regression?
Least-squares method; apply weights
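One common way to apply this (an assumption here, not spelled out in the reader) is to fit least squares on standardized inputs and output, so the absolute coefficients act as sensitivity weights for ranking the inputs:

```python
import numpy as np

def standardized_regression_coeffs(X, y):
    """Least-squares fit on standardized inputs/output; the absolute
    coefficients rank the inputs by their linear influence on y."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# Toy data: the first input is six times as influential as the second.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=5000)
beta = standardized_regression_coeffs(X, y)
```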
What is surrogate modeling?
Simplification of the original (high-fidelity) model
Why surrogate modeling?
-If computational time of the original (high fidelity) model is too long
-If many model runs have to be performed (e.g. extensive global sensitivity analysis)
-If the original model can be simplified without loss of accuracy
Types of surrogate modelling?
*Lower fidelity physically based surrogate models
*Response surface models (i.e. data-driven models/machine learning/deep learning/AI)
Lower fidelity physically based surrogate models
*Still based on the original input data
*Still based on the physical processes of the original model
Any suggestions how to set up a LF surrogate model?
*Coarser spatial grid size
*Larger temporal grid size (time step)
*Less strict numerical convergence tolerances
*Reduction of model complexity (e.g. 3D → 2D → 1D)
*Simplification/ignorance of some physics (e.g. full momentum eq. → diffusive wave eq., simplified turbulence)
Response surface surrogates
*Support vector machines
*Polynomials (fitting)
*Artificial neural networks (training; e.g. trained on cats vs. dogs, it cannot recognize an elephant)
*Kriging
Artificial Neural Network (ANN)
*Capable of identifying complex nonlinear behaviour between input and output
*Can handle incomplete and noisy data
Recurrent neural network
*Used to process sequential data: time series
-Recurrent layers
-Information of the current and past state is used to predict the next time step
-Weights determine importance of the temporal relation
*Very useful for flood forecasting predictions
*No spatial correlations are included (e.g. each 2D grid cell is a separate output node)
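A single recurrent step can be sketched as follows (toy sizes and fixed weights; a real RNN learns W_x, W_h and b from data). The final input is zero, yet the hidden state stays nonzero because past information carries forward:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One recurrent step: the new hidden state combines the current
    input with the previous state, so past information is retained."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Tiny sketch: 1 input feature, 2 hidden units, arbitrary fixed weights.
W_x = np.array([[0.5], [1.0]])
W_h = np.array([[0.1, 0.0], [0.0, 0.1]])
b = np.zeros(2)

h = np.zeros(2)
for x_t in [np.array([1.0]), np.array([0.5]), np.array([0.0])]:
    h = rnn_step(x_t, h, W_x, W_h, b)
```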
Why surrogate for flood forecasting?
A 2DH model run takes very long: it is not possible to compute everything upfront, and the model is too slow when needed in real time. So a 1D model is used to simulate water levels and discharges within the river system
*When predicted water levels exceed a certain threshold, local authorities are informed
*Potential dike breach or wave overflow: use of a database with 2DH model output of predetermined flood scenarios
0D
Based on simplified hydraulic concepts that do not attempt to represent the complex dynamic flood generation processes using mathematical equations.
Example: the Height Above Nearest Drainage (HAND) model:
Based on Digital Elevation Model (DEM)
Looks at nearest lowest drainage point and flows everything towards it
*Topography is normalized according to local relative heights alongside the drainage network
*Inundation extent is determined by selecting the cells with HAND values < water depth
+ Orders of magnitude faster, producing near real-time simulations of flood extents and depths
+ Captures the drainage direction accurately for valley-like regions
- Only maximum inundation extents and water depths are generally predicted
- Does not provide reliable results in case of complex dynamic interactions
- Does not work in river deltas
- Cannot be used to perform a sensitivity analysis
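The HAND selection rule above (cells with HAND value < water depth are flooded) can be sketched as (toy raster, for illustration only):

```python
import numpy as np

def inundation_extent(hand, water_depth):
    """HAND-based flood mapping: a cell is flooded when its height above
    the nearest drainage point is below the water depth; the local
    inundation depth is the remainder."""
    hand = np.asarray(hand, float)
    flooded = hand < water_depth
    depth = np.where(flooded, water_depth - hand, 0.0)
    return flooded, depth

# Toy HAND raster (m above nearest drainage) and a 1.5 m water depth.
hand = np.array([[0.0, 0.5, 2.0],
                 [1.0, 1.4, 3.0]])
flooded, depth = inundation_extent(hand, 1.5)
```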
ANN ups and downs
*Upstream discharge wave as input layer
*Water depth per 2D grid cell as output layer
+Can predict time series of inundation extents and water depths
+Can produce output of highly complex nonlinear systems
-Sufficient training data is required. Very time consuming especially in highly uncertain and random situations (e.g. dike failure)
-ANN can only be applied for the specific conditions it was trained for
3Di
GIS-based
Conserves mass and momentum
Depth-averaged subgrid-based model
Works with velocities in x and y direction and water level; the subgrid approach reduces computational costs
No loss of DEM detail thanks to the subgrid
At 3.2% of the computational cost, just as accurate as the high-resolution model