8. Stochastic programming Flashcards
How can we deal with uncertainty?
- Sensitivity analysis
- What-if analysis
- Scenario analysis
- Stress tests
Fig. example stochastic model
Stochastic programming models
- parameters may be uncertain
- probability distribution of random parameters is assumed to be known
- two-stage recourse model:
1. First-stage decisions (x) to be taken before the random parameter realizes
2. Random parameter realization (ξ(ω))
3. Second-stage decisions (y(ω, x)) to be taken after the random parameter realization
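The two-stage structure can be sketched with a toy recourse problem (all numbers are hypothetical, not from the lecture): capacity x is installed before demand d(ω) realizes, and buying the shortfall afterwards plays the role of the second-stage decision y(ω, x). A minimal Python sketch:

```python
# Hypothetical two-stage recourse problem: install capacity x at unit cost 1
# (first stage); after demand d realizes, cover any shortfall at unit cost 4
# (second stage / recourse).
SCENARIOS = [1, 2, 3]          # equiprobable demand realizations d(omega)
PROB = 1 / len(SCENARIOS)
FIRST_STAGE_COST = 1
RECOURSE_COST = 4

def expected_cost(x):
    """First-stage cost plus expected second-stage (recourse) cost."""
    recourse = sum(PROB * RECOURSE_COST * max(d - x, 0) for d in SCENARIOS)
    return FIRST_STAGE_COST * x + recourse

# With a single integer first-stage variable the SP can be solved by enumeration.
best_x = min(range(0, 5), key=expected_cost)
print(best_x, expected_cost(best_x))   # here-and-now solution and SP value
```

Enumeration only works because x is a single small integer variable; a real SP would use an LP/MIP solver over the deterministic equivalent.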
Fig. scenario tree (ex. 3-2-3)
Fig. scenario tree (ex. 3-1-1)
How to generate scenarios?
- Historical data
- Simulation based on a mathematical/statistical model
- Expert opinion (subjective)
- A combination of the above options
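Two of the options above can be sketched in a few lines (the historical numbers are invented for illustration): bootstrapping scenarios directly from historical data, and sampling from a simple statistical model fitted to that data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical historical demand observations.
historical = np.array([12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0])

# Option 1: historical data -- bootstrap scenarios by resampling observations.
boot_scenarios = rng.choice(historical, size=100, replace=True)

# Option 2: simulation -- fit a simple statistical model (here a normal
# distribution) to the history and sample scenarios from it.
mu, sigma = historical.mean(), historical.std(ddof=1)
sim_scenarios = rng.normal(mu, sigma, size=100)

# Equiprobable scenarios: each gets probability 1/N.
probs = np.full(100, 1 / 100)
```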
Steps to deal with uncertainty
1 Decision model and stochasticity
2 Scenario tree
3 SP model
How many scenarios to consider to satisfy the stability requirement?
1 In-sample stability analysis
2 Out-of-sample stability analysis
In-sample stability analysis steps
1 Generate K scenario trees of different types and sizes
2 Solve the SP for each scenario tree and obtain solutions x*_k, z*_k, k = 1, . . . , K
3 Select the smallest scenario cardinality on which the objective function value becomes stable
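The steps above can be sketched on a toy single-variable SP (a hedged example with invented numbers: normal demand, capacity cost 1, shortfall cost 4). For each tree size, several trees are sampled and the spread of the optimal objective values is recorded; stability means the spread becomes small as the tree grows.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def solve_sp(demands):
    """Solve min_x x + 4 * E[max(d - x, 0)] over the sampled scenarios.
    The cost is convex piecewise linear with breakpoints at the sampled
    demands, so enumerating the sample values as candidates for x is exact."""
    def cost(x):
        return x + 4 * np.maximum(demands - x, 0).mean()
    best_x = min(demands, key=cost)
    return best_x, cost(best_x)

# Solve the SP on trees of different sizes, 5 replications each; record the
# spread of objective values per size (in-sample stability check).
objectives = {}
for n in [10, 100, 1000]:
    vals = [solve_sp(rng.normal(100, 20, size=n))[1] for _ in range(5)]
    objectives[n] = (min(vals), max(vals))
for n, (lo, hi) in objectives.items():
    print(n, round(hi - lo, 2))   # spread typically shrinks as n grows
```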
Out-of-sample stability analysis
evaluate how the solutions obtained through the in-sample stability analysis perform in the benchmark scenario tree
Steps:
- For each scenario tree k ∈ K:
1. Take the optimal first-stage solution x*_k obtained in tree k
2. Evaluate how this solution performs in the SP with the benchmark scenario tree (this requires setting x = x*_k in the SP)
- Select the smallest cardinality whose solution converges to the “true” solution (that of the benchmark tree)
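A sketch of the out-of-sample loop on the same kind of toy problem (invented numbers: normal demand, capacity cost 1, shortfall cost 4; for this newsvendor-type cost the optimal x is the 0.75 demand quantile, since the critical ratio is (4 − 1)/4):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def solve_sp(demands):
    """min_x x + 4 * E[max(d - x, 0)]: optimum at the 0.75 demand quantile."""
    x = float(np.quantile(demands, 0.75))
    return x, x + 4 * np.maximum(demands - x, 0).mean()

def evaluate(x, benchmark):
    """SP objective on the benchmark tree with the first stage fixed (x = x*_k)."""
    return x + 4 * np.maximum(benchmark - x, 0).mean()

benchmark = rng.normal(100, 20, size=100_000)   # large benchmark tree
bench_opt = solve_sp(benchmark)[1]               # "true" SP value (approx.)

diffs = {}
for n in [10, 100, 1000]:
    x_k, _ = solve_sp(rng.normal(100, 20, size=n))
    # Out-of-sample gap of the tree-k solution; it cannot beat bench_opt.
    diffs[n] = evaluate(x_k, benchmark) - bench_opt
for n, d in diffs.items():
    print(n, round(d, 3))
```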
Expected value of perfect information (EVPI)
measures the maximum amount a decision-maker would be ready to pay for complete information about the future
EVPI = SP − WS (WS - SP if maximization problem)
Here-and-now solution
optimal objective function value of the SP
Wait-and-see solution (WS)
solution we would get if we could postpone the decision until after the random parameter realizes
1. Solve a SP for each scenario
→ Obtain the optimal decision variable values x(ξ) and objective function values z(x(ξ), ξ)
2. Take the expected value of the optimal objective function values
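On a toy recourse problem (hypothetical numbers: capacity cost 1, shortfall cost 4, demand in {1, 2, 3} equiprobable), WS and EVPI reduce to a few lines:

```python
SCENARIOS = [1, 2, 3]
PROB = 1 / len(SCENARIOS)

def cost(x, d):
    # capacity cost 1 per unit plus shortfall penalty 4 per unit
    return x + 4 * max(d - x, 0)

def expected_cost(x):
    return sum(PROB * cost(x, d) for d in SCENARIOS)

# Here-and-now: a single x for all scenarios (SP value).
SP = min(expected_cost(x) for x in range(0, 5))

# Wait-and-see: solve a deterministic problem per scenario, then average.
WS = sum(PROB * min(cost(x, d) for x in range(0, 5)) for d in SCENARIOS)

EVPI = SP - WS   # minimization problem: EVPI = SP - WS
print(SP, WS, EVPI)
```

Here WS ≤ SP holds as expected: knowing demand in advance lets each scenario be matched exactly.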
Value of stochastic solution (VSS)
advantage a decision-maker gets by solving a stochastic program instead of a deterministic one
VSS = EEV − SP (SP - EEV if maximization problem)
Expected result of using the EV solution (EEV)
measures how costly it is to base the decision only on the expected value of the random parameter
1. Solve the expected value problem (EV): replace the random parameter by its expected value ξ̄ and solve the resulting deterministic program (obtain solution x̄(ξ̄))
2. Compute the EEV by imposing x = x̄(ξ̄) in the SP
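The two EEV steps on the same kind of toy problem (hypothetical numbers: capacity cost 1, shortfall cost 4, demand in {1, 2, 3} equiprobable):

```python
SCENARIOS = [1, 2, 3]
PROB = 1 / len(SCENARIOS)

def cost(x, d):
    return x + 4 * max(d - x, 0)

def expected_cost(x):
    return sum(PROB * cost(x, d) for d in SCENARIOS)

SP = min(expected_cost(x) for x in range(0, 5))           # here-and-now value

# 1. EV problem: replace d by its expectation and solve deterministically.
d_bar = sum(PROB * d for d in SCENARIOS)                  # mean demand = 2
x_bar = min(range(0, 5), key=lambda x: cost(x, d_bar))    # EV solution

# 2. EEV: evaluate the EV solution inside the SP (impose x = x_bar).
EEV = expected_cost(x_bar)

VSS = EEV - SP    # minimization problem: VSS = EEV - SP
print(x_bar, EEV, VSS)
```

The EV solution hedges only against average demand, so it under-protects against the high-demand scenario and VSS > 0.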
Relations and bounds EVPI and VSS
WS ≤ SP ≤ EEV (EEV ≤ SP ≤ WS if maximization problem)
EVPI = 0 and VSS = 0
WS = SP = EEV
- Optimal solutions do not depend on the value of the random parameter
- Modeling the problem as an SP makes no sense
EVPI = 0 but VSS ≠ 0
WS = SP
Example: all scenario subproblems have the same optimal solution, equal to the SP one
EVPI ≠ 0 but VSS = 0
EEV = SP
Loss of using the skeleton solution (LUSS)
1. Solve the EV problem (obtain first-stage variable solutions x̄(ξ̄))
2. Fix at their lower bound all first-stage variables that are at their lower bound in the EV solution, and solve the resulting SP
→ Obtain first-stage solutions x̂
3. Compute the expected skeleton solution value (ESSV) by imposing x = x̂ in the SP and evaluating the objective of the SP
→ LUSS = ESSV − SP (SP − ESSV if maximization problem)
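A hypothetical two-variable example (all numbers invented for illustration) where the EV solution picks the wrong skeleton: capacity x1 costs 1 per unit but has random yield r ∈ {0.5, 1.0}, capacity x2 costs 1.5 with certain yield, demand is 2, and unmet demand is penalized at 10 per unit. The deterministic problem (mean yield 0.75) uses only the cheap x1, while the SP prefers the reliable x2.

```python
import numpy as np

YIELDS = [0.5, 1.0]   # equiprobable yield scenarios for capacity x1

def sp_cost(x1, x2):
    # first-stage cost plus expected shortfall penalty over yield scenarios
    short = [max(2 - r * x1 - x2, 0) for r in YIELDS]
    return x1 + 1.5 * x2 + 10 * sum(short) / len(YIELDS)

def ev_cost(x1, x2):
    r_bar = sum(YIELDS) / len(YIELDS)        # expected yield 0.75
    return x1 + 1.5 * x2 + 10 * max(2 - r_bar * x1 - x2, 0)

grid = np.arange(0, 4 + 1e-9, 1 / 3)          # grid containing 8/3
points = [(a, b) for a in grid for b in grid]

SP = min(sp_cost(a, b) for a, b in points)                 # optimum at (0, 2)
x_bar = min(points, key=lambda p: ev_cost(*p))             # EV solution (8/3, 0)

# Skeleton step: fix at its lower bound (0) every first-stage variable that is
# at its lower bound in the EV solution (here x2), then re-solve the SP over
# the remaining variable; the resulting optimal value is the ESSV.
ESSV = min(sp_cost(a, 0) for a in grid)
LUSS = ESSV - SP
print(round(SP, 3), x_bar, round(LUSS, 3))
```

The grid search is a crude stand-in for an LP solver, chosen so the example stays dependency-free.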
Relations and bounds LUSS
For any SP:
VSS ≥ LUSS ≥ 0
SP ≤ ESSV ≤ EEV (EEV ≤ ESSV ≤ SP if maximization problem)
LUSS = 0
ESSV = SP: perfect skeleton solution
The condition x_j = x̄_j(ξ̄), j ∈ J, is satisfied by the SP solution x*
even without being enforced by a constraint
0 < LUSS < VSS
there may be some variables not chosen by the EV solution,
which should be selected in the SP
LUSS = VSS
EEV = ESSV
The SP, when not allowed to use/change the variables in J, chooses not to change the value of any of the remaining variables (x̂ = x̄(ξ̄))
Loss of upgrading the deterministic solution (LUDS)
Can the deterministic solution be upgraded to become good or optimal for the SP?
1. Solve the EV problem (obtain first-stage variable solutions x̄(ξ̄))
2. Add the constraint x ≥ x̄(ξ̄) to the SP and solve it
→ Obtain first-stage solutions x̌
3. Compute the expected input value (EIV) by imposing x = x̌ in the SP and evaluating the objective in the SP
→ LUDS = EIV − SP (SP − EIV if maximization problem)
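The LUDS steps on a toy recourse problem (hypothetical numbers: capacity cost 1, shortfall cost 4, demand in {1, 2, 3} equiprobable). Here the SP solution already satisfies x ≥ x̄, so this instance illustrates the perfectly upgradeable case LUDS = 0:

```python
SCENARIOS = [1, 2, 3]
PROB = 1 / len(SCENARIOS)

def expected_cost(x):
    return x + 4 * sum(PROB * max(d - x, 0) for d in SCENARIOS)

SP = min(expected_cost(x) for x in range(0, 5))

# 1. EV problem: replace demand by its mean (2) and solve deterministically.
d_bar = 2
x_bar = min(range(0, 5), key=lambda x: x + 4 * max(d_bar - x, 0))

# 2. Re-solve the SP with the upgrade constraint x >= x_bar added.
x_check = min((x for x in range(0, 5) if x >= x_bar), key=expected_cost)
EIV = expected_cost(x_check)

LUDS = EIV - SP    # ~0 here: the deterministic solution is perfectly upgradeable
print(x_bar, x_check, LUDS)
```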
Relations and bounds LUDS
For any SP:
VSS ≥ LUDS ≥ 0
SP ≤ EIV ≤ EEV (EEV ≤ EIV ≤ SP if maximization problem)
LUDS = 0
EIV = SP: deterministic solution is perfectly upgradeable
The condition x ≥ x̄(ξ̄) is satisfied by the SP solution x* even without being enforced by constraints
0 < LUDS < VSS
deterministic solution is partially upgradeable
There is at least one component i ∈ {1, . . . , n} with x*_i < x̄_i(ξ̄); for each such component the upgrade constraint is binding (x̌_i = x̄_i(ξ̄))