8. Stochastic programming Flashcards

1
Q

How can we deal with uncertainty?

A
  • Sensitivity analysis
  • What-if analysis
  • Scenario analysis
  • Stress tests
2
Q

Fig. example stochastic model

A
3
Q

Stochastic programming models

A
  • parameters may be uncertain
  • the probability distribution of the random parameters is assumed to be known
  • two-stage recourse model:
    1. First-stage decisions (x) are taken before the random parameter realizes
    2. The random parameter realizes (ξ(ω))
    3. Second-stage (recourse) decisions (y(ω, x)) are taken after the random parameter realization (a numeric sketch follows below)
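A minimal numeric sketch (Python; all numbers made up, not from the course) of the two-stage structure on a newsvendor problem: the first-stage order x is chosen before the demand ξ(ω) is known, and the second-stage recourse decision is the quantity sold once the demand scenario realizes:

import numpy as np

# Hypothetical data: three demand scenarios with assumed known probabilities.
demand = np.array([80, 100, 120])   # realizations of the random parameter xi(omega)
prob = np.array([0.3, 0.5, 0.2])
cost, price = 4.0, 10.0             # purchase cost and selling price per unit

def expected_profit(x):
    """Expected profit of first-stage order x; second stage sells y = min(x, demand)."""
    y = np.minimum(x, demand)       # recourse decision y(omega, x), one per scenario
    return float(np.dot(prob, price * y)) - cost * x

# First-stage decision taken before xi realizes: enumerate candidate orders.
candidates = np.arange(0, 201)
best_x = max(candidates, key=expected_profit)
print("here-and-now order x* =", int(best_x), "expected profit =", round(expected_profit(best_x), 2))

The same made-up instance is reused in the WS, EEV, EVPI and VSS sketches later in this deck.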
4
Q

Fig. scenario tree (ex. 3-2-3)

A
5
Q

Fig. scenario tree (ex. 3-1-1)

A
6
Q

How to generate scenarios?

A
  • Historical data
  • Simulation based on a mathematical/statistical model
  • Expert opinion (subjective)
  • A combination of the above options
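A small Python sketch (made-up data) of two of these options: bootstrapping historical observations and simulating from a fitted statistical model:

import numpy as np

rng = np.random.default_rng(seed=1)
history = np.array([82, 95, 110, 101, 88, 120, 97, 105])    # made-up historical demands

# Option 1: scenarios by bootstrapping (resampling) the historical data.
scen_hist = rng.choice(history, size=5, replace=True)

# Option 2: scenarios simulated from a fitted statistical model (here a normal fit).
mu, sigma = history.mean(), history.std(ddof=1)
scen_model = rng.normal(mu, sigma, size=5)

# Equal probabilities are a common simple choice for sampled scenarios.
probs = np.full(5, 1 / 5)
print(scen_hist, np.round(scen_model, 1), probs)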
8
Q

Steps to deal with uncertainty

A

1 Formulate the decision model and identify its stochasticity
2 Build the scenario tree
3 Formulate and solve the SP model

9
Q

How many scenarios to consider to satisfy the stability requirement?

A

1 In-sample stability analysis
2 Out-of-sample stability analysis

10
Q

In-sample stability analysis steps

A

1 Generate K scenario trees of different types and sizes
2 Solve the SP for the different scenario trees and obtain solutions x*_k, z*_k, k = 1, ..., K
3 Select the smallest scenario cardinality at which the objective function value becomes stable (see the sketch below)
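A schematic Python sketch of the in-sample check: build_tree and solve_sp are stand-ins for the actual scenario generator and SP model (here they reuse the made-up newsvendor instance), and the tolerance is an arbitrary choice:

import numpy as np

def build_tree(n_scenarios, seed):
    """Hypothetical scenario generator: demand scenarios with equal probabilities."""
    rng = np.random.default_rng(seed)
    scen = rng.normal(100, 15, size=n_scenarios)   # made-up demand model
    return scen, np.full(n_scenarios, 1 / n_scenarios)

def solve_sp(scen, prob):
    """Newsvendor SP solved by enumeration: returns (x*_k, z*_k)."""
    cost, price = 4.0, 10.0
    cands = np.arange(0, 201)
    profits = [float(np.dot(prob, price * np.minimum(x, scen))) - cost * x for x in cands]
    k = int(np.argmax(profits))
    return int(cands[k]), profits[k]

# For each candidate cardinality, solve the SP on K trees of that size and
# look at the spread of the optimal objective values across the trees.
tolerance = 1.0
for n in [10, 50, 100, 500]:
    z_values = [solve_sp(*build_tree(n, seed))[1] for seed in range(5)]   # K = 5 trees
    spread = max(z_values) - min(z_values)
    print(n, "scenarios, spread of z* across trees:", round(spread, 2),
          "stable" if spread < tolerance else "")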

11
Q

Out-of-sample stability analysis

A

Evaluate how the solutions obtained through the in-sample stability analysis perform on the benchmark scenario tree.
Steps:
- For each scenario tree k ∈ K:
1. Get the optimal first-stage decisions x*_k obtained on tree k
2. Evaluate how this solution x*_k performs in the SP with the benchmark scenario tree (this requires setting x = x*_k in the SP)
- Select the smallest cardinality converging to the “true” solution (the one of the benchmark tree); see the sketch below
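A companion Python sketch of the out-of-sample check, reusing the same hypothetical build_tree/solve_sp helpers; the benchmark tree is simply a very large sample:

import numpy as np

cost, price = 4.0, 10.0

def build_tree(n_scenarios, seed):
    """Hypothetical scenario generator (same made-up demand model as before)."""
    rng = np.random.default_rng(seed)
    return rng.normal(100, 15, size=n_scenarios), np.full(n_scenarios, 1 / n_scenarios)

def fixed_x_value(x, scen, prob):
    """SP objective with the first-stage decision fixed to x."""
    return float(np.dot(prob, price * np.minimum(x, scen))) - cost * x

def solve_sp(scen, prob):
    """Newsvendor SP by enumeration: returns (x*, z*)."""
    cands = np.arange(0, 201)
    values = [fixed_x_value(x, scen, prob) for x in cands]
    k = int(np.argmax(values))
    return int(cands[k]), values[k]

bench_scen, bench_prob = build_tree(100_000, seed=0)          # large benchmark tree
z_true = solve_sp(bench_scen, bench_prob)[1]                  # "true" optimal value

for n in [10, 50, 100, 500]:
    x_k, _ = solve_sp(*build_tree(n, seed=n))                 # in-sample solution of tree k
    z_out = fixed_x_value(x_k, bench_scen, bench_prob)        # its value on the benchmark
    print(n, "scenarios, out-of-sample gap:", round(z_true - z_out, 2))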

12
Q

Expected value of perfect information (EVPI)

A

measures the maximum amount a decision-maker would be willing to pay for perfect (complete) information about the future
EVPI = SP − WS (WS - SP if maximization problem)
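For example (made-up numbers, minimization): if SP = 100 and WS = 90, then EVPI = 10, so paying more than 10 for a perfect forecast could never pay off.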

13
Q

Here-and-now solution

A

optimal objective function value of the SP

14
Q

Wait-and-see solution (WS)

A

the solution we would get if we could postpone the decision until after the random parameter realizes
1. Solve the problem separately for each scenario
→ Obtain the optimal decision variables x(ξ) and objective function values z(x(ξ), ξ)
2. Take the expected value of the optimal objective function values (see the sketch below)
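A Python sketch of the WS computation on the same made-up newsvendor instance; with perfect information the best order simply matches the demand of each scenario:

import numpy as np

demand = np.array([80, 100, 120])
prob = np.array([0.3, 0.5, 0.2])
cost, price = 4.0, 10.0

def profit(x, d):
    """Deterministic objective of order x under a single demand scenario d."""
    return price * min(x, d) - cost * x

# 1. Solve one deterministic problem per scenario (enumeration over candidate orders).
z_per_scenario = [max(profit(x, d) for x in range(0, 201)) for d in demand]

# 2. Take the probability-weighted average of the per-scenario optima.
WS = float(np.dot(prob, z_per_scenario))
print("per-scenario optima:", z_per_scenario, "WS =", round(WS, 2))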

15
Q

Value of stochastic solution (VSS)

A

advantage a decision-maker gets by solving a stochastic program instead of a deterministic one
VSS = EEV − SP (SP - EEV if maximization problem)
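For example (made-up numbers, minimization): if EEV = 110 and SP = 100, then VSS = 10, i.e. the expected cost saved by modeling the uncertainty explicitly rather than planning for the average scenario.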

16
Q

expected result of using the EV solution (EEV)

A

measures how badly a decision based only on the expected value of the random parameter performs
1. Solve the expected value problem (EV): replace the random parameter by its expected value ξ̄ and solve the resulting deterministic problem → obtain the EV solution x̄(ξ̄)
2. Compute the EEV: impose x = x̄(ξ̄) in the SP and evaluate its objective (sketch below)
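A Python sketch computing EV, EEV and, from them, VSS and EVPI on the same made-up newsvendor instance (SP and WS as in the earlier sketches):

import numpy as np

demand = np.array([80, 100, 120])
prob = np.array([0.3, 0.5, 0.2])
cost, price = 4.0, 10.0
cands = np.arange(0, 201)

def exp_profit(x):
    """SP objective (expected profit) for a fixed first-stage order x."""
    return float(np.dot(prob, price * np.minimum(x, demand))) - cost * x

SP = max(exp_profit(x) for x in cands)   # here-and-now value
WS = float(np.dot(prob, [max(price * min(x, d) - cost * x for x in cands) for d in demand]))

# 1. EV problem: replace the random demand by its expected value and solve deterministically.
d_bar = float(np.dot(prob, demand))
x_ev = max(cands, key=lambda x: price * min(x, d_bar) - cost * x)

# 2. EEV: impose x = x_ev in the SP and evaluate its expected objective.
EEV = exp_profit(x_ev)

# This instance is a maximization problem, so VSS = SP - EEV and EVPI = WS - SP.
print("EV solution:", int(x_ev), "EEV:", round(EEV, 2))
print("VSS =", round(SP - EEV, 2), "EVPI =", round(WS - SP, 2))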

17
Q

Relations and bounds EVPI and VSS

A

WS ≤ SP ≤ EEV (EEV ≤ SP ≤ WS if maximization problem)

18
Q

EVPI = 0 and VSS = 0

A

EEV = EV
- The optimal solution does not depend on the value of the random parameter
- Modeling the problem as an SP makes no sense

19
Q

EVPI = 0 but VSS ≠ 0

A

WS = SP
Example: all scenario subproblems have the same optimal solution, which coincides with the SP solution

20
Q

EVPI ≠ 0 but VSS = 0

A

EEV = SP

21
Q

Loss of using the skeleton solution (LUSS)

A
  1. Solve the EV problem → obtain the first-stage solution x̄(ξ̄)
  2. Fix at their lower bound all first-stage variables that are at their lower bound in the EV solution, and solve the resulting SP
    → Obtain the first-stage solution x̂
  3. Compute the ESSV: impose x = x̂ in the SP and evaluate its objective
  → LUSS = ESSV − SP (SP − ESSV if maximization problem); see the sketch below
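A Python sketch of the LUSS steps on a made-up two-product, budget-coupled newsvendor (not from the course); in this instance product B is at its lower bound in the EV solution, so the "skeleton" fixes x_B = 0:

import numpy as np
from itertools import product

# Made-up instance: product A has uncertain demand, product B deterministic demand,
# and a shared purchase budget couples the two first-stage orders.
dA, pA = np.array([80, 120]), np.array([0.5, 0.5])
dB = 60
costA, priceA, costB, priceB, budget = 4.0, 10.0, 5.0, 8.0, 400.0

def sp_objective(xA, xB):
    """Expected profit of first-stage orders (xA, xB) in the SP (maximization)."""
    return (float(np.dot(pA, priceA * np.minimum(xA, dA))) - costA * xA
            + priceB * min(xB, dB) - costB * xB)

def solve_sp(fix_B_at_zero=False):
    """Grid-search 'solver' over feasible integer orders; returns ((xA, xB), objective)."""
    feasible = ((xA, xB) for xA, xB in product(range(101), range(81))
                if costA * xA + costB * xB <= budget and not (fix_B_at_zero and xB > 0))
    return max(((x, sp_objective(*x)) for x in feasible), key=lambda t: t[1])

# 1. EV problem: replace the random demand by its mean and solve deterministically.
dA_bar = float(np.dot(pA, dA))
x_bar = max((x for x in product(range(101), range(81))
             if costA * x[0] + costB * x[1] <= budget),
            key=lambda x: priceA * min(x[0], dA_bar) - costA * x[0]
                          + priceB * min(x[1], dB) - costB * x[1])
assert x_bar[1] == 0      # product B sits at its lower bound in the EV solution (the skeleton)

# 2. Fix the at-lower-bound variable (x_B = 0) and re-solve the SP -> x_hat.
x_hat, _ = solve_sp(fix_B_at_zero=True)

# 3. ESSV: evaluate the SP with x fixed to x_hat, then LUSS (maximization convention).
ESSV = sp_objective(*x_hat)
_, SP = solve_sp()
print("EV solution:", x_bar, "x_hat:", x_hat, "LUSS = SP - ESSV =", round(SP - ESSV, 2))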
22
Q

Relations and bounds LUSS

A

For any SP:
VSS ≥ LUSS ≥ 0
SP ≤ ESSV ≤ EEV (EEV ≤ ESSV ≤ SP if maximization problem)

23
Q

LUSS = 0

A

ESSV = SP: perfect skeleton solution
The condition x_j = x̄_j(ξ̄), j ∈ J (the set of first-stage variables at their lower bound in the EV solution) is satisfied by the SP solution x*
even without being enforced by a constraint

24
Q

0 < LUSS < VSS

A

there are some variables left at their lower bound by the EV solution
that should instead be selected in the SP

25
Q

LUSS = VSS

A

EEV = ESSV
The SP, when not allowed to change the variables in J, chooses not to change the value of any of the remaining variables either (x̂ = x̄(ξ̄))

26
Q

Loss of upgrading the deterministic solution (LUDS)

A

Can the deterministic solution be upgraded to become good or optimal for the SP?
1. Solve the EV problem → obtain the first-stage solution x̄(ξ̄)
2. Add the constraint x ≥ x̄(ξ̄) to the SP and solve it
→ Obtain the first-stage solution x̌
3. Compute the EIV: impose x = x̌ in the SP and evaluate its objective
→ LUDS = EIV − SP (SP − EIV if maximization problem); see the sketch below
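A companion Python sketch for LUDS on the same made-up two-product instance used in the LUSS card; the only difference is that the SP is re-solved with the added lower bounds x ≥ x̄(ξ̄) instead of fixing skeleton variables at zero:

import numpy as np
from itertools import product

# Same made-up budget-coupled two-product newsvendor as in the LUSS sketch.
dA, pA = np.array([80, 120]), np.array([0.5, 0.5])
dB = 60
costA, priceA, costB, priceB, budget = 4.0, 10.0, 5.0, 8.0, 400.0

def sp_objective(xA, xB):
    """Expected profit of first-stage orders (xA, xB) in the SP (maximization)."""
    return (float(np.dot(pA, priceA * np.minimum(xA, dA))) - costA * xA
            + priceB * min(xB, dB) - costB * xB)

def solve_sp(lower=(0, 0)):
    """Grid-search 'solver' with lower bounds on (xA, xB); returns ((xA, xB), objective)."""
    feasible = ((xA, xB) for xA, xB in product(range(101), range(81))
                if xA >= lower[0] and xB >= lower[1]
                and costA * xA + costB * xB <= budget)
    return max(((x, sp_objective(*x)) for x in feasible), key=lambda t: t[1])

# 1. EV solution x_bar, computed exactly as in the LUSS sketch (here it is (100, 0)).
x_bar = (100, 0)

# 2. Re-solve the SP with the upgrade constraint x >= x_bar -> x_check.
x_check, _ = solve_sp(lower=x_bar)

# 3. EIV: evaluate the SP with x fixed to x_check, then LUDS (maximization convention).
EIV = sp_objective(*x_check)
_, SP = solve_sp()
print("x_check:", x_check, "EIV:", round(EIV, 2), "LUDS = SP - EIV =", round(SP - EIV, 2))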

27
Q

Relations and bounds LUDS

A

For any SP:
VSS ≥ LUDS ≥ 0
SP ≤ EIV ≤ EEV (EEV ≤ EIV ≤ SP if maximization problem)

28
Q

LUDS = 0

A

EIV = SP: the deterministic solution is perfectly upgradeable
The condition x ≥ x̄(ξ̄) is satisfied by the SP solution x* even without being enforced by constraints

29
Q

0 < LUDS < VSS

A

the deterministic solution is only partially upgradeable
There exists at least one component i ∈ {1, . . . , n} such that x*_i < x̄_i; for such a component the added constraint is binding, so x̌_i = x̄_i