Lecture 23/24 Flashcards
aleatory uncertainty
natural randomness in the phenomena we are dealing with
epistemic uncertainty
inaccuracy in our understanding and our models for predicting reality
Three basic options
- ignore the uncertainty
- allow for uncertainty using intuition
- adopt a scientific approach: use the mathematical laws of probability and base decisions on internal estimates
politics of uncertainty
conceding uncertainty might be perceived as being inconsistent with being an expert
Type I error
false positive
Type II error
false negative
reduce type I errors
increase level of confidence
reduce type II errors
descriptive testing
s.o.n.
state of nature
- true value of an uncertain variable - cannot be determined with absolute confidence
A decision is sensitive if
- one test is done and the decision is different for different test results
- more than one test is done and the decision changes as new test results come to hand, with the probability distribution for the state of nature updated after each result
theta
represents the state of nature
z
represents the result of a test to obtain more information about the state of nature
P(theta)
prior probability distribution (contains prior knowledge about state of nature)
P(z|theta)
the likelihood of test result z, given the state of nature theta
P(theta | z)
posterior probability distribution (contains updated information about state of nature theta)
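The Bayesian update behind the posterior can be sketched numerically. This is a minimal illustration with made-up prior and likelihood values for a two-state state of nature; the states ("good"/"poor") and the test result ("pass") are hypothetical.

```python
# Sketch of a Bayesian update for a two-state state of nature theta.
# All numbers below are illustrative, not from the lectures.

priors = {"good": 0.7, "poor": 0.3}   # P(theta)
likelihood = {"good": 0.9, "poor": 0.2}  # P(z | theta) for test result z = "pass"

# P(z) = sum over theta of P(z | theta) * P(theta)
p_z = sum(likelihood[t] * priors[t] for t in priors)

# P(theta | z) = P(z | theta) * P(theta) / P(z)
posterior = {t: likelihood[t] * priors[t] / p_z for t in priors}

print(posterior)  # probability mass shifts toward "good" after a passing test
```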
posterior analysis
involves calculating posterior probabilities for the state of nature, given an experiment has been done and the result is known, and then deciding what action to take
preposterior analysis
involves deciding whether an experiment should be done and, if so, which experiment
four stages of preposterior analysis
- the test
- result
- action
- state of nature
Allais’ Paradox
lottery ticket case studies
- the phenomenon is not “irrational”, but is simply the result of the way people value the possible outcomes
Requirements for a robust numerical measure of preference
- reflects the decision maker’s subjective preferences
- provides a scale preserving the order of expected values
Daniel Bernoulli
- assumed the utility of extra wealth is inversely proportional to total assets
- proposed a scale based on the logarithm of total assets
Buffon
- proposed a scale based on the reciprocal of total assets
Cramer
- used a scale based on the square root of total assets
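The three historical scales above can be compared side by side. This is an illustrative sketch with made-up asset values; the sign flip on the reciprocal scale is an assumption added here so that all three scales increase with wealth and are comparable.

```python
import math

# Illustrative comparison of the three historical utility scales:
# logarithm (Bernoulli), reciprocal (Buffon), square root (Cramer).

def bernoulli(assets):
    # utility proportional to the logarithm of total assets
    return math.log(assets)

def buffon(assets):
    # reciprocal of total assets; negated here (an assumption for
    # comparability) so that larger assets give larger utility
    return -1.0 / assets

def cramer(assets):
    # square root of total assets
    return math.sqrt(assets)

# all three preserve the ordering of wealth, with diminishing marginal utility
for assets in (1_000, 10_000, 100_000):
    print(assets, bernoulli(assets), buffon(assets), cramer(assets))
```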
von Neumann and Morgenstern
standard gamble
standard gamble
- enabled subjectivity to be taken into account, while fulfilling the requirement for preserving the ordering of expected values
- based on a particular form of a decision tree, with one action leading to a certain outcome and the other to a gamble or lottery
‘risk neutral’ person
bases all decisions on expected utility
‘risk averse’ person
will be uncomfortable with a small p* (i.e. large probability of big loss)
‘not risk averse’ person
will be comfortable with a small p* (i.e. small probability of a big win)
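The indifference probability p* in the standard gamble can be made concrete for the risk-neutral case. The payoff values below are hypothetical; for a risk-neutral person, p* is simply the probability that equates the gamble's expected value with the certain outcome.

```python
# Sketch of the standard gamble: one action yields a certain amount c,
# the other a gamble paying `best` with probability p and `worst` otherwise.
# All values are illustrative.

best, worst, c = 100.0, 0.0, 40.0

# For a risk-neutral decision maker, p* satisfies:
#   p* * best + (1 - p*) * worst = c
p_star = (c - worst) / (best - worst)

print(p_star)  # 0.4: indifferent when the big win has probability 0.4
```

A risk-averse person would demand a p* larger than this before preferring the gamble; a person who is not risk averse would accept a smaller one.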
Criteria for Making Decisions
- Criterion of Pure Pessimism
- Criterion of Pure Optimism
- Criterion of Regret
Criterion of Pure Pessimism
(or maxi-min criterion)
- identify for each action the minimum utility
- choose the action with the largest minimum utility
Criterion of Pure Optimism
(or maxi-max criterion)
- identify for each action the maximum utility
- choose the action with the largest maximum utility
Criterion of Regret
- identify the regret for each action and state of nature
- identify for each action the maximum regret
- choose the action with the smallest maximum regret
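The three criteria can be applied to a small hypothetical utility table. The actions, states, and utility numbers below are made up, chosen so that each criterion picks a different action.

```python
# Sketch of the three decision criteria on an illustrative utility table
# (keys = actions, list entries = utilities for each state of nature).

utilities = {
    "A": [4, 7, 2],
    "B": [5, 5, 5],
    "C": [1, 9, 3],
}
n_states = 3

# Pure pessimism (maxi-min): pick the action with the largest minimum utility
pessimist = max(utilities, key=lambda a: min(utilities[a]))

# Pure optimism (maxi-max): pick the action with the largest maximum utility
optimist = max(utilities, key=lambda a: max(utilities[a]))

# Regret: shortfall from the best achievable utility in each state,
# then pick the action with the smallest maximum regret
best_per_state = [max(utilities[a][s] for a in utilities) for s in range(n_states)]
regret = {a: [best_per_state[s] - utilities[a][s] for s in range(n_states)]
          for a in utilities}
minimax_regret = min(regret, key=lambda a: max(regret[a]))

print(pessimist, optimist, minimax_regret)  # B C A
```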
reliability
the probability that a component (or system) will function properly
two basic types of systems
series and parallel
P[S ^ s]
= 1 - P[S ^ f]
Series system
P[S ^ s] = P[C1 ^ s] * P[C2 ^ s] * … * P[Cn ^ s]
Parallel systems
- level of redundancy = (n-k) where k is minimum number of properly functioning components for the system to function properly
- system fails if more than (n-k) fail
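The series and fully redundant (1-out-of-n) parallel cases can be sketched directly from the formulas above; the component reliabilities used here are illustrative.

```python
from functools import reduce

# Sketch of system reliability for components in series and in
# fully redundant (1-out-of-n) parallel.

def series_reliability(rs):
    # a series system works only if every component works
    return reduce(lambda a, b: a * b, rs, 1.0)

def parallel_reliability(rs):
    # a 1-out-of-n parallel system fails only if every component fails
    fail = reduce(lambda a, b: a * b, [1 - r for r in rs], 1.0)
    return 1 - fail

rs = [0.9, 0.8, 0.95]          # illustrative component reliabilities
print(series_reliability(rs))    # 0.684 - weaker than any single component
print(parallel_reliability(rs))  # 0.999 - stronger than any single component
```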
Birnbaum
suggested that to maximise the improvement in network reliability (R) one should improve the link a with the highest Reliability Importance (RI)
RI
Reliability Importance
RI = dR / dr(a)
RI for series
RI(1) = r(2)
RI(2) = r(1)
hence if link 1 is more reliable than link 2, RI(2) > RI(1)
and should improve link 2
RI for parallel
RI(1) = 1 - r(2)
RI(2) = 1 - r(1)
hence if link 1 is more reliable than link 2, RI(1) > RI(2)
and should improve link 1 - counter-intuitive
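The two-link RI results can be checked numerically. The reliabilities r1 > r2 below are illustrative; the derivatives are taken from R = r1·r2 (series) and R = r1 + r2 − r1·r2 (parallel).

```python
# Sketch of Birnbaum's Reliability Importance RI = dR/dr(a) for two links,
# using illustrative reliabilities with r1 > r2.

r1, r2 = 0.9, 0.7

# Series: R = r1 * r2, so dR/dr1 = r2 and dR/dr2 = r1
ri_series = {"link1": r2, "link2": r1}

# Parallel: R = r1 + r2 - r1*r2, so dR/dr1 = 1 - r2 and dR/dr2 = 1 - r1
ri_parallel = {"link1": 1 - r2, "link2": 1 - r1}

print(ri_series)    # link 2 (the weaker link) has the higher RI in series
print(ri_parallel)  # link 1 (the stronger link) has the higher RI in parallel
```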
Henley and Kumamoto
suggested that to maximise the improvement in network reliability, one should improve the link a with the highest Criticality Importance (CI)
CI
Criticality Importance
CI(a) = RI(a) * ( r(a) / R )
CI for two links in series
CI(1) = r(2) * ( r(1) / R )
CI(2) = r(1) * ( r(2) / R )
since R = r(1) * r(2), CI(1) = CI(2) = 1
- CI provides no help in deciding which link to strengthen
CI for two links in parallel
CI(1) - CI(2) = ( r(1) - r(2) ) / ( r(1) + r(2) - r(1)r(2) )
- suggests one should improve the more reliable or stronger link
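The parallel-case CI difference can be verified numerically. The reliabilities below are illustrative; the check confirms that CI(1) − CI(2) reduces to (r1 − r2)/R, so the stronger link has the larger CI.

```python
# Sketch of Henley and Kumamoto's Criticality Importance CI = RI * r / R
# for two links in parallel, with illustrative reliabilities r1 > r2.

r1, r2 = 0.9, 0.7
R = r1 + r2 - r1 * r2        # parallel system reliability

ci1 = (1 - r2) * r1 / R      # RI(1) * r1 / R
ci2 = (1 - r1) * r2 / R      # RI(2) * r2 / R

# CI(1) - CI(2) simplifies to (r1 - r2) / R
assert abs((ci1 - ci2) - (r1 - r2) / R) < 1e-12

print(ci1, ci2)  # the more reliable link has the larger CI
```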