Decision Theory Flashcards

1
Q

Explain why utility functions are a strictly more expressive preference model than the goals used in deterministic planning

A

Utility functions are more expressive because they assign a degree of preference to every possible world, so the solution depends on how much the agent prefers each outcome, and different preferences can lead to different best decisions. Goals in deterministic planning only distinguish worlds that satisfy the goal from those that do not: any solution that achieves the goal is acceptable, regardless of preference. Since a goal can always be encoded as a utility (e.g. 1 if the goal is achieved, 0 otherwise) but utilities can express gradations of preference that goals cannot, utilities are strictly more expressive.

2
Q

What is the difference between random variables and decision variables?

A

Random variables are variables we cannot control; their values are determined by the world.
E.g. an accident, the weather, etc.

Decision variables are variables whose values we can choose to bring about in the world.
E.g. the robot wears pads, takes the longer route, etc.

3
Q

In words, what is the agent’s expected utility for making a single decision?

A

The agent’s expected utility for a single decision is the probability-weighted average of the utilities of the possible worlds resulting from that decision: for each world, multiply the probability of that world given the decision by the utility of that world, then sum over all worlds.
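The weighted sum above can be sketched in a few lines of Python; the decisions, worlds, probabilities, and utilities are made-up illustrative values, not from the slides:

```python
# Minimal sketch of expected utility for a single decision. The decisions,
# worlds, probabilities, and utilities below are made-up illustrative values.

# P[d][w] = probability of world w given decision d
P = {
    "short_route": {"accident": 0.2, "no_accident": 0.8},
    "long_route":  {"accident": 0.01, "no_accident": 0.99},
}
# U[d][w] = utility of world w when decision d was made
U = {
    "short_route": {"accident": -100, "no_accident": 100},
    "long_route":  {"accident": -100, "no_accident": 50},
}

def expected_utility(decision):
    """Sum over worlds of P(world | decision) * U(world, decision)."""
    return sum(P[decision][w] * U[decision][w] for w in P[decision])

print(expected_utility("short_route"))  # 0.2 * -100 + 0.8 * 100 = 60.0
print(expected_utility("long_route"))   # 0.01 * -100 + 0.99 * 50 = 48.5
```

Here the short route has the higher expected utility even though it carries more risk, because the weighted average comes out higher.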

4
Q

How is a sequential decision problem different from a single decision problem? Why does it make sense that, in the “single decision” framework, we can still talk about problems (such as the example given in the slides) where there is more than one decision variable?

A

A single decision problem is one where the agent knows what actions it can carry out, the effect of each action is described as a probability distribution over outcomes, and the agent’s preferences are expressed by utilities of outcomes. We can still talk about problems with more than one decision variable because two or more decisions can be combined and treated as a single macro decision to be made before acting, such as (WearPads, Whichway).

A sequential decision problem is one where, at each stage, the agent must consider which actions are available, what information is or will be available when it has to act, the effects of each action, and the desirability of these effects. In other words, the agent has to take actions without knowing what the future brings.

In short, repeat:
1. Make Observations
2. Decide on an action
3. Carry out the action
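The three-step loop above can be sketched as follows; observe, decide, and execute are hypothetical stand-ins for an agent’s sensors, policy, and actuators:

```python
# Sketch of the repeated observe/decide/act loop. observe, decide, and execute
# are hypothetical stand-ins for an agent's sensors, policy, and actuators.

def run_agent(observe, decide, execute, steps):
    for _ in range(steps):
        obs = observe()        # 1. make an observation
        action = decide(obs)   # 2. decide on an action using what has been seen
        execute(action)        # 3. carry out the action

# Toy usage: a thermostat-like agent fed three canned temperature readings.
readings = iter([15, 22, 30])
log = []
run_agent(
    observe=lambda: next(readings),
    decide=lambda temp: "heat" if temp < 20 else "off",
    execute=log.append,
    steps=3,
)
print(log)  # ['heat', 'off', 'off']
```

Note the agent decides at each step using only the observations made so far; it cannot look ahead at future readings.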
5
Q

In words, define a policy for a sequential decision problem

A

A policy specifies what an agent should do under each circumstance. Under a policy:
1. The agent makes an observation of the world.
2. The agent makes a decision based on what it has observed.
3. The agent executes the decision.
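A minimal sketch of a policy as a lookup from observation to action; the observations and actions are invented placeholders:

```python
# A policy as a lookup from what the agent has observed to what it should do.
# The observations and actions are invented placeholders.
policy = {
    "forecast=rain":  "take_umbrella",
    "forecast=sunny": "leave_umbrella",
}

def act(observation):
    """Follow the policy: return the decision for the current observation."""
    return policy[observation]

print(act("forecast=rain"))  # take_umbrella
```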

6
Q

In words, what makes a policy “optimal” in a sequential decision problem?

A

A policy is “optimal” when no other policy has a higher expected utility: the higher a policy’s expected utility, the better it is, and an optimal policy maximizes the agent’s expected utility over all policies.
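A brute-force sketch of this definition: enumerate every policy and keep the one with the highest expected utility. All probabilities and utilities are made up for illustration:

```python
from itertools import product

# Brute-force sketch: a policy assigns one action to each observation, and the
# optimal policy is the one with the highest expected utility. All numbers are
# made up for illustration.
observations = ["rain", "sun"]
actions = ["umbrella", "none"]
P_obs = {"rain": 0.3, "sun": 0.7}  # probability of each observation
U = {("rain", "umbrella"): 70, ("rain", "none"): -40,
     ("sun", "umbrella"): 20, ("sun", "none"): 100}

def expected_utility(policy):
    """Sum over observations of P(obs) * U(obs, action the policy picks)."""
    return sum(P_obs[o] * U[(o, policy[o])] for o in observations)

# Enumerate every policy (one action per observation) and keep the best.
policies = [dict(zip(observations, choice))
            for choice in product(actions, repeat=len(observations))]
best = max(policies, key=expected_utility)
print(best)  # {'rain': 'umbrella', 'sun': 'none'}
```

Enumeration only works for tiny problems (the number of policies grows exponentially), which is why methods like variable elimination are used instead.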

7
Q

How is a decision variable D eliminated when VE is applied to a sequential decision problem?

A

To eliminate decision node D, VE chooses, for each assignment of the remaining variables, the value of D that results in the maximum utility. This creates a new factor on the remaining variables (holding the maximized utilities) and a decision function for D (recording which value of D achieves each maximum).
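A minimal sketch of this maximization step, assuming the problem has already been reduced to a single factor over D and one remaining observed variable; the values are illustrative:

```python
# Sketch of eliminating a decision variable D by maximization, as in VE on a
# decision network. The factor maps (remaining variable, decision) pairs to
# utilities; the numbers are illustrative.
factor = {("rain", "umbrella"): 80, ("rain", "none"): -50,
          ("sun", "umbrella"): 10, ("sun", "none"): 90}

def eliminate_decision(factor):
    new_factor = {}   # remaining variable -> maximum utility over D
    decision_fn = {}  # remaining variable -> maximizing value of D
    for (obs, d), util in factor.items():
        if obs not in new_factor or util > new_factor[obs]:
            new_factor[obs] = util
            decision_fn[obs] = d
    return new_factor, decision_fn

new_factor, decision_fn = eliminate_decision(factor)
print(new_factor)   # {'rain': 80, 'sun': 90}
print(decision_fn)  # {'rain': 'umbrella', 'sun': 'none'}
```

The new factor is used by the rest of the elimination, while the decision function is kept as D’s part of the final policy.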
