Agent Program Designs Flashcards
1
Q
Simple reflex
A
- Doesn’t take history into account
- Condition-action rules map the current percept directly to an action
- Simple to implement but limited intelligence
- Success depends on full observability
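The condition-action idea can be sketched in a few lines. This is a hypothetical two-square vacuum-world example (the locations `"A"`/`"B"` and action names are illustrative, not from the card):

```python
def simple_reflex_agent(percept):
    """Condition-action rules only: the decision uses the current percept,
    with no memory of past percepts."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"

print(simple_reflex_agent(("A", "Dirty")))  # -> Suck
```

Because the agent never remembers anything, it behaves sensibly only when the current percept reveals everything it needs, which is why success depends on observability.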
2
Q
Model-based reflex
A
- History taken into account -> previous internal state is combined with the current percept to update the internal state
- Helps deal with partial observability
- Requires knowledge of how the world works to be encoded in the agent as a model
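A minimal sketch of the internal-state idea, again in a hypothetical two-square vacuum world where each percept only covers the current square (the location names and `believed_clean` set are assumptions for illustration):

```python
class ModelReflexAgent:
    """Keeps an internal model built from past percepts, so it can act
    sensibly under partial observability."""
    LOCATIONS = ["A", "B"]

    def __init__(self):
        self.believed_clean = set()  # internal state: squares believed clean

    def act(self, percept):
        location, status = percept
        # Update internal state: combine memory with the current percept
        # (after "Suck" we model the square as clean, i.e. the model also
        # encodes how our own actions affect the world)
        self.believed_clean.add(location)
        if status == "Dirty":
            return "Suck"
        # Rely on memory for squares we cannot currently perceive
        unknown = [l for l in self.LOCATIONS if l not in self.believed_clean]
        return f"GoTo({unknown[0]})" if unknown else "NoOp"
```

Unlike the simple reflex agent, after cleaning square A this agent still "knows" B has not been checked and heads there.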
3
Q
Goal based
A
- Selects appropriate actions to achieve a particular desired state (the goal)
- Decision making can be complicated (when dealing w/ long action sequences) -> search & planning might be required
- Goal-based agents are more flexible and easier to modify than reflex-based agents
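The "search might be required" point can be illustrated with a small breadth-first search that plans an action sequence to a goal state. This is a generic sketch, not a specific algorithm from the card; the toy one-dimensional world in the usage example is an assumption:

```python
from collections import deque

def plan_to_goal(start, goal, successors):
    """Breadth-first search: returns a shortest list of actions from
    start to goal, or None if the goal is unreachable."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions + [action]))
    return None

# Toy world: integer positions on a line, moving Right or Left
succ = lambda s: [("Right", s + 1), ("Left", s - 1)]
print(plan_to_goal(0, 3, succ))  # -> ['Right', 'Right', 'Right']
```

The flexibility claim shows up here too: changing the goal means passing a different `goal` argument, whereas a reflex agent would need its rules rewritten.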
4
Q
Utility based
A
- Makes use of a utility function to compare the ‘desirability’ of states -> many actions might satisfy a goal, but which is more desirable?
- Where the outcomes of actions are uncertain, utility enables the agent to weigh the importance of a goal against the likelihood of success.
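Weighing importance against likelihood of success is usually done by maximizing expected utility. A minimal sketch, where the action names, probabilities, and utility values are hypothetical:

```python
def expected_utility(outcomes):
    """Probability-weighted sum of utilities over an action's outcomes."""
    return sum(p * u for p, u in outcomes)

# Two actions that both pursue the goal, with uncertain outcomes:
# (probability, utility) pairs for each possible result
actions = {
    "safe_route":  [(0.95, 10), (0.05, 0)],  # likely success, modest payoff
    "risky_route": [(0.40, 30), (0.60, 0)],  # unlikely success, big payoff
}
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # safe: 0.95*10 = 9.5, risky: 0.40*30 = 12.0 -> risky_route
```

Both actions satisfy the goal when they succeed; the utility function is what lets the agent rank them rather than treating all goal-achieving actions as equally good.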