CS3033 - Artificial Intelligence Flashcards

1
Q

What is Moravec’s Paradox?

A

High-level (“human”) reasoning requires relatively little computation, while low-level sensorimotor skills require enormous computational resources. In other words: things humans find easy, machines find hard, and vice versa.

2
Q

What is rational behavior?

A

Doing the “right thing”. The right thing: that which is expected to maximise goal achievement, given the available information

3
Q

What is an agent?

A

An entity that perceives its environment and acts independently upon it, i.e. takes input (percepts) and performs actions based on it. A rational agent will attempt to perform the action that maximises goal achievement.

4
Q

What is a multiagent system?

A

A multiagent system is one that consists of a number of agents which interact with one another. In the most general case, agents will be acting on behalf of users with different goals and motivations. To interact successfully, they will require the ability to cooperate, coordinate and negotiate with each other.

5
Q

What is a reactive system?

A

A reactive system is one that maintains an ongoing interaction with its environment, and responds to changes that occur in it (in time for the response to be useful); i.e. in real time

6
Q

Define Accessibility/Observability

A

An accessible or fully observable environment is one where an agent can obtain complete, accurate, up-to-date information about the environment’s state.

7
Q

Define a Deterministic environment

A

A deterministic environment is one where each action has a single guaranteed effect. There is no uncertainty about the state that will result from performing an action

8
Q

Define an Episodic environment

A

An episodic environment is one in which the agent’s next action does not depend on its previous actions: experience is divided into independent episodes. E.g. detecting defects on an assembly line

9
Q

Define a Sequential environment

A

In a sequential environment, an agent’s current decision could affect all future decisions made by the agent.

10
Q

Define static vs dynamic

A

A static environment is one that does not change while the agent is deciding what action to take. A semi-dynamic environment is one where the environment itself does not change with time, but the agent’s performance score does. Ex: chess played with a clock

11
Q

Discrete vs Continuous

A

A discrete environment can be abstracted into countable variables measuring the qualities of the environment; a continuous one cannot.

12
Q

Known vs Unknown

A

A known environment: the outcomes of all actions are given. Unknown: the agent will have to learn how the environment works as it goes

13
Q

Define a simple reflex agent

A

Selects an action based only on the current percept; percept history is not considered. Only works correctly in a fully observable environment
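
A minimal sketch of such an agent, in the spirit of the classic two-square vacuum world (the locations, percept format and action names here are illustrative assumptions):

```python
# Simple reflex agent for a two-square vacuum world (illustrative sketch).
# The current percept is mapped directly to an action; no history is kept.

def reflex_vacuum_agent(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))  # -> Left
```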

14
Q

Define model-based reflex agents

A

Maintains an internal state of the world, updated via a transition model (how the world evolves and how actions affect it) and a sensor model (how the world is reflected in percepts). This lets it handle partially observable environments.

15
Q

Goal-based agent

A

Extends the model-based reflex agent. Stores the goal state and uses it to inform its next action.

16
Q

Utility-based agent

A

Will choose the action that maximises the expected utility of the action outcomes.

17
Q

What is an uninformed search algorithm?

A

An algorithm that does not rely on a heuristic function, so it systematically explores (and may exhaust) the state space searching for the goal state. BFS is an example.

18
Q

How is a search problem formulated? (Four steps)

A

Goal formulation, problem formulation, search, execution. Think back to the programming assessment

19
Q

How do we measure how good a search algorithm is? (four points)

A

Completeness, optimality, time complexity, space complexity.

20
Q

Explain Breadth First Search

A

Start from the root node and expand the shallowest unexpanded node first, exploring all neighbours level by level (FIFO queue). Graph-search BFS is complete if the state space is finite; tree-search BFS can loop forever on graphs with cycles when no solution exists, since repeated states are re-expanded.
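
A minimal graph-search BFS sketch in Python (the example graph and node names are illustrative assumptions):

```python
from collections import deque

def bfs(graph, start, goal):
    """Graph-search BFS: a FIFO frontier plus an explored set so repeated
    states are never re-expanded. Returns a start-to-goal path or None."""
    frontier = deque([[start]])
    explored = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in explored:
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
print(bfs(graph, "S", "G"))  # -> ['S', 'A', 'G']
```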

21
Q

Explain Uniform Cost Search

A

It acts the same as BFS but expands the node with the lowest path cost g(n) (the cost to get from the start node to that node); this is done using a priority queue.
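
A minimal UCS sketch, using a heap as the priority queue (the weighted graph is an illustrative assumption):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Expand the frontier node with the lowest path cost g(n)."""
    frontier = [(0, start, [start])]            # (g, node, path)
    best_cost = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_cost.get(neighbour, float("inf")):
                best_cost[neighbour] = new_g
                heapq.heappush(frontier, (new_g, neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)], "G": []}
print(uniform_cost_search(graph, "S", "G"))  # -> (3, ['S', 'A', 'B', 'G'])
```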

22
Q

Explain Depth First Search

A

Explore the deepest unexpanded node first; backtrack when a dead end is reached. Variations include Depth-Limited Search and Iterative Deepening Depth-First Search.

23
Q

Explain Iterative Deepening Depth First search

A

Combines the benefits of BFS and DFS by running DFS with a gradually increasing depth limit until the goal is found. It is complete, and optimal for unit step costs.
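
A sketch of iterative deepening built on a depth-limited DFS helper (the `dls` helper name, depth cap and example graph are illustrative assumptions):

```python
def iddfs(graph, start, goal, max_depth=50):
    """Run depth-limited DFS with limits 0, 1, 2, ... until the goal is
    found, combining DFS's low memory use with BFS-like completeness."""
    def dls(node, limit, path):
        if node == goal:
            return path
        if limit == 0:
            return None
        for neighbour in graph.get(node, []):
            if neighbour not in path:           # avoid cycles along this path
                found = dls(neighbour, limit - 1, path + [neighbour])
                if found:
                    return found
        return None

    for limit in range(max_depth + 1):
        found = dls(start, limit, [start])
        if found:
            return found
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["S"], "G": []}
print(iddfs(graph, "S", "G"))  # -> ['S', 'A', 'G']
```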

24
Q

What is an informed search algorithm?

A

An algorithm that uses a heuristic function to make informed decisions when selecting a state to move to. It aims to move in the direction of the goal state and get closer to it at each move.

25
Q

Explain the difference between Greedy Best-First and A* search

A

Greedy best-first search makes use of the heuristic function only, so the estimated cost is f(n) = h(n), while A* uses the cost to reach the current state from the initial state plus the estimated cost to reach the goal state from the current state, namely f(n) = g(n) + h(n)
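
A minimal A* sketch; keeping only h(n) in the priority (dropping g) would turn it into greedy best-first. The graph and heuristic values are illustrative assumptions (h here is admissible):

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the frontier node with the lowest f(n) = g(n) + h(n).
    Optimal when h is admissible (never overestimates)."""
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, step in graph.get(node, []):
            new_g = g + step
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g + h(neighbour), new_g,
                                          neighbour, path + [neighbour]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}.get
print(a_star(graph, h, "S", "G"))  # -> (5, ['S', 'B', 'G'])
```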

26
Q

Explain heuristic admissibility?

A

A heuristic is admissible if it never overestimates the true cost of reaching the goal: h(n) ≤ h*(n). An admissible heuristic makes A* optimal.

27
Q

Explain heuristic dominance?

A

A measure of which heuristic is more efficient: if h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1, and A* with h2 will never expand more nodes than A* with h1.

28
Q

Define local search and its advantages

A

Does not keep track of paths. Only cares about the final state, not the path to get there. Does not search the entire space

Advantages:
Low memory cost
Good for infinite or very large state spaces

Disadvantages:
Not optimal

29
Q

Explain the Hill Climbing algorithm?

A

It aims to move in the direction of steepest ascent on the state-space graph. Issues arise when the graph contains plateaus, ridges or local maxima, since the algorithm may terminate prematurely. Solutions include allowing sideways moves, stochastic hill climbing and random-restart hill climbing.
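
A steepest-ascent sketch with optional sideways moves (the 1-D toy landscape and parameter names are illustrative assumptions; random-restart hill climbing would simply call this from several random initial states):

```python
def hill_climb(initial, neighbours, value, max_sideways=0):
    """Repeatedly move to the best neighbour; stop at a local maximum.
    max_sideways > 0 permits equal-value moves to cross plateaus."""
    current, sideways = initial, 0
    while True:
        best = max(neighbours(current), key=value, default=current)
        if value(best) < value(current):
            return current                      # local maximum reached
        if value(best) == value(current):
            sideways += 1
            if sideways > max_sideways:
                return current
        else:
            sideways = 0
        current = best

# Illustrative landscape: f(x) = -(x - 3)^2, global maximum at x = 3.
value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(hill_climb(0, neighbours, value))  # -> 3
```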

30
Q

Explain Simulated Annealing?

A

Applies a random walk to hill climbing: it tolerates moves to worse states, but decreases this tolerance at each iteration in the hope of finding the global maximum. It is both efficient and complete.
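
A minimal sketch, assuming a geometric cooling schedule (the schedule, initial temperature and toy landscape are illustrative assumptions):

```python
import math, random

def simulated_annealing(initial, neighbour, value, t0=10.0, cooling=0.95,
                        steps=1000):
    """Always accept improving moves; accept worsening moves with
    probability e^(delta/T), where the temperature T shrinks each step."""
    current, t = initial, t0
    for _ in range(steps):
        nxt = neighbour(current)
        delta = value(nxt) - value(current)
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
        t *= cooling                            # cool the system down
        if t < 1e-9:
            break
    return current

# Illustrative: maximise f(x) = -(x - 3)^2 over the integers.
value = lambda x: -(x - 3) ** 2
neighbour = lambda x: x + random.choice([-1, 1])
print(simulated_annealing(0, neighbour, value))  # -> 3 on most runs
```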

31
Q

Explain Local Beam Search?

A

The algorithm works like a breadth-first expansion, except that it keeps only the best k states among all successors generated at each step. Since the next states all derive from the previous k states, the search risks clustering in one region of the state space. Stochastic LBS is a solution.
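
A minimal sketch of the deterministic variant (a stochastic version would sample the k survivors with probability proportional to their value; the toy landscape is an illustrative assumption):

```python
def local_beam_search(initial_states, neighbours, value, iterations=100):
    """Keep k states; each step, generate all their successors and
    retain only the best k among parents and children."""
    beam = list(initial_states)
    k = len(beam)
    for _ in range(iterations):
        candidates = set(beam)
        for state in beam:
            candidates.update(neighbours(state))
        beam = sorted(candidates, key=value, reverse=True)[:k]
    return beam[0]

value = lambda x: -(x - 3) ** 2
neighbours = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 10], neighbours, value))  # -> 3
```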

32
Q

What are the 4 types of crossovers in genetic algorithms? Briefly explain.

A

1-point crossover: choose a random crossover point on the parents and swap their tails (see the sketch below).
n-point crossover: choose n random crossover points and swap alternating segments between the two parents.
Uniform crossover: for each gene, flip a coin to decide which parent the first child inherits it from; the second child takes the inverse assignment.
Mutation (listed alongside, though strictly a separate operator; see the next card): randomly alter individual genes in the hope of producing a fitter individual.
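
A sketch of 1-point crossover on string-encoded individuals (the encoding is an illustrative assumption):

```python
import random

def one_point_crossover(parent1, parent2):
    """Pick a random crossover point and swap the parents' tails."""
    point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

print(one_point_crossover("AAAAAA", "BBBBBB"))
# e.g. ('AABBBB', 'BBAAAA'), depending on the random point
```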

33
Q

Explain the difference between crossover and mutation?

A

Crossover is exploratory and combines information from the two parents; mutation is exploitative and introduces new information into the genes.

34
Q

Why do Local Search algorithms fail in continuous state spaces?

A

Because there is an infinite branching factor at each node expansion. Fix: use a discretisation method to abstract the continuous space into a discrete one.
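
One simple discretisation scheme is uniform gridding; a minimal sketch (the ranges and bin count are illustrative assumptions):

```python
def discretize(x, lo, hi, bins):
    """Map a continuous value x in [lo, hi] onto one of `bins` cells."""
    x = min(max(x, lo), hi)                     # clamp into range
    cell = int((x - lo) / (hi - lo) * bins)
    return min(cell, bins - 1)                  # hi falls in the last cell

# e.g. an angle in [0, 360) degrees reduced to 8 discrete headings:
print(discretize(95.0, 0.0, 360.0, 8))  # -> 2
```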

35
Q

Explain the two variations of multi-agent environments?

A

Agents can be cooperative or competitive

36
Q

What are the key ingredients in Game Theory? (Three factors)

A

Players, actions (the strategies available to each player), and payoffs (the utility each player receives for every combination of actions).
37
Q

What are the two standard representations in game theory?

A

Normal form: a payoff matrix giving each player’s utility for every combination of strategies. Extensive form: a game tree capturing the sequential structure of moves.
38
Q

What is a dominant strategy in game theory? (Prisoner’s dilemma)

A

A strategy that gives a player a payoff at least as good as any other strategy, no matter what the other players do. In the prisoner’s dilemma, confessing is a dominant strategy for both prisoners, even though both would be better off if both stayed silent.
39
Q

What is the Nash Equilibrium in Game Theory?

A

A strategy profile in which no player can improve their payoff by unilaterally changing their own strategy. Every finite game has at least one Nash equilibrium, possibly in mixed strategies.
40
Q

How can transitions be modelled in a non-deterministic search? (Sucky suck)

A

With a transition model RESULTS(s, a) that returns a set of possible successor states rather than a single state. Solutions then become contingency plans, found with AND-OR search. E.g. in the erratic vacuum world, Suck may clean both squares or deposit dirt on a clean one.
41
Q

Describe adversarial search and MINMAX strategy

A

Adversarial search is search in competitive multi-agent environments (games), where other agents act against us. The MINMAX (minimax) strategy assumes the opponent plays optimally: the game tree is explored depth-first, terminal states are scored with a utility function, MAX nodes take the maximum of their children’s values and MIN nodes the minimum, and MAX plays the move with the highest minimax value.
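
A minimal minimax sketch on a toy game (the game itself, take 1 or 2 stones and whoever takes the last stone wins, is an illustrative assumption; real implementations parameterise the moves, transitions and utility):

```python
def minimax(stones, player):
    """Return +1 if MAX wins with optimal play from this state, else -1.
    MAX moves when player == 1, MIN when player == -1."""
    if stones == 0:
        return -player      # the previous player took the last stone and won
    values = [minimax(stones - take, -player)
              for take in (1, 2) if take <= stones]
    return max(values) if player == 1 else min(values)

print(minimax(4, 1))  # -> 1: MAX takes 1, leaving MIN a losing pile of 3
print(minimax(3, 1))  # -> -1: every move leaves MIN a winning position
```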
42
Q

What are two strategies to reduce complexity in adversarial search?

A

Alpha-beta pruning, which prunes branches that cannot affect the final minimax decision, and cutting the search off at a depth limit, scoring non-terminal states with a heuristic evaluation function.
43
Q

What are the 5 components of a constraint satisfaction problem?

A
44
Q

What are the three types of solutions to a CSP?

A
45
Q

Name an example of an incremental standard search formulation for CSPs and explain it

A

Backtracking search: states are partial assignments, starting from the empty assignment; each step assigns a value to one unassigned variable, checking consistency with the constraints, and backtracks to try a different value when a variable has no legal values left.
46
Q

What are three improvements to the performance of backtracking search?

A

Variable and value ordering heuristics (e.g. minimum remaining values, least-constraining value), inference that detects inevitable failure early (e.g. forward checking and constraint propagation), and intelligent backtracking (backjumping to the variable that caused the conflict).
47
Q

Explain local search for CSPs, including the min-conflicts heuristic, plateau search and constraint weighting.

A

Local search for CSPs works with complete assignments that may violate constraints and repairs them one variable at a time. The min-conflicts heuristic reassigns a conflicted variable to the value that violates the fewest constraints. Plateau search allows sideways moves when no reassignment reduces the conflict count. Constraint weighting gives each constraint a weight that is incremented whenever it is violated, concentrating the search on the hardest constraints and adding topography to plateaus.
48
Q

What are the five requirements of a knowledge-based agent?

A

It must be able to: represent states and actions, incorporate new percepts, update its internal representation of the world, deduce hidden properties of the world, and deduce appropriate actions.
49
Q

Explain inference via enumeration

A

Enumerate every possible model (truth assignment over the proposition symbols) and check that the query is true in every model in which the knowledge base is true; if so, the KB entails the query. Sound and complete, but O(2^n) time for n symbols.
50
Q

Explain forwards and backwards chaining

A

Forward chaining is data-driven: starting from the known facts, fire any rule whose premises are all satisfied, add its conclusion as a new fact, and repeat until the query is derived or nothing new can be inferred. Backward chaining is goal-driven: start from the query and work backwards, recursively trying to prove the premises of each rule that concludes the goal.
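
A minimal forward-chaining sketch over propositional Horn clauses (the rule encoding as (premises, conclusion) pairs is an illustrative assumption):

```python
def forward_chaining(rules, facts, query):
    """Fire any rule whose premises are all known, add its conclusion,
    and repeat until the query is derived or nothing new is inferred."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and set(premises) <= known:
                known.add(conclusion)
                changed = True
    return query in known

# Illustrative Horn KB:  P ∧ Q ⇒ R,  R ⇒ S,  with facts P, Q.
rules = [(["P", "Q"], "R"), (["R"], "S")]
print(forward_chaining(rules, ["P", "Q"], "S"))  # -> True
```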
51
Q

What are the three failure causes due to large domains? (Probabilities over time)

A

Laziness (too much work to list the complete set of antecedents and consequents), theoretical ignorance (no complete theory of the domain exists) and practical ignorance (even with a complete theory, we lack all the facts about a particular case).
52
Q

What is decision theory?

A

Decision theory = probability theory + utility theory: a rational agent chooses the action that maximises its expected utility, i.e. the average of the outcome utilities weighted by their probabilities.