Midterm 1 Flashcards

Material up to first-order logic

1
Q

What are the four AI categories?

A
  1. Acting Humanly: The Turing Test
  2. Thinking Humanly: Cognitive Modeling
  3. Thinking Rationally: “Laws of Thought” (notation and rules of derivation for thoughts)
  4. Acting Rationally: The Rational Agent (making the correct inference)
2
Q

What is an Agent?

A

anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators

3
Q

What are human agent and robotic agent sensors and actuators?

A

Human:

  • sensors: eyes, ears, etc.
  • actuators: hands, legs, mouth, etc.

Robot:

  • sensors: cameras, infrared range finders, microphones, pressure sensors, etc.
  • actuators: various motors, etc.
4
Q

Provide the percepts and actions that would be associated with a smart vacuum

Also explain what agent functions and programs are/do.

A

Percepts (percept sequence): [location, content], e.g. [A, Dirt]
Actions: Left, Right, Clean

The agent function maps from percept histories to actions:
f: P* -> A
The agent program runs on the physical architecture to produce f.
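
A minimal Python sketch of the agent function/program distinction for the vacuum world above; the function name and percept encoding are illustrative, not from the notes.

# Agent program for the two-square vacuum world: maps the current
# percept [location, content] to an action.
def reflex_vacuum_agent(percept):
    location, content = percept
    if content == "Dirt":
        return "Clean"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

# The agent function f it implements, applied to single percepts:
print(reflex_vacuum_agent(["A", "Dirt"]))   # -> Clean
print(reflex_vacuum_agent(["A", "Clean"]))  # -> Right

Note that this particular program ignores the percept history, so its agent function depends only on the latest percept.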

5
Q

Def: Rational agent

A

chooses whichever action maximizes the expected value of the performance measure, given the percept sequence to date

Rationality requires exploration, learning, and autonomy.

6
Q

Agent structure =

A

program + architecture

7
Q

What are the 5 types of agents?

A
  1. Simple Reflex Agents: act according to a rule whose condition matches the current state, as defined by the percept. EX. if car-in-front-is-braking then initiate brake
  2. Model-based Reflex Agents: the agent maintains an internal state that depends on the percept history; UPDATE-STATE is responsible for creating the new internal state description (see the sketch after this list). EX. an autonomous car going from A to B with a state for gas
  3. Goal-based Agents: the agent tries to get as close to the goal as possible and makes predictions about the impact of its actions on the world. EX. at a road junction, an autonomous vehicle can go left, right, or straight; the correct decision depends on where it is trying to get to (it needs goal information describing desirable situations)
  4. Utility-based Agents: when multiple solutions exist, some are more efficient than others; an (internal) utility function quantifies how happy the agent is. When there are multiple goals, utility provides a way to weigh the likelihood of success against the importance of each goal
  5. Learning Agents: composed of a learning element that makes improvements, a performance element that selects external actions, and a problem generator that suggests actions leading to new and informative experiences
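
A hedged Python sketch of the model-based reflex loop from item 2; the class name and the update_state/rules parameters are stand-ins for UPDATE-STATE and the rule set, not code from the notes.

class ModelBasedReflexAgent:
    def __init__(self, rules, update_state):
        self.rules = rules                 # list of (condition, action) pairs
        self.update_state = update_state   # builds the new internal state description
        self.state = None
        self.action = None

    def __call__(self, percept):
        # UPDATE-STATE folds the latest percept (and last action) into the internal state
        self.state = self.update_state(self.state, self.action, percept)
        # Act according to the first rule whose condition matches the state
        for condition, action in self.rules:
            if condition(self.state):
                self.action = action
                break
        return self.action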
8
Q

Why do we need to explore the state space? (do searches)

A

to see which path gets us to the goal state, and at minimum cost

9
Q

What are the two main types of searches?

A
  1. Uninformed (aka “blind search”): uses only information available in the problem definition
  2. Informed (heuristic): uses problem-specific knowledge beyond the definition of the problem itself
10
Q

What are the four factors strategies are evaluated by?

A

Completeness: is the algorithm guaranteed to find a solution if there is one?
Optimality: Does the strategy find the optimal solution?
Time complexity: How long does it take to find a solution?
Space complexity: How much memory is required to perform the search?

11
Q

What are the properties of Breadth First Search?

A

Complete?: Yes, if the branching factor is finite
Optimal?: Yes, if all actions have the same cost (explores everything closest to the initial state first, so the shallowest solution is found first)
Time/Space: O(b^d)

12
Q

What is the Pseudocode for Breadth first search?

A

Look at notes…
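
Since the notes are referenced rather than included, here is a minimal Python sketch of breadth-first graph search; the problem interface (initial_state, is_goal, successors) is an assumption carried through the later sketches too.

from collections import deque

def breadth_first_search(problem):
    node = problem.initial_state
    if problem.is_goal(node):
        return node
    frontier = deque([node])   # FIFO queue: shallowest nodes expand first
    explored = {node}
    while frontier:
        node = frontier.popleft()
        for child in problem.successors(node):
            if child not in explored:
                if problem.is_goal(child):   # test at generation time
                    return child
                explored.add(child)
                frontier.append(child)
    return None  # failure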

13
Q

What are the properties of Uniform Cost Search?

A

Complete: yes, provided the cost is positive at every step (step cost ≥ ε > 0); if no solution is found, every node gets searched
Optimal: yes (nodes are expanded in order of total path cost; C* = cost of the optimal initial-to-solution path)
Time/Space: O(b^(1 + ⌊C*/ε⌋)), where C* is the optimal path cost and ε is the smallest step cost (worst case)

14
Q

What is the Pseudocode for Uniform Cost Search?

A

look at notes…
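
Again as a stand-in for the notes, a minimal Python sketch of uniform-cost search; here successors(s) is assumed to yield (child, step_cost) pairs.

import heapq
from itertools import count

def uniform_cost_search(problem):
    tie = count()   # tie-breaker so the heap never compares states directly
    frontier = [(0, next(tie), problem.initial_state)]   # ordered by g(n)
    best_cost = {problem.initial_state: 0}
    while frontier:
        cost, _, node = heapq.heappop(frontier)
        if problem.is_goal(node):   # goal test on expansion preserves optimality
            return node, cost
        for child, step in problem.successors(node):
            new_cost = cost + step
            if child not in best_cost or new_cost < best_cost[child]:
                best_cost[child] = new_cost
                heapq.heappush(frontier, (new_cost, next(tie), child))
    return None  # failure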

15
Q

What are the properties of Depth First Search?

A

Complete: No (fails for infinite depth); yes for limited depth
Optimal? No
Time: O(b^m)
Space: O(bm)

16
Q
Depth-limited search: (nodes at depth L are treated as if they have no successors)
If L < d: 
If L > d:
Time: 
Space:  
How do we set L?
A

If L < d: incomplete
If L > d: complete, but sub-optimal (e.g. d = 20 (solution depth), L = 50)
Time: O(b^L)
Space: O(bL)

One way to set L is domain knowledge.

17
Q

What is the pseudocode for iterative-deepening search?

A

check notes
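
As a stand-in for the notes, a minimal recursive sketch of depth-limited search plus the iterative-deepening driver, under the same assumed problem interface as the earlier sketches.

CUTOFF = object()   # sentinel: depth bound was hit (distinct from failure)

def depth_limited_search(problem, node, limit):
    if problem.is_goal(node):
        return node
    if limit == 0:
        return CUTOFF
    cutoff_occurred = False
    for child in problem.successors(node):
        result = depth_limited_search(problem, child, limit - 1)
        if result is CUTOFF:
            cutoff_occurred = True
        elif result is not None:
            return result
    return CUTOFF if cutoff_occurred else None

def iterative_deepening_search(problem):
    limit = 0
    while True:   # deepen one level at a time
        result = depth_limited_search(problem, problem.initial_state, limit)
        if result is not CUTOFF:
            return result   # a solution node, or None for proven failure
        limit += 1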

18
Q

What are the properties of iterative-deepening search?

A

complete: yes
optimal: yes
time: O(b^d)
space: O(bd)

19
Q

What is the bidirectional search? and what is the motivation for using it?

A

When the goal is known, two searches are carried out simultaneously: one forward from the initial state, one backward from the goal.

motivation: O(b^(d/2)) + O(b^(d/2)) << O(b^d)

20
Q

What is the Best-First Search? and what is its evaluation function?

A

Identical to uniform-cost search except that a general evaluation function f(n) is used.

Its evaluation function includes a heuristic component h(n), with h(n) = 0 when n is a goal:
f(n) = g(n) + h(n)
We want to minimize the estimated remaining cost to the goal.

21
Q

What is the Greedy Best-First Search? and what is its evaluation function?

A

expands the node that appears closest to the goal; therefore, it evaluates nodes using only the heuristic function
f(n) = h(n)

22
Q

What are the properties of Greedy Best-First Search?

A

Complete: yes, in finite spaces
Optimal: not necessarily; other routes might be shorter
Time: O(b^m) in the worst case (a good heuristic/evaluation function can reduce this)
Space: O(b^m) in the worst case; it most likely keeps everything

23
Q

What is the A* search? what is its evaluation function? and is it complete/ optimal?

A

similar to uniform-cost search but with a different evaluation function:
f(n) = g(n) + h(n)

both optimal and complete under certain conditions on h(n): it must be admissible (never overestimate the remaining cost), and for graph search also consistent
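
A hedged sketch of A*: the uniform-cost code above with the priority changed to f(n) = g(n) + h(n); the heuristic callable is an assumed input.

import heapq
from itertools import count

def a_star_search(problem, heuristic):
    tie = count()
    start = problem.initial_state
    frontier = [(heuristic(start), next(tie), 0, start)]   # ordered by f = g + h
    best_g = {start: 0}
    while frontier:
        _, _, g, node = heapq.heappop(frontier)
        if problem.is_goal(node):
            return node, g
        for child, step in problem.successors(node):
            new_g = g + step
            if child not in best_g or new_g < best_g[child]:
                best_g[child] = new_g
                heapq.heappush(frontier, (new_g + heuristic(child), next(tie), new_g, child))
    return None  # failure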

24
Q

What are some viable heuristics for the 8-puzzle problem?

A

h1 = number of misplaced (incorrect) tiles (initially 8; the goal is to minimize this); a move that does not place a tile correctly does not improve this value

h2 = sum of the (Manhattan) distances of the tiles from their correct locations
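
Both heuristics sketched in Python, under the assumption that a state is a 9-tuple with 0 as the blank and goal (0, 1, ..., 8).

GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

def h1(state):
    """Number of misplaced tiles (the blank is not counted)."""
    return sum(1 for i, tile in enumerate(state)
               if tile != 0 and tile != GOAL[i])

def h2(state):
    """Sum of Manhattan distances of each tile from its goal square."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total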

25
Q

What are local search algorithms? Adv?

A

operate using a single current node (rather than multiple paths) and generally move only to neighbors of that node. The paths followed are usually not retained.

Adv:

  • use little memory (constant amount)
  • often find reasonable solutions in large or infinite (continuous) state spaces for which systematic algorithms are unsuitable
26
Q

What is the Hill-Climbing search?

A

Most basic local search
aka “Greedy Local Search”
The algorithm does not maintain a search tree; it keeps only the current node and the value of the objective function.
No look-ahead beyond the immediate neighbors of the current state.
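
A minimal hill-climbing sketch; neighbors(s) and the objective value(s) are assumed problem-specific callables.

def hill_climbing(initial, neighbors, value):
    current = initial
    while True:
        # Best immediate neighbor; no look-ahead beyond one step
        best = max(neighbors(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current   # local maximum (or plateau edge) reached
        current = best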

27
Q

What are Greedy local search issues? and What are solutions to these issues?

A

issues:
- local maxima
- plateaus (shoulders)
- ridges

Solutions:

  • random restart (keep restarting the search with random initial locations until a goal is found)
  • problem reformulation (represent the problem in a different way such that the search space does not contain these problematic scenarios)
28
Q

If the probability of finding a solution in a single try with a greedy search is P, then what is the expected number of restarts needed to find a solution?

Does this many restarts guarantee a solution?

How can we guarantee a solution when we use random restarts?

A

1/P: each try succeeds with probability P, so in 1/P tries we expect P × 1/P = 1 success

No (1/P is only the expected number)

keep track of the restarts that didn’t work (ensure we start somewhere different each time)

29
Q

What is the simulated annealing search?

A

tries to escape local maxima by combining methods, including random walk (moving to a successor chosen uniformly at random from the set of successors; complete but extremely inefficient)
uses a random search that accepts changes that increase the objective function f as well as some that decrease it

30
Q

Simulated annealing uses a control parameter T which starts out high and gradually decreases toward 0. What happens as T increases vs. decreases, where A -> B is a bad move?

A

Probability(move A -> B) = e^((f(B) - f(A))/T)

T increases -> (f(B) - f(A))/T approaches 0, so the agent is more likely to accept a bad move (e^0 = 1)

T decreases -> less likely to accept a bad move
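
A hedged sketch of simulated annealing using this acceptance rule; neighbors, value, and the cooling schedule(t) are assumed inputs (as is the iteration bound).

import math
import random

def simulated_annealing(initial, neighbors, value, schedule):
    current = initial
    for t in range(1, 10**6):
        T = schedule(t)   # temperature: starts high, decreases toward 0
        if T <= 0:
            return current
        nxt = random.choice(neighbors(current))
        delta = value(nxt) - value(current)   # f(B) - f(A)
        # Always accept improvements; accept bad moves with probability e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current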

31
Q

What is the Local Beam Search?

A
  • starts with k randomly generated states
  • generates all successors of all k states; if a goal is found, the search stops,
    otherwise it selects the k best successors from the complete list and repeats
32
Q

What is the variant of beam search, stochastic beam search?

A

instead of choosing the k best successors, it chooses k successors at random, with the probability of choosing a given successor increasing with its value
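
A sketch covering both variants; using value-proportional weights in the stochastic case is an assumption (values must be non-negative), and the notes may weight differently.

import random

def local_beam_search(states, neighbors, value, is_goal, k, stochastic=False):
    while True:
        # Pool together all successors of all k current states
        pool = [s for st in states for s in neighbors(st)]
        if not pool:
            return max(states, key=value)
        for s in pool:
            if is_goal(s):
                return s
        if stochastic:
            # Stochastic beam search: k random picks, value-weighted
            states = random.choices(pool, weights=[value(s) for s in pool], k=k)
        else:
            # Plain beam search: the k best successors
            states = sorted(pool, key=value, reverse=True)[:k]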

33
Q

What is adversarial search?

A

a multi-agent search

- used in competitive environments

34
Q

What is the pseudocode for MiniMax Search?

A

look at notes..
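
As a stand-in for the notes, a standard recursive minimax sketch; the game interface (is_terminal, utility, successors) is an assumption.

def minimax_value(game, state, maximizing):
    if game.is_terminal(state):
        return game.utility(state)   # utility from MAX's point of view
    values = (minimax_value(game, s, not maximizing)
              for s in game.successors(state))
    return max(values) if maximizing else min(values)

def minimax_decision(game, state):
    # MAX picks the move whose resulting state has the highest minimax value
    return max(game.successors(state),
               key=lambda s: minimax_value(game, s, maximizing=False))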

35
Q

What are the properties of MiniMax search?

A

complete: yes, if the game tree is finite (terminal states are reached)
optimal: yes, if both agents play optimally
Time: O(b^m) (a full tree search is needed to reach all terminal nodes)
Space: O(m) (depth-first: only the current path from the root needs to be stored)