More Test1 Review Flashcards

1
Q

What are the 4 categories of AI?

A
  • Thinking humanly
  • Thinking rationally
  • Acting humanly
  • Acting rationally
2
Q

What is the definition of AI?

A

Intelligence demonstrated by machines, in contrast to natural intelligence displayed by humans/animals.

3
Q

What is “thinking humanly?”

A

Trying to make a computer program which mimics the human brain (cognitive modeling).

4
Q

What is “acting rationally?”

A

A rational agent acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome. Also described as “doing the right thing” (the rational-agent approach).

5
Q

What is “acting humanly?”

A

Mimicking human behavior (e.g., the blink reflex, passing the Turing test). The Turing test provides an “operational” definition of intelligence.

6
Q

What is “thinking rationally?”

A

The attempt to build machines based on logical rules (syllogisms) that govern beliefs/behavior (the “laws of thought” approach).

7
Q

Machine learning.

A

The science of getting computers to act without being explicitly programmed.

8
Q

Nested hierarchy of AI, machine learning, representation learning, and deep learning.

A

Outermost to innermost: AI, machine learning, representation learning, deep learning.

9
Q

2 factors that held back AI progress.

A

Lack of enough data & lack of sufficient computing power

10
Q

Definition of rationality.

A

A rational agent selects an action that is expected to maximize its performance measure, given the evidence provided by its percept sequence and its built-in knowledge.
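A minimal sketch of this definition: pick the action with the highest expected performance measure. The function names and the toy outcome model here are illustrative assumptions, not from the course material.

```python
# Illustrative sketch (names and toy model are assumptions): a rational agent
# picks the action whose expected performance measure is highest.
def select_action(actions, outcomes, probability, performance):
    """outcomes(a): possible results of action a;
    probability(o, a): P(o | a); performance(o): score of outcome o."""
    def expected_performance(a):
        return sum(probability(o, a) * performance(o) for o in outcomes(a))
    return max(actions, key=expected_performance)
```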

11
Q

PEAS

A

Performance measure, Environment, Actuators, Sensors.

12
Q

Performance (PEAS) definition

A

The performance measure defines the criterion of success.

13
Q

Environment (PEAS) definition

A

The agent’s prior knowledge of the environment

14
Q

Actuator (PEAS) definition

A

The actions that the agent can perform

15
Q

Sensor (PEAS) definition

A

The agent’s percept sequence to date

16
Q

5 structures of intelligent agent

A

Simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents, learning agents.

17
Q

Simple reflex agent definition.

A

Select actions on the basis of the current percept, ignoring the rest of the percept history.
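As a sketch, here is the classic two-square vacuum world written as a simple reflex agent: the action depends only on the current percept, via condition-action rules. The percept encoding is an assumption for illustration.

```python
# Simple reflex agent sketch (percept encoding is an assumption):
# the agent looks only at the current percept, never at history.
def simple_reflex_vacuum(percept):
    location, status = percept          # e.g. ("A", "Dirty")
    if status == "Dirty":               # condition-action rule 1
        return "Suck"
    return "Right" if location == "A" else "Left"   # rules 2 and 3
```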

18
Q

Model-based reflex agent definition.

A

The agent uses a “model” of the world to guide its actions. A model is knowledge about how the agent’s world works.

19
Q

Goal-based agent definition.

A

The agent has information about a goal it is supposed to achieve. It uses this goal and information about the results of possible actions in order to choose actions which achieve that goal.

20
Q

Utility-based agent definition.

A

Agent uses a utility function which maps a state or sequence of states onto a real number, which describes the associated degree of “happiness” from performing the action.
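A tiny sketch of a utility function mapping a state to a real number; the state encoding and the trade-off weights below are assumptions for illustration.

```python
# Hypothetical utility function: state = (time_taken, fuel_used),
# lower is better for both; the weights encode an assumed trade-off.
def utility(state):
    time_taken, fuel_used = state
    return -(2.0 * time_taken + 1.0 * fuel_used)

def best_action(actions, result):
    """result(a) -> successor state; pick the action whose successor
    state has the highest utility."""
    return max(actions, key=lambda a: utility(result(a)))
```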

21
Q

Learning agent definition.

A

Starts with some basic knowledge and is then able to act and adapt autonomously, through learning, to improve its own performance.

22
Q

Goal vs utility

A

Goal: the agent may seek to get from point A to point B, and succeeds when it gets there. Utility: get from point A to point B subject to additional criteria involving trade-offs (shortest time, minimum fuel expenditure, etc.).

23
Q

BFS completeness, time & space complexity, and optimality.

A

Time: b^(d+1)
Space: b^(d+1)
Complete? Yes (if b is finite)
Optimal? Yes if all steps have the same cost, but not optimal in general

24
Q

Uniform-cost search completeness, time & space complexity, and optimality.

A

Time: b^(C*/ε), where C* is the cost of the optimal solution and ε is the minimum step cost
Space: b^(C*/ε)
Complete? Yes, if every step cost ≥ ε for some ε > 0
Optimal? Yes

25
Q

DFS completeness, time & space complexity, and optimality.

A

Time: b^m
Space: bm
Complete? No
Optimal? No

26
Q

Depth-limited search completeness, time & space complexity, and optimality.

A

Time: b^l
Space: bl
Complete? Yes if l >= d
Optimal? No

27
Q

Iterative deepening DFS completeness, time & space complexity, and optimality.

A

Time: b^d
Space: bd
Complete? Yes
Optimal? Yes

28
Q

Bidirectional search time & space complexity

A

O(b^(d/2)) for both time and space (two searches, each only to depth d/2).

29
Q

A* search completeness, time & space complexity, and optimality.

A

Time: exponential in the worst case (expands all nodes with f(n) < C*)
Space: keeps all nodes in memory
Complete? Yes, unless there are infinitely many nodes with f ≤ f(goal)
Optimal? Yes (with an admissible heuristic)

30
Q

BFS uses a ____ and DFS uses a ____

A

Queue, stack
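A sketch of this card: the only difference between the two traversals below is which end of the frontier gets popped — FIFO queue for BFS, LIFO stack for DFS. The `mode` flag is an illustrative device, not standard API.

```python
from collections import deque

# BFS pops from the front (queue); DFS pops from the back (stack).
# Everything else is identical.
def traverse(graph, start, mode="bfs"):
    frontier, visited, order = deque([start]), {start}, []
    while frontier:
        node = frontier.popleft() if mode == "bfs" else frontier.pop()
        order.append(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(nxt)
    return order
```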

31
Q

Greedy search completeness, time & space complexity, and optimality.

A

Time: b^m
Space: b^m
Complete? No, but complete in finite space with repeated state checking.
Optimal? No

32
Q

A search problem consists of

A

Initial state, set of actions, goal test, path cost

33
Q

graph search vs tree search

A

Tree search does not check for repeated states, so it can expand the same state many times (and may loop forever when the state space contains cycles). Graph search augments tree search with an explored set, so each state is expanded at most once.

34
Q

Naive solution for TSP

A

Start & end at city 1; generate all (n−1)! permutations of the remaining cities; calculate the cost of each permutation, keeping track of the minimum; return the minimum-cost permutation.
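The steps above can be sketched directly with `itertools.permutations`; city indices start at 0 here rather than 1, and the distance-matrix input format is an assumption.

```python
from itertools import permutations

# Naive O((n-1)!) TSP: fix city 0 as start/end, try every ordering of the
# remaining cities, keep the cheapest complete tour.
def tsp_brute_force(dist):
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)                      # close the cycle
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour
```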