AI Flashcards

1
Q

Acting humanly

A

Turing test

2
Q

Thinking humanly

A

cognitive modelling

3
Q

Thinking rationally

A

Laws of thought

4
Q

Acting rationally

A

Rational agent

5
Q

Major components of AI

A

Knowledge, reasoning, language understanding, learning

6
Q

What does rationality depend on?

A

Performance measure
Everything the agent has perceived so far
Built-in knowledge about the environment
Actions that can be performed

7
Q

Autonomous agent

A
  1. Does not rely entirely on built-in knowledge about the environment
  2. Adapts to the environment through experience
8
Q

Simple reflex agent

A

Condition-action rules

9
Q

Autonomous vehicle - where does "obey traffic laws" fit?

A

Under goals

10
Q

Goal-based agent considerations

A

State
Information about the environment
Consequences of actions

Goals

11
Q

Utility-based agent considerations

A

State
Information about the environment
Consequences of actions

Utility

12
Q

Types of environment

A

Accessible - complete information about the world
Deterministic - the next state is fully determined by the current state and action
Episodic - not affected by the past
Static - the environment does not change while the agent deliberates
Discrete - a limited number of distinct percepts and actions

13
Q

Design of problem-solving agent

GPS A

A

Goal formulation

Problem formulation

Search process
No knowledge (uninformed search)
Knowledge (informed search)

Action execution

14
Q

Route-finding problems

A

Routing in computer network
Robot navigation
Automated travel advisory
Airline planning

15
Q

Touring problems

A

Travelling salesperson problem
Shortest tour

16
Q

Specification of well-defined problems

IC GAS

A

Initial state
Action set
State space
Goal test predicate
Cost function

17
Q

Evaluation of strategies

A

Completeness
Time complexity
Space complexity
Optimality

18
Q

Searches that are optimal and complete

BUBI

A

Breadth first
Uniform cost
Iterative deepening

Bidirectional

19
Q

Searches that are neither optimal nor complete

A

Depth first
Depth limited

20
Q

g(n)

A

Path cost function - cost of the path from the start node to n

21
Q

h(n)

A

Heuristic function - estimated cheapest cost from n to a goal

22
Q

A* search

A

Combines greedy search (h(n)) and uniform-cost search (g(n))

Best-first search with f(n) = g(n) + h(n)
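
The f(n) = g(n) + h(n) idea on this card can be sketched as follows; the graph, edge costs, and heuristic values below are made up for illustration, not from the deck:

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search: always expand the node with the lowest f(n) = g(n) + h(n)."""
    # Frontier entries are (f, g, node, path); g is the path cost so far.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node]:
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Toy graph: adjacency lists with step costs, plus an admissible heuristic h.
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)],
         "B": [("G", 1)], "G": []}
h = {"S": 3, "A": 2, "B": 1, "G": 0}
path, cost = a_star(graph, h, "S", "G")   # optimal path S -> A -> B -> G, cost 4
```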

23
Q

Constraint satisfaction problem games

A

8-queens
Cryptarithmetic puzzle
Sudoku
Minesweeper

24
Q

Definition of CSP

A

State of the problem is defined by an assignment of values to some or all variables

Legal or consistent - an assignment that does not violate any constraints

A solution to a CSP is an assignment in which every variable is given a value and the assignment satisfies all constraints

25
Q

Think: map colouring

Most constrained variable
= minimum remaining values (MRV)

A

The variable most limited in its remaining legal values

26
Q

Least constraining value

A

Leaves maximum flexibility for subsequent variable assignments
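
A minimal sketch of the two ordering heuristics on these cards (MRV for choosing a variable, least constraining value for ordering its values); the domains and the `conflicts` counter are hypothetical, for illustration only:

```python
def select_mrv_variable(domains, assignment):
    """Most constrained variable (MRV): pick the unassigned variable
    with the fewest remaining legal values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def order_lcv_values(var, domains, conflicts):
    """Least constraining value: try first the values that rule out the
    fewest choices for the other variables.
    `conflicts(var, val)` counts values eliminated elsewhere."""
    return sorted(domains[var], key=lambda val: conflicts(var, val))

# Made-up domains for a tiny map-colouring-style problem.
domains = {"X": ["r", "g", "b"], "Y": ["r"], "Z": ["r", "g"]}
var = select_mrv_variable(domains, assignment={})   # "Y": only one value left
```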

27
Q

Games as search problems

A

Abstraction
Uncertainty
Complexity
Games are limited in time

28
Q

Minimax assumption

A

MAX plays to maximise its own utility
MIN plays to minimise MAX's utility

29
Q

Calculation of utility in minimax

Dog
Too
Rare

A

Assess the utility at each terminal state
Determine the best utility for the parents of the terminal states
Repeat until the root is reached
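
The three steps on this card can be sketched as a recursive function; the game tree and terminal utilities below are made up for illustration:

```python
def minimax(node, maximizing):
    """Back up utilities from the terminal states toward the root:
    MAX takes the largest child value, MIN the smallest."""
    if isinstance(node, (int, float)):   # terminal state: assessed utility
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Toy game tree: root is a MAX node, its children are MIN nodes,
# and the leaves are terminal utilities.
tree = [[3, 12], [2, 8]]
root_value = minimax(tree, maximizing=True)   # max(min(3,12), min(2,8)) = 3
```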

30
Q

Imperfect decisions in minimax

A

Time/space constraints

Replace the utility function with an estimate of the desirability of a position (an evaluation function)

Partial tree search

31
Q

Evaluation function

A

Returns an estimate of the expected utility of the game from a given position

32
Q

Expected utility

A

Utility function
Outcome probabilities

33
Q

EU

A

Sum over outcomes of probability × utility
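
The formula on this card in code form; the lottery below (probabilities and utilities) is made up for illustration:

```python
def expected_utility(outcomes):
    """EU = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

# Hypothetical lottery: 0.7 chance of utility 10, 0.3 chance of utility -5.
eu = expected_utility([(0.7, 10), (0.3, -5)])   # 0.7*10 + 0.3*(-5) = 5.5
```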

34
Q

Markov decision process (MDP)

A

Solved by value iteration and policy iteration

Components:
State, s
Action, a
Transition model
Reward function

35
Q

Why is a discount factor used?

A

Sooner rewards count more than later rewards

Keeps the total utility bounded

Helps the algorithm converge

36
Q

Bellman eqn

A

Shows the relationship between the utilities of successive states

U(s) = R(s) + γ · max over actions a of Σ P(s′ | s, a) · U(s′)
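
One Bellman backup, as used inside value iteration, can be sketched as follows; the two-state MDP (rewards, transitions, current utilities) is made up for illustration:

```python
def bellman_backup(s, U, R, T, gamma=0.9):
    """One Bellman update: U(s) = R(s) + gamma * max_a sum_s' P(s'|s,a) * U(s')."""
    return R[s] + gamma * max(
        sum(p * U[s2] for s2, p in T[s][a]) for a in T[s]
    )

# Toy MDP: T[s][a] is a list of (next_state, probability) pairs.
R = {"s1": 0.0, "s2": 1.0}
U = {"s1": 0.0, "s2": 1.0}          # current utility estimates
T = {"s1": {"go":   [("s2", 1.0)]},
     "s2": {"stay": [("s2", 1.0)]}}
u_s1 = bellman_backup("s1", U, R, T)   # 0.0 + 0.9 * (1.0 * 1.0) = 0.9
```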

37
Q

Advantages of temporal-difference prediction

A

No model of the environment needed, only experience

Can learn before knowing the final outcome (less memory)

Can learn from incomplete sequences
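
A TD(0) update illustrating these points: it needs only a single observed transition, not an environment model or a complete episode. The states, reward, and step sizes below are made up for illustration:

```python
def td0_update(V, s, reward, s_next, alpha=0.1, gamma=0.9):
    """TD(0) prediction: move V(s) toward the bootstrapped target
    reward + gamma * V(s'), using only one observed transition."""
    V[s] += alpha * (reward + gamma * V[s_next] - V[s])
    return V

# Hypothetical transition observed by the agent: a -> b with reward 0.5.
V = {"a": 0.0, "b": 1.0}
V = td0_update(V, "a", reward=0.5, s_next="b")
# V["a"] becomes 0.1 * (0.5 + 0.9 * 1.0) = 0.14
```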

38
Q

Animal

A

Pig