Midterm 2 Flashcards

1
Q

Hume & causality

A

•causal connections are a product of observation of:
- spatial/temporal contiguity
- temporal succession (cause precedes effect)
- constant conjunction
•causation is a relation between experiences, not between facts (IN THE MIND)

2
Q

Where does causal knowledge emerge from?

A

non-causal input (ex: patterns of covariation between events)

3
Q

Causal inference

A

infer causal relations from patterns of data

4
Q

Why is causal inference difficult?

A
  • probabilistic and incomplete data
  • small samples
  • different models can generate same data
5
Q

Dominant theory of causal relations

A

people estimate the strength of causal relations on the basis of covariation between events

6
Q

Contingency tables

A

Represent outcomes of numerous trials in which cause C is present/absent and effect E is present/absent

7
Q

Delta-P rule

A
ΔP = P(E|C) - P(E|~C)
using contingency-table cells (1: C&E, 2: C&~E, 3: ~C&E, 4: ~C&~E):
P(E|C) = cell 1/(cell 1 + cell 2)
P(E|~C) = cell 3/(cell 3 + cell 4)
when ΔP > 0 --> C = generative cause
when ΔP < 0 --> C = preventative cause
when ΔP = 0 --> C = independent of E (non-causal)
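The ΔP rule can be sketched in a few lines of Python; the cell names a–d and the example counts below are illustrative assumptions, not from the lecture.

```python
def delta_p(a, b, c, d):
    """Delta-P from a 2x2 contingency table.
    a: C present, E present    b: C present, E absent
    c: C absent,  E present    d: C absent,  E absent
    """
    p_e_given_c = a / (a + b)          # P(E|C)
    p_e_given_not_c = c / (c + d)      # P(E|~C)
    return p_e_given_c - p_e_given_not_c

# Effect is more likely when the cause is present -> generative cause
print(delta_p(30, 10, 10, 30))  # 0.75 - 0.25 = 0.5
```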
8
Q

Common causality mistakes

A
  • often people only compare cases where the cause is present
  • or only compare cases where the effect is present

9
Q

Motivated Reasoning experiment

A
  • liberal democrats were more likely to correctly identify the results supported by the data in the crime-decreases condition
  • conservative republicans were more likely to correctly identify the results supported by the data in the crime-increases condition
10
Q

Simplicity in understanding causes

A

Occam’s razor: simpler explanation is better (parsimony)

- causal structure (like contingency) must be inferred from input

11
Q

Alien Disease Experiment

A

Alien with symptoms S1 and S2

  • either has Tritchet’s Syndrome (S1+S2), Morad’s D (S1), Humel I (S2)
  • most people said alien had Tritchet’s
12
Q

Alien Disease Experiment with probability info

A

majority still chose D1 even though the D2 & D3 combination is mathematically more likely
- people need disproportionate evidence in favour of a complex explanation before it can rival a simpler one

13
Q

Deductive Reasoning

A
  • conclusion follows logically from premises
  • if the premises are true, the conclusion is guaranteed to be true

14
Q

Inductive Reasoning

A
  • conclusion is likely, given the premises
  • involves a degree of uncertainty

15
Q

Deductive Inference Rules

A

1) if premises are true, conclusion is true
2) premises provide conclusive evidence for conclusion
3) impossible for premises to be true and conclusion false
4) logically inconsistent to assert premises but deny conclusion

16
Q

Modus Ponens

A

if p then q
p
therefore q

17
Q

Modus Tollens

A

if p then q
~q
therefore ~p
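Both rules can be checked mechanically by enumerating truth values; a small Python sketch (the helper name `implies` is mine):

```python
# "if p then q" is false only when p is true and q is false
def implies(p, q):
    return (not p) or q

# Validity check: in every row where all premises hold, the conclusion holds.
for p in (True, False):
    for q in (True, False):
        if implies(p, q) and p:          # modus ponens premises
            assert q                     # ...conclusion q must hold
        if implies(p, q) and not q:      # modus tollens premises
            assert not p                 # ...conclusion ~p must hold
print("both argument forms are valid")
```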

18
Q

Wason Selection task

A
"if a card has a vowel on one side then it has an even number on the other"
E K 4 7
most people say to flip E and 4
correct answer is actually E and 7
(the 7 card applies modus tollens; the 4 card cannot falsify the rule)
19
Q

More Concrete version of Wason Task

A

if a person is drinking beer, then the person must be over 21
‘drinking beer’ ‘drinking coke’ ‘16 yo’ ‘22 yo’
people find the correct answer (‘drinking beer’ and ‘16 yo’)

20
Q

Syllogistic Reasoning

A
all A are B (major premise)
all B are C (minor premise)
therefore all A are C (conclusion)
- logical validity of the conclusion is determined entirely by form, once the premises are accepted as true
- often subject to belief bias
21
Q

ideological belief bias in syllogistic reasoning

A

liberals are better at identifying flawed arguments supporting conservative beliefs and vice versa

22
Q

Mental Models

A
  • postulated by Craik
  • models constructed in working memory as a result of perception, comprehension of discourse, or imagination
  • mental representations
  • can underlie reasoning
  • used to formulate conclusions & test strength of conclusions
  • alternative to view that depends on formal rules of inference
23
Q

What do mental models represent?

A

They represent explicitly what is true but not what is false

–> unexpected consequence = illusory inferences (belief bias)

24
Q

mental model (Frenchmen & gourmets example)

A

all Frenchmen are gourmets
some gourmets are wine drinkers
people say: some Frenchmen are wine drinkers (invalid)
construct a model consistent with both premises
replace ‘wine drinkers’ with ‘Italians’ in the last premise:
no one draws the analogous conclusion, a different mental model is constructed

25
Q

Wason’s 2-4-6 task

A

subjects must discover the experimenter’s rule, given that 2 4 6 satisfies it (the rule: any ascending sequence)

70% offer an incorrect rule on their first announcement

26
Q

Dual Goal Wason task

A

correct and incorrect sequences labelled as DAX and MED
60% induced rule correctly
–> people do better when contrasting two viable alternatives

27
Q

What’s special about thinking?

A
  • structure-sensitive
  • -> reasoning, etc. depends on capacity to represent and manipulate relational knowledge
  • flexible in way in which knowledge is accessed
  • -> apply old knowledge to new situations
28
Q

Relational thinking across species: Match-to-sample task

A

B or C more like A?

chimpanzees answer differently than humans

29
Q

Relational/Analogical Inference

A
  • Inductive in nature
  • analogical inference: generalizing properties/relations from one domain to another
  • analogical transfer: solving problem in one domain based on solution in another domain (ex: fortress/radiation problem)
30
Q

Gick/Holyoak radiation problem

A

control: no base problem no hint, 20%
base problem no hint: 30%
base problem + hint: 75%

31
Q

Analogical transfer steps

A

Recognition (identify possible analog or base domain)
Abstraction (abstract general principle from base problem)
Mapping (apply principle to target)

32
Q

Analogical inference

A

knowledge about base domain can be used to reason about target domain
–> structure mapping

33
Q

Relations

A
  • can be represented as a proposition which specifies which elements fill the roles of the predicate
  • can be nested within other relations (higher-order relations)
34
Q

structured relational representations

A

attribute: big(sun)
lower-order relation: bigger(sun, planets)
higher-order relation: CAUSE[bigger(sun, planets), revolves-around(planets, sun)]

35
Q

analogy

A

when two conceptual domains share relational similarity

  • one-to-one mapping: sun –> nucleus
  • parallel connectivity: sun –> nucleus & planets –> electrons
36
Q

constraints on analogical mapping

A

systematicity: deeply nested relational structures make better analogies

37
Q

Manipulation of irrelevant superficial features (WWII vs. Vietnam)

A

subjects’ preferred policy was significantly more interventionist when scenario contained WWII features than Vietnam features

38
Q

Structural Alignment

A
  • helps people align objects based on relational positions rather than superficial similarity
  • surface or structural similarity?
39
Q

Relational reasoning in children

A

when given triads that showed a relational pattern across different dimensions, children have difficulty recognizing the similar pattern

40
Q

Theory of progressive alignment

A

comparison of highly similar before less similar items fosters re-representation of relevant relations
–> children more able to recognize

41
Q

Near vs. Far transfer

A

near transfer: apply knowledge from a closely related base domain to the target domain (ex: water pump to steam engine)
far transfer: apply knowledge from a seemingly distant base domain to the target domain (ex: velcro)

42
Q

Formal systems

A

system of axioms (propositions assumed true) + inference rules (allow for other conclusions to be derived)

43
Q

Completeness

A

for any statement in the system, either it or its negation can be proved

44
Q

Consistency

A

there is no statement in it such that both it and its negation are derivable
- if a system is inconsistent, it is trivially complete (everything becomes derivable)

45
Q

Logicism

A
  • Frege
  • provide logical foundations for mathematics
  • -> for ontological and epistemic reasons
  • major flaw in the system -> a paradox showed it to be inconsistent
46
Q

Barber Paradox

A

the barber shaves all and only those who don’t shave themselves

but who shaves the barber?

47
Q

Russell’s paradox

A

involves self-reference

the set of all sets that do not contain themselves: does it contain itself? either answer yields a contradiction

48
Q

Gödel’s incompleteness theorem

A

any consistent axiomatic system strong enough to carry out much of arithmetic is incomplete

  • -> some thought this showed mechanism (the mind as a machine) is false
  • -> doesn’t follow: that would require humans to always be able to see whether a system is consistent
49
Q

Problem theory

A
• general theory of problem solving as search through a space
• 4 elements:
- initial state
- goal state
- operators
- path constraints
50
Q

State Spaces and Search

A

initial state: where problem solving begins
goal state: what you want to reach
operators: actions that serve to alter the current state
path constraints: e.g., find the solution in the fewest possible steps

51
Q

Problem space

A

set of all states that can potentially be reached by applying the available operators

52
Q

Search Trees

A

paths from the initial state to the goal state

53
Q

Search strategies considerations

A
completeness (does it always find the goal state?)
optimality (does it find the shortest path?)
time complexity (how long does it take?)
space complexity (how much memory, e.g. keeping track of visited states?)
54
Q

Factors that affect time and space complexity

A

B (branching factor/breadth)

D (depth in tree of goal state)

55
Q

Brute force search strategy

A
  • systematically consider all possible action sequences to find a path
  • only uses info available in the problem definition
  • problem: search effort grows exponentially with depth, NP-hard (combinatorial explosion)
  • advantages: guaranteed to find a solution, good for simple problems
56
Q

Breadth-first search

A
  • try shortest paths first: expand all states at one depth before moving to the next
  • guaranteed to find the shortest path
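A minimal breadth-first sketch over a toy state space (the number puzzle and operator set are illustrative assumptions):

```python
from collections import deque

def bfs(initial, goal, operators):
    """Breadth-first: expand all states at one depth before the next,
    so the first path that reaches the goal is a shortest one."""
    frontier = deque([[initial]])
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for op in operators:
            nxt = op(path[-1])
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: reach 10 from 1 with operators "+1" and "*2".
print(bfs(1, 10, [lambda s: s + 1, lambda s: s * 2]))  # [1, 2, 4, 5, 10]
```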
57
Q

Depth-first search

A

follow one path as deep as possible before backtracking

conserves memory (only the current path must be stored)
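A depth-limited sketch on the same kind of toy problem (the puzzle and depth limit are illustrative assumptions); unlike breadth-first search, the path found is not necessarily shortest:

```python
def dfs(state, goal, operators, limit, path=None):
    """Depth-first: follow one path as deep as the limit allows before
    backtracking; only the current path is kept in memory."""
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return None
    for op in operators:
        nxt = op(state)
        if nxt not in path:  # avoid revisiting states on the current path
            found = dfs(nxt, goal, operators, limit - 1, path + [nxt])
            if found:
                return found
    return None

# Toy state space: reach 10 from 1 via "+1" and "*2".
print(dfs(1, 10, [lambda s: s + 1, lambda s: s * 2], limit=6))  # [1, 2, 3, 4, 5, 10]
```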

58
Q

Heuristic search techniques

A
  • focus on promising areas of the search space
  • uses an evaluation function to score states in the tree
  • advantages: good for complex problems with large search spaces
59
Q

Hill-Climbing

A

always move to the neighboring state with the lowest evaluation score

BUT search may halt without success in a local minimum
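A hill-climbing sketch on a toy scoring function (the function and neighborhood are my own example; lower score is treated as better, as on the card):

```python
def hill_climb(state, score, neighbors):
    """Repeatedly move to the lowest-scoring neighbor; halt when no
    neighbor improves the score (which may be only a local minimum)."""
    while True:
        best = min(neighbors(state), key=score)
        if score(best) >= score(state):
            return state
        state = best

# Toy: minimize (x - 7)^2 over the integers, stepping by +/-1.
f = lambda x: (x - 7) ** 2
print(hill_climb(0, f, lambda x: [x - 1, x + 1]))  # 7
```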

60
Q

Best-first search

A

like brute-force search, but expand lower-scoring states first

61
Q

Forward vs. Backward search

A

forward: applying operators to the current state to generate new states
backward: finding the operators that could produce the current state
–> backward search allows eliminating useless or spurious paths

62
Q

Means-ends analysis

A
  • mix of forward and backward search
  • search is guided by detection of differences between current and goal states
    1. compare current and goal state
    2. select an operator that would reduce the differences
    3. if the operator cannot be applied, set a new subgoal (a state where it can be applied)
    4. return to step 1
63
Q

Sussman Anomaly

A

in the process of achieving a new sub-goal, the planner might reverse/undo a goal it had already achieved

64
Q

STRIPS “Stanford Research Institute Problem Solver”

A
  • simple, reasonably expressive planning language
  • actions connect before and after world states
  • used by SHAKEY the robot
  • -> the simplicity of the states SHAKEY can represent limits its problem-solving capabilities
65
Q

Frame Problem

A

the only effects an operator has on the world are those specified by its ‘add’ and ‘delete’ lists
–> a hard assumption to make in real-world planning; one can never be certain of the full extent of an action’s effects

66
Q

Constraint Satisfaction Problems (CSPs)

A

states and goal test conform to a standard, structured and simple representation:
- a set of variables
- a set of constraints on the values the variables may take
- a goal: an assignment of values satisfying all constraints

67
Q

Neural Networks

A
  • alternative to traditional processing models
  • aka PDP (parallel distributed processing) or connectionist model
  • biological plausibility
    (unit/node = neuron)
68
Q

Key components of a unit

A
  1. set of synapses (INPUTS) brings activations from other neurons
  2. processing unit sums up inputs, applies activation function
  3. output line transmits result to other neurons
69
Q

Units

A

activation: activity of unit
weight: strength of connection between two units
learning: changing weight

70
Q

Total input of units

A

sum of activation of j times the weight between i and j
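In symbols, the net input to unit i is net_i = Σ_j a_j · w_ij; a one-line Python version (the example numbers are mine):

```python
def net_input(activations, weights):
    """Total input to a unit: sum over j of a_j * w_ij."""
    return sum(a * w for a, w in zip(activations, weights))

print(net_input([1.0, 0.5, 0.0], [0.2, 0.4, 0.9]))  # 1.0*0.2 + 0.5*0.4 + 0.0*0.9 = 0.4
```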

71
Q

Perceptron

A

one layer of input neurons feeding one output layer of McCulloch-Pitts neurons with full connectivity
- can compute any linearly separable function

72
Q

Boolean AND

A

sum > 1.5 to activate

73
Q

Boolean OR

A

sum > 0.5 to activate

74
Q

Boolean NOT

A

sum > -0.5 to activate
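The three Boolean cards can be realized with a single threshold unit; the weights below (+1 on the AND/OR inputs, -1 on the NOT input) are the standard construction, stated as an assumption since the cards only give the thresholds:

```python
def unit(inputs, weights, threshold):
    """McCulloch-Pitts style unit: output 1 when the weighted
    sum of the inputs exceeds the threshold."""
    total = sum(a * w for a, w in zip(inputs, weights))
    return 1 if total > threshold else 0

AND = lambda p, q: unit([p, q], [1, 1], 1.5)   # fires only on 1,1
OR  = lambda p, q: unit([p, q], [1, 1], 0.5)   # fires unless 0,0
NOT = lambda p:    unit([p], [-1], -0.5)       # inverts its input

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```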

75
Q

Multi-layered networks

A

activation flows from input units –> hidden units –> output units
weights determine how input patterns are mapped to output patterns

76
Q

Backpropagation

A

common weight-adjustment algorithm

77
Q

Two learning methods of Hebbian learning

A
  • unsupervised: network tries to discern regularities in input patterns
  • supervised: input is associated with correct output and network’s job is to learn this input-output mapping (ex: NETtalk)
78
Q

localist representation

A

each unit represents one item (ex: phoneme outputs in NETtalk)

79
Q

distributed representation

A

each unit involved in representation of multiple items

  • efficient
  • even if some units don’t work, info is still preserved
80
Q

Catastrophic Interference

A

training for new rule increases error on old rule

81
Q

Concurrent training

A

all items to be learned included in single training set

82
Q

Sequential training

A

first learn one rule then the next

–> catastrophic interference

83
Q

Deep neural networks

A
  • many hidden layers
  • capture more regularities in data and generalize better
  • activity can flow from input to output and vice-versa
84
Q

Generative Adversarial Net (GAN)

A

generator: learns to generate plausible data
discriminator: learns to distinguish generator’s fake data from real data

85
Q

Can a general purpose algorithm outperform specialized algorithms in a task?

A

No

86
Q

AI set

A

set of tasks that people and animals are good at

87
Q

Machine Learning

A

study of algorithms that:

  • improve their performance P
  • at some task T
  • with experience E
88
Q

Traditional programming vs Machine learning

A

traditional: data + program –> computer –> output

machine learning: data + output –> computer –> program

89
Q

Tasks best solved by machine learning

A
  • recognizing patterns
  • generating patterns
  • recognizing anomalies
  • prediction/recommendation
90
Q

overfitting problem (regression)

A

perfect fit to sample but not good for making predictions

basically fitting noise

91
Q

test set method

A
  1. randomly choose 30% of the data to be the test set
  2. the remainder is the training set
  3. perform regression on the training set
  4. estimate future performance with the test set
  * imposes a penalty for unnecessary complexity
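The procedure above can be sketched with a random split (the 30% fraction follows the card; the seed and toy data are mine):

```python
import random

def train_test_split(data, test_frac=0.3, seed=0):
    """Hold out a random test_frac of the data as the test set;
    train only on the remainder."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_frac)
    return shuffled[cut:], shuffled[:cut]  # training set, test set

train, test = train_test_split(list(range(10)))
print(len(train), len(test))  # 7 3
```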
92
Q

Classification

A
learn f(x) to predict y given x
y is categorical
model learns criterion
93
Q

linear classifiers

A
  • use a linear function to separate classes
  • does not always work well (classes may not be linearly separable)

94
Q

k-Nearest neighbours (KNN)

A

test items assigned to class most common among k nearest neighbours
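A minimal 1-D k-nearest-neighbours classifier (the data, k, and distance measure are illustrative assumptions):

```python
from collections import Counter

def knn_predict(train, x, k=3):
    """Assign x to the class most common among its k nearest
    training points (1-D absolute distance here)."""
    nearest = sorted(train, key=lambda pt: abs(pt[0] - x))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [(1.0, "a"), (1.2, "a"), (3.0, "b"), (3.3, "b"), (5.0, "b")]
print(knn_predict(train, 1.1))  # 'a': two of the three nearest points are 'a'
```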

95
Q

Clustering

A

given x1, x2, …
output hidden structure underlying x’s
ex: grouping individuals by genetic similarity
number of clusters dictated by K
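A toy 1-D k-means sketch of clustering (the algorithm choice, data, and seed are my assumptions; the card doesn’t name a specific method):

```python
import random

def kmeans_1d(xs, k, iters=20, seed=0):
    """Alternate between assigning points to the nearest centroid
    and recomputing each centroid as its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(xs, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centroids[j]))
            clusters[i].append(x)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups, around 1 and around 5:
print(kmeans_1d([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], k=2))
```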

96
Q

K

A

too big: creates artificial boundaries within real data clusters
too small: disjoint groups of data are forced together

97
Q

Dimensionality reduction

A
  • determine source signals given only mixture

ex: blind source separation aka cocktail problem

98
Q

Assumptions in dimensionality reduction

A
  • source signals are statistically independent
  • –> independent component analysis (ICA) method

99
Q

Spurious correlation

A

to avoid drawing inference from spurious correlation –> employ test set method

100
Q

Machine Learning Problems

A

              SUPERVISED                      UNSUPERVISED
DISCRETE      classification/categorization   clustering
CONTINUOUS    regression                      dimensionality reduction