Prelim 1 Flashcards
What is a “homunculus”?
A “homunculus” is a little human. In cognitive science, it refers to an argument that accounts for a phenomenon in terms of the very phenomenon that it is supposed to explain, which results in an infinite regress.
Example #1:
Bert: How do eyes project an image to your brain?
Ernie: Think of it as a little guy in your brain watching the movie projected by your eyes.
Bert: Ok, but what is happening in the little guy in your head’s brain?
Ernie: Well, think of it as a little guy in his brain watching a movie…
What is the problem with appealing to a homunculus, as an explanation of mental faculties?
Unilluminating: it simply appeals to another homunculus to explain the capacity, which leads to an infinite regress.
How do functionalism and decompositionism each contribute to a better explanation than appealing to a homunculus?
Functionalism -> Mental states are constituted by their causal (functional) role, not by their material make-up
Ex: what makes something memory (say) is the function it has in the overall system, not that it is the hippocampus
So if two systems play the same functional role, they count as having the same mental capacity
Decompositionism -> Each mental capacity is built up out of other, less intelligent capacities (Reigning solution to the problem of the homunculus)
ex -> language comprehension
Thus, both views provide an alternative to the homunculus: a capacity is characterized by its functional role and decomposed into less intelligent sub-capacities, instead of being attributed to a little intelligent agent inside the head
example of decomposition w/ language comprehension
To understand a single sentence, you have to analyze its grammar and understand each word
Everything can be reduced
Classical reductionism
Special sciences capture truths that can ultimately be restated in terms of lower level sciences, terminating at physics.
Ex: Psychology reduces to neuroscience just in case the laws of psychology are derivable from those of neuroscience.
Mind-brain identity theory (MBIT)
The general claim of MBIT is that for every type of mental state (e.g. episodic memory), there will be a corresponding type of brain state (some configuration of neurons with certain pattern of activity or organization)
Types of mental states = types of brain states. (E.g. memories = certain types of neural structures, which can be found in human brains)
Functionalism
Mental states are constituted by their causal (functional) role, not by their material make-up; the basic idea of functionalism is that what makes something memory (say) is the function it has in the overall system
Rejection of MBIT
Multiple realizability
When the same type of thing at the higher level (e.g. memory) can be implemented or “realized” in multiple ways at the lower level (e.g. in mammalian brains and in avian brains)
Example:
Does every creature in the universe that has memory have to have a hippocampus?
At a minimum, it seems rash to conclude that birds don’t have memory just because they don’t have the same brain structure we do.
More strongly, one might think that it's obvious that birds have memory, so any theory that identifies memory with a particular brain region is a bad theory.
Context sensitivity
Two individuals of the same type at the lower level can be individuals of different types at a higher level, depending on the context.
The same kind of physical object can be a second-hand gear in one context and a minute-hand gear in another context.
Just having a physical description of the gear, without the context, doesn’t tell you whether the gear is a minute-hand gear.
Why might functionalism make reductionism unlikely to be true?
In psychology, we want to characterize mechanisms by the functions they serve.
As a result, according to anti-reductionism, we are bound to end up with physically dissimilar mechanisms doing the same job (as in the case of memory in birds/humans).
But the anti-reductionist wager is that just as there is no simple one-one mapping between the kinds of molecular biology and the kinds of biochemistry, we can expect no simple one-one mapping between the categories of psychology and the categories of neuroscience.
Validity
An argument is valid if and only if it is impossible for its premises to be true and its conclusion false
Truth gets preserved – with a valid argument you never go from true premises to false conclusions
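The definition above can be checked mechanically: an argument is valid iff no assignment of truth values makes all the premises true and the conclusion false. A minimal sketch (the function name and the lambda encoding are my own, not from the lecture), using modus ponens and the fallacy of affirming the consequent as test cases:

```python
from itertools import product

def valid(premises, conclusion, variables):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # found a counterexample assignment
    return True

# Modus ponens: P, P -> Q, therefore Q (valid)
mp = [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]]
print(valid(mp, lambda e: e["Q"], ["P", "Q"]))   # True

# Affirming the consequent: P -> Q, Q, therefore P (invalid)
ac = [lambda e: (not e["P"]) or e["Q"], lambda e: e["Q"]]
print(valid(ac, lambda e: e["P"], ["P", "Q"]))   # False
```

Note that validity says nothing about whether the premises are actually true; that extra requirement is what soundness adds.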
Is this a sound and/or valid argument?
Sound + valid argument (true premises)
Is this argument sound and/or valid?
Truth preserving, so valid, even though its premises and conclusion are false; since soundness also requires true premises, it is not sound
How does the computer model provide an explanation for how a physical system could be rational?
The mind can operate in a truth preserving way because:
- beliefs are represented in symbol sequences
- like a computer, these symbols have physical properties that the mind/brain manipulates into other symbols (that also have physical properties of course)
- the physical manipulations of the symbols are arranged so that they are truth preserving
Naturalism
How can a physical system be organized so that its causal processes ensure that if it believes something true, those processes will lead to other true beliefs (and not false ones)?
How do Turing machines work?
The symbols are physical objects with physical characteristics (size, shape, etc.). But the symbols can be interpreted as representing various things.
The rules that govern the physical system (the syntax) are designed such that our interpretation of the symbols (the semantics) will be truth preserving.
What the head does is determined by:
1. the token it finds in the square it's scanning (e.g., 1, 0)
2. the internal state it is currently in (q0, q1, …)
These factors determine:
1. what token to write on the present square
2. the motion of the head (left, right, halt)
3. what internal state (q0, q1, …) to be in for the next step
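The two determinants and three effects listed above fit naturally into a lookup table. A minimal sketch (the rule format, blank symbol, and the bit-inverting example machine are my own illustrative choices, not from the lecture):

```python
def run_turing_machine(tape, rules, state="q0", blank="_"):
    """rules maps (state, scanned token) -> (token to write, move, next state),
    where move is -1 (left), +1 (right), or 0 (halt)."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten squares hold blank
    head = 0
    while True:
        token = cells.get(head, blank)
        write, move, state = rules[(state, token)]
        cells[head] = write
        if move == 0:
            break
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: invert every bit, halting at the first blank square.
rules = {
    ("q0", "1"): ("0", +1, "q0"),
    ("q0", "0"): ("1", +1, "q0"),
    ("q0", "_"): ("_", 0, "halt"),
}
print(run_turing_machine("1011", rules))  # "0100"
```

The rules mention only physical shapes of tokens (syntax); it is our interpretation of the strings as binary numbers (semantics) that makes the operation meaningful.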
How can the computer model of the mind explain the productivity of thought?
productivity of thought - there is an infinite number of thoughts that we are capable of thinking
As we’ve seen, computers are symbol manipulating devices. And there are rules that govern how symbols can be manipulated (e.g., replace “Luke” with “someone”)
This paves the way for productivity. For computers can operate on recursive rules
Roughly, a rule is recursive if the category in the "if" part of the rule (e.g., NOUN) also appears in the "then" part of the rule.
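A recursive rule of this kind can be sketched directly. The toy grammar below is my own illustration (not a rule from the lecture): the category NP appears on both the "if" and the "then" side, so applying the rule repeatedly yields unboundedly many distinct sentences:

```python
# Toy recursive grammar (hypothetical rules):
#   S  -> NP "sleeps"
#   NP -> "someone" | "the friend of " NP   <- NP recurs inside its own expansion

def expand_np(depth):
    """Each recursion grows the phrase; unbounded depth gives
    infinitely many distinct noun phrases."""
    if depth == 0:
        return "someone"
    return "the friend of " + expand_np(depth - 1)

def sentence(depth):
    return expand_np(depth) + " sleeps"

print(sentence(0))  # someone sleeps
print(sentence(2))  # the friend of the friend of someone sleeps
```

Since `depth` can be any natural number, a finite rule set generates an infinite set of sentences, which is exactly the productivity the card describes.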
What are the elements of neural networks?
Connectionism (emphasizes fact that the knowledge is contained in the connections)
Neural nets (emphasizes similarity to actual neural networks)
Parallel distributed processing (emphasizes fact that much of the processing is not serial but simultaneous)
+ Deep learning: when there are many hidden (inner) layers
McCulloch & Pitts neuron (pre-wired neural nets)
– Inputs binary (1, 0)
– Each input multiplied by associated weight (e.g., x1 * w1)
– Sum the products of each pair of input * weight (xi * wi)
– Threshold: Neuron fires if that sum exceeds threshold (threshold number can be fraction)
– Output binary: If threshold is exceeded, 1; otherwise, 0
Can capture basic logical relations (AND, OR, NOT) using these kinds of neurons.
AND gate
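The M-P recipe above (multiply inputs by weights, sum, compare to threshold) can be written out in a few lines. The particular weights and threshold below are one hand-picked choice that works, not the only one:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: weighted sum of binary inputs, binary output."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# AND gate: weights (1, 1), threshold 1.5 -> fires only when both inputs are on.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, mp_neuron((x1, x2), (1, 1), 1.5))  # 1 only for (1, 1)
```

Lowering the threshold to 0.5 with the same weights turns the unit into an OR gate, and a single input with weight -1 and threshold -0.5 gives NOT, covering the basic logical relations mentioned above.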
Rosenblatt’s perceptron (learning neural network)
Adds the concept of changing (learnable) weights to the M-P model
This requires calculating the error:
1. Error = target (1) – actual output (0). 1 – 0 = 1.
Updating also requires specifying a learning rate “alpha”
- Alpha = how much are we going to change the weights in response to the error. We’ll say .5.
New weight = old weight + ([learning rate*error] * value of input) =
w1(new) = w1(old) + ([alpha * (target-output)] * x1)
Do the same process to adjust the weight for x2
Repeat this process for each row of the truth table until you get your desired output
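The update rule and the repeat-until-correct loop above can be sketched as follows. The starting weights, fixed threshold of 0.5, and alpha of 0.5 are assumptions for illustration; training on the AND truth table:

```python
def train_perceptron(data, alpha=0.5, threshold=0.5, epochs=20):
    """Perceptron rule from the card:
    w_new = w_old + (alpha * (target - output)) * x"""
    w = [0.0, 0.0]
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in data:
            output = 1 if x1 * w[0] + x2 * w[1] > threshold else 0
            error = target - output
            if error:
                errors += 1
                w[0] += alpha * error * x1   # adjust weight for x1
                w[1] += alpha * error * x2   # same process for x2
        if errors == 0:   # every row of the truth table is now correct
            break
    return w

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(AND))  # [0.5, 0.5] with these settings
```

With these settings the only error occurs on the (1, 1) row, so both weights get bumped up by 0.5 once and the network then classifies all four rows correctly.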
limitations to perceptron model
The perceptron is just a single layer, so it can only compute certain logic gates (AND, OR, NOT); it cannot compute exclusive-or (XOR). Way to overcome: use a multi-layer network.
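A two-layer network does compute XOR. The sketch below uses hand-picked weights rather than learned ones (my own choice of numbers): a hidden OR unit and a hidden AND unit feed an output unit that fires for "OR but not AND", which is exactly exclusive-or:

```python
def unit(inputs, weights, threshold):
    """Threshold unit, as in the M-P model."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) > threshold else 0

def xor(x1, x2):
    h_or  = unit((x1, x2), (1, 1), 0.5)   # hidden OR unit
    h_and = unit((x1, x2), (1, 1), 1.5)   # hidden AND unit
    # Output unit: fire if OR is on but AND is off ("or but not both").
    return unit((h_or, h_and), (1, -1), 0.5)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, xor(x1, x2))  # 1 only for (0, 1) and (1, 0)
```

No single threshold unit can do this, because XOR is not linearly separable; the hidden layer re-represents the input so that the output unit's problem becomes separable.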
Interactive Activation model
Builds on Hubel & Wiesel's discovery that certain neurons in visual cortex are highly specialized
Some selectively respond when the organism is presented with a vertical line, while others selectively respond to a diagonal line
Reproduces the word superiority effect: when a letter appears in a word, the active word unit sends extra activation back down to the letter unit
Because connections run in both directions between the word and letter levels, seeing the word feeds activation back to its letters, making them easier to identify
This shows how inputs can activate connections at multiple levels at once
What we can learn from the Interactive Activation model
- Neural net models can naturally capture graded performance (relative speed and accuracy in identifying a letter in a word) given multiple factors being processed in parallel.
- The model shows how the computation of a perceptual representation of an input (a word) might involve simultaneous processing at multiple levels of abstraction (feature, letter, word)
- Although one might have thought that people’s recognition of letters depended on rules about the correct spelling, McClelland says that the model shows that this need not be so
This is shown by the fact that non-words (i.e., strings that violate orthographic rules) can facilitate recognition
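The bidirectional letter-word dynamics above can be caricatured in a few lines. This is a toy sketch with made-up numbers and only two levels (it is not McClelland & Rumelhart's actual model or parameters), but it reproduces the qualitative word superiority effect: a letter presented inside a word ends up more active than the same letter presented alone, because the word unit feeds activation back down:

```python
# Hypothetical two-word lexicon; letters are shared across words.
words = {"WORK": ["W", "O", "R", "K"], "WORD": ["W", "O", "R", "D"]}

def settle(visible_letters, steps=5, rate=0.2):
    """Run a few cycles of bottom-up and top-down activation passing."""
    letters = {c: (1.0 if c in visible_letters else 0.0)
               for cs in words.values() for c in cs}
    word_act = {w: 0.0 for w in words}
    for _ in range(steps):
        # Bottom-up: letters excite the words that contain them.
        for w, cs in words.items():
            word_act[w] += rate * sum(letters[c] for c in cs) / len(cs)
        # Top-down: active words feed activation back to their letters.
        for w, cs in words.items():
            for c in cs:
                letters[c] += rate * word_act[w]
    return letters

in_word = settle({"W", "O", "R", "K"})["K"]   # K shown in a word context
alone   = settle({"K"})["K"]                  # K shown by itself
print(in_word > alone)  # True: word-level feedback boosts the letter
```

Because the processing is parallel and graded, the size of the boost varies continuously with how much word-level support a letter gets, which is the kind of graded performance the card mentions.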
General intelligence (learned) vs. instinct (innate)
General intelligence allows us to solve problems and draw on lots of different information
On this traditional view, animals don't have intelligence; they have instinct
“New Look” perception psychology
Top down – intelligence in perception; what you think affects what you perceive
Ex: Context affects perception: you perceive the word as "cat" even though the middle character, taken by itself, is not clearly the letter "a"
Domain Specificity
A module can be turned on only by certain types of inputs
EX 1 : emotions → a spider produces more fear than an infinity pool, even though, statistically, the infinity pool is more dangerous
EX 2 : facial recognition → face recognition is not turned on by an upside-down face, THUS face recognition can only be turned on by certain inputs → we need input in exactly the right format to turn it on
Mandatoriness
You cannot control whether a module applies to a given input: if the input fits, the module fires
→ this is the reverse of DS
EX 1 : Stroop effect: it's hard to name the font color (blue) of a color word, because reading the word is automatic and faster than naming the color
EX 2 : hollow face illusion → since the features of the face are in the right positions, you automatically see a (convex) face even though the mask is actually hollow
Informational Encapsulation
The processing within a module cannot be influenced by information from higher-level cognition. A module can only access information within its own database. It works all by itself.
→ Insulated both from what you want to be true, and from your background beliefs
EX 1 : visual illusions: you know the truth, but you can't stop yourself from seeing the illusion; background knowledge is not getting into the module and changing the way you see it
EX 2 : lexical access: bug (insect) vs. bug (microphone); you use context to figure out the meaning, yet the lexical-access module itself initially activates both
Explain the lexical disambiguation example
When a word has multiple meanings, you select the most plausible meaning based on the context
“Rumor had it that, for years, the government building had been plagued with problems. The
man was not surprised when he found several spiders, roaches, and other bugs in the corner
of his room."
You would assume bugs in this context refers to the insects, not a microphone
But under the hood, the process itself actually seems to be encapsulated. The fact that the right interpretation is rationally obvious does not get into the mechanism that accesses the lexicon: that mechanism activates both the rational interpretation and the terrible one.
The virtues of vertical (modular) and horizontal faculties
Vertical -> different modules (i.e. early vision module, lexical access module)
Horizontal -> domain-general central processes (e.g., decision-making) that draw on the outputs of the vertical modules
Marr’s 1st level
LEVEL 1: Computational/(ecological) description: the what and why
Guiding Questions: What is the goal of the system?
→ “what is the goal of the computation, why is it appropriate, and what is the logic of the strategy by which it can be carried out?”
What it does and why; why it does that instead of some other function → to answer, you can use the context: what makes the most sense?
EX. cash register, spider, and fly