Missing content from Weeks 1-4 Flashcards

1
Q

Analogue representations

A

are the opposite of symbolic representations – analogue representations share properties with the thing they represent (e.g., mental images).

2
Q

Donders’ choice reaction-time experiment

A

showed that mental responses (in this case, perceiving the light & deciding which button to push) cannot be measured directly but must be inferred from behaviour

3
Q

Structuralism

A

suggests that our overall experience is determined by combining basic elements of experience called sensations.

4
Q

Propositional representations/propositional logic system:

A

A symbolic code to express the meaning of concepts, and the relationships between concepts.
- For example, the image of a cat on a rug can be expressed in natural English as “the cat is on the rug”. The underlying propositional representation of the relationship between the concepts can be expressed as a proposition – on(cat, rug) – where ‘on’ is the predicate stating the relationship between two semantic arguments.
- A predicate expresses the relationship between elements. An argument expresses the subject/object elements.
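The on(cat, rug) example can be sketched as data: a proposition is just a predicate name plus a tuple of arguments. A minimal sketch; the helper names (`proposition`, `describe`) are hypothetical, not part of any formal logic system.

```python
def proposition(predicate, *arguments):
    """Build a proposition such as on(cat, rug): a predicate plus its arguments."""
    return (predicate, arguments)

def describe(prop):
    """Render a two-argument proposition back into rough natural language."""
    predicate, (subject, obj) = prop
    return f"the {subject} is {predicate} the {obj}"

cat_on_rug = proposition("on", "cat", "rug")
print(cat_on_rug)            # ('on', ('cat', 'rug'))
print(describe(cat_on_rug))  # the cat is on the rug
```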

5
Q

Propositional representations/propositional logic system drawback

A

Drawbacks of this system include: it doesn’t specify how the abstract symbols used (e.g. ‘C’ for Canary) came to be associated with their meanings; it doesn’t specify how operations came to reflect their specific functions; and it relies on prior knowledge for inference & generalisation.

6
Q

Connectionism/Connectionist system:

A

- Assumes a system of nodes that represent categories/concepts/abstract representations that we want to ascribe features to.
- Uses a system of fully crossed weights connecting the input to the output.
- Input features are multiplied through their weights and activate each of the outputs according to the strength of those (initially random) connections.
- Weights are updated using the error signal (i.e. feedback), such that over the course of learning the features of the input (e.g. the Canary being described) gradually become more and more associated exclusively with the correct output (e.g. a ‘C’ symbol for Canary).
- In this way, the connectionist model learns the representation for each collection of features.
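The error-driven weight update above can be sketched in a few lines. The feature names, the single “Canary” output unit, and the learning rate are illustrative assumptions, not from the card:

```python
import random

features = ["yellow", "sings", "has_wings"]   # input nodes
random.seed(0)
# Fully crossed weights from each input feature to one output unit,
# starting at random values.
weights = {f: random.uniform(-0.5, 0.5) for f in features}
rate = 0.1

def output(active):
    # Active inputs are multiplied through their weights and summed.
    return sum(weights[f] for f in active)

# Train: the "Canary" output should be 1.0 whenever all three features are on.
active = features
target = 1.0
for _ in range(100):
    error = target - output(active)   # error signal (feedback)
    for f in active:
        weights[f] += rate * error    # strengthen/weaken each connection

print(round(output(active), 2))  # ≈ 1.0 after learning
```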

7
Q

Perceptrons (Connectionism/Connectionist system):

A

- The idea that a layer of input units (representing stimulation in the environment) is connected to output units through a set of weights. Whenever the input units are turned on, their activations are multiplied by the weights and then integrated to form an output representation.
- This approach is different from propositional logic because knowledge is built from the bottom up – it is contained in the connections & associations between inputs and outputs.
- However, perceptrons cannot solve non-linearly separable problems (e.g. XOR), indicating that they are not a very accurate model of human thought.
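That limitation can be demonstrated directly: a single-layer perceptron trained with the standard perceptron rule learns linearly separable AND but never converges on XOR. The learning rate and epoch count below are illustrative assumptions:

```python
def train_perceptron(cases, epochs=100, rate=0.1):
    """Train one output unit on (input pair, target) cases; return a classifier."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in cases:
            out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
            err = target - out            # error signal drives the update
            w[0] += rate * err * x1
            w[1] += rate * err * x2
            bias += rate * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(p, int(all(p))) for p in inputs]          # linearly separable
XOR = [(p, int(p[0] != p[1])) for p in inputs]    # not linearly separable

learned_and = train_perceptron(AND)
learned_xor = train_perceptron(XOR)
print([learned_and(*p) for p in inputs])  # [0, 0, 0, 1] — AND is learned
print([learned_xor(*p) for p in inputs])  # never matches [0, 1, 1, 0]
```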

8
Q

Neural networks of connectionism

A

Can learn some problems where outputs cross over in a non-linearly separable way.
- The problem of learning a non-linearly separable problem was largely solved by adding more layers (e.g. a hidden layer) to the neural network.
- However, models of this type still had problems demonstrating human-like performance.
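A sketch of how the extra layer helps: with one hidden layer, XOR becomes representable. The weights below are hand-chosen for illustration rather than learned, to keep the example short:

```python
def step(x):
    """Threshold activation: fire (1) when net input exceeds zero."""
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    # Hidden unit a fires for "x1 OR x2"; hidden unit b fires for "x1 AND x2".
    a = step(x1 + x2 - 0.5)
    b = step(x1 + x2 - 1.5)
    # Output fires for "a AND NOT b" — i.e. exactly one input is on.
    return step(a - b - 0.5)

print([xor_net(*p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```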

9
Q

Deep networks:

A

- Deep learning/neural network models can easily classify images and enable language translation in many respects.
- The primary difference between perceptrons and deep learning networks is that a deep learning network has many, many layers (often hidden) through which information is fed, and is trained on very large data sets (and then tested on smaller data sets).
- Models like this have been able to learn how to play video games, and have had success performing tasks once considered to be solely within the domain of humans. Modern AI is the application of deep learning networks like this with many layers.
- These models no longer rely on a specification of features (e.g. colour, size). Instead, a pixelised image is input directly into the network and passes through a series of layers bearing some similarity to the human visual cortex. This decomposes the image into features based on combinations of features, and these features can then be directly associated with outputs (e.g. the term ‘Canary’).
- Representations made using deep networks can be embedded into an even larger network (e.g. ‘The canary next to the doctor is singing’).

10
Q

ACT-R (Production Systems Model):

A

- Stands for Adaptive Control of Thought – Rational model
- Built on previous models (ACT, ACT*)
- A production systems framework that acts like its own functional programming language
- A complete architecture model of cognition
- Theories of specific tasks can be built into the ACT-R framework
- Using the production systems model, mental rotation takes place by decomposing the figure into overlapping parts, and then rotating these parts via production rules to confirm or disconfirm alignment.
- Because the complexity of an object is correlated with the number of sub-parts (more complex objects require more sub-parts to consider), this model can explain why eye-tracking data also correlates with response time: fixations scale with the number of parts.
- According to this model, people encode the rotated image, store it in working memory, encode the target image, and then, while maintaining visual attention on the target image, execute a process which incrementally rotates the image towards the target image by a constant amount. After each rotation step, the amount of disparity between the two images is reviewed to determine whether they are close enough to stop the process.
- The implementation of the model also predicts this increase in response time as a function of the difference in rotation between the two objects.
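The incremental rotate-and-check loop described above can be sketched as follows. The step size, disparity threshold, and per-step time cost are illustrative assumptions, not ACT-R parameters:

```python
STEP_DEGREES = 15   # constant rotation per production firing
THRESHOLD = 10      # disparity considered "sufficiently close" to stop
MS_PER_STEP = 50    # hypothetical time cost of one rotation step

def rotate_to_match(current_angle, target_angle):
    """Rotate in constant steps until disparity falls below threshold;
    return the predicted response-time component."""
    steps = 0
    while True:
        disparity = abs(target_angle - current_angle)
        if disparity <= THRESHOLD:
            return steps * MS_PER_STEP
        # One production firing: rotate a constant amount toward the target.
        current_angle += STEP_DEGREES if target_angle > current_angle else -STEP_DEGREES
        steps += 1

# Predicted response time grows with the angular difference, as the model claims.
print(rotate_to_match(0, 30), rotate_to_match(0, 120))  # 100 400
```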
