semantic networks Flashcards

1
Q

what is meant by the connectionist approach

A

we learn about the world through interactions between units whose connection weights are modified with experience via a prediction-error mechanism

the connections are initially random but are modified by the active inputs (an item and its relation), which change the weights of a hidden layer, which in turn adjusts the output (see the sketch below)

it is a slow learning process

the pattern of activity shows what the model has learnt
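
a minimal sketch of what this might look like, assuming a Rumelhart-style network (one-hot item + relation in, predicted attributes out) trained by backpropagating the prediction error; the items, relations and attributes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

items = ["canary", "salmon"]            # one-hot item inputs (invented)
relations = ["can", "has"]              # one-hot relation inputs (invented)
attributes = ["fly", "swim", "wings", "scales"]

# target facts: (item, relation) -> attribute pattern
targets = {
    ("canary", "can"): [1, 0, 0, 0],
    ("canary", "has"): [0, 0, 1, 0],
    ("salmon", "can"): [0, 1, 0, 0],
    ("salmon", "has"): [0, 0, 0, 1],
}

n_in = len(items) + len(relations)
n_hidden = 8
# connections start as small random weights: meaningless activity patterns
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, len(attributes)))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for epoch in range(5000):               # learning is slow: many exposures
    for (item, rel), t in targets.items():
        x = np.zeros(n_in)
        x[items.index(item)] = 1.0
        x[len(items) + relations.index(rel)] = 1.0
        h = sigmoid(x @ W1)             # hidden activity = the representation
        y = sigmoid(h @ W2)             # predicted attributes
        err = y - np.array(t)           # prediction error drives every change
        dy = err * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= lr * np.outer(h, dy)
        W1 -= lr * np.outer(x, dh)

x = np.zeros(n_in); x[0] = 1.0; x[2] = 1.0          # canary + can
print(np.round(sigmoid(sigmoid(x @ W1) @ W2), 2))   # ~[1, 0, 0, 0]
```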

2
Q

what does a model of semantic knowledge need to be able to explain

A

the hierarchical structure of knowledge, typicality effects and basic-level naming effects

3
Q

how are concepts represented

A

activity patterns in hidden layers:
relative weights are modulated through experience by prediction-error updates, producing an activity pattern that is distributed across the network and specific to an item

similar concepts, similar patterns: the extent to which activity patterns are similar reflects the similarity of the objects/concepts, as well as their differences

this can be represented through multidimensional scaling in an internal state space, where the location of an item corresponds to its activity pattern and its distance from another item corresponds to the similarity of their activity patterns (see the sketch below)

within broad areas there are gradually more subtle refinements
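
a minimal sketch of the state-space idea: classical multidimensional scaling applied to invented stand-in activity patterns, so items with similar patterns land near each other:

```python
import numpy as np

patterns = {                       # item -> hidden activity pattern (assumed)
    "canary": [0.9, 0.8, 0.1, 0.2],
    "robin":  [0.8, 0.9, 0.2, 0.1],
    "salmon": [0.1, 0.2, 0.9, 0.8],
    "trout":  [0.2, 0.1, 0.8, 0.9],
}
names = list(patterns)
X = np.array([patterns[n] for n in names])

# squared pairwise distances between activity patterns
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# classical MDS: double-centre, then keep the two largest eigenvectors
n = len(names)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J
vals, vecs = np.linalg.eigh(B)
coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

for name, (x, y) in zip(names, coords):
    print(f"{name:7s} ({x:+.2f}, {y:+.2f})")   # birds cluster apart from fish
```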

4
Q

how are concepts structured?

A

hierarchical structure in development and loss:

children progress from broad categories to more refined ones as they learn about the world

this is because we start from a state of zero knowledge, with small random weights producing meaningless activity patterns; these are updated with exposure, first capturing broad distinct/shared properties that compound across experience, and eventually finer-grained distinctions, producing an overall hierarchical structure

the reverse pattern is seen in semantic dementia, where fine detail is lost first

this is because noise adds small random values to each activity pattern, disrupting the locations of specific representations; the effect is differential: detailed information is lost while general information is retained (see the sketch below)

prototypes and basic-level names:

locations of activation fall within a category, varying in distance from a central tendency (the prototype) based on their similarity to it

parents use basic-level names more, so they are higher frequency; more exposure strengthens weights, so those connections are stronger and relatively robust to damage
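
a minimal sketch of the damage account, with invented patterns: as noise grows it swamps the small within-category distances (fine detail) before the large between-category ones (general structure):

```python
import numpy as np

rng = np.random.default_rng(1)

canary = np.array([0.9, 0.8, 0.1, 0.2])
robin  = np.array([0.8, 0.9, 0.2, 0.1])   # close to canary: fine distinction
salmon = np.array([0.1, 0.2, 0.9, 0.8])   # far from both: broad distinction

for noise in [0.0, 0.1, 0.3]:
    # perturb each representation, as damage might
    c = canary + rng.normal(0, noise, 4)
    r = robin + rng.normal(0, noise, 4)
    s = salmon + rng.normal(0, noise, 4)
    fine = np.linalg.norm(c - r)     # bird vs bird
    broad = np.linalg.norm(c - s)    # bird vs fish
    print(f"noise={noise:.1f}  fine={fine:.2f}  broad={broad:.2f}")
# as noise grows, the small fine-grained distance is swamped long before
# the broad one: specific distinctions go first, general structure survives
```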

5
Q

what is the computational theory of semantic knowledge

A

Effects of coherent covariation (what drives the learning of semantics)
Regular co-occurrence of sets of properties across different objects
High-level statistics that are true of the world: consistently shared properties (see the sketch below)

Concept similarity in multidimensional space (how we represent semantics)
Similarity between distributed activation patterns
Dimensions: the units in the network provide the flexibility to represent items in terms of their similarities and differences

Knowledge structure via learning (the model of learning)
An iterative process of error-driven statistical learning
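
a minimal sketch of coherent covariation, on an invented item-property matrix: properties that consistently co-occur dominate the leading components of the statistics, while an idiosyncratic property (here, only the canary sings) shows up as a much smaller component:

```python
import numpy as np

#                 fly swim wings scales feathers gills sing
facts = np.array([[1,  0,   1,    0,     1,      0,    1],   # canary
                  [1,  0,   1,    0,     1,      0,    0],   # robin
                  [0,  1,   0,    1,     0,      1,    0],   # salmon
                  [0,  1,   0,    1,     0,      1,    0]])  # trout

# SVD of the centred item-property matrix: the first (large) component
# captures the bird/fish split, because those properties covary coherently;
# the lone "sing" property only contributes a small later component
U, S, Vt = np.linalg.svd(facts - facts.mean(0))
print("singular values:", np.round(S, 2))
print("item loadings (first two components):")
print(np.round(U[:, :2], 2))
```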

6
Q

what are the strengths of the model

A

Local storage (what the model avoids)
An independent store accessed by an index, e.g. a dictionary
A common feature of psychological theories
Uni-directional access: you can only find a meaning from a word, not a word from its meaning
Not an accurate representation of how we actually store memory

Fault tolerance
Damage to a single unit is not enough to disrupt the system, unlike in local storage systems
Consistent with the graceful degradation seen in dementia

Content-addressable storage
Information is accessed by its content
Connections are formed from associations of objects with their features
Association works in both directions
Any feature of a memory can be accessed via any part of its corresponding stored knowledge
E.g. multiple phrases will activate the concept of 'covid' (see the sketch below)

Rules and exceptions
Rich, graded representations capture both similarities and differences, e.g. in language
Rules and exceptions motivated dual-route theories in psychology
Connectionist models provide an alternative: a single route that can learn both rules and exceptions within a single learning mechanism
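
a minimal sketch of content addressability, using a Hopfield-style associative memory (a stand-in, not a model from this course): a partial, corrupted cue settles into the full stored pattern, so the memory is reached through its content, in either direction:

```python
import numpy as np

A = np.array([1, 1, 1, 1, -1, -1, -1, -1])   # concept A's feature pattern
B = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # concept B's feature pattern

# Hebbian storage: connections reflect how features co-occur
W = np.outer(A, A) + np.outer(B, B)
np.fill_diagonal(W, 0)

cue = A.copy()
cue[0] = -1                      # partial/corrupted cue: one feature wrong
state = cue
for _ in range(5):               # settle toward the nearest stored pattern
    state = np.sign(W @ state)
print(np.array_equal(state, A))  # True: full content retrieved from a fragment
```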

7
Q

what are the weaknesses

A

The model is oversimplified
Items and features are built in: what does it mean to have the concept of a bird or a feathery thing in the first place?
It is told the right answer: it always compares its output with a target to compute the prediction error; is this realistic?
There is more to concepts than similarity, e.g. causality: feathers and wings → flies
How is knowledge actually used (can the account extend beyond how knowledge is stored)?
How are concepts combined?
How do we draw inferences?
How do we spot similarities?
How do we draw analogies?

The learning methods aren't plausible (AI argument)
Backpropagation isn't biologically realistic: it can't be implemented in a brain, because the error signal is not localised to the connection whose weight it supposedly adjusts
Too slow (Marcus & Keil)

Psychological arguments
Too fragile (McCloskey, 1991): people learn new things without overwriting old knowledge, whereas these networks suffer catastrophic interference (see the sketch after this list)
This led to the development of complementary learning systems
They're not really cognitive models: they only learn simple statistics (Borsboom), and a statistical learning model isn't a psychological model
Too underconstrained (Massaro, 1988)
Cognition is rule-based (Fodor & Pylyshyn, 1988): connectionist models don't learn rules and are not compatible with language
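
a minimal sketch of the catastrophic-interference point, on an invented two-item task: because both items share the same weights, training on the new item alone overwrites what stored the old one:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(0, 0.1, (4, 2))           # one shared weight matrix

old = (np.array([1.0, 1.0, 0.0, 0.0]), np.array([1.0, 0.0]))
new = (np.array([1.0, 0.0, 1.0, 0.0]), np.array([0.0, 1.0]))

def train(pairs, steps=200, lr=0.1):
    global W
    for _ in range(steps):
        for x, t in pairs:
            err = x @ W - t              # prediction error on this item
            W -= lr * np.outer(x, err)   # gradient step on the shared weights

def err_on(x, t):
    return float(np.abs(x @ W - t).sum())

train([old])                             # phase 1: learn the old item
print("old-item error after phase 1:", round(err_on(*old), 3))
train([new])                             # phase 2: train on the new item only
print("old-item error after phase 2:", round(err_on(*old), 3))
# the second value is much larger: learning "new" overwrote "old"
```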
