Glasperlenspiel Flashcards

1
Q

ACT-R – auto-associator

A

Both have a sub-symbolic level concerning activations. ACT-R: spreading activation causes
chunks or production rules to be more readily available. Auto-associator: weights between nodes increase through learning.

2
Q

ACT-R – catastrophic degradation

A

If one production rule in ACT-R is faulty, the whole system is defective: it cannot find a solution or it makes a mistake, much like catastrophic degradation.

3
Q

ACT-R – competitive learning

A

ACT-R: the best-fitting production rule is chosen, activated, and strengthened.
Competitive learning: the associations of the node with the highest activity level (the node used most compared to the other nodes) are strengthened.
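
As a minimal illustration of the competitive-learning half of this comparison (the weight matrix, input vector, and learning rate below are hypothetical, not part of ACT-R), the most active node wins and only its weights are pulled towards the input:

    import numpy as np

    def competitive_step(W, x, lr=0.1):
        # W: weights (nodes x features), x: input pattern
        activity = W @ x                    # activity level of each node
        winner = np.argmax(activity)        # the most active node wins
        W[winner] += lr * (x - W[winner])   # only the winner's associations are strengthened
        return W, winner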

4
Q

ACT-R – connectionism

A

Both have a sub-symbolic level concerning activations. ACT-R: spreading activation causes
chunks or production rules to be more readily available. Connectionism: weights between nodes increase through learning.

5
Q

ACT-R – delta rule

A

Both use successive steps to move towards their goal state. ACT-R: solving sub-goals on
the goal stack incrementally reduces the difference between the current state and the goal state. Delta rule:
learning through error reduction is used to move from the current to the desired state.

6
Q

ACT-R – graceful degradation

A

If one production rule in ACT-R is faulty, the whole system is defective: it cannot find a solution
or it makes a mistake. This is the opposite of graceful degradation, in which the system can still
function although one of its parts is broken/ faulty/ not working as it should

7
Q

ACT-R – hippocampus

A

The hippocampus plays a major role in memory formation. ACT-R’s main components are
procedural and declarative memory

8
Q

ACT-R – human error

A

ACT-R can be used to compensate for human error (such as over-/ underestimation of
risks, biases, etc.) thanks to advanced processing beyond human capabilities. Some ACT-R models are
already better at disease diagnosis than humans

9
Q

ACT-R – secondary process cognition

A

ACT-R: works to achieve goals from the goal stack in a pragmatic way, rather than being
creative. Secondary process cognition: goal-orientated, focused, tries to logically solve
problems, rather than being creative

10
Q

ACT-R – swarm intelligence

A

Similarly to swarm intelligence, ACT-R is composed of many small units and production
rules to achieve a higher goal, and requires local interactions between members to achieve
higher intelligence. Simple, cue-based rules are followed to create complex behaviour

11
Q

Appraisal theory – automation

A

Appraisal theory: describes how emotions evolve through comparing individual needs to
external demands. Automation (and judging the level of automation required): compares
individual human/ corporate needs with the external demands of the working environment

12
Q

Auto-association – delta rule

A

Auto-associators aim to produce the same output as the input that they received. Learning
occurs through use of the delta rule, wherein the change in the weight of the connections is
determined by the difference between the desired and the obtained level of activation
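
A rough sketch of that weight change, assuming a single linear unit with made-up values for the input, target, and learning rate:

    import numpy as np

    def delta_rule_step(w, x, target, lr=0.1):
        obtained = w @ x             # obtained level of activation
        error = target - obtained    # desired minus obtained activation
        w += lr * error * x          # weight change proportional to that difference
        return w, error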

13
Q

Auto-association – pattern association

A

Auto-associators function through pattern association: one stimulus is associated with
another by presenting the two stimuli simultaneously
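
A sketch of how simultaneous presentation can be implemented with a simple Hebbian (outer-product) update; the stimulus vectors and learning rate are illustrative only:

    import numpy as np

    def associate(W, stimulus_a, stimulus_b, lr=0.1):
        # Presenting the two stimuli together strengthens every weight
        # between a pair of co-active units.
        W += lr * np.outer(stimulus_b, stimulus_a)
        return W

    # After learning, presenting stimulus_a alone retrieves an
    # approximation of stimulus_b: recalled = W @ stimulus_a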

14
Q

Auto-association – recurrent network

A

Auto-associators can receive feedback on the inputs/ outputs they are creating thanks to
recurrent networks, which send activation from output units back to input neurons within the
same layer

15
Q

Auto-associator – collective intelligence

A

Features of both include graceful degradation and fault tolerance: ability of a system to keep
functioning despite a part of it not functioning optimally

16
Q

Auto-associator – constraint satisfaction

A

Auto-associators trained with the delta rule change synaptic weights according to internal
input and external output: this creates the same effect as parallel constraint satisfaction, wherein
the goal is to satisfy internal and external input

17
Q

BCI – cognitive enhancer

A

Both are used to amplify human strengths. BCI: assists humans in developing motor capacities.
Cognitive enhancer: used to amplify pre-existing human skills

18
Q

BCI – dynamic systems

A

BCI is a type of dynamic system, controlled by pattern association, which turns brain activity
into semantic controls which the machine can read, essentially converting user intent into
device action

19
Q

BCI – human error

A

BCI is the use of brain signals to control external devices. By detecting an ERN (error-related
negativity) in the EEG signal that is used to control the external device, you can see that the
agent is about to make an error

20
Q

BCI – neuro-ergonomics

A

Neuro-ergonomics: degree of machine automation is varied depending on operator needs. The
same principle could be applied to BCI, wherein the device can be more or less automated
depending on the mental workload of the individual (and thus their increased probability of
human error)

21
Q

BCI – pattern association

A

BCI uses pattern association – learning by associating one stimulus with another by
presenting them at the same time – in order to be able to connect brain signals to device
movements

22
Q

Computational power – rule-based learning

A

The success of rule-based learning in machines depends on the computational power of the
system: increased computational power increases the likelihood of efficient learning

23
Q

Connectionism – multi-agent systems

A

Connectionism: nodes in a network working together to achieve a goal, through the parallel
activity of multiple processors. Multi-agent systems: behaviour is produced by the sum of the
contributions of its subsystems/ modules

24
Q

Creativity – competitive learning

A

Creativity: a process of blind variation and selective retention, wherein the “creative” (novel
and useful) idea is retained and developed further. Competitive learning: the best adapted
and most active node (the “winning node”) is strengthened while the other nodes’ connections are
weakened or forgotten

25
Q

CRUM – connectionism

A

CRUM: computational-representational understanding of mind, which assumes that the mind
has structures analogous to those in computer programs, which function by way of
connectionism: basic units passing information through connections of varying strengths

26
Q

Delta rule – attractor

A

Both the delta rule and the attractor work towards a certain preferred output, at which point
they are considered stable. Delta rule: settling (no more learning). Attractor: reaching an
attractor state (not permanent, but stable for the time being)
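
As a sketch of what “reaching an attractor state” can look like, here is a Hopfield-style settling loop; the symmetric weight matrix and bipolar (+1/-1) states are assumptions of this example, not part of the card:

    import numpy as np

    def settle(W, state, max_sweeps=100):
        # Update one unit at a time until a full sweep changes nothing:
        # that stable pattern is the attractor state.
        for _ in range(max_sweeps):
            changed = False
            for i in range(len(state)):
                new_value = 1 if W[i] @ state >= 0 else -1
                if new_value != state[i]:
                    state[i] = new_value
                    changed = True
            if not changed:
                break   # stable for the time being
        return state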

27
Q

Delta rule – connectionism

A

Learning through the delta rule occurs via the principle of connectionism: basic units passing
information on through connections of varying strengths

28
Q

Delta rule – neo-Hebbian learning

A

Both use the same principle of weight change in their connections, which results in learning

29
Q

Delta rule – recurrent network

A

The delta rule can function through use of a recurrent network, wherein activation from output
units is fed back to input neurons in the same layer

30
Q

Delta rule – sub-symbolic learning

A

Delta rule: learning occurs by adjusting values in order to minimise error. Sub-symbolic
learning: learning occurs by adjusting the weights between connections

31
Q

Distributed representation – fault tolerance

A

Distributed systems can still operate when separate elements fail due to their properties of
fault tolerance

32
Q

Dynamic systems – cusp catastrophe

A

Cusp catastrophe is a branch of bifurcation theory in the study of dynamic systems. The cusp
catastrophe describes the behaviour of a dynamic system when faced with a splitting factor
that can create divergent behaviour: two possible behavioural states (two sheets of the
behaviour surface) above a single point of the control surface
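
One common way to write the cusp catastrophe (sign conventions vary between sources) uses a potential over the behaviour variable x with a splitting factor a and a normal factor b; the behaviour surface is the set of equilibria, and the cusp-shaped bifurcation set marks where the surface folds into two sheets:

    V(x) = \tfrac{1}{4}x^{4} + \tfrac{1}{2}\,a\,x^{2} + b\,x   % potential
    x^{3} + a x + b = 0                                        % equilibrium (behaviour) surface
    4a^{3} + 27 b^{2} = 0                                      % bifurcation set (the cusp)

In this sign convention, the split into two sheets (divergent behaviour) only appears for a < 0.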

33
Q

Emotions – connectionism

A

The HOTCO model for emotions gives each unit/ neuron an emotional valence, which,
similarly to connectionism, spreads through a network, resulting in overall emotional
assessment

34
Q

Emotions – creativity

A

Emotions: allow individuals to switch between reactive and deliberative processes. Creativity:
forces the switch between divergent and convergent thinking, manifested through the salience
network switching between the default mode and central executive networks

35
Q

Emotions – cusp catastrophe

A

Emotions can be a normal or splitting factor in a cusp catastrophe model, as emotions can lead
to sudden behavioural changes

36
Q

Emotions – distributed representation

A

Emotions demonstrate distributed representation through their pattern of activation across
many neurons

37
Q

Emotions – dynamic system

A

ITERA computational model of emotion: bidirectional causal connections between emotions
and behavioural intentions: constantly changing and evolving, like a dynamic system.
Emotional changes are multicausal (different appraisal variables), as are phase transitions in
dynamic systems

38
Q

Emotions – human error

A

Emotions and human error are both based on a dynamic interaction between the actor and the
environment. According to the EMA model, emotions are the result of a causal interpretation
of the environment – which changes as they interact with it – and the self. Human error is the
result of humans incorrectly interacting with their environment

39
Q

Emotions – predictions

A

The ITERA computational model of emotion predicts reactions based on the overall
coherence of an emotional summary derived from individual emotional nodes, consistent with
computational predictive models

40
Q

Gradient descent – hill climbing

A

Gradient descent: seeks, step by step, the least costly solution (a local minimum). Hill
climbing: seeks, step by step, the most profitable solution (a local maximum)
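
A minimal sketch of the gradient-descent half of this pair, using a made-up one-dimensional cost function and step size; hill climbing is the same loop with the sign flipped, stepping uphill instead of downhill:

    def gradient_descent(grad, x, lr=0.01, steps=1000):
        # grad: derivative of the cost function; x: starting point
        for _ in range(steps):
            x -= lr * grad(x)   # take a small step downhill
        return x

    # cost(x) = (x - 3)**2 has its minimum at x = 3
    print(gradient_descent(lambda x: 2 * (x - 3), x=0.0))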

41
Q

Gradient descent – the oasis problem

A

Once gradient descent has found a local minimum, it is content to stay there instead of
searching for a possibly lower minimum elsewhere: this is the oasis problem

42
Q

Hippocampus – auto-association

A

Auto-association occurs in the CA3 region of the hippocampus

43
Q

Hippocampus – competitive learning

A

Competitive learning occurs in the CA1 region of the hippocampus, using lateral inhibition in
feedforward connections to enforce sparseness, which removes redundancy and noise

44
Q

Hippocampus – pattern association

A

Pattern association occurs in the hippocampus, specifically the subiculum, between CA1
(competitive learning) and the entorhinal cortex

45
Q

Hippocampus – GAGE

A

GAGE and the computational model of the hippocampus are based on real-world examples
and give insight into the functioning of the brain; they are neurologically plausible as well as
computationally applicable

46
Q

Machine learning – singularity

A

The singularity principle is the point at which technological (machine) learning will be so
advanced that humans will no longer be able to predict or understand what comes next

47
Q

Machine learning – Turing test

A

The goal of machine learning is to pass the Turing test

48
Q

Predictions – cusp catastrophe

A

self-explanatory

49
Q

Self-organising system – automation

A

Neither self-organising systems nor highly automated machines require external input in order
to change/ modify/ improve their behaviour

50
Q

Self-organising system - unsupervised learning

A

Neither requires external specifications in order to change/ evolve/ modify itself

51
Q

Simulated annealing – gradient descent

A

Simulated annealing: heats up the system and moves it in less promising directions to seek a
more promising minimum (local or global), whereas gradient descent simply settles in the
nearest local minimum
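
A sketch of that “heating up” idea using the standard acceptance rule; the cost and neighbour functions, starting temperature, and cooling rate are placeholders:

    import math, random

    def simulated_annealing(cost, neighbour, x, temp=1.0, cooling=0.99, steps=1000):
        for _ in range(steps):
            candidate = neighbour(x)
            delta = cost(candidate) - cost(x)
            # Better moves are always accepted; worse ("less promising") moves
            # are accepted with a probability that shrinks as the system cools.
            if delta < 0 or random.random() < math.exp(-delta / temp):
                x = candidate
            temp *= cooling
        return x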

52
Q

Swarm intelligence – butterfly effect

A

Swarm intelligence operates by simple rules: minor changes in individual rules can
radically alter group behaviour. This is similar to the butterfly effect, wherein apparently small
changes can cause a radical change in the larger picture

53
Q

Swarm intelligence – cusp catastrophe

A

Catastrophic jump: changes in the independent variable will pass a threshold and result in a
large change for the behavioural variable in the cusp catastrophe model. Comparable to bees
(swarm intelligence) searching for a new hive: it starts with a few, and then the number grows
until it passes a threshold and they all move

54
Q

Swarm intelligence – distributed representation

A

Swarm intelligence: a large group of individual agents use their local interactions to
accomplish a bigger goal, similarly to distributed representation, wherein a group of local
neurons interact in order to represent a concept in its entirety

55
Q

Swarm intelligence – graceful degradation

A

If one of the members of the swarm is not able to function properly, the other members will
sustain the system regardless, similarly to the process of graceful degradation, wherein the
system is able to keep functioning despite one of its parts not functioning

56
Q

Swarm intelligence – Hebbian learning

A

Swarm intelligence: the more agents participate in the same action, the more it gets
strengthened (e.g. ants walking on a path strengthens the pheromone trail, turning it into the
dominant path). Hebbian learning: “what fires together, wires together”: the more often two
nodes are simultaneously active, the stronger their connection will be
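
A toy sketch of the swarm side of this analogy (the two paths, evaporation rate, and deposit size are invented for illustration): each ant chooses a path in proportion to its pheromone level and then reinforces it, so use strengthens the trail in the same way co-activation strengthens a Hebbian connection:

    import random

    def ant_step(pheromone, evaporation=0.05, deposit=1.0):
        paths = list(pheromone)
        weights = [pheromone[p] for p in paths]
        chosen = random.choices(paths, weights=weights)[0]  # pick in proportion to pheromone
        for p in paths:
            pheromone[p] *= (1 - evaporation)               # all trails evaporate a little
        pheromone[chosen] += deposit                        # the used trail is strengthened
        return chosen

    trails = {"path A": 1.0, "path B": 1.0}
    for _ in range(200):
        ant_step(trails)
    print(trails)   # one path typically ends up dominant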

57
Q

Swarm intelligence – simulated annealing

A

Swarms (e.g. ants) which end up in a suboptimal pathway (local minimum) can use simulated
annealing in the form of oscillating states (ants breaking off from the group to search for new
paths) in order to attempt to reach an optimal path (global minimum)

58
Q

Swarm intelligence – sub-symbolic learning

A

Ants, the main example of swarm intelligence, use sub-symbolic learning through the
pheromones that they release, and symbolic learning through learning from other ants’ direct
behaviour