Task 4-BCI, connectionism, activation functions, graceful degradation, delta rule Flashcards

BCI, connectionism, activation functions, graceful degradation, delta rule

1
Q

BCI

A
  • CONNECTS THE BRAIN WITH A COMPUTER / PROSTHESIS ETC. VIA SIGNAL ACQUISITION (FROM THE BRAIN) & SIGNAL PROCESSING
  • CAN BE INVASIVE OR NON-INVASIVE
  • ASSISTIVE BCI: AIMS TO REPLACE LOST FUNCTIONS (E.G. COCHLEAR IMPLANT, ARM PROSTHESIS)
  • REHABILITATIVE BCI: AIMS TO FACILITATE RESTORATION OF BRAIN FUNCTION

Feature extractor: transforms raw signals (e.g. EEG) into readable features
Control interface: transforms the features into semantic commands
Device controller: executes these commands
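
A hedged sketch of this processing chain; the function names, the power feature and the command mapping are hypothetical illustrations, not a real BCI API:

    import numpy as np

    def feature_extractor(raw_eeg):
        """Transform raw EEG samples into a readable feature (here simply the mean signal power)."""
        return float(np.mean(np.square(raw_eeg)))

    def control_interface(feature, threshold=1.0):
        """Transform the extracted feature into a semantic command."""
        return "MOVE_ARM" if feature > threshold else "REST"

    def device_controller(command):
        """Execute the command on the assistive device (stubbed here as a print)."""
        print(f"device executes: {command}")

    raw_eeg = np.random.default_rng(0).normal(0.0, 1.5, size=256)   # fake single-channel EEG epoch
    device_controller(control_interface(feature_extractor(raw_eeg)))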

2
Q

Connectionism

A
  • EMPHASIZES THE IMPORTANCE OF CONNECTIONS AMONG NEURON-LIKE STRUCTURES IN A MODEL (INTERCONNECTED NETWORKS OF SIMPLE UNITS) WHICH WORK IN PARALLEL WITH EACH OTHER
  • CONSISTS OF DIFFERENT LAYERS: INPUT, HIDDEN AND OUTPUT UNITS
  • LOCAL REPRESENTATION: EACH UNIT IS INDEPENDENTLY ASSOCIATED WITH ONLY ONE REPRESENTED THING (SIMPLE, "GRANDMOTHER CELL") -> symbolic; units, links
  • DISTRIBUTED REPRESENTATION: SPECIFIC KNOWLEDGE IS REPRESENTED BY THE ACTIVITY OF MANY DIFFERENT UNITS (MORE COMPLEX). LIMITATIONS: ADDITION OF NEW INFO CAN CAUSE LOSS OF OLD INFO; LEARNING CANNOT BE IMMEDIATE -> sub-symbolic; feedforward, recurrent networks

Practical applicability: backpropagation techniques have been used by engineers to predict stresses on materials

Connectionist models are always models of learning
Decision making: decisions can be stored by learning two different outcomes and recalling them

3
Q

ACTIVATION FUNCTIONS

A

THRESHOLD LINEAR (ramp, /), SIGMOID (S-shaped) AND BINARY THRESHOLD (step)

4
Q

Delta rule

A

A method used to calculate the difference between the actual and the desired output (the error) and to change the connection weights accordingly.

A rule for changing the weight of a connection, which will in turn change the activity level of unit i: Δw_ij = [a_i(desired) − a_i(obtained)] · a_j, where Δw_ij is the change in the weight of the connection from unit j to unit i made after learning.
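
A minimal sketch of this update rule (the learning rate `epsilon` is an added assumption; the formula above omits it):

    import numpy as np

    def delta_rule_step(W, a_in, a_desired, epsilon=0.1):
        """One delta-rule update: w_ij changes by epsilon * (desired_i - obtained_i) * a_in_j."""
        a_obtained = W @ a_in                     # obtained output activations
        error = a_desired - a_obtained            # desired minus actual output (the error)
        W += epsilon * np.outer(error, a_in)      # delta rule applied to every connection weight
        return W, error

    # Usage: learn to map one small input pattern onto a desired output pattern
    W = np.zeros((2, 3))
    a_in = np.array([1.0, 0.0, 1.0])
    a_desired = np.array([0.0, 1.0])
    for _ in range(50):
        W, err = delta_rule_step(W, a_in, a_desired)
    print(np.round(W @ a_in, 3))                  # output is now close to the desired [0, 1]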

5
Q

graceful degradation

A

ABILITY TO PRODUCE REASONABLE APPROXIMATIONS TO THE CORRECT OUTPUT FOLLOWING DAMAGE TO SINGLE UNITS

Loss of a few units is not severely detrimental

6
Q

Generalisation

A

If the recall cue is similar to a learned pattern, the network will produce a similar response

7
Q

Fault tolerance:

A

Even if the input pattern is incomplete or damaged, the network will still recognize it

The network is robust against errors in the representation

8
Q

prototype extraction

A

If more than one pattern is possible, the network will produce the pattern with the most activation

9
Q

Autoassociator

A

Reproduces at the output the same pattern that was presented at the input

Learning occurs when the weights change so that the internal input to each unit matches the external input

Resistant to incomplete and noisy patterns (and cleans them up)
Uses the delta rule
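
A minimal toy sketch of an autoassociator trained with the delta rule, so that the internal (recurrent) input to each unit comes to match the external input (the patterns, the learning rate and the recall threshold are illustrative assumptions):

    import numpy as np

    def train_autoassociator(patterns, epochs=200, epsilon=0.05):
        n = patterns.shape[1]
        W = np.zeros((n, n))                       # recurrent weights among the units
        for _ in range(epochs):
            for p in patterns:
                internal = W @ p                   # internal input produced via the recurrent connections
                W += epsilon * np.outer(p - internal, p)   # delta rule: make internal input match external input
        return W

    patterns = np.array([[1., 0., 1., 0.],
                         [0., 1., 0., 1.]])
    W = train_autoassociator(patterns)
    noisy = np.array([1., 0., 1., 1.])             # incomplete / noisy version of the first pattern
    recalled = (W @ noisy > 0.5).astype(float)     # threshold the recalled activations
    print(recalled)                                # [1. 0. 1. 0.] -- the noisy cue is cleaned up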

10
Q

Differences between an autoassociator and a pattern associator

A

Recurrent connections that give feedback
Reproduces the same input pattern at the output
Does not need an external teacher

11
Q

traditional hebbian learning

A

Traditional Hebbian learning does not specify how much a connection should increase, nor the exact conditions that need to be met for an increase

12
Q

Neo Hebbian learning

A
Solves this problem of traditional Hebbian learning
Mathematical (dynamical differential) equations describe how weights change in strength
Nodes = instar / outstar
13
Q

Differential Hebbian learning:

A

Solves the problem that connections can only increase in strength
Connections change according to the difference between the node's activation and the incoming stimulus signal: the change can be positive, negative or null

14
Q

Drive reinforcement theory

A

Solves the problem that the change is only linear (the learning curve becomes sigmoid instead)
Considers the recent history of stimuli, e.g. recent learning trials (= temporal memory)

15
Q

Hippocampus

A

During behavior, memories are stored in the hippocampus
During sleep these memories are consolidated into the neocortex by synchronous bursts (Hebbian learning)
Autoassociative memory: a memory can be recalled with just a cue (a subcomponent of the memory)

DG => competitive learning (sparse memories)
CA3 => recurrent connections, autoassociation
CA1 => competition

16
Q

distributed representation

A

Attributes of concepts are distributed throughout the network => good representational and neurobiological power

Networks that learn to represent concepts or propositions in more complex ways and distribute them over many neuron-like structures

17
Q

backpropagation

A

Calculates the error between the desired and the actual level of activation and changes the weights accordingly

18
Q

local representation

A

Neuron-like structures are given an identifiable interpretation in terms of specifiable concepts and propositions

A kind of neural network in which units stand for specific concepts like "apple", which limits representational power

19
Q

pattern associator

A

Often works with the Hebb rule; learns to associate one pattern with another
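
A minimal toy sketch of a Hebbian pattern associator (the patterns, the learning rate and the recall threshold are illustrative assumptions):

    import numpy as np

    def hebb_train(input_patterns, output_patterns, epsilon=0.25):
        n_out, n_in = output_patterns.shape[1], input_patterns.shape[1]
        W = np.zeros((n_out, n_in))
        for x, y in zip(input_patterns, output_patterns):
            W += epsilon * np.outer(y, x)          # Hebb rule: strengthen weights between co-active units
        return W

    inputs  = np.array([[1., 0., 1., 0.],          # e.g. pattern for "sight of food"
                        [0., 1., 0., 1.]])         # e.g. pattern for "sound of bell"
    outputs = np.array([[1., 0.],                  # e.g. pattern for "salivation"
                        [0., 1.]])                 # e.g. pattern for "orienting"
    W = hebb_train(inputs, outputs)
    cue = np.array([1., 0., 1., 0.])               # present the first input pattern again
    print((W @ cue > 0.4).astype(float))           # [1. 0.] -- recalls the associated output pattern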

20
Q

sub symbolic

A

Knowledge is spread over units; learning occurs as changes in connections (in contrast to symbolic learning via chunks and production rules)

21
Q

symbolic learning

A

Learning of new chunks and production rules, e.g. by compilation

22
Q

Both representations can be used to perform PARALLEL CONSTRAINT SATISFACTION

A

distributed and local representation

23
Q

parallel constraint satisfaction

A

Processing simultaneously satisfies numerous constraints

Each input sets constraints on the final state; a solution is reached when there is a reasonable fit between the input and the output

Used in problem solving

Example: when putting together a school schedule, one needs to take into account various constraints imposed by classroom availability and the preferences of professors and students

24
Q

relaxation

A

The network settles by repeatedly passing activation until it reaches a stable level => learning has been achieved

Constraints can be satisfied in parallel by repeatedly passing activation among all the units, until after some number of cycles of activity all units have reached stable activation levels

This process is called RELAXATION

25
Q

problem solving

A

Problem Solving

Example: Is Alice outgoing or shy?
Concepts are represented by units
The problem has both positive constraints, such
as between “likes parties” and “outgoing”, and
negative constraints, such as between “outgoing”
and “shy”
Positive constraints are represented by excitatory
connections
Negative constraints are represented by inhibitory connections

An external constraint can be captured by linking the units representing elements that satisfy it to the constraint, either positively or negatively
For example, the external constraint could be that we know that Alice likes programming and likes parties

A problem solution is reached when the group of units containing "outgoing" is activated, while the set containing "shy" is collectively deactivated

Consequently, "outgoing" will win over "shy" because "outgoing" is more directly connected to the external information that Alice likes parties

Constraints can be satisfied in parallel by repeatedly passing activation among all the units, until after some number of cycles of activity all units have reached stable activation levels; this process is called RELAXATION
SETTLING – achieving stability
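
A hedged sketch of relaxation on a tiny constraint network for this Alice example (the unit set, the connection weights and the update rule are illustrative assumptions):

    import numpy as np

    units = ["likes_parties", "likes_programming", "outgoing", "shy"]
    W = np.array([                     # symmetric constraint weights between units
        [0.0, 0.0,  0.5,  0.0],        # likes_parties  <-> outgoing: positive constraint
        [0.0, 0.0,  0.0,  0.2],        # likes_programming <-> shy: weak positive constraint
        [0.5, 0.0,  0.0, -0.6],        # outgoing <-> shy: negative (inhibitory) constraint
        [0.0, 0.2, -0.6,  0.0],
    ])
    external = np.array([1.0, 1.0, 0.0, 0.0])   # external constraint: Alice likes parties and programming

    a = np.zeros(4)                              # activation levels of the units
    for cycle in range(50):                      # repeatedly pass activation until the units settle
        net = W @ a + external
        a = np.clip(0.8 * a + 0.2 * net, 0.0, 1.0)   # gradual update, activations kept in [0, 1]

    for name, act in zip(units, a):
        print(f"{name:18s} {act:.2f}")           # "outgoing" settles higher than "shy"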

26
Q

planning

A

Constructing plans is usually a more sequential process, better understood in terms of rules or analogies than in terms of parallel processing

Uses rule-based systems

27
Q

decision

A

We can understand decision making in terms of parallel constraint satisfaction. The elements of a decision are actions and goals

Actions that facilitate a goal have positive constraints; negative constraints come from incompatible relations
Positive internal constraints come from facilitation relations: if an action facilitates a goal, then the action and goal tend to go together
The external constraint on decision making comes from goal priority: some goals are inherently desirable, which provides a positive constraint
Once the elements and constraints have been specified for a particular decision problem, a constraint network can be formed
Units represent the various options and goals, and pluses and minuses indicate the excitatory and inhibitory links that embody the fundamental constraints
Analogy can also be useful in decision making, since a past case in which something like A helped to bring about something like B may help one to see that A facilitates B. But reasoning with analogies may itself depend on parallel constraint satisfaction

28
Q

explanation

A

Explanations should be understood as activation of Prototypes encoded in networks
Example: Understanding why a particular bird has a long neck can come via activation of a set of nodes representing swan, which includes the prototypical expectation that swans have long necks

Units representing pieces of evidence are linked to a special evidence unit that activates them, and activation spreads out to other units

29
Q

learning

A

Learning
There are two basic ways in which learning can take place in a connectionist model: Adding new units or changing the weights on the links between them
Work to date concentrates on weight learning, as demonstrated in Hebbian learning, in which a link between A and B gets stronger when they are repeatedly activated together

A technique called backpropagation adjusts the weights that connect the different units by assigning weights randomly, determining errors and propagating them backwards:
Assign weights randomly to the links between units
Activate the input units based on features of what you want the network to learn about
Spread activation forward through the network to the hidden units and then to the output units
Determine errors by calculating the difference between the computed activation of the output units and the desired activation of the output units. For example, if the activation of "quiet" and "studies hard" activated "jock", this result would be an error
Propagate the errors backward along the links, changing the weights in such a way that the errors will be reduced
Eventually, after many examples have been presented to the network, it will correctly classify different kinds of students (a sketch of these steps follows below)
Disadvantages:
Requires supervision
Tends to be slow, requiring many hundreds or thousands of examples
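
A hedged sketch of these backpropagation steps on a tiny made-up "student" network (the features, targets, layer sizes and learning rate are illustrative assumptions, not from the card):

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Hypothetical data: features [quiet, studies_hard]; target 1 = "bookworm", 0 = "jock"
    X = np.array([[1., 1.], [1., 0.], [0., 1.], [0., 0.]])
    t = np.array([[1.], [1.], [1.], [0.]])
    Xb = np.hstack([X, np.ones((4, 1))])          # bias unit appended to the input layer

    # Step 1: assign weights randomly to the links between units
    W1 = rng.normal(0.0, 0.5, (3, 3))             # input (+bias) -> hidden
    W2 = rng.normal(0.0, 0.5, (4, 1))             # hidden (+bias) -> output
    lr = 0.5

    for epoch in range(3000):
        # Steps 2-3: activate the input units and spread activation forward
        h = sigmoid(Xb @ W1)
        hb = np.hstack([h, np.ones((4, 1))])      # bias unit appended to the hidden layer
        y = sigmoid(hb @ W2)
        # Step 4: determine errors (desired output activation minus computed activation)
        error = t - y
        # Step 5: propagate the errors backward and change the weights to reduce them
        delta_out = error * y * (1 - y)
        delta_hid = (delta_out @ W2[:3].T) * h * (1 - h)
        W2 += lr * hb.T @ delta_out
        W1 += lr * Xb.T @ delta_hid

    print(np.round(y, 2))                         # outputs approach the targets [1, 1, 1, 0]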

30
Q

language

A

Word recognition can be understood as a parallel constraint satisfaction problem by representing hypotheses about which letters and words are present

Example with "cat": relaxing the network can pick the best overall interpretation of a word

31
Q

Connectionist models: psychological plausibility

A

Connectionist models have furnished explanations of many psychological phenomena, but also suggested new ones

Backpropagation techniques have simulated many psychological processes

32
Q

Connectionist models: neurological plausibility

A

The artificial networks are similar to brain structure in that they have simple elements that excite and inhibit each other.

Real neural networks are much more complicated than the units in artificial networks, which merely pass activation to each other
Furthermore, in local representations each unit has a specifiable conceptual or propositional interpretation, but neurons do not have such local interpretations

Artificial units leave out the chemical parts, such as neurotransmitters

We can think of each artificial unit as representing a neuronal group, a complex of neurons that work together to play a processing role
Many local networks use symmetric links between units, whereas synapses connecting neurons are one-way

While Hebbian learning does occur in the brain, backpropagation has no known direct equivalent in the brain

33
Q

Connectionist models: practical applicability

A

Connectionist models of learning and performance have had some interesting educational applications, for example the knowledge required for reading

Reading is a kind of parallel constraint satisfaction where the constraints simultaneously involve spelling, meaning and context

Backpropagation techniques have been used to assist engineers in predicting the stresses and strains of materials needed for buildings

Connectionist models are widely used in intelligent systems

For example, training networks to recognize bombs, underwater objects and handwriting, to interpret the results of medical tests, and to predict the occurrence of disease

34
Q

ASSISTIVE BCI SYSTEMS

A

substitute lost functions, enable control of robotic devices or provide functional electrical stimulation

35
Q

REHABILITATIVE BCI SYSTEMS

A

restore brain function and/or behaviour by manipulation of self-regulation of neurophysiological activity

36
Q

Cortical Resource Allocation

A

Variable-resolution representations in the sensory cortex: spatial resolution is highest at the center of gaze
Plasticity

37
Q

NEURAL INTERFACE SYSTEM (NIS)

A

translates neuronal activity into control signals for assistive devices

38
Q

grandmother cell

A

example of local representation in perception

Neurons selectively respond to more and more complex attributes, so there might be 'grandmother cells' that are so specific that they fire only upon recognition of your own grandmother
The hypothesis is fundamentally unsound and has been rejected

39
Q

NEURAL NETWORKS

A

Not all is lost if there is deterioration in the stimulus signal or loss of individual units

• REDUNDANCY – although some info might be lost, enough is still available to get the message across
• Gradual deterioration in the performance of a distributed system

40
Q

weight

A

Weight = strength of connection

41
Q

what can connectionist models do for us

A

An auto associator network can be trained to respond to collections of patterns with varying degree of correlation between them

When the input patterns being learned are highly correlated, the network can generate the central tendency or prototype that lies behind them, another form of spontaneous generalization

A single auto associator network can learn more than one prototype simultaneously, even when the concepts being learned are related to each other. Cueing with the prototype name will give recall of the correct prototype (which was never presented to the network in complete form)

An auto associator network can extract prototype info while also learning specific info about individual exemplars of the prototype
Thus, the network’s capability to retrieve specific info from cues (content addressability) means that, given a specific enough cue, it can retrieve the specific info of the individual exemplars from which the prototype generalization is constructed
Such behaviour is an example of a unitary memory system that can support both ‘episodic’ and ‘semantic’ memory within the same structure

42
Q

connectionist modelling

A

Connectionist Modelling is inspired by information processing in the brain and a typical model consists of several layers of processing units

A unit can be thought of as similar to a neuron, with each layer summing info from units in the previous layer. This style of info processing is derived from observations of the organization of the brain:

The basic computational operation in the brain involves one neuron passing info related to the sum of the signals reaching it to other neurons
Learning changes the strength of the connections between neurons and thus the influence that one has on the other
Cognitive processes involve the basic computation being performed in parallel by a large number of neurons
Info, whether about an incoming signal or representing the network's memory of past events, is distributed across many neurons and many connections

In contrast to models in Artificial Intelligence (AI) which contain a set of rules, connectionist models are said to be neurally inspired by our brain

43
Q

connectionism and the Brain

A

Neurons integrate information
Neurons pass information about the level of their input
Brain structure is layered
Learning is achieved by changing the strength of connections between neurons

44
Q

threshold linear

A

Real neurons have thresholds: firing occurs only if the net input is above the threshold; above the threshold, activity increases linearly with the net input

45
Q

sigmoid

A

Range of possible activity has been set from 0 to 1. When the net input is large and negative, the unit has an activity level close to 0. As the input becomes less negative the activity increases, gradually at first and then more quickly. As the net input becomes positive the rate of increase in activity slows down again, asymptoting at the maximum value which the unit can achieve. They prevent saturation and are good in noise suppression.

46
Q

binary threshold

A

Models neurons as two state devices as either being on or off. This ensures that if the net input is below threshold, there is no activity. Once the net input exceeds the threshold, the neuron becomes activated.
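
A minimal sketch of the three activation functions from cards 3 and 44-46 (the threshold values and example inputs are illustrative assumptions):

    import numpy as np

    def threshold_linear(net, theta=0.0):
        """Zero below the threshold, then activity increases linearly with net input (the '/' shape)."""
        return np.maximum(0.0, net - theta)

    def sigmoid(net):
        """Smooth S-shape bounded between 0 and 1; saturates for large positive or negative input."""
        return 1.0 / (1.0 + np.exp(-net))

    def binary_threshold(net, theta=0.0):
        """Two-state unit: off (0) below the threshold, on (1) once the net input exceeds it."""
        return (net > theta).astype(float)

    net_input = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(threshold_linear(net_input))           # 0, 0, 0, 0.5, 2
    print(np.round(sigmoid(net_input), 2))       # 0.12, 0.38, 0.5, 0.62, 0.88
    print(binary_threshold(net_input))           # 0, 0, 0, 1, 1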

47
Q

DISTRIBUTED PROCESSING

A

In connectionist models info storage is not local, but distributed across many different connections in different parts of the system

48
Q

LOCAL PROCESSING

A

Traditional models of cognitive processing usually assume a local representation of knowledge stored in different, independent locations

49
Q

GRACEFUL DEGRADATION

A

The ability to continue producing a reasonable approximation to the correct answer following damage, rather than failing outright
Any info processing system which works in the brain must be fault tolerant, because the signals it has to work with are seldom perfect
An attractive aspect of content-addressable memory is that it is indeed Fault Tolerant (because no input unit uniquely determines the outcome)

50
Q

Properties of Pattern Associators

A

Generalisation
During recall, pattern associators generalise
If a recall cue is similar to a pattern that has already been learnt, a pattern associator will produce a similar response to the new pattern as it would to the old

Fault tolerance
Even if the cue pattern is incomplete or damaged, the network still produces a reasonable output

51
Q

competitive learning can be divided into three phases:

A

EXCITATION: excitation of the output units proceeds in the usual fashion, by summing the products of the activity of each input unit and the weight of its connection
COMPETITION: the units compete with each other; the winner may be identified by selecting the unit with the highest activity value
WEIGHT ADJUSTMENT: weight adjustment is made only to the connections feeding into the winning output unit, in order to make its weight vector more similar to the input vector for which it won (see the sketch below)
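
A hedged sketch of these three phases (the learning rate, the toy data and the sample-based weight initialisation are illustrative assumptions, not from the card):

    import numpy as np

    def competitive_step(W, x, epsilon=0.1):
        activity = W @ x                          # EXCITATION: sum of (input activity * connection weight) products
        winner = np.argmax(activity)              # COMPETITION: the unit with the highest activity wins
        W[winner] += epsilon * (x - W[winner])    # WEIGHT ADJUSTMENT: only the winner moves towards the input
        return W, winner

    rng = np.random.default_rng(1)
    clusters = [np.array([1., 1., 0., 0.]), np.array([0., 0., 1., 1.])]
    # Initialise each weight vector near one training sample so that every output unit wins something
    W = np.stack([c + rng.normal(0.0, 0.05, 4) for c in clusters])
    for _ in range(200):
        x = clusters[rng.integers(2)] + rng.normal(0.0, 0.05, 4)   # noisy sample from one of two clusters
        W, _ = competitive_step(W, x)
    print(np.round(W, 2))    # each output unit's weight vector now resembles one cluster prototype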

52
Q

Goal of BCI

A

Provide a new output channel for the brain that requires voluntary adaptive control by the user; helping handicapped people

BCI system: allows the user to interact with a device
Interaction is enabled through intermediary functional components, control signals and feedback loops

Intermediary functional components: perform specific functions in converting intent into action

Feedback loops inform each component in the system of the state of one or more components

53
Q

problem machine learning

A

Concerns about the biological plausibility of current machine learning approaches: if our brains' abilities are emulated by algorithms that could not possibly exist in the human brain, then these artificial networks cannot inform us about the brain's behaviour

E.g. deep networks typically learn with a supervisor, unlike most human learning
The most successful deep networks have relied on feed-forward architectures, whereas the brain includes massive feedback connections
There is no known equivalent of backpropagation in the brain
Human learning is also influenced by chemicals (e.g. neurotransmitters)

54
Q

competitive networks

A

The connections to the winning output unit are strengthened

The connections to losing units are weakened

3 phases:
excitation
competition
weight adjustment