Task 4-BCI, connectionism, activation functions, graceful degradation, delta rule Flashcards
BCI
- Connects the brain with a computer, prosthesis, etc. via signal acquisition (from the brain) and signal processing
- Can be invasive or non-invasive
- Assistive BCI: aims to replace lost functions (e.g. cochlear implant, arm prosthesis)
- Rehabilitative BCI: aims to facilitate restoration of brain function
Feature extractor: transforms raw signals (e.g. EEG) into a readable signal
Control interface: transforms the signals into semantic commands
Device controller: executes these commands
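A minimal sketch of this three-stage pipeline in Python; all function names, the threshold decision and the fake EEG data are made up for illustration and are not part of any real BCI toolkit:

```python
import numpy as np

# Hypothetical three-stage BCI pipeline: feature extractor -> control interface -> device controller.
def feature_extractor(raw_eeg: np.ndarray) -> np.ndarray:
    """Transform raw EEG samples into features (here: mean signal power per channel, as a stand-in)."""
    return (raw_eeg ** 2).mean(axis=1)

def control_interface(features: np.ndarray) -> str:
    """Map the features onto a semantic command (here: a toy threshold decision)."""
    return "MOVE_LEFT" if features[0] > features[1] else "MOVE_RIGHT"

def device_controller(command: str) -> None:
    """Execute the command on the external device (here: just print it)."""
    print(f"Executing: {command}")

raw_eeg = np.random.randn(2, 256)   # 2 channels, 256 samples of fake EEG
device_controller(control_interface(feature_extractor(raw_eeg)))
```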
Connectionism
- Emphasizes the importance of connections among neuron-like structures in a model (interconnected networks of simple units) which work in parallel with each other
- Consists of different layers: input, hidden and output units
- Local representation: each unit is independently associated with only one represented thing (simple, "grandmother cell") -> symbolic; units, links
- Distributed representation: specific knowledge is represented by the activity of different units (more complex) -> sub-symbolic; feedforward, recurrent networks
- Limitations: addition of new information can cause loss of old information; learning cannot be immediate
Practical applicability: backpropagation techniques have been used by engineers for the prediction of stresses on materials
Connectionist models are always models of learning
Decision making: you can store decisions by learning two different outcomes and recalling them
Activation functions
Linear (/), sigmoid (S-shaped) and binary threshold (step)
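A short sketch of the three shapes named above; the threshold value is an arbitrary choice for illustration:

```python
import numpy as np

def linear(x):                        # "/" shape: output proportional to net input
    return x

def sigmoid(x):                       # "S" shape: smooth, bounded between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

def binary_threshold(x, theta=0.0):   # step shape: unit fires (1) only above threshold theta
    return (x > theta).astype(float)

net_input = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(linear(net_input), sigmoid(net_input), binary_threshold(net_input), sep="\n")
```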
Delta rule
Method used to calculate the difference between actual and desired output (error) and change the weights accordingly.
A rule for changing the weight of the connection, which will in turn change the activity level of unit i: Δw_ij = [a_i(desired) − a_i(obtained)] · a_j, where Δw_ij is the change in the weight of the connection from unit j to unit i to be made after learning.
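A minimal sketch of this update for a single linear output unit; the pattern, target and number of presentations are arbitrary illustrative choices, and a learning rate (not in the card's formula) is added only to make the update gradual:

```python
import numpy as np

def delta_rule_update(w, a_j, a_i_desired, learning_rate=0.1):
    """Delta rule: change the weight from unit j to unit i in proportion to
    the error (desired minus obtained activation of i) times the activation of j."""
    a_i_obtained = w @ a_j                      # obtained activation of unit i (linear unit)
    error = a_i_desired - a_i_obtained          # a_i(desired) - a_i(obtained)
    return w + learning_rate * error * a_j      # Δw_ij = ε · error · a_j

w = np.zeros(3)                                  # weights from 3 input units j to output unit i
pattern, target = np.array([1.0, 0.0, 1.0]), 1.0
for _ in range(50):                              # repeated presentations shrink the error
    w = delta_rule_update(w, pattern, target)
print(w, w @ pattern)                            # obtained output is now close to the desired 1.0
```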
graceful degradation
Ability to produce reasonable approximations to the correct output following damage to single units
Loss of a few units is not detrimental
Generalisation
If the recall cue is similar to a learned pattern, the output will be a similar response
Fault tolerance:
Even if a pattern is incomplete or damaged, the network will still recognize it
The network is robust against errors in the representation
prototype extraction
If more than one pattern is possible, the output will be the pattern with the most activation
Autoassociator
Reproduces at the output the same pattern that was presented at the input
Learning occurs when the weights change, so that the internal input to each unit matches the external input
Resistant to incomplete and noisy patterns (and cleans them up)
Uses the delta rule
Differences between an autoassociator and a pattern associator:
Recurrent connections that give feedback
Same input pattern at output
Doesn't need an external teacher
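A rough sketch of an autoassociator along these lines: it is trained with the delta rule so that each unit's internal input matches its external input, and it then cleans up a noisy cue. The pattern, network size and training settings are arbitrary illustrative choices:

```python
import numpy as np

def train_autoassociator(patterns, epochs=100, lr=0.1):
    """Learn weights so that each unit's internal input (from the other units)
    matches its external input; the diagonal is kept at zero (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for _ in range(epochs):
        for p in patterns:
            internal = W @ p                        # internal input to every unit
            W += lr * np.outer(p - internal, p)     # delta rule, applied unit by unit
            np.fill_diagonal(W, 0.0)
    return W

stored = np.array([[1.0, -1.0, 1.0, -1.0, 1.0, -1.0]])
W = train_autoassociator(stored)

noisy = np.array([1.0, -1.0, -1.0, -1.0, 1.0, 0.0])    # damaged/incomplete cue
recalled = np.sign(W @ noisy)                          # one recall step cleans it up
print(recalled)                                        # matches the stored pattern
```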
traditional Hebbian learning
Traditional Hebbian learning doesn't specify how much a connection should increase, or the exact conditions that need to be met for an increase
Neo-Hebbian learning
Solves this problem of traditional Hebbian learning: mathematical equations (dynamical differential equations) describe how weights change in strength. Nodes = instar/outstar
Differential Hebbian learning:
Solves the problem that connections can only increase in strength
Connections change according to the difference between the node's activation and the incoming stimulus signal: the change can be positive, negative or null
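A rough sketch of the idea under one common formulation, where the weight change is proportional to the product of the *changes* in presynaptic signal and postsynaptic activation (so it can be positive, negative or null); the exact equation differs between authors:

```python
def differential_hebbian_update(w, pre_prev, pre_now, post_prev, post_now, lr=0.05):
    """Weight change depends on changes in activity, not on activity itself:
    same-direction changes strengthen the connection, opposite-direction changes weaken it,
    and no change in either signal leaves the weight untouched (null change)."""
    return w + lr * (pre_now - pre_prev) * (post_now - post_prev)

w = 0.2
w = differential_hebbian_update(w, pre_prev=0.1, pre_now=0.9, post_prev=0.2, post_now=0.8)  # both rise -> increase
w = differential_hebbian_update(w, pre_prev=0.9, pre_now=0.9, post_prev=0.8, post_now=0.8)  # no change -> null
w = differential_hebbian_update(w, pre_prev=0.9, pre_now=0.1, post_prev=0.2, post_now=0.8)  # opposite -> decrease
print(w)
```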
Drive reinforcement theory
Solves the problem that the change is only linear: learning follows a sigmoid curve instead
Considers the recent history of stimuli, e.g. recent trials of learning (= temporal memory)
Hippocampus
During behavior, memories are stored in the hippocampus
During sleep, these memories are consolidated into the neocortex by synchronous bursts (Hebbian learning)
Autoassociative memory: recall a memory with just a cue (a subcomponent of the memory unit)
DG=>Competitive learning (sparse memories)
CA3=> recurrent connections, autoassociation
CA1=> Competition
distributed representation
Attributes of concepts are distributed through the network => good representational and neurobiological power
Networks that learn how to represent concepts or propositions in more complex ways and distribute them over complex neuron-like structures
backpropagation
Calculates the error between the desired and actual level of activation and changes the weights accordingly
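A compact sketch of backpropagation on a one-hidden-layer network learning XOR; the architecture, learning rate and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)              # input -> hidden weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)              # hidden -> output weights and biases
lr = 0.5

for _ in range(10000):
    H = sigmoid(X @ W1 + b1)                     # forward pass: hidden activations
    Y = sigmoid(H @ W2 + b2)                     # forward pass: output activations
    err_out = (T - Y) * Y * (1 - Y)              # output error times sigmoid derivative
    err_hid = (err_out @ W2.T) * H * (1 - H)     # error propagated back through W2
    W2 += lr * H.T @ err_out;  b2 += lr * err_out.sum(axis=0)   # weight changes follow the error
    W1 += lr * X.T @ err_hid;  b1 += lr * err_hid.sum(axis=0)

print(Y.round(2))                                # should approach the XOR targets [0, 1, 1, 0]
```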
local representation
Neuron-like structures are given an identifiable interpretation in terms of specifiable concepts and propositions
A kind of neural network in which neurons stand for specific concepts like "apple", which limits representational power
pattern associator
Often works with the Hebb rule; learns to associate one pattern with another
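A minimal sketch of a Hebb-trained pattern associator: the weight between input unit j and output unit i grows in proportion to their joint activity, so presenting the input later retrieves the associated output. The patterns are arbitrary ±1 codes for illustration:

```python
import numpy as np

def hebb_train(input_pattern, output_pattern, lr=1.0):
    """Hebb rule: Δw_ij = lr · a_i · a_j (co-active units strengthen their connection)."""
    return lr * np.outer(output_pattern, input_pattern)

# Associate a "seen" pattern with a "heard" pattern (toy codes).
seen  = np.array([1.0, -1.0, 1.0, -1.0])
heard = np.array([1.0, 1.0, -1.0])

W = hebb_train(seen, heard)          # one learning trial suffices for a single pair
recalled = np.sign(W @ seen)         # presenting the input retrieves the associated output
print(recalled)                      # matches `heard`
```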
sub-symbolic
Knowledge is spread over units; learning is a change of connections (rather than of chunks and production rules)
symbolic learning
Learning of new chunks and production rules, e.g. by compilation