Task 4 Flashcards
What is meant by connectionism ?
- human mental processes (such as learning) can be explained by the computational modelling of neural nets
- Computational modelling is INSPIRED by neural information processing
How do human neurons generally work ?
- One neuron passes info to the next neuron
- learning changes the strength between neurons
- Neurons work in parallel
- Information processing is distributed over many neurons
What are the 5 assumptions of a neuronal or connectionist network ?
- Neurons integrate info and need to overcome a threshold
- Neurons give info about their input level via the firing rate of their output
- Brain structures work in layers (hierarchically / transformation of representation)
- The influence of one neuron on another depends on the connection strength
- Learning is achieved by changing the strength between two neurons
What is meant by weight ?
- the strength of a connection between two neurons
What is meant by the activation function?
- Different functions representing the activation of the output of a neuron
- linear, threshold, binary and sigmoid
What is meant by the linear activation function ?
- Whatever I put in is what I get out (1 = 1)
- f(x) = x
What is meant by the threshold activation function ?
- if my input is larger than some threshold, then the output is equal to the input
- if it is smaller than the threshold, then the output is zero
- f(x) = x if x > θ, else 0
What is meant by the binary activation function ?
- if my input is larger than my threshold then the output is one, otherwise it is zero
What is meant by the sigmoid activation function ?
- it is a more complex function, which shows that a variety of functions can be used as an activation function
- it also ensures that the activity cannot go below a fixed minimum or above a fixed maximum (it is bounded)
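The four activation functions above can be sketched in Python (the threshold value 0.5 is just an illustrative choice):

```python
import math

def linear(x):
    # identity: whatever goes in comes out
    return x

def threshold(x, theta=0.5):
    # pass the input through only if it exceeds the threshold
    return x if x > theta else 0.0

def binary(x, theta=0.5):
    # step function: fire (1) above the threshold, stay silent (0) below
    return 1.0 if x > theta else 0.0

def sigmoid(x):
    # smooth S-curve, bounded between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))
```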
What is meant by the output function in the equation principle ?
- determines the output a neuron actually sends onwards
- in a connectionist model it is usually a linear function (1 = 1)
- in a biological model it is usually not linear
What is meant by the bias ?
- it is just the same as the threshold, but expressed as a negative threshold
- this negative threshold can be learned, so the network can identify the optimal threshold
What are some properties of the connectionist model ? (Task 3: Why are neuronal networks so key for machine learning ?)
- they are damage resistant and fault tolerant
- Allow for content addressable memory (cue activates a pattern of memory)
- the network tries to satisfy/compromise between the constraints as much as it can
Why are connectionist models so damage resistant and fault tolerant ?
- No individual neuron/ connection is crucial
- If one neuron is incorrect the population will make up for it (graceful degradation)
What are the two types of neuronal networks ?
- pattern associator
- Autoassociator
What is meant by the pattern associator ?
- Describes how different stimuli become linked when they are repeatedly presented together (training) in a learning period
- can generalize from existing input, which means it can respond to novel inputs, and is damage resistant
- has been used to model the function of memory
- consists of input and output units; each input unit is connected to all output units
- it is a non-recurrent network
What is meant by an Autoassociator pattern ?
- ONE form of pattern associator
- includes the ability to recall a complex memory with a cue (linking)
- makes recurrent connections
- can reproduce a pattern even when the input is noisy or incomplete
- each unit serves as both an input unit and an output unit, so that each unit is connected to every other unit.
- Every autoassociator is a recurrent network
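Pattern completion in an autoassociator can be sketched as a minimal Hopfield-style recall (an assumption for illustration: bipolar +1/−1 units, one stored pattern, Hebbian outer-product weights):

```python
import numpy as np

# Store one bipolar (+1/-1) pattern with the Hebbian outer-product rule.
# Self-connections are zeroed: each unit connects to every OTHER unit.
pattern = np.array([1, -1, 1, 1, -1], dtype=float)
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0.0)

# Incomplete external input: the last two units are unknown (set to 0).
cue = np.array([1, -1, 1, 0, 0], dtype=float)

# Recurrent recall: feed the output back as input a few times,
# thresholding each unit to +1/-1.
state = cue
for _ in range(5):
    state = np.sign(W @ state)

print(state)  # the partial cue is completed to the stored pattern
```

The cue activates only a few units, and the recurrent connections fill in the rest, which is exactly the "recall a complex memory with a cue" property.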
How do the autoassociator network and the pattern associator learn ?
- Hebbian learning rule
- delta rule
- Neo-Hebbian learning rule
- Differential Hebbian learning
- drive reinforcement theory
What is meant by the original Hebbian learning rule ?
- What fires together wires together
- if two neurons on either side of a synapse are activated simultaneously the strength of the synapse is selectively increased
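The rule can be sketched as a weight update Δw = η · x · y (the learning rate η = 0.1 is an illustrative choice):

```python
# Hebbian update: the weight grows only when both neurons are active together.
def hebbian_update(w, x, y, eta=0.1):
    # x: presynaptic activity, y: postsynaptic activity
    return w + eta * x * y

w = 0.0
w = hebbian_update(w, x=1.0, y=1.0)  # both active -> weight increases
w = hebbian_update(w, x=1.0, y=0.0)  # only one active -> no change
```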
what is meant by the neo-Hebbian learning rule ?
- same as the Hebbian learning rule, but a weight will always forget a certain proportion of its learning
- this decreases the strength over time
- it overcomes the saturation problem (weights that can otherwise only grow)
- it is better than the Hebbian learning rule
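A sketch of the neo-Hebbian idea: the Hebbian term plus a passive decay of the weight (the rates η = 0.1 and decay = 0.05 are illustrative assumptions):

```python
# Neo-Hebbian update: Hebbian growth plus forgetting a fixed proportion
# of the weight on every step.
def neo_hebbian_update(w, x, y, eta=0.1, decay=0.05):
    return w + eta * x * y - decay * w

# Without input, the weight shrinks toward zero instead of staying saturated.
w = 1.0
for _ in range(10):
    w = neo_hebbian_update(w, x=0.0, y=0.0)
```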
what is meant by the differential Hebbian learning rule ?
- same formula as Hebbian learning but with a Delta in front of the activation !
- Delta means change / difference
- in this rule we are not interested in whether the neurons are active at the same time
- but in whether the neurons change in the same way (behave in the same way)
- if they do, they will learn
- learning only happens when there is an activity change in BOTH units
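The differential version swaps the activities for activity changes, Δw = η · Δx · Δy (η = 0.1 again illustrative):

```python
# Differential Hebbian update: learning depends on whether the two units
# CHANGE together, not on whether they are simultaneously active.
def diff_hebbian_update(w, dx, dy, eta=0.1):
    return w + eta * dx * dy

w = 0.0
w = diff_hebbian_update(w, dx=0.5, dy=0.5)  # both rise together -> learning
w = diff_hebbian_update(w, dx=0.0, dy=0.0)  # constant activity -> no learning
```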
What is meant by the drive reinforcement theory ?
- extension of the differential Hebbian learning rule
- it takes time and time differences into account
- so one neuron has to change, and the other neuron has to have changed a bit before that
- The DRT network considers not only the incoming stimulus at the current time, but also the recent history of incoming stimuli
Name the limitations of the original Hebbian rule ?
- if too many patterns are in one network they will interfere -> problem of interference
- problem of saturation = weights only increase, they do not become smaller -> eventually neurons will be active no matter what the input is
Why is it good to have a bit of interference ?
- Well, the network uses the interference to cancel out certain noise
How does the delta rule work ?
- different from the Hebbian rule, since we know what kind of desired output activity we want (supervised learning)
What is the formula of the delta rule ?
- desired activity - actual activity = error (missing or excess activity)
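The formula as a weight update (a linear output unit and η = 0.5 are illustrative assumptions):

```python
# Delta rule: adjust each weight in proportion to the error
# (desired minus actual output) and the input on that connection.
def delta_update(w, x, desired, eta=0.5):
    actual = sum(wi * xi for wi, xi in zip(w, x))  # linear output unit
    error = desired - actual
    return [wi + eta * error * xi for wi, xi in zip(w, x)]

# Repeated presentations drive the actual output toward the desired value.
w = [0.0, 0.0]
for _ in range(20):
    w = delta_update(w, x=[1.0, 1.0], desired=1.0)
```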
What is meant by a double layer network ?
- you have an input layer and an output layer
what is meant by a multilayer network ?
- in addition to an input and an output layer you have one or more hidden layers
Why do we need hidden layers ?
- because of separability
- a double layer network can only solve linearly separable problems such as the LOGICAL OR
- either it is a cat or a dog
- > BUT a multilayer network can also solve the EXCLUSIVE OR, which needs a curved (non-linear) boundary
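Why a hidden layer solves the exclusive or can be sketched with hand-set weights (the threshold values are illustrative): the two hidden units compute OR and AND, and the output fires for "OR but not AND", which is exactly XOR.

```python
# XOR with one hidden layer and fixed, hand-set weights.
def step(z, theta=0.5):
    # binary threshold unit
    return 1 if z > theta else 0

def xor(x1, x2):
    h_or = step(x1 + x2)              # hidden unit 1: logical OR
    h_and = step(x1 + x2, theta=1.5)  # hidden unit 2: logical AND
    return step(h_or - h_and)         # output: OR and not AND = XOR
```

No single threshold unit on the raw inputs can compute this, because no straight line separates (0,1) and (1,0) from (0,0) and (1,1).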
how do we train a neuronal network?
- Backpropagation = same as the delta rule but you can also apply it to hidden layers
- part of the error in the output is caused by a failure in the hidden layer
- based on that assumption we can use the delta rule
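A minimal backpropagation sketch, training a small sigmoid network on XOR. The architecture (2 inputs, 4 hidden units, 1 output), learning rate, epoch count and random seed are illustrative assumptions, and convergence is not guaranteed for every initialization:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # input -> hidden
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # hidden -> output

eta = 0.5
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # backward pass: output error, then the share blamed on the hidden layer
    d_out = (y - T) * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # delta-rule-style updates on both layers
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_hid; b1 -= eta * d_hid.sum(axis=0)

print((y > 0.5).astype(int).ravel())  # predictions for the four XOR inputs
```

The `d_hid` line is the key step: the output error is propagated backwards through the output weights, attributing part of it to each hidden unit.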
What is meant by competitive learning ?
- neurons compete for the right to respond to a given subset of inputs -> such that only one output neuron is active
- to be the winning neuron, its activity must be the largest among all competing neurons
- The output of the winning neuron is set to 1 and the output of all the other neurons is set to 0
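One winner-take-all step can be sketched like this (the weight update, pulling the winner's weights toward the input, and η = 0.1 are illustrative assumptions):

```python
import numpy as np

# Winner-take-all: the output neuron with the largest activity wins,
# its output becomes 1 and all others become 0; only the winner learns.
def competitive_step(W, x, eta=0.1):
    activity = W @ x                   # each row of W is one output neuron
    winner = int(np.argmax(activity))  # largest activity wins
    out = np.zeros(len(W))
    out[winner] = 1.0
    W[winner] += eta * (x - W[winner])  # winner moves toward the input
    return out, W

W = np.array([[0.9, 0.1], [0.1, 0.9]])
out, W = competitive_step(W, np.array([1.0, 0.0]))
```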
What is the difference between a pattern associator and an autoassociator ?
- The difference is that in an autoassociator the output line of each unit is connected back to the dendrites of the other units (recurrent)
- it reproduces the same input as output pattern
- > NOT 100% sure
What learns in an autoassociator ?
- the autoassociator contains internal and external inputs
- the internal input undergoes recurrent learning
- the external input only activates a few units (the cue), which is enough to recall the bigger picture
What was Gina's problem with the original Hebbian learning ?
- Does not specify how connections should increase
What is meant by instar?
- receives incoming stimulation
- Neo hebbian learning
What is meant by the outstar ?
- transmits output back to other instars
- Neo hebbian learning
What are neurons, instars or outstars ?
- neurons are both simultaneously
What is the opposite of the recurrent network ?
- Feed-forward model (capable of classification)
Give a definition of a recurrent neuronal network ?
- in recurrent networks, the output of a layer is added to the input of the next layer and also fed back into the same layer
- capable of forecasting
- an example would be the hippocampus