Neural Networks Flashcards
2014 1.a What does it mean to call a neural network a "black box" (how does it acquire the knowledge used to make decisions, and how is that knowledge stored)?
It means that only the network's input and output are visible; its internal workings are not.
The neural network is arranged in layers. The input presented to the net at layer 0 is passed on layer by layer, each layer's units performing some computation before handing the result to the next layer. Through its adaptive interconnections (the weights), the net learns to perform a set of mappings from input vectors to output vectors.
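A minimal sketch of this layer-by-layer computation, assuming binary threshold units; the weights and thresholds below are made up purely for illustration:

```python
import numpy as np

def step(h):
    # Hard-threshold activation: a unit fires (1) when its activation is positive.
    return (h > 0).astype(float)

def forward(x, layers):
    # Pass the input (layer 0) through each layer in turn: every layer
    # applies its weights W and thresholds s, then hands the result on.
    for W, s in layers:
        x = step(W @ x - s)
    return x

# A two-layer net: 2 inputs -> 2 hidden units -> 1 output unit.
layers = [
    (np.array([[1.0, 1.0], [-1.0, -1.0]]), np.array([1.5, -0.5])),
    (np.array([[1.0, 1.0]]), np.array([0.5])),
]
print(forward(np.array([1.0, 0.0]), layers))  # -> [0.]
```

The learned "knowledge" lives entirely in the arrays W and s, which is exactly why the net is a black box: nothing in those numbers explains any particular decision.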
Why does the black-box property pose a problem?
Because we cannot see inside the network, there is no way to pin down what is responsible when it makes a wrong decision. Take financial forecasting as an example: if the network loses a lot of money, its black-box nature means we cannot find out what went wrong inside it.
Give an example of a problem for which a neural network is well suited.
Optimising routing in communication networks.
2014 1.b What limitation of perceptrons makes additional computational layers necessary?
The only functions that can be represented by a single-layer net are those for which the class A and class B input vectors can be separated by a single line, i.e. the linearly separable ones. Problems that are not linearly separable need additional computational layers. Take the XOR problem: its truth table puts the patterns (0,1) and (1,0) in one class and (0,0) and (1,1) in the other, and plotting these four patterns makes it clear that no single line can be drawn to separate the two classes (a short derivation follows below).
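To make the "no single line" claim concrete, here is a standard argument (not on the original card): suppose a single unit fires when w1*x1 + w2*x2 > s, and try to make it compute XOR.

```latex
\begin{align*}
  (0,1)\mapsto 1,\; (1,0)\mapsto 1
    &\;\Rightarrow\; w_2 > s,\; w_1 > s
    \;\Rightarrow\; w_1 + w_2 > 2s,\\
  (0,0)\mapsto 0,\; (1,1)\mapsto 0
    &\;\Rightarrow\; 0 \le s,\; w_1 + w_2 \le s
    \;\Rightarrow\; w_1 + w_2 \le s \le 2s.
\end{align*}
```

The two conclusions contradict each other, so no choice of w1, w2, s works.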
2014 1.c How did Rosenblatt's group try to overcome the lack of a method for training hidden layers? Was it a valid form of training?
They used a trial-and-error training method that involved a "preprocessing layer".
No, it is not a valid form of training, because the weights are found by random search rather than by anything more efficient or principled.
2014 1.d What is one advantage the single-layer perceptron's training has over the multilayer perceptron's, and why can't MLP training have it?
The single-layer perceptron's training process will always eventually converge to a solution that correctly partitions the pattern space, provided such a partition is theoretically possible. MLP training comes with no such guarantee of success: it can be prevented from succeeding by a number of factors, for example by getting trapped in a local minimum of the error surface.
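A sketch of that guarantee in action, using the classic perceptron learning rule (the parameter values are illustrative, not from the exam): on a linearly separable problem (AND) the rule converges; on XOR it never settles.

```python
import numpy as np

def train_perceptron(samples, epochs=100, eta=1.0):
    # Classic perceptron rule: nudge the weights whenever a pattern is
    # misclassified; convergence is guaranteed exactly when the two
    # classes are linearly separable.
    w = np.zeros(3)                      # [w1, w2, bias weight]
    for _ in range(epochs):
        errors = 0
        for x, t in samples:
            xb = np.append(x, 1.0)       # append a constant bias input
            y = 1 if w @ xb > 0 else 0
            if y != t:
                w += eta * (t - y) * xb
                errors += 1
        if errors == 0:
            return w                     # converged: every pattern correct
    return None                          # never settled within the budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print("AND ->", train_perceptron(AND))   # finds separating weights
print("XOR ->", train_perceptron(XOR))   # None: the guarantee doesn't apply
```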
2014 1.e How many hidden units are needed, what is the algebraic expression, and what are the weight factors and thresholds?
Hidden units: one per 1 in the function's truth table (i.e. one per input pattern that maps to output 1).
Algebraic expression: y1 = …, y2 = … (read off from the truth table given in the question).
Weights: w = +1 for an input that appears uninverted in the expression, w = -1 for an input that appears inverted (as x̄).
Threshold: s = n - 1/2 - (number of inverted inputs x̄), so that each hidden unit fires only on its own input pattern. A sketch of this construction follows below.
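A hedged sketch of the construction just described; build_net and run are hypothetical helper names, and XOR stands in for the exam's unspecified truth table:

```python
import itertools

def build_net(truth_table, n):
    # One hidden unit per input pattern that maps to 1: weight +1 where
    # the pattern has x_i = 1, -1 where it has x_i = 0 (inverted input),
    # and threshold s = n - 1/2 - (number of inverted inputs).
    hidden = []
    for pattern, out in truth_table.items():
        if out == 1:
            w = [1 if bit == 1 else -1 for bit in pattern]
            s = n - 0.5 - w.count(-1)
            hidden.append((w, s))
    return hidden

def run(hidden, x):
    # Each hidden unit fires only on its own pattern; the output unit
    # ORs the hidden units (threshold 1/2).
    fired = sum(1 for w, s in hidden
                if sum(wi * xi for wi, xi in zip(w, x)) > s)
    return 1 if fired > 0.5 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
net = build_net(XOR, n=2)
for x in itertools.product([0, 1], repeat=2):
    assert run(net, x) == XOR[x]          # reproduces the truth table
print(len(net), "hidden units")           # 2: one per 1 in the table
```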
2014 2.a i) What is a fixed point of the system?
A fixed point of the system is a stable state of the network: when the output is recirculated, the network state no longer changes.
2014 2.a ii) If the fixed point is the output, what is the input?
The initial state is the input; it is the initial state that determines which fixed point (final state) the network will reach.
2014 2.b Why is the lowest energy state always a fixed point?
Because each Hopfield update can only lower the energy or leave it unchanged; once the network is in the lowest energy state there is nowhere lower to go, so the update rule does not allow it to move anywhere else.
2014 2.c What is a basin of attraction?
It is the set of initial states that ultimately iterate to a particular local minimum. These local minima correspond to the stable states of the system.
2014 2.d Why does a net whose weights and thresholds are all zero have no useful CAM properties?
If the weights and thresholds are all set to zero, every unit's activation equals its threshold on every iteration, so nothing ever changes: every state is trivially a fixed point, and no state lies in the basin of attraction of any local minimum other than itself. The net therefore cannot complete or correct a pattern, so it has no useful content-addressable memory (CAM) properties.
2014 2.e i) Give the weight-setting rule and threshold-setting rule, and the Hopfield (energy) function.
Weight-setting rule: the standard Hebbian prescription for binary patterns, w_ij = Σ_p (2x_i^p − 1)(2x_j^p − 1) for i ≠ j, with w_ii = 0.
Threshold-setting rule: for the complementary patterns in this question it gives s_i = 0 (see part ii).
Hopfield function: H(x) = −Σ_{i<j} w_ij x_i x_j + Σ_i s_i x_i.
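A sketch of the storage rule applied to this question's patterns, (1, 0, 1) and (0, 1, 0); the Hebbian form above is assumed to be the intended rule, and the thresholds are set to zero as part ii states:

```python
import numpy as np

patterns = [np.array([1, 0, 1]), np.array([0, 1, 0])]
V = [2 * p - 1 for p in patterns]        # map 0/1 to -1/+1 first
n = 3
W = sum(np.outer(v, v) for v in V).astype(float)
np.fill_diagonal(W, 0)                   # no self-connections
S = np.zeros(n)                          # zero thresholds (see part ii)

def update_unit(x, i):
    # Asynchronous update of unit i; a tie (h == s) keeps the old state.
    h = W[i] @ x
    return 1 if h > S[i] else 0 if h < S[i] else x[i]

# Both stored patterns should be fixed points of the dynamics.
for p in patterns:
    updated = np.array([update_unit(p, i) for i in range(n)])
    print(p, "fixed point:", np.array_equal(updated, p))
```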
2014 2.e ii) Why is the zero threshold not surprising?
Because the two binary patterns (1, 0, 1) and (0, 1, 0) are complementary (each is the bitwise complement of the other), so their contributions to each threshold cancel.
2014 2.e iii) How do you construct the state transition diagram?
- Write out W and S (here S = −w), and the energy function H(x) = −Σ_{i<j} w_ij x_i x_j + Σ_i s_i x_i.
- Use H to compute the energy level of each state.
- From W, S, and the firing rule, write out the neuron update rules x_i = h(Σ_j w_ij x_j − s_i, x_i), where h keeps the old state on a tie.
- Apply the update rules to each pattern to find which state it transitions to; these transitions form the diagram (a sketch follows below).
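A sketch of the whole recipe for this question's net (W from the Hebbian rule of part i applied to the two patterns, zero thresholds, and the energy function above); the printed transitions are exactly what the state transition diagram draws:

```python
import itertools
import numpy as np

W = np.array([[0, -2, 2], [-2, 0, -2], [2, -2, 0]], dtype=float)
S = np.zeros(3)

def energy(x):
    # H(x) = -sum_{i<j} w_ij x_i x_j + sum_i s_i x_i
    x = np.array(x, dtype=float)
    return -0.5 * x @ W @ x + S @ x

def successors(x):
    # Every single-unit asynchronous update that changes the state.
    out = set()
    for i in range(3):
        h = W[i] @ np.array(x, dtype=float)
        xi = 1 if h > S[i] else 0 if h < S[i] else x[i]
        if xi != x[i]:
            out.add(tuple(xi if j == i else x[j] for j in range(3)))
    return out

# Energy level and outgoing transitions for each of the 8 states.
for x in itertools.product([0, 1], repeat=3):
    nxt = sorted(successors(x))
    print(x, "E =", energy(x), "->", nxt if nxt else "fixed point")
```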