5: Learning Flashcards
What is NETTalk?
A neural network by Sejnowski and Rosenberg (1987) that learned to pronounce English text, mapping written characters to phonemes. An early, influential demonstration of connectionist learning.
What is a neural network?
A computational system of many simple, interconnected processing units (artificial neurons) that pass signals over weighted connections. Computation is distributed across the units and runs in parallel rather than as a single serial program.
What is the PDP model?
Parallel Distributed Processing: the connectionist framework (Rumelhart & McClelland) in which cognition is modelled as many simple units processing in parallel, with knowledge stored in the distributed pattern of connection weights rather than in explicit rules.
What is symbolic AI?
AI based on the explicit manipulation of symbols according to formal rules (e.g. logic, search, production systems), where knowledge is stored as discrete, human-readable structures.
How do neural networks differ from symbolic AI?
Neural networks learn behaviour from examples, store knowledge as distributed connection weights and process in parallel. Symbolic AI follows explicitly programmed rules over discrete symbols, processed serially.
What are some advantages of symbolic-based AI systems?
– A symbolic algorithm can execute anything expressed as following a
sequence of formal rules.
– Large amounts of memorised information can be copied and retrieved
accurately ad infinitum.
– Information processing is relatively fast and highly accurate.
What are some disadvantages of symbolic-based AI systems?
– Perhaps not everything can be feasibly expressed as following a sequence of
formal rules. Examples: the Chinese Room argument, intractable solution searches, meaning.
– Symbolic retrieval of memories can be brittle in being all-or-none.
– Many real-world situations are novel and so require adaptation rather than
fast pre-set actions. Example: everyday situations.
Of symbolic and neural network AI systems, which is most similar to the organisation of the brain? How? Comment on the simplicity of the brain's processing units.
Neural networks. They are modelled on the organisation of neurons in the brain and allow for parallel rather than serial processing. The brain's individual processing units are much simpler and slower than a computer's, yet its computation in many areas is better, suggesting the brain's organisation is superior.
What constitutes a neural network?
A collection of interconnected neurons (or units). Some receive environmental input and some of the others give output to the environment.
What are hidden units? What are they aka?
Neurons/units in neural networks that receive no input directly from the environment and give no output directly to it; they connect only to other units. Also known as hidden neurons or internal units.
How are neurons modelled artificially in neural networks?
Binary threshold unit (BTU): compute the excitation as the weighted sum of the inputs; if the excitation exceeds a certain threshold, the neuron is "excited" and becomes active, outputting 1 rather than 0.
What is the formula for calculating the output of an artificial neuron (BTU)?
Out(j) = g(Σᵢ w(ij) in(i) − Θ(j)); g(x) = 1 where x > 0; g(x) = 0 where x <= 0
g(x) is the activation function, here a step function ("stepping" at 0)
Θ(j) is the threshold of the jth threshold unit (each unit has its own Θ)
j indexes the threshold units
w(ij) is the weight of the ith input to the jth threshold unit
in(i) is the ith input, shared across units
What is an activation function?
The function that maps a neuron's computed activation (the weighted sum of its inputs) to its output, often normalising it into a fixed range.
Name and describe 3 activation functions.
- Step function
- output 1 once activation exceeds a certain threshold, 0 otherwise
- Sigmoid
- calculate output as part of a sigmoid curve
- g(x) = 1/(1 + exp(-x))
- Rectified Linear Unit (ReLU)
- output is 0 up to a threshold (as with the step function), then increases linearly for further increases in activation
- e.g. with threshold of 0:
when x <= 0, g(x) = 0
when x > 0, g(x) = x
What is Feedforward Architecture?
An architecture in which connections only run forward, from the input layer towards the output layer. There are no cycles, so activity flows through the network in one direction.
What is supervised learning?
Learning from labelled examples: the network is shown inputs together with the desired (target) outputs, and its weights are adjusted to reduce the error between actual and target output.
What is recurrent architecture?
An architecture containing feedback (backward or cyclic) connections, so a unit's output can influence its own later input. This gives the network internal state, i.e. a form of memory.
What are network layers in neural networks?
Groups of units at the same depth in the network: an input layer, zero or more hidden layers, and an output layer. Signals typically pass from one layer to the next.
What is the difference between lateral and feedforward connections?
Feedforward connections link a unit to units in a later layer (e.g. the next one); lateral connections link units within the same layer.
For a feedforward-based neural network of n layers, how many are hidden?
n - 2, since you can "see" the input and output layers; all other layers connect only to each other or to the input/output layers, and so are hidden.
What is Strictly Layered Architecture?
A neural network system in which there are no lateral connections and each neuron may only connect to others in adjacent layers.
What does it mean for a network to be “fully connected”?
Each neuron is connected to all others it is able to be connected to; which other neurons each neuron can be connected to is limited by the architecture of the network.
What is the concept of Feedforward Pass?
The way in which input patterns go through layers in feedforward networks in series - i.e. layer-by-layer, whereas within each layer the signal is propagated in parallel to all neurons in the layer simultaneously (from the previous layer or input).
What is the concept of generalisation?
The ability of a trained network to produce appropriate outputs for inputs it has never seen, by exploiting regularities learnt from the training data.
How does sensibility apply to generalisation?
A good generaliser produces sensible outputs for novel inputs, i.e. outputs consistent with the underlying pattern in the training data, rather than arbitrary ones.
When is generalisation useful in real-world applications?
Where:
- the relationship between input and output is unknown
- there is little available data
- data contain noise
What is underfitting?
When the model created by an AI system analysing data is too simple to explain the variance in the data, so it fails to fit even the training data well and cannot generalise correctly.
What is overfitting?
When the model created by an AI system analysing data is too complex in explaining the variance in the data, missing the actual underlying patterns. Here, it pays too much attention to noise and detail, and so generalises poorly.
What is model complexity?
Roughly, the number of free parameters in the model (units, connections, weights). Too much complexity risks overfitting; too little risks underfitting.
What is pruning?
Removing neurons (or connections) that contribute little or nothing to a neural network's output, to make the network less complex.
What is growing?
Systematically and repeatedly adding neurons to a neural network by some approach or algorithm, for as long as doing so remains beneficial.
What is an error function?
A function measuring how far the network's outputs are from the desired outputs, e.g. the sum of squared errors E = Σ (target − output)². Training aims to minimise it.
What is weight decay?
A regularisation technique that penalises large weights, shrinking them towards zero during training so that the network stays simple and generalises better.
How do you implement weight decay to regularise the function?
Add a penalty term such as (λ/2) Σ w² to the error function; under gradient descent this shrinks every weight slightly on each update, w ← w − η(∂E/∂w + λw).
What is validation with respect to neural networks?
Estimating how well a trained network generalises by measuring its error on a held-out validation set that was not used for training.
How do you perform validation with neural networks?
Split the available data into a training set and a validation set; train on the training set and periodically measure the error on the validation set.
What is early stopping?
Halting training when the validation error begins to rise (even while the training error is still falling), to avoid overfitting the training data.
What is generalisation error?
The error a trained network makes on data it was not trained on (e.g. a validation or test set); an estimate of how well it will perform on new inputs.
What does a small generalisation error suggest? Why?
That the network has captured the genuine underlying regularities in the data rather than memorising noise, and so should perform well on new inputs.
How can you find a good neural generaliser?
Train several networks of varying complexity and pick the one with the lowest validation error, e.g. by combining validation, early stopping and regularisation techniques such as weight decay.
What is a bias unit? Why are they used?
An added input to a neuron fixed at 1, whose weight takes the role of the (negative) threshold −Θ. The neuron's output then depends only on the weighted sum of its inputs compared against 0, and because the bias weight is learnable like any other, the effective threshold can be adapted during learning, yielding greater flexibility.
How do you implement an AND gate with a neuron?
As for the OR gate, but with smaller input weights: use a step function at 0, a bias input fixed to +1 with weight -0.5 (absorbing the threshold), and both input weights 0.3. Either input alone (0.3) stays below the threshold, but both together (0.6) exceed it, so the neuron outputs 1 only when both inputs are 1.
How do you implement an OR gate with a neuron?
Make Θ = 0.5, with g(x) = 1 when x > 0 and 0 when x <= 0.
Three inputs: the first is a bias unit with value fixed to +1 and weight -0.5, absorbing the threshold Θ. Make both input weights 0.6, i.e. bigger than Θ, so if either or both inputs are 1 the neuron outputs 1.