6: Deep Feedforward Networks Flashcards
Deep feedforward networks (other names)
Feedforward neural networks, or multilayer perceptrons (MLPs)
The goal of a feedforward network is
to approximate some function f*
A feedforward network defines…
a mapping y = f(x; theta) and learns the value of the parameters theta that result in the best function approximation
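A minimal sketch of such a mapping, assuming a tiny two-layer network with hypothetical weight shapes and a ReLU hidden activation (these specifics are illustrative, not from the flashcards themselves):

```python
import numpy as np

def f(x, theta):
    """Feedforward mapping y = f(x; theta): a hidden layer composed with an output layer."""
    W1, b1, W2, b2 = theta
    h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU activation
    y = W2 @ h + b2                   # output layer; no feedback connections
    return y

# Hypothetical parameters theta: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
theta = (rng.normal(size=(4, 3)), np.zeros(4),
         rng.normal(size=(1, 4)), np.zeros(1))
print(f(np.array([1.0, 2.0, 3.0]), theta))
```

Learning then amounts to choosing the values in theta that make f(x; theta) the best approximation of f*.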
These models are called feedforward because
information flows from the inputs x, through the intermediate computations used to define f, and finally to the output y. There are no feedback connections in which outputs of the model are fed back into itself.
Feedforward neural networks extended to have feedback connections are called
Recurrent Neural Networks (RNNs)
The model is associated with…
a directed acyclic graph (DAG) describing how the functions are composed together
Depth of the model
The overall length of the chain of network layers (the term “deep learning” comes from this terminology)
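For example, a network that composes three functions in a chain, f(x) = f(3)(f(2)(f(1)(x))), has f(1) as its first layer, f(2) as its second layer, and a depth of 3.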
Name of the final layer
output layer
Each training example x is accompanied by…
a label y ≈ f*(x)
Hidden layers
Layers in the network for which the training data does not show the desired output. The learning algorithm must decide how to use these layers to produce the desired output.
Width of the model
The dimensionality of the vector-valued hidden layers (each vector element is analogous to a neuron)
Alternate view of the model (to the vector-vector view)
Instead of thinking of a layer as a single vector-to-vector function, we can think of it as consisting of many units that act in parallel, each representing a vector-to-scalar function.
Unit (or neuron)
A single element (or node) of a neural network layer. It receives input from many other units and computes its own activation value
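A sketch of one unit under a common assumption (a weighted sum of its inputs passed through a nonlinearity; the ReLU here is only an example activation function):

```python
import numpy as np

def unit_activation(inputs, weights, bias):
    """One unit: receives activations from other units and computes its own activation value."""
    z = np.dot(weights, inputs) + bias   # weighted sum of incoming activations
    return np.maximum(0.0, z)            # example nonlinearity (ReLU)

print(unit_activation(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.2, 0.3]), 0.05))
```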
Feedforward networks are…
function approximation machines designed to achieve statistical generalization. They occasionally draw insights from what we know about the brain, but they are not models of brain function.