Chapter 11 Quiz Flashcards
model that mimics the way human experts learn, nodes/neurons
neural networks
input layer, hidden layer, output layer
multilayer feedforward networks
constant that controls the level of contribution of a node
bias
monotone function applied to a node's weighted sum (plus bias) before the value is passed on
activation/transfer function
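The cards above (nodes, bias, activation function) can be sketched as a tiny forward pass; this is a minimal illustration assuming a sigmoid activation and made-up weights, not any particular textbook example:

```python
import math

def sigmoid(s):
    """Monotone activation/transfer function applied to a node's sum."""
    return 1 / (1 + math.exp(-s))

def node_output(inputs, weights, bias):
    # weighted sum of inputs plus the node's bias constant,
    # passed through the activation function
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(s)

# Feedforward pass: 2 inputs -> 2 hidden nodes -> 1 output node
# (weights and biases here are arbitrary illustrative values)
inputs = [0.2, 0.8]  # predictors already scaled to [0, 1]
hidden = [node_output(inputs, [0.1, -0.3], 0.05),
          node_output(inputs, [0.4, 0.2], -0.1)]
output = node_output(hidden, [0.6, -0.5], 0.015)
```

Each circle (neuron) computes the same kind of weighted-sum-then-activation step; the lines carry the weights.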
what scale does the model perform best on?
0 to 1
how to scale
(value − a) / (b − a), where a is the smallest and b the largest possible value
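The scaling formula on this card, as a one-line sketch (function name is my own):

```python
def minmax_scale(value, a, b):
    """Scale value from [a, b] (smallest and largest values) to [0, 1]."""
    return (value - a) / (b - a)

# Example: a value of 75 when the data range from 50 to 100
scaled = minmax_scale(75, 50, 100)  # 0.5
```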
using model errors to update weights, errors are computed from the last layer back to the hidden layers
back propagation
weights updated after each observation is run through the network (trial)
case updating
entire training set is run through network before each updating of weights takes place
batch updating
multiplying factor for error correction during back propagation
learning rate
down-weights new information to avoid overfitting
weight decay
keeps weight updates moving in the same direction, aiding convergence of weights to the optimum
momentum
one run through of training data
epoch
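The cards on case updating, learning rate, and epochs fit together in one training loop. This is a minimal single-neuron sketch using a simplified error-correction (delta-rule) update rather than full backpropagation derivatives; the toy data and learning rate are my own assumptions:

```python
import math

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

# Toy data: one predictor already scaled to [0, 1], binary target
data = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

w, bias = 0.0, 0.0       # parameters to learn
learning_rate = 0.5      # multiplying factor for the error correction

for epoch in range(100):              # one epoch = one pass through the data
    for x, y in data:                 # case updating: adjust after each trial
        p = sigmoid(w * x + bias)     # forward pass
        err = y - p                   # prediction error
        w += learning_rate * err * x  # error-driven weight correction
        bias += learning_rate * err
```

In batch updating, the corrections would instead be accumulated over the whole training set and applied once per epoch.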
complex networks with many layers, incorporating processes for dimension reduction and feature discovery
deep learning
selects subsets of predictors (local windows) and applies the same operation to each subset
convolutional neural networks
main features of neural networks
black box, one-directional flow of data (no cycles), fully connected layers, parametric, nonlinear
popular output layer activation, converts to probabilities
softmax
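The softmax card can be illustrated in a few lines; this sketch subtracts the maximum before exponentiating, a standard numerical-stability step not mentioned on the card:

```python
import math

def softmax(outputs):
    """Convert raw output-layer values into probabilities that sum to 1."""
    m = max(outputs)                             # shift for numerical stability
    exps = [math.exp(v - m) for v in outputs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # largest input gets the largest probability
```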
midrange scaling
midrange = (max+min)/2
(x-midrange)/(range/2)
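Midrange scaling maps values to [−1, 1]: the midrange (max+min)/2 goes to 0, and the half-range (max−min)/2 is the divisor. A small sketch (function name is my own):

```python
def midrange_scale(x, lo, hi):
    """Scale x from [lo, hi] to [-1, 1] around the midrange."""
    midrange = (hi + lo) / 2
    half_range = (hi - lo) / 2
    return (x - midrange) / half_range

midrange_scale(75, 50, 100)  # the midpoint 75 maps to 0
```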
what is the goal of weights and biases (parameters)?
minimizing prediction error
when to stop training (three situations)?
predetermined number of epochs is reached
new weights computed are not too different from the previous values
target misclassification rate is reached
what happens if there is more than one hidden layer?
too many layers can cause overfitting and high computational cost
how many hidden layers does deep learning have?
2 or more hidden layers
how to determine size of hidden layer?
start with the number of predictors, then increase/decrease as needed
model that can deal with any input/output model
universal approximator
what are the circles and lines?
circles are neurons
lines are parameters
how is complexity optimized?
controlling magnitudes of weights