Neural Networks Flashcards
process input signals and can be activated
Neurons
_____ are connected to and receive electrical signals from other ____
Neurons
a mathematical model for learning inspired by biological neural networks
Artificial Neural Network
How do Artificial Neural Networks work?
Artificial neural networks model mathematical functions that map inputs to outputs based on the structure and parameters of the network. The parameters of the network are shaped through training on data.
Different activation functions
- step function
- logistic function
- rectified linear unit (ReLU)
gives 0 before a certain threshold is reached and 1 after the threshold is reached.
step function
outputs any real number from 0 to 1, thus expressing graded confidence in its judgment.
logistic function
allows the output to be any positive value. If the value is negative, the function sets it to 0.
rectified linear unit (ReLU)
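The three activation functions above can be sketched in plain Python (a minimal sketch, not tied to any particular library; the function names are illustrative):

```python
import math

def step(x, threshold=0.0):
    # Outputs 0 before the threshold is reached, 1 at or after it.
    return 1 if x >= threshold else 0

def logistic(x):
    # Outputs a real number between 0 and 1, expressing graded confidence.
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Passes any positive value through; negative values are set to 0.
    return max(0, x)
```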
an algorithm for minimizing loss when training neural networks
gradient descent
Algorithm of gradient descent
- Start with a random choice of weights. This is our naive starting place, where we don’t know how much we should weight each input.
Repeat:
- Calculate the gradient based on all data points: the direction that will lead to decreasing loss. Ultimately, the gradient is a vector (a sequence of numbers).
- Update weights according to the gradient.
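The steps above can be sketched for a one-weight model fit with squared-error loss (a minimal illustration under assumed data of the form y = 2x; the function name and learning rate are illustrative):

```python
import random

def gradient_descent(points, learning_rate=0.01, epochs=1000):
    # Start with a random choice of weight: our naive starting place,
    # where we don't know how much we should weight the input.
    w = random.uniform(-1, 1)
    for _ in range(epochs):
        # Calculate the gradient of mean squared error over ALL data points.
        gradient = sum(2 * (w * x - y) * x for x, y in points) / len(points)
        # Update the weight in the direction that decreases loss.
        w -= learning_rate * gradient
    return w

# Data drawn from y = 2x, so the learned weight should approach 2.
data = [(x, 2 * x) for x in range(1, 6)]
```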
The gradient is calculated based on one point chosen at random. This can be quite inaccurate.
Stochastic Gradient Descent
computes the gradient based on a few points selected at random, thus finding a compromise between computation cost and accuracy
Mini-Batch Gradient Descent
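Mini-batch gradient descent differs from the full-batch version only in which points each gradient uses. A minimal sketch (same assumed y = 2x data as above; with `batch_size=1` this reduces to stochastic gradient descent):

```python
import random

def minibatch_gradient_descent(points, batch_size=2, learning_rate=0.01, epochs=2000):
    w = random.uniform(-1, 1)
    for _ in range(epochs):
        # Compute the gradient from only a few points selected at random,
        # trading a little accuracy per step for much cheaper computation.
        batch = random.sample(points, batch_size)
        gradient = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
        w -= learning_rate * gradient
    return w

data = [(x, 2 * x) for x in range(1, 6)]
```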
Only capable of learning a linearly separable decision boundary
perceptron
Alternative for modeling data non-linearly
multilayer neural networks
an artificial neural network with an input layer, an output layer, and at least one hidden layer
multilayer neural networks
The main algorithm used for training neural networks with hidden layers.
Backpropagation
neural networks that have more than one hidden layer.
deep neural networks
algorithm of backpropagation
- Calculate error for output layer
- For each layer, starting with output layer and moving inwards towards earliest hidden layer:
* Propagate error back one layer. In other words, the current layer that’s being considered sends the errors to the preceding layer.
* Update weights.
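The error propagation above can be sketched for a tiny network with one input, one sigmoid hidden unit, and one sigmoid output (a minimal sketch of the gradient computation under squared-error loss; the helper names are illustrative, and a full trainer would also apply the weight updates):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(x, w1, w2):
    h = sigmoid(w1 * x)   # hidden layer activation
    o = sigmoid(w2 * h)   # output layer activation
    return h, o

def backprop(x, target, w1, w2):
    h, o = forward(x, w1, w2)
    # Calculate the error for the output layer.
    delta_o = (o - target) * o * (1 - o)
    # Propagate the error back one layer: the output layer sends
    # its error to the preceding (hidden) layer.
    delta_h = delta_o * w2 * h * (1 - h)
    # Gradients of the loss L = (o - target)^2 / 2 w.r.t. each weight,
    # used to update the weights.
    grad_w2 = delta_o * h
    grad_w1 = delta_h * x
    return grad_w1, grad_w2
```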
a technique for combating overfitting in neural networks
dropout
In this technique, we temporarily remove units that we select at random during the learning phase.
dropout
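A minimal sketch of dropout applied to a layer's outputs. The scaling of surviving units by 1 / (1 - p) is the common "inverted dropout" variant, an assumption beyond the card itself; it keeps the expected output unchanged:

```python
import random

def dropout(units, p=0.5):
    # During the learning phase, temporarily remove units selected at
    # random: each unit is zeroed with probability p, and survivors are
    # scaled by 1 / (1 - p) so the expected value stays the same.
    return [u / (1 - p) if random.random() >= p else 0.0 for u in units]
```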
encompasses the different computational methods for analyzing and understanding digital images
Computer Vision
applying a filter that adds each pixel value of an image to its neighbors, weighted according to a kernel matrix.
image convolution
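Image convolution as described above can be sketched in plain Python (a minimal, unpadded version, so the output is smaller than the input; the function name is illustrative):

```python
def convolve(image, kernel):
    # Slide the kernel over the image: each output pixel is the sum of a
    # pixel and its neighbors, weighted according to the kernel matrix.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out
```

For example, a 2x2 all-ones kernel sums each pixel with three of its neighbors.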
neural network that uses convolution, usually for analyzing images.
convolutional neural network
reducing the size of an input by sampling from regions in the input
pooling
pooling by choosing the maximum value in each region
max-pooling
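Max-pooling can be sketched as follows (a minimal version that assumes the image dimensions divide evenly by the region size; the function name is illustrative):

```python
def max_pool(image, size=2):
    # Reduce the input by sampling: keep only the maximum value
    # from each non-overlapping size x size region.
    return [
        [max(image[i + di][j + dj] for di in range(size) for dj in range(size))
         for j in range(0, len(image[0]), size)]
        for i in range(0, len(image), size)
    ]
```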
neural network that has connections only in one direction
feed-forward neural network
neural network that generates output that feeds back into its own inputs
recurrent neural network