Neural Networks Perceptron Flashcards
Exam 2
What activation function does the Perceptron use?
Step Function
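For reference, a minimal sketch of a step (Heaviside) activation as a perceptron might apply it; the threshold of 0 is the usual convention, not something stated on the card:

    def step(z):
        # Step activation: fire (1) if the weighted sum is at or above the threshold (0), otherwise stay off (0)
        return 1 if z >= 0 else 0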
Can a single perceptron find a non-linear decision boundary?
NO; a single perceptron just thresholds a weighted sum of its inputs, so its decision boundary is always linear (for two inputs, the boundary w1*x1 + w2*x2 + b = 0 is a straight line). It can only separate classes that are linearly separable.
What is the XOR Problem for the Perceptron? and how was it ultimately solved?
The XOR function has a nonlinear decision boundary that a single-layer perceptron is not capable of learning. It was ultimately solved by using a multi-layer perceptron (a neural network) with at least one hidden layer, allowing the network to learn complex nonlinear relationships and accurately model the XOR function.
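As an illustration, a tiny multi-layer perceptron with one hidden layer can compute XOR. The weights and thresholds below are hand-picked for the sketch (one common choice), not learned:

    def step(z):
        return 1 if z >= 0 else 0

    def xor_mlp(x1, x2):
        # Hidden layer: h1 behaves like OR, h2 behaves like AND (hand-picked weights/thresholds)
        h1 = step(x1 + x2 - 0.5)   # OR(x1, x2)
        h2 = step(x1 + x2 - 1.5)   # AND(x1, x2)
        # Output layer: fire when OR is true but AND is not, i.e. XOR
        return step(h1 - h2 - 0.5)

    # xor_mlp(0,0)=0, xor_mlp(0,1)=1, xor_mlp(1,0)=1, xor_mlp(1,1)=0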
Is a multi-layer perceptron a neural network?
Yes
What is the input to an activation function at the first layer of a neural network?
the weighted sum of the input features (e.g., price, shipping cost, marketing), plus a bias term
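A sketch of that weighted sum for one first-layer neuron; the feature names come from the card's example, but the values and weights are made up for illustration:

    # Hypothetical input features and learned weights (illustrative values only)
    features = {"price": 20.0, "shipping_cost": 5.0, "marketing": 3.0}
    weights  = {"price": 0.4,  "shipping_cost": -0.2, "marketing": 0.7}
    bias = 0.1

    # z is what the first-layer activation function receives
    z = sum(weights[name] * value for name, value in features.items()) + bias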
How is a neuron/unit activated in a neural network? Will all units be activated?
Each neuron computes the weighted sum of its inputs and passes it through the activation function. Not all units are necessarily activated: a unit activates only if its weighted sum exceeds a certain threshold.
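A sketch of a small layer where each unit fires only if its weighted sum exceeds the threshold; with these illustrative numbers, only one of the three units activates:

    inputs = [1.0, 2.0]
    # One weight vector and bias per unit in the layer (illustrative values)
    layer_weights = [[0.5, 0.5], [-1.0, 0.2], [0.3, -0.4]]
    layer_biases  = [0.0, 0.1, 0.0]
    threshold = 0.0

    activations = []
    for w, b in zip(layer_weights, layer_biases):
        z = sum(wi * xi for wi, xi in zip(w, inputs)) + b
        activations.append(1 if z > threshold else 0)

    # activations == [1, 0, 0]: only the first unit's weighted sum exceeds the threshold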
Why are neural networks referred to as a black box?
Because of the complexity of the models, it's hard to trace exactly which features each neuron is working with or what relationships it has found; we can't easily explain how the network arrives at its predictions.
On what types of problems do neural networks outperform other machine learning models like linear regression, decision trees, and SVMs?
Natural Language Processing, Image Recognition, Speech Recognition
What role does back propagation play in neural networks?
Back propagation is the training algorithm used to find the optimal coefficients (weights) of a neural network.
Describe back propagation
It uses the derivative of the cost function to determine how much each coefficient at the output layer contributed to the error.
It then measures how much of that error came from each neuron in the layer below, repeating this backward through the network so the gradient is known for every coefficient in every layer.
Finally, the algorithm takes a gradient descent step, tweaking each coefficient in the direction that reduces the error.
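A minimal sketch of that loop for a 2-2-1 sigmoid network trained on XOR with squared error; the learning rate, initialization, and epoch count are illustrative, not values from the cards:

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(0)
    # 2 inputs -> 2 hidden units -> 1 output, small random starting weights
    w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b_hidden = [0.0, 0.0]
    w_out = [random.uniform(-1, 1) for _ in range(2)]
    b_out = 0.0
    lr = 0.5

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    for epoch in range(10000):
        for x, target in data:
            # Forward pass
            h = [sigmoid(sum(w * xi for w, xi in zip(w_hidden[j], x)) + b_hidden[j]) for j in range(2)]
            y = sigmoid(sum(w * hj for w, hj in zip(w_out, h)) + b_out)

            # Backward pass: derivative of the cost pushed back through each layer (chain rule)
            delta_out = (y - target) * y * (1 - y)                                       # error signal at the output neuron
            delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]  # each hidden neuron's share of the error

            # Gradient descent step: nudge every coefficient against its gradient
            for j in range(2):
                w_out[j] -= lr * delta_out * h[j]
                for i in range(2):
                    w_hidden[j][i] -= lr * delta_hidden[j] * x[i]
                b_hidden[j] -= lr * delta_hidden[j]
            b_out -= lr * delta_out

    # After training, y should move toward the XOR targets (how closely depends on the random start)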