Neural Networks and Deep Learning Flashcards
What Is A Neural Network?
A neural network is a biologically inspired programming paradigm that allows machines to learn from observational data.
Neural Networks and Deep Learning currently provide the best solutions to problems in which areas?
Neural Networks and Deep Learning provide the best solutions to many problems in image recognition, speech recognition, and natural language processing.
How is the application ‘Helping Faces’ using Neural Networks?
Helping Faces is using facial recognition to find missing children and reunite them with their families.
How is IBM using Neural Networks in healthcare?
To browse millions of genetic profiles in a way no single human ever could, in order to classify genetic disorders in patients accurately.
How is Amazon Alexa using Neural Networks?
To recognize, interpret, and act on human speech.
What is the difference between traditional programming and a neural network?
In traditional programming, we tell the computer what to do, breaking big problems up into many smaller, precisely defined solutions that the computer can easily perform.
By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.
Why is it important to obtain a solid understanding of the core principles of Neural Networks and Deep Learning, rather than a vague one?
Because if you’ve understood the core ideas well, you can rapidly understand other new material.
Whereas a human utilizes their primary visual cortex to recognize objects, how does a machine go through image processing?
A machine takes in a large amount of training data and uses this data to infer rules for classifying what an object is and is not.
The more data it has, the more accurate its classifications become.
There is a type of artificial neuron called a perceptron. Can you explain to me how it works, when it was developed, and by whom?
Perceptrons were developed in the 1950s and 1960s by Frank Rosenblatt. A perceptron takes several binary inputs and produces a single binary output. To compute the output, each input is assigned a weight, a real number expressing the importance of that input to the output.
The output is always 0 or 1, determined by whether the weighted sum of the inputs is less than or greater than some threshold value.
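The threshold rule described above can be sketched in a few lines of Python. The particular inputs, weights, and threshold here are illustrative values, not taken from the original text.

```python
def perceptron(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs exceeds the threshold, else 0."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Illustrative example: three binary inputs with weights 6, 2, 2 and threshold 5.
print(perceptron([1, 0, 1], [6, 2, 2], 5))  # weighted sum 8 > 5, so output 1
print(perceptron([0, 1, 0], [6, 2, 2], 5))  # weighted sum 2 <= 5, so output 0
```

Note that both the inputs and the output are binary: the perceptron weighs its evidence and makes a yes/no decision.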
What is an easy way to think of a perceptron?
It is a device that makes decisions by weighing up evidence.
Suppose the weekend is coming up, and you’ve heard that there’s going to be a cheese festival in your city. You like cheese and are trying to decide whether or not to go to this festival. You might make your decision by weighing up three factors:
- Is the weather good?
- Does your significant other want to accompany you?
- Is the festival near public transit? (You don’t own a car)
How would you represent these three factors by corresponding binary variables x1, x2, and x3?
Each variable takes the value 1 if the answer to its question is yes and 0 otherwise. If the weather is good, x1 would be 1; if it were storming, x1 would be 0.
What is the rule that Frank Rosenblatt proposed for computing the output, and how does it work?
Frank Rosenblatt proposed introducing weights, real numbers expressing the importance of the respective inputs to the output. The output, 0 or 1, is then determined by whether the weighted sum ∑j wj xj is below or above a threshold value.
How does dropping the threshold value increase the likelihood that an output is a positive result?
Because a lower threshold makes the perceptron more willing to output 1: the weighted evidence needs to clear a lower bar, so more combinations of inputs produce a positive result.
∑j wj xj > threshold is quite cumbersome. We can make two notational changes to simplify it. What are they?
The first change is to write ∑j wj xj as a dot product,
so w ⋅ x, where w and x are vectors whose components are weights and inputs, respectively.
The second change is to move the threshold value to the other side of the inequality, and to replace it by what is known as the perceptron’s bias, b ≡ −threshold.
So it can be written as
output = { 0 if w⋅x + b ≤ 0
         { 1 if w⋅x + b > 0
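The bias form can be applied to the cheese-festival decision above. The specific weights (weather mattering most) and the bias value are illustrative assumptions, not given in the text.

```python
def perceptron(inputs, weights, bias):
    """Output 1 if w.x + b > 0, else 0 -- the bias form of the perceptron rule."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# x1: is the weather good?  x2: does your partner want to come?  x3: near transit?
weights = [6, 2, 2]   # weather matters most to you (illustrative choice)
bias = -5             # equivalent to a threshold of 5, since b = -threshold

print(perceptron([1, 0, 0], weights, bias))  # good weather alone: 6 - 5 > 0, go (1)
print(perceptron([0, 1, 1], weights, bias))  # everything but weather: 4 - 5 <= 0, stay home (0)
```

With these weights, good weather alone is enough to send you to the festival, while the other two factors combined are not; choosing different weights or a different bias encodes a different decision-making policy.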