VI: Artificial Neural Networks Flashcards

1
Q

What are artificial neural networks?

A

An artificial neural network (ANN) is a group of artificial neurons connected in a network topology. Like other machine learning methods, neural networks apply algorithms to train models on data sets, but they do so using many connected computational units, referred to as neurons. They are inspired by the structure of the brain and can be viewed as implementing complex machine learning algorithms.

2
Q

What is the history of ANN?

A

Neural networks are models of how neural systems operate. The basic units are neurons, and the most common neuron model is essentially the same as the first one, from 1943, which is a nice tribute to the work that preceded the field. McCulloch and Pitts created the first ANN model: a computational model called threshold logic, with an artificial neuron definition that is still commonly used in many neural net systems. This neuron maps a set of inputs to an output.

3
Q

What do the simplest ANN networks consist of? How do more complex networks look?

A

The simplest networks consist of a single artificial neuron with a set of inputs and a single output. More complex networks are multi-layered: the neurons are grouped into layers, which can be fully connected to one another.

4
Q

What are perceptrons and support vector machines?

A

They are single-layer neural networks.

5
Q

When are ANN useful?

A

They are useful when the complexity of the data or task makes it impractical to craft a solution by hand. They can infer functions from observations, and they can handle data-processing tasks such as filtering and clustering.

6
Q

What are classical ANN applicable in?

A

They are applicable in fields where data sets are multi-dimensional but models can be trained with a small number of layers:
- Predicting the stock market
- Loan applications
- Predicting winning teams
- Image recognition

7
Q

What is a McCulloch-Pitts neuron?

A

A McCulloch-Pitts neuron has a set of inputs, one of which is a bias, and an output. The bias can be viewed as a constant term added to the weighted sum, acting as a threshold that determines whether the neuron activates. Bias is applied so the mathematical computation for training the network and creating a model can be carried out; it helps the network train, since shifting the threshold lets a neuron fire or not fire for the same input values.
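
A minimal sketch of such a threshold neuron in Python (the weights, bias value, and AND example are illustrative, not taken from the original model):

```python
# McCulloch-Pitts-style threshold neuron (illustrative values).
def mp_neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus the constant bias term.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    # The neuron fires (1) only if the activation reaches the threshold 0.
    return 1 if activation >= 0 else 0

# With bias -1.5 the neuron fires only when both inputs are 1 (logical AND);
# raising the bias to -0.5 makes it fire for the same inputs (logical OR).
print(mp_neuron([1, 1], [1.0, 1.0], -1.5))  # 1
print(mp_neuron([1, 0], [1.0, 1.0], -1.5))  # 0
```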

8
Q

What is an activation function? What different types are there?

A

An activation function is a mathematical function applied to a neuron's activation level to decide whether the neuron becomes active. It maps the weighted sum of the neuron's inputs through the function to produce the neuron's output. Common choices (sketched in code below):
- Sigmoid: the most common activation function, easy to analyze and easy to calculate.
- Hyperbolic tangent
- Heaviside step
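
A sketch of the three functions listed above, in plain Python (standard library only):

```python
import math

def sigmoid(x):
    # S-shaped curve mapping any real input to the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Maps any real input to the range (-1, 1), centered at 0.
    return math.tanh(x)

def heaviside_step(x):
    # Hard threshold: 0 below zero, 1 at or above zero.
    return 1.0 if x >= 0 else 0.0

for x in (-2.0, 0.0, 2.0):
    print(x, sigmoid(x), tanh(x), heaviside_step(x))
```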

9
Q

Why apply bias?

A

Bias is applied to carry out the mathematical computation for training the network and creating a model over the data set. The bias input cannot be 0: it is multiplied by a (randomly initialized) weight, so an input of 0 would always make the product vanish, which is why 0 does not make any sense.

10
Q

What is a sigmoid activation function?

A

Sigmoid is the most common activation function, easy to analyze and easy to calculate. It maps the weighted sum of inputs to a value between 0 and 1, allowing the output to represent a probability or a binary decision. The function has an S-shaped curve, gradually transitioning from 0 to 1 as the input increases. It is defined as f(x) = 1 / (1 + exp(-x)). The sigmoid activation function is useful for binary classification problems, but it suffers from vanishing gradients and tends to saturate for extreme input values, which can hinder learning in deep neural networks.
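
A short sketch of the definition and of the vanishing-gradient behavior: the derivative f'(x) = f(x)(1 - f(x)) peaks at 0.25 and shrinks toward 0 for extreme inputs (saturation):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # Derivative f'(x) = f(x) * (1 - f(x)); its maximum is 0.25 at x = 0.
    s = sigmoid(x)
    return s * (1.0 - s)

# The gradient vanishes for large |x|, which slows learning in deep nets.
for x in (0.0, 2.0, 5.0, 10.0):
    print(f"x={x:5.1f}  f(x)={sigmoid(x):.5f}  f'(x)={sigmoid_grad(x):.5f}")
```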

11
Q

What is a hyperbolic tangent?

A

The hyperbolic tangent (tanh) activation function is a non-linear activation function commonly used in neural networks. It maps the weighted sum of inputs to a value between -1 and 1, providing a smooth transition between negative and positive values. The tanh function is similar to the sigmoid function but centered at 0, making it suitable for tasks where the output range needs to be symmetric around zero. It retains the non-linear properties of the sigmoid function but avoids some of its limitations. However, like the sigmoid function, the tanh function can also suffer from vanishing gradients for extreme input values.
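
A quick check of how tanh relates to the sigmoid: tanh(x) = 2 * sigmoid(2x) - 1, i.e. a sigmoid rescaled and shifted to be centered at 0:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# tanh is a rescaled, zero-centered sigmoid: tanh(x) = 2 * sigmoid(2x) - 1.
for x in (-1.0, 0.0, 1.0):
    print(math.tanh(x), 2.0 * sigmoid(2.0 * x) - 1.0)
```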

12
Q

What is a single-layer neural network?

A

Single-layer neural networks, also known as perceptrons, consist of only one layer of artificial neurons. They are the simplest form of neural networks and can be used for binary classification tasks. Each neuron in the network receives input signals, applies weights to them, and computes a weighted sum. Then, an activation function is applied to the sum to produce the output of the neuron. The weights and biases in the network are adjusted during training to optimize the model’s performance. However, single-layer neural networks are limited in their ability to handle complex patterns and non-linear relationships.
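
A minimal sketch of a perceptron with the classic perceptron learning rule (the learning rate, epoch count, and OR data set are illustrative choices):

```python
# Perceptron: weighted sum + step activation, trained with the classic
# rule w <- w + lr * (target - prediction) * x.
def predict(weights, bias, x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation >= 0 else 0

def train(samples, lr=0.1, epochs=20):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable toy problem: logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 1, 1, 1]
```

A perceptron converges on linearly separable data like this but, as noted above, cannot learn non-linear patterns such as XOR.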

13
Q

What are multi-layer neural networks?

A

Multi-layer neural networks, also known as deep neural networks, consist of multiple layers of artificial neurons. These networks are designed to handle complex patterns and learn intricate representations. The input layer receives input signals, which are then passed through hidden layers that perform computations using weighted connections. Each hidden layer applies an activation function to produce output. The final layer, known as the output layer, generates the final predictions or outputs. Deep neural networks employ backpropagation to adjust the weights and biases during training, optimizing the network’s ability to learn and make accurate predictions across various tasks.
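
A sketch of one forward pass through a small network with one hidden layer (the layer sizes and random weights are illustrative; learning them is the subject of backpropagation below):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron outputs the sigmoid of its weighted input sum plus bias.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# 2 inputs -> 3 hidden neurons -> 1 output neuron (random toy weights).
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b_hidden = [0.0, 0.0, 0.0]
w_out = [[random.uniform(-1, 1) for _ in range(3)]]
b_out = [0.0]

hidden = layer([0.5, -0.2], w_hidden, b_hidden)   # hidden layer activations
output = layer(hidden, w_out, b_out)              # final prediction
print(output)
```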

14
Q

What is supervised learning and processing?

A

Supervised learning is a machine learning approach where an algorithm learns from labeled training data. The algorithm is trained to predict or classify new, unseen instances based on the patterns observed in the labeled examples. The process involves mapping input data to corresponding output labels, and during training the algorithm adjusts its internal parameters to minimize the difference between predicted and actual outputs. In neural networks, supervised learning with backpropagation follows the principle of gradient descent: the weights are modified along the negative gradient of an error measure, minimizing the gap between the calculated value and the desired value (see the sketch below). Supervised learning is commonly used in tasks such as regression (predicting continuous values) and classification (predicting class labels).
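
A minimal sketch of that gradient-descent principle on a one-parameter model (the data, learning rate, and step count are illustrative):

```python
# Gradient descent on a one-parameter linear model y = w * x,
# minimizing the squared error against labeled examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labels follow y = 2x
w, lr = 0.0, 0.05

for step in range(100):
    # For E = sum((w*x - y)^2), the gradient is dE/dw = sum(2*(w*x - y)*x).
    grad = sum(2.0 * (w * x - y) * x for x, y in data)
    w -= lr * grad  # move the weight along the negative gradient

print(w)  # converges toward the desired value 2.0
```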

15
Q

What is backpropagation for training neural networks?

A

Backpropagation is a popular training algorithm for neural networks. It involves a two-phase process: forward propagation and backward propagation. In forward propagation, input data is fed through the network, and the outputs are computed. Then, the error between predicted and actual outputs is calculated. In backward propagation, the error is propagated back through the network, adjusting the weights and biases of each neuron based on their contribution to the error. This iterative process continues until the network learns to minimize the error and make accurate predictions. Backpropagation allows neural networks to learn and improve their performance through adjusting their internal parameters.
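
A compact sketch of both phases for the simplest possible case, a single sigmoid neuron and one labeled example (deeper networks repeat the backward step layer by layer via the chain rule):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b, lr = 0.5, 0.0, 1.0   # illustrative initial weight, bias, learning rate
x, target = 1.0, 0.0       # one labeled training example

for step in range(50):
    # Forward propagation: compute the output and the error.
    out = sigmoid(w * x + b)
    error = 0.5 * (out - target) ** 2
    # Backward propagation: the chain rule gives
    # dE/dw = (out - target) * out * (1 - out) * x.
    delta = (out - target) * out * (1.0 - out)
    w -= lr * delta * x
    b -= lr * delta

print(out, error)  # both shrink toward 0 as training proceeds
```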

16
Q

How does overfitting and underfitting affect ANN?

A

Overfitting can happen, for example, when a network learns to handle one particular sequence of occurrences but cannot deviate from that sequence. A symptom of overfitting is making accurate predictions for the training set while failing to make accurate predictions for the validation set. Underfitting is when the model cannot adequately capture the underlying structure of the data.

17
Q

How does clustering work in ANN?

A

Clustering techniques in training neural networks involve grouping data points based on their similarities to identify patterns or structures. One commonly used technique is K-means clustering, which partitions the data into K clusters. During training, the neural network can use the cluster assignments as additional input features or for unsupervised pre-training. This can help improve the network’s ability to capture complex relationships within the data. Clustering techniques can enhance the learning process by providing insights into the data distribution and aiding in data preprocessing or feature extraction before training the neural network.
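
A minimal K-means sketch on one-dimensional data with K = 2 (the points and starting centroids are illustrative):

```python
# Minimal K-means with K = 2 on 1-D points (illustrative data).
points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
centroids = [0.0, 10.0]  # illustrative starting guesses

for _ in range(10):
    # Assignment step: each point joins the cluster of its nearest centroid.
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda k: abs(p - centroids[k]))
        clusters[nearest].append(p)
    # Update step: each centroid moves to the mean of its cluster.
    centroids = [sum(c) / len(c) if c else centroids[k]
                 for k, c in enumerate(clusters)]

print(centroids)  # roughly [1.0, 5.07]: the two groups in the data
```

The resulting cluster assignments could then be fed to a network as extra input features, as described above.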

18
Q

How do self-organizing maps work?

A

A self-organizing map (SOM), also called a Kohonen map, is a neural network algorithm that uses unsupervised learning to cluster data into a number of clusters. A SOM is a low-dimensional, typically two-dimensional, map of the problem space. It uses competitive learning and consists of neurons, each with an associated weight vector.
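
A sketch of the competitive-learning update at the core of a SOM. For brevity this only moves the best-matching unit; a full SOM also moves the BMU's map neighbors, with a neighborhood radius and learning rate that shrink over time (map size and data are illustrative):

```python
import random

random.seed(1)

# A tiny 1-D map of 4 units, each holding a 2-D weight vector.
units = [[random.random(), random.random()] for _ in range(4)]

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def som_step(x, lr=0.3):
    # Competitive step: find the best-matching unit (BMU) for input x.
    bmu = min(range(len(units)), key=lambda i: dist2(units[i], x))
    # Update step: pull the BMU's weight vector toward the input.
    units[bmu] = [w + lr * (xi - w) for w, xi in zip(units[bmu], x)]

data = [[0.1, 0.1], [0.9, 0.9], [0.15, 0.05], [0.85, 0.95]]
for _ in range(50):
    som_step(random.choice(data))

print(units)  # units have drifted toward the two clusters in the data
```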

19
Q

What is Hebbian learning?

A

Hebbian learning is a rule-based learning algorithm for neural networks that strengthens the connections between neurons when they consistently fire together. It is based on the principle of synaptic plasticity, where the strength of a connection between two neurons is increased if they are active simultaneously. Hebbian learning enables neurons to learn and adapt based on the correlation of their activities, reinforcing connections that contribute to desired network behavior. However, it lacks specificity and may lead to overfitting. Modern variations, such as spike-timing-dependent plasticity (STDP), refine the Hebbian rule by considering the precise timing of neuron firing to better capture temporal relationships.
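
A minimal sketch of the basic Hebbian update delta_w = lr * pre * post ("neurons that fire together wire together"; the rate and activity trace are illustrative):

```python
# Basic Hebbian rule: a weight grows when the pre- and post-synaptic
# neurons are active at the same time (delta_w = lr * pre * post).
lr = 0.1
w = 0.0

# Pairs of (pre, post) activity over five time steps.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]
for pre, post in activity:
    w += lr * pre * post

print(w)  # ~0.3: only the three co-active steps strengthened the connection
```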

20
Q

What is a single-level neuron network?

A

A single-layer neural network is called a perceptron. Perceptrons are considered an early version of ANN. A simple perceptron has, for example, two input values with associated weights and one output. Although a single-layer neural network has one input layer and one output layer, it is called single-layer because no computation is performed in the input layer.

21
Q

How does a multi-layer perceptron work?

A

It feeds forward: values are fed through the network from the input, layer by layer, to the output, and there are no recurrent connections between neurons. Many neurons are joined together this way to carry out complex computations. Multi-layer feedforward neural networks are like directed graphs whose nodes are neurons and whose edges link neurons so that the output of a neuron in one layer is connected to an input of a neuron in the next layer.

22
Q

What are the problems with ANNs?

A

Overfitting and Underfitting