Neural Networks Flashcards

1
Q

What is a neural network?

A

A neural network is a computational model inspired by the organization and functioning of the human brain. It is a core component of artificial intelligence (AI) and machine learning, designed to process and analyze complex data, recognize patterns, and make decisions or predictions. Neural networks consist of interconnected nodes, called neurons, organized into layers. These networks can be used for a wide range of tasks, including image recognition, natural language processing, and autonomous control.

2
Q

How are neural networks related to AI and machine learning?

A

Neural networks play a pivotal role in the fields of AI and machine learning. They are the foundation for many AI applications, providing the ability to learn from data and adapt to new information. Neural networks excel at tasks such as image classification, speech recognition, and recommendation systems, making them a fundamental technology in the advancement of AI and machine learning.

3
Q

What is a perceptron?

A

A perceptron is one of the simplest forms of a neural network. It serves as a binary linear classifier, taking multiple inputs, applying weights to those inputs, summing them up, and then passing the result through an activation function to produce an output. Perceptrons are often used as building blocks to understand the basic principles of neural network operation.
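
A minimal sketch of a single perceptron in Python; the weights, bias, and AND-gate example below are illustrative values chosen for this card, not part of any particular library:

import doctest  # not required; plain functions only

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs plus bias
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step activation: output 1 if the sum is positive, else 0
    return 1 if total > 0 else 0

# Example: a perceptron acting as a logical AND gate (illustrative weights)
print(perceptron([1, 1], weights=[0.5, 0.5], bias=-0.7))  # -> 1
print(perceptron([1, 0], weights=[0.5, 0.5], bias=-0.7))  # -> 0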

4
Q

How is a perceptron related to neural networks?

A

Perceptrons are building blocks for more complex neural networks. While they are limited to linearly separable problems, they provide a foundational understanding of how neurons process information through weighted inputs and activation functions. Neural networks extend the concept of perceptrons by adding multiple layers and non-linear activation functions, enabling them to solve more intricate problems.

5
Q

How do neurons in a neural network work? What are the characteristics?

A

Neurons in a neural network receive inputs, each multiplied by a weight, which determines their contribution to the neuron’s output. These weighted inputs are summed, a bias term is added, and the result is then passed through an activation function. The activation function introduces non-linearity, allowing the neuron to capture complex relationships in the data. This process is fundamental to the operation of neurons in a neural network.
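
As a rough illustration, the same computation with a non-linear (sigmoid) activation might look like this; the input, weight, and bias values are made up for the example:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation introduces non-linearity
    return 1 / (1 + math.exp(-z))

print(neuron([0.2, 0.8], weights=[1.5, -0.6], bias=0.1))  # a value between 0 and 1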

6
Q

What is a bias in a neural network?

A

A bias is an additional parameter associated with each neuron in a neural network. It allows the neuron to shift the activation function’s output. The bias term is crucial for fine-tuning a neuron’s responsiveness and ensuring that the network can model various patterns and data distributions effectively.

7
Q

What are weights in neural networks?

A

Weights are parameters assigned to the connections between neurons in a neural network. They determine the strength of influence that each input has on a neuron’s output. Adjusting these weights during training is how the network learns to make accurate predictions or classifications.

8
Q

What is an activation in a neural network?

A

An activation in a neural network is the output value a neuron produces after processing its inputs: the result of applying the activation function to the neuron’s weighted sum of inputs plus bias.

9
Q

What is an activation function in a neural network?

A

An activation function is a mathematical function applied to the weighted sum of inputs and bias in a neuron. It introduces non-linearity into the neuron’s output, enabling the network to model complex, non-linear relationships in the data. Common activation functions include sigmoid, ReLU (Rectified Linear Unit), and tanh (hyperbolic tangent).
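
A quick sketch of the three functions named above, written here with NumPy purely for illustration:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))   # squashes values into (0, 1)

def relu(z):
    return np.maximum(0, z)       # passes positives through, zeros out negatives

def tanh(z):
    return np.tanh(z)             # squashes values into (-1, 1), zero-centered

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), relu(z), tanh(z))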

10
Q

What are some different activation functions? What are the pros and cons of each?

A

There are several activation functions used in neural networks. Sigmoid produces outputs in the range (0, 1), which can represent probabilities, but it can saturate and contribute to vanishing gradients. ReLU is computationally efficient and helps mitigate the vanishing gradient problem, but it can suffer from the dying ReLU problem. Tanh produces outputs in the range (-1, 1) and is zero-centered, though it can also saturate. Each activation function has advantages and disadvantages, so the choice depends on the task.

11
Q

Given a bunch of neurons, how is a neural network constructed?

A

A neural network is constructed by organizing neurons into layers. Typically, there is an input layer to receive data, one or more hidden layers to process and learn from the data, and an output layer to produce the final results or predictions. The connections between neurons are defined by weights, and each neuron has its activation function and bias.
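
A minimal NumPy sketch of this construction, assuming a small network with 3 inputs, one hidden layer of 4 neurons, and 1 output (sizes and random values chosen only for illustration):

import numpy as np

rng = np.random.default_rng(0)

# Weights and biases for each layer (randomly initialized for the sketch)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input (3) -> hidden (4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden (4) -> output (1)

def relu(z):
    return np.maximum(0, z)

def forward(x):
    h = relu(W1 @ x + b1)        # hidden layer: weighted sum, bias, activation
    return W2 @ h + b2           # output layer (no activation, e.g. for regression)

print(forward(np.array([0.5, -1.0, 2.0])))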

12
Q

What are feed-forward neural networks?

A

Feed-forward neural networks, the most common example being the multilayer perceptron (MLP), are a type of neural network architecture in which information flows in one direction, from the input layer through any hidden layers to the output layer. These networks have no feedback loops or connections that create cycles, making them suitable for a wide range of supervised learning tasks.

13
Q

How should weights be initialized?

A

Weights in a neural network are typically initialized randomly, but careful consideration must be given to the initialization method. Techniques like Xavier/Glorot and He initialization are often used to promote efficient training by ensuring gradients do not vanish or explode during backpropagation.
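
A hedged sketch of both schemes with NumPy; the layer sizes below are placeholders:

import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128   # placeholder layer sizes

# Xavier/Glorot (uniform variant): suited to sigmoid/tanh activations
limit = np.sqrt(6 / (fan_in + fan_out))
W_xavier = rng.uniform(-limit, limit, size=(fan_out, fan_in))

# He initialization: scales variance by fan_in, suited to ReLU activations
W_he = rng.normal(0, np.sqrt(2 / fan_in), size=(fan_out, fan_in))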

14
Q

What are deep networks?

A

Deep networks refer to neural networks with multiple hidden layers. The depth of a network allows it to capture increasingly abstract and hierarchical features in the data, making it highly effective for complex tasks such as image recognition and natural language processing.

15
Q

What is the width and depth of a network?

A

Width refers to the number of neurons in a layer, while depth indicates the number of layers in a neural network. The combination of width and depth determines the network’s capacity to represent and learn from data.

16
Q

How does a neural network learn?

A

Neural networks learn by adjusting their weights during training. This is typically achieved through optimization techniques like gradient descent, where the network’s error is minimized by iteratively updating weights based on the gradient of the error with respect to each weight.
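
The core update rule, sketched here for a single weight fitting y = 2x from one example; the data point and learning rate are illustrative:

# Minimal gradient descent on one weight, minimizing squared error
x, y_true = 3.0, 6.0
w, learning_rate = 0.0, 0.05

for step in range(50):
    y_pred = w * x                      # forward pass
    grad = 2 * (y_pred - y_true) * x    # derivative of (y_pred - y_true)**2 w.r.t. w
    w -= learning_rate * grad           # step against the gradient

print(w)  # approaches 2.0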

17
Q

Are neural networks a form of supervised learning, unsupervised learning, or reinforcement learning?

A

Neural networks can be applied to all three forms of learning. In supervised learning, they are used for tasks with labeled data; in unsupervised learning, they can discover patterns and structures in unlabeled data; and in reinforcement learning, they learn to make sequential decisions through interactions with an environment.

18
Q

What are the characteristics of a deep neural network?

A

Deep neural networks are characterized by their depth, which allows them to capture hierarchical features in data. They can automatically learn representations from raw data, making them powerful for tasks like feature extraction and abstraction.

19
Q

What is the input layer?

A

The input layer is the initial layer of a neural network where data is introduced into the network. It contains neurons corresponding to the features or input dimensions of the data.

20
Q

What is the hidden layer?

A

Hidden layers are intermediary layers between the input and output layers. They perform transformations on the input data, gradually abstracting and representing complex patterns.

21
Q

What is the output layer?

A

The output layer is the final layer of a neural network. It produces the network’s predictions or results based on the processed information from the hidden layers.

22
Q

What is a fully connected layer?

A

A fully connected layer, also known as a dense layer, is a layer where each neuron is connected to every neuron in the previous and subsequent layers. This dense connectivity allows for the modeling of complex relationships within the data.

23
Q

What are tensors in relation to neural networks?

A

Tensors are multi-dimensional arrays that serve as the primary data structure for input, output, and intermediate data within neural networks. They enable efficient manipulation and storage of data at various stages of network computation.
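
For instance, with NumPy arrays standing in for tensors (deep learning frameworks provide their own tensor types; the shapes below are illustrative):

import numpy as np

vector = np.zeros(10)               # 1-D tensor: a single feature vector
matrix = np.zeros((32, 10))         # 2-D tensor: a batch of 32 feature vectors
images = np.zeros((32, 28, 28, 3))  # 4-D tensor: a batch of 32 RGB images, 28x28 pixels
print(vector.ndim, matrix.ndim, images.ndim)  # 1 2 4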

24
Q

How does one prevent a neural network from collapsing? What are the causes?

A

To prevent a neural network from collapsing during training, it’s essential to choose appropriate activation functions, initialize weights carefully, use techniques like batch normalization, and fine-tune hyperparameters. Collapsing can occur due to issues such as vanishing gradients, poor weight initialization, or overly aggressive learning rates.

25
Q

How do vector databases relate to neural networks?

A

Vector databases store the embedding vectors that neural networks produce and support fast similarity (nearest-neighbor) search over them. This makes them useful alongside neural networks for applications such as semantic search, recommendation, and retrieval-augmented generation, where relevant items are found by comparing embeddings rather than exact keys.

26
Q

How are artificial neurons related to real neurons in humans and biology?

A

Artificial neurons are simplified models inspired by the basic functioning of biological neurons. While they don’t capture the full complexity of real neurons, they share the fundamental concept of processing input and producing an output.

27
Q

What are artificial neurons?

A

Artificial neurons, also known as nodes or units, are the fundamental building blocks of neural networks. They are designed to mimic the basic information processing of biological neurons, computing a weighted sum of their inputs plus a bias and applying an activation function to the result.

28
Q

What is softmax? What does it do in a neural network?

A

Softmax is an activation function used in the output layer of a neural network for multi-class classification tasks. It transforms raw scores or logits into probability distributions, making it suitable for determining class probabilities in scenarios where there are multiple possible classes.
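
A compact sketch of softmax with the usual max-subtraction for numerical stability; the logits are made-up example values:

import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then normalize the exponentials
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))  # probabilities that sum to 1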

29
Q

How do neural networks relate to finance?

A

Neural networks are widely applied in finance for tasks such as stock price prediction, risk assessment, fraud detection, and algorithmic trading. They use historical data and market indicators to make informed decisions, manage portfolios, and optimize trading strategies.

30
Q

Can you give an example of neural networks used in finance?

A

An example of neural networks in finance is the use of recurrent neural networks (RNNs) to predict stock prices based on historical price movements and news sentiment analysis. RNNs can capture temporal dependencies in financial data, aiding in more accurate price forecasting and investment decision-making.