Lecture 8: Deep Neural Networks Flashcards

1
Q

What is a neural network?

A

A neural network is a set of interconnected neurons (simple processing units). By adjusting the strengths of these connections, it is able to learn new behaviours.

2
Q

Name 3 milestones in the history of NNs.

A

The electronic brain (McCulloch & Pitts, 1943) was the first milestone. It was not able to learn.

The multi-layer perceptron trained with backpropagation (Rumelhart, Hinton & Williams, 1986) made it possible to learn the weights of hidden layers.

Deep neural networks (Hinton, 2006) introduced layer-wise pre-training, which made deep, pre-trained models practical.

3
Q

What are the 3 neuron activation functions? Is there another alternative?

A

Sigmoid
Tanh
ReLU

An alternative is to apply a layer-level function such as SoftMax, which normalises a whole layer's outputs into a probability distribution over the classes.
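
A minimal NumPy sketch of the three per-neuron activations and of SoftMax as a layer-level alternative; the example input values are arbitrary:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))   # squashes each value to (0, 1)

  def tanh(z):
      return np.tanh(z)                 # squashes each value to (-1, 1)

  def relu(z):
      return np.maximum(0.0, z)         # zero for negative inputs, identity otherwise

  def softmax(z):
      # Layer-level alternative: normalises a whole layer's outputs to a distribution.
      e = np.exp(z - np.max(z))         # subtract the max for numerical stability
      return e / e.sum()

  z = np.array([-1.0, 0.0, 2.0])
  print(sigmoid(z), tanh(z), relu(z), softmax(z))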

4
Q

Give two examples of Neural Network topologies.

A

The first is the perceptron, which has 2 layers: input and output.

The second is the multi-layer perceptron (MLP), which is the same as the perceptron but adds one or more hidden layers between input and output.
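
A hedged NumPy sketch of both forward passes; the layer sizes and the sigmoid non-linearity are arbitrary choices for the illustration, not the lecture's settings:

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  x = np.random.rand(4)                 # toy input with 4 features

  # Perceptron: input layer connected directly to the output layer.
  W_out = np.random.randn(1, 4)
  y_perceptron = sigmoid(W_out @ x)

  # Multi-layer perceptron: an extra hidden layer between input and output.
  W_hidden = np.random.randn(5, 4)      # 4 inputs -> 5 hidden units
  W_out2 = np.random.randn(1, 5)        # 5 hidden units -> 1 output
  y_mlp = sigmoid(W_out2 @ sigmoid(W_hidden @ x))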

5
Q

What is meant by the term “training parameters”?

A

Finding/changing the weights (and biases) of the network so that its error on the training data is minimised.
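
As an illustration (not from the lecture), a single-parameter gradient-descent update for a toy linear neuron with squared-error loss; the learning rate and data are made up:

  import numpy as np

  x, target = 2.0, 6.0
  w = 0.5                               # initial weight
  lr = 0.1                              # learning rate (arbitrary for this sketch)

  for _ in range(20):
      y = w * x                         # forward pass
      grad = 2 * (y - target) * x       # d/dw of the loss (y - target)^2
      w -= lr * grad                    # "training the parameter": change the weight

  print(w)                              # converges towards target / x = 3.0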

6
Q

Give three learning algorithms that are used in neural networks.

A

Batch gradient descent, which calculates the gradient over the entire training set before each weight update.

Stochastic gradient descent, which calculates the gradient and updates the weights after each individual stimulus/example.

Mini-batch gradient descent, which calculates the gradient over a mini-batch, i.e. a small subset of the training set.
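
A hedged NumPy sketch of the three update schemes for a toy linear model; the data, batch size of 16, and learning rate are invented purely for illustration:

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(100, 3))         # 100 examples, 3 features (toy data)
  y = X @ np.array([1.0, -2.0, 0.5])    # targets from a known linear model
  w = np.zeros(3)
  lr = 0.1

  def grad(Xb, yb, w):
      # Gradient of the mean squared error of a linear model on a (mini-)batch.
      return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

  # Batch gradient descent: one update per pass over the whole training set.
  w -= lr * grad(X, y, w)

  # Stochastic gradient descent: one update per individual example.
  for i in range(len(X)):
      w -= lr * grad(X[i:i+1], y[i:i+1], w)

  # Mini-batch gradient descent: one update per small subset of the data.
  for start in range(0, len(X), 16):
      w -= lr * grad(X[start:start+16], y[start:start+16], w)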

7
Q

What is the difference between the training dataset and test dataset?

A

The training dataset contains the example stimuli used to fit the model's parameters, whereas the test dataset contains held-out stimuli used only to assess the trained model's performance.
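
A hedged sketch of how such a split is commonly made, here with scikit-learn's train_test_split; the toy data and the 80/20 ratio are just examples:

  import numpy as np
  from sklearn.model_selection import train_test_split

  X = np.random.rand(1000, 8)             # toy stimuli: 1000 examples, 8 features
  y = np.random.randint(0, 2, size=1000)  # toy labels

  # Hold out 20% of the data as the test set; the rest is the training set.
  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.2, random_state=42
  )
  print(X_train.shape, X_test.shape)      # (800, 8) (200, 8)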

8
Q

What is overfitting and underfitting?

A

Overfitting is when the model fits the training data too closely and does not generalise well, i.e. training error is very low but test error is high.

Underfitting is when the model is too simple to capture the underlying structure, i.e. both training error and test error remain high.

9
Q

What are examples of optimisation methods for neural networks?

A

Adam
SGD
RMSProp
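
A hedged PyTorch sketch of how these optimisers are typically created; the placeholder model and the hyperparameter values are illustrative, not lecture-prescribed:

  import torch
  import torch.nn as nn

  model = nn.Linear(10, 2)              # placeholder model for the sketch

  # Any one of these can drive the training loop.
  opt_sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
  opt_adam = torch.optim.Adam(model.parameters(), lr=1e-3)
  opt_rmsprop = torch.optim.RMSprop(model.parameters(), lr=1e-3)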

10
Q

How can overfitting be limited?

A

By tuning the regularisation strength (e.g. the L2 weight-decay factor) and the dropout probability.
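
A hedged PyTorch sketch of both knobs; the dropout probability of 0.5 and the weight-decay factor of 1e-4 are illustrative values:

  import torch
  import torch.nn as nn

  # Dropout probability: randomly zeroes activations during training.
  model = nn.Sequential(
      nn.Linear(784, 256),
      nn.ReLU(),
      nn.Dropout(p=0.5),
      nn.Linear(256, 10),
  )

  # Regularisation strength: an L2 penalty via the optimiser's weight_decay term.
  optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)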

11
Q

Give an example of a benchmark dataset for neural networks.

A

MNIST consists of handwritten digits (10 categories), with 60,000 training images and 10,000 test images.

CIFAR-10 consists of colour images in 10 classes, with 50,000 training images and 10,000 test images.

CIFAR-100 has 100 classes with 600 images each, grouped into 20 superclasses.

ImageNet is the most important and most complex of these benchmarks.
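
A hedged sketch of loading two of these benchmarks with torchvision (assuming torchvision is installed; the ./data path is arbitrary):

  from torchvision import datasets, transforms

  to_tensor = transforms.ToTensor()

  # MNIST: 60,000 training / 10,000 test images of handwritten digits.
  mnist_train = datasets.MNIST("./data", train=True, download=True, transform=to_tensor)
  mnist_test = datasets.MNIST("./data", train=False, download=True, transform=to_tensor)

  # CIFAR-10: 50,000 training / 10,000 test colour images in 10 classes.
  cifar_train = datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor)

  print(len(mnist_train), len(mnist_test), len(cifar_train))  # 60000 10000 50000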

12
Q

What is a convolutional neural network?

A

It is a neural network that replaces human-defined (hand-crafted) features with features learned directly from the data; the learned feature extractors are called filters.

13
Q

What was the first convolutional network called?

A

LeNet

14
Q

How are convolutional feature maps used?

A

Feature maps are computed sequentially, layer by layer: a small filter window is slid across each input feature map, and the filter responses form the feature maps of the next layer.
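
A hedged NumPy sketch of the sliding-window idea (no padding, stride 1); the 3x3 vertical-edge filter is just an example:

  import numpy as np

  image = np.random.rand(8, 8)          # toy single-channel feature map
  kernel = np.array([[1, 0, -1],
                     [1, 0, -1],
                     [1, 0, -1]])       # example 3x3 filter (vertical edges)

  kh, kw = kernel.shape
  out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))

  # Slide the filter window over the input map; each response becomes one
  # element of the output feature map.
  for i in range(out.shape[0]):
      for j in range(out.shape[1]):
          out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)

  print(out.shape)                      # (6, 6) output feature map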

15
Q

How can the spatial resolution of feature maps be reduced?

A

By pooling/sub-sampling.
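
A minimal NumPy sketch of 2x2 max pooling, which halves the spatial resolution of a feature map:

  import numpy as np

  fmap = np.random.rand(6, 6)           # toy 6x6 feature map

  # 2x2 max pooling with stride 2: keep the maximum of each 2x2 block.
  pooled = fmap.reshape(3, 2, 3, 2).max(axis=(1, 3))

  print(fmap.shape, "->", pooled.shape)  # (6, 6) -> (3, 3)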

16
Q

Name the following two CNN topologies:

  1. input -> convolution -> fully connected -> softmax
  2. input -> (convolution -> pooling) -> fully connected -> softmax
A
  1. Minimal topology
  2. Convolution-pooling module
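
A hedged PyTorch sketch of both topologies for 28x28 single-channel inputs; the channel counts and kernel sizes are illustrative, not the lecture's exact settings:

  import torch.nn as nn

  # 1. Minimal topology: input -> convolution -> fully connected -> softmax
  minimal = nn.Sequential(
      nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolution
      nn.ReLU(),
      nn.Flatten(),
      nn.Linear(8 * 28 * 28, 10),                  # fully connected
      nn.Softmax(dim=1),                           # softmax over the 10 classes
  )

  # 2. Convolution-pooling module:
  #    input -> (convolution -> pooling) -> fully connected -> softmax
  conv_pool = nn.Sequential(
      nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolution
      nn.ReLU(),
      nn.MaxPool2d(2),                             # pooling halves the resolution
      nn.Flatten(),
      nn.Linear(8 * 14 * 14, 10),                  # fully connected
      nn.Softmax(dim=1),
  )

In practice the softmax is usually folded into the loss function, but it is kept explicit here to mirror the two topologies.
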
17
Q

What are the three types of challenges for deep learning in robotic vision? Give two examples of each.

A

Learning challenges (uncertainty estimation, identifying unknowns, incremental learning, class-incremental learning, active learning)

Embodiment challenges (temporal embodiment, spatial embodiment, active vision, manipulation for perception)

Reasoning challenges (object + scene semantics, object + scene geometry, semantics + geometry)