Data Science MODULE 4 Flashcards

1
Q

3 types of neural networks

A

Convolutional neural networks
Recurrent neural networks
Multi-layer perceptrons (MLP) - this is what we will look at in this module

2
Q

When does a neural network become a deep neural network?

A

When there are more than two hidden layers

3
Q

3 rules of thumb for deciding on the number of hidden neurons

A
  1. Somewhere between the number of inputs and outputs
  2. Two thirds the size of the input layer, plus the number of neurons in the output layer
  3. Less than double the size of the input layer
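As a quick sanity check, the three rules can be computed for a hypothetical network; the sizes below (9 inputs, 3 outputs) are made-up example numbers, not from the module:

```python
# Hypothetical network: 9 inputs, 3 outputs (example numbers only)
n_in, n_out = 9, 3

rule1 = (n_out, n_in)                 # somewhere in this range
rule2 = round(2 / 3 * n_in) + n_out   # 2/3 of the input layer, plus outputs
rule3 = 2 * n_in                      # upper bound: twice the input layer

print(rule1)  # (3, 9)
print(rule2)  # 9
print(rule3)  # 18
```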
4
Q

With neural networks, how do we handle categorical data?

A
  • the algorithm can accept this type of data as a response
  • if you want to use it as an input, you must first one-hot encode it
5
Q

Shape of the response data with neural networks

A

It must be a one-dimensional array. There is a ravel() function that can do this:
np.ravel(x)
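A minimal sketch of ravel() in action; the response array is a made-up example:

```python
import numpy as np

# Hypothetical response column shaped (n, 1), e.g. sliced from a DataFrame
y = np.array([[0], [1], [1], [0]])
print(y.shape)         # (4, 1): two-dimensional

y_flat = np.ravel(y)   # flatten to one dimension
print(y_flat.shape)    # (4,)
print(y_flat)          # [0 1 1 0]
```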

6
Q

With neural networks, what is important to remember about the inputs?

A

They must be on the same scale.
There is a StandardScaler class that can be used for this

7
Q

What does sklearn's StandardScaler do?

A

It basically computes the z-score for each input value, based on the mean and the standard deviation of that feature
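A minimal sketch, using made-up numbers, showing that StandardScaler matches the hand-computed z-score (note it uses the population standard deviation, ddof=0):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Example feature matrix (made-up numbers, not from the module)
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

scaled = StandardScaler().fit_transform(X)

# The same result computed by hand: z = (x - mean) / std, per column
manual = (X - X.mean(axis=0)) / X.std(axis=0)
print(np.allclose(scaled, manual))  # True
```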

8
Q

What does the value of a neuron look like again?

A

f(b + the sum of the products of all weights and neuron outputs from the previous layer)
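A sketch of this formula for a single neuron, assuming made-up weights and inputs and a sigmoid as the activation f:

```python
import numpy as np

def sigmoid(z):
    # example activation f; the module could equally use tanh or ReLU
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0, 2.0])   # outputs from the previous layer (made up)
w = np.array([0.1, 0.4, -0.2])   # weights into this neuron (made up)
b = 0.3                          # bias (made up)

# f(b + sum of w_i * x_i)
value = sigmoid(b + np.dot(w, x))
print(round(value, 4))  # 0.3894
```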

9
Q

What does the activation function in the output layer look like for regression and classification problems?

A

Regression - normally a linear function
Classification - a sigmoid or softmax function
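Minimal sketches of the two classification output activations; the logit values are made up. Sigmoid maps one logit to a single probability (binary case), while softmax turns a vector of logits into probabilities that sum to 1 (multi-class case):

```python
import numpy as np

def sigmoid(z):
    # binary classification: one probability in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # multi-class: a probability per class, summing to 1
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

print(sigmoid(0.0))                       # 0.5
probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs.sum())                        # 1.0
```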

10
Q

Just know that tanh learns faster than a sigmoid function

A

Because of the vanishing gradient. This is especially an issue when it occurs in the early layers
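A quick illustration of why: the maximum gradient of sigmoid is 0.25, versus 1.0 for tanh, so stacked sigmoid layers shrink the backpropagated error signal much faster:

```python
import numpy as np

# Both derivatives peak at z = 0
z = 0.0
sig = 1.0 / (1.0 + np.exp(-z))
sig_grad = sig * (1.0 - sig)        # sigmoid'(0) = 0.25
tanh_grad = 1.0 - np.tanh(z) ** 2   # tanh'(0)   = 1.0

print(sig_grad, tanh_grad)  # 0.25 1.0
```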

11
Q

What is the issue if your learning rate is set too high?

A

Then you can keep overshooting the minimum. So you actually want large steps initially, and then progressively smaller ones
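A toy illustration of overshooting, using gradient descent on f(x) = x² (gradient 2x) with two made-up learning rates:

```python
def descend(lr, steps=20, x=1.0):
    # repeatedly step against the gradient of f(x) = x^2
    for _ in range(steps):
        x = x - lr * 2 * x
    return abs(x)

print(descend(0.1))  # small: steps shrink toward the minimum at 0
print(descend(1.1))  # huge: every step overshoots, so x diverges
```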

12
Q

Stochastic gradient descent

A

Random subsets are used to train and optimise the neural network over several iterations. By repeating this, we can determine the average error slope over the whole area, which can then help us find the global minimum (not just a local minimum)
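A minimal SGD sketch: fitting a single weight to made-up data (y = 2x), drawing a random mini-batch at every iteration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=100)   # made-up inputs
y = 2.0 * X                # true relationship: y = 2x

w, lr = 0.0, 0.1
for _ in range(200):
    idx = rng.integers(0, len(X), size=10)     # random subset (mini-batch)
    xb, yb = X[idx], y[idx]
    grad = -2.0 * np.mean(xb * (yb - w * xb))  # gradient of mean squared error
    w -= lr * grad                             # step against the error slope

print(round(w, 2))  # converges close to 2.0
```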

13
Q

So in this course we do not do deep learning; maybe look for a Udemy course on it?

A