CSCI 343 Quiz 3 Flashcards

1
Q

random forest is a(n) ? method

A

ensemble

2
Q

for each tree in a random forest, you should

A

randomly choose a subset of features and build a decision tree using only those

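The idea in card 2 can be sketched in Python. This is a toy skeleton, not a full implementation: each "tree" just records the random feature subset it was built on plus the majority training label (standing in for a real decision tree), and the data is made up.

```python
import random

def train_forest(X, y, n_trees=5, feats_per_tree=2, seed=0):
    """Toy random-forest skeleton: each 'tree' gets a random subset
    of the feature indices (card 2); a real implementation would
    grow a decision tree on those features."""
    rng = random.Random(seed)
    n_features = len(X[0])
    majority = max(set(y), key=y.count)  # stub for a tree's prediction
    forest = []
    for _ in range(n_trees):
        # randomly choose a subset of features for this tree
        feats = rng.sample(range(n_features), feats_per_tree)
        forest.append({"features": sorted(feats), "predict": majority})
    return forest

# made-up data: 4 examples, 3 features
X = [[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]]
y = ["low", "high", "low", "low"]
forest = train_forest(X, y)
```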
3
Q

decision rules

A

if-then rules interpreted from a tree (e.g., if income > 70, then risk)

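Card 3's example rule can be written directly as code. The "no risk" else-label is an assumption added for illustration; only the income > 70 branch comes from the card.

```python
def assess(income):
    # card 3: an if-then rule read off a decision tree
    if income > 70:
        return "risk"
    return "no risk"  # assumed label for the other branch
```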
4
Q

build trees with

A

training data

5
Q

test tree strength (quality) with

A

validation data

6
Q

you can combine trees with high accuracy percentages to make

A

better trees

7
Q

NN stands for

A

neural network

8
Q

NN is named this way because it is

A

a model of a brain

9
Q

single neuron structure

A

inputs with weights -> neuron -> output

10
Q

the neuron contains some function

A

f(Σᵢ xᵢwᵢ)

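Cards 9 and 10 together describe a single neuron: weighted inputs summed, then passed through some function f. A minimal sketch, using the sigmoid from card 16 as f (the choice of f is up to the designer), with made-up inputs and weights:

```python
import math

def neuron(inputs, weights):
    """Card 10: output = f(sum of x_i * w_i); here f is the sigmoid."""
    y = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-y))  # sigmoid activation

out = neuron([1.0, 0.5], [0.4, -0.2])  # weighted sum y = 0.3
```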
11
Q

weights are originally

A

randomly assigned

12
Q

deep neural nets have

A

many hidden layers

13
Q

recurrent neural nets have

A

cycles

14
Q

model of a neural net is the

A

structure of the net and learned weights

15
Q

the idea of the activation function at each node is to make the output

A

non-linear

16
Q

sigmoid function

A

f(y) = 1 / (1 + e^-y)

17
Q

which node activation function is most common

A

ReLU

18
Q

ReLU stands for

A

rectified linear unit

19
Q

ReLU function

A

f(y) = max(0, y)

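The two activation functions from cards 16 and 19 can be written directly from their formulas:

```python
import math

def sigmoid(y):
    # card 16: f(y) = 1 / (1 + e^-y); squashes any real y into (0, 1)
    return 1.0 / (1.0 + math.exp(-y))

def relu(y):
    # card 19: f(y) = max(0, y); passes positives through, zeroes negatives
    return max(0.0, y)
```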
20
Q

in NN, the work is in (training/testing)

A

training

21
Q

structure of a neural net

A

input layer, 0 or more hidden layers, output layer

22
Q

NN are ? graphs

A

directed, acyclic

23
Q

typically the number of inputs in a neural net is equivalent to

A

the number of features

24
Q

it is typical for a neural net to be fully

A

connected

25
Q

typically the number of outputs in a neural net is equal to

A

the number of classes

26
Q

training process for NN

A

backpropagation (backprop) of errors

27
Q

epochs

A

the number of times you run through all the training data

28
Q

you want the learning rate η to (increase/decrease) with time

A

decrease (so you only make minor tweaks as time goes on)

29
Q

ways to stop NN training

A

predetermine the number of epochs, watch the error rate, or use a validation set to decide when to stop

30
Q

errors are backpropagated through the net; adjust ?

A

weights
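Cards 26 through 28 can be combined into a minimal training-loop sketch. The single-weight "net" fit to y = 2x is made up for illustration; it shows epochs, a learning rate that decays over time, and error-driven weight updates, without the full multi-layer backprop machinery.

```python
# made-up data following y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0                       # card 11: the weight starts arbitrarily
for epoch in range(50):       # card 27: one epoch = one pass over all data
    lr = 0.1 / (1 + epoch)    # card 28: learning rate decreases with time
    for x, target in data:
        error = w * x - target   # card 30: compute the error...
        w -= lr * error * x      # ...and adjust the weight against it
# w ends close to the true value 2
```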