Final Review Flashcards

1
Q

casewrite()

A

writes a frozen video frame (a training case) to a file, along with the supervisor’s choice of the correct move.

2
Q

caseread()

A

reads a frozen video frame from a file. The file also contains the “supervisor’s” choice of a correct move.

3
Q

readweights()

A

reads in the weights data from a file. This is the “brain.” Two weights arrays are formed: one is 3-dimensional and the other is 2-dimensional.

4
Q

writeweights()

A

records the current weights arrays (the “brain”) to a file.

5
Q

applynet()

A

uses the input node values and the weights arrays to calculate the firing values at the hidden nodes and then at the output nodes.

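A minimal sketch of the idea in Python (not the course’s actual code): each hidden node sums its weighted inputs and passes the sum through sigmoid(), and each output node does the same with the hidden values. For simplicity the frame is assumed to be flattened into a 1-D list, so both weight arrays here are 2-dimensional; the array names are assumptions.

```python
import math

def sigmoid(x):
    # squash any real number into the range 0..1
    return 1.0 / (1.0 + math.exp(-x))

def applynet(inputs, w_in_hid, w_hid_out):
    # hidden[j] = sigmoid( sum over i of inputs[i] * w_in_hid[i][j] )
    hidden = [sigmoid(sum(inputs[i] * w_in_hid[i][j] for i in range(len(inputs))))
              for j in range(len(w_in_hid[0]))]
    # output[k] = sigmoid( sum over j of hidden[j] * w_hid_out[j][k] )
    output = [sigmoid(sum(hidden[j] * w_hid_out[j][k] for j in range(len(hidden))))
              for k in range(len(w_hid_out[0]))]
    return hidden, output
```
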
6
Q

backpropagate()

A

this is where the learning happens. Working backward from the output nodes to the hidden nodes to the input nodes, and using the current weights, the network’s choice of output node, and the supervisor’s correct output value, this function calculates and applies changes to the weights to reduce the overall network error.

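A sketch of standard one-hidden-layer backpropagation with sigmoid nodes, matching the description above; the learning rate, array names, and target encoding (1 for the supervisor’s move, 0 for the others) are assumptions, not the course’s actual code.

```python
def backpropagate(inputs, hidden, output, target, w_in_hid, w_hid_out, rate=0.5):
    # target[k] is the supervisor's correct value for output node k.
    # For a sigmoid node with firing value y, the derivative is y * (1 - y).
    out_err = [(target[k] - output[k]) * output[k] * (1.0 - output[k])
               for k in range(len(output))]
    hid_err = [hidden[j] * (1.0 - hidden[j]) *
               sum(out_err[k] * w_hid_out[j][k] for k in range(len(output)))
               for j in range(len(hidden))]
    # apply the weight changes, working backward: hidden-to-output first,
    # then input-to-hidden
    for j in range(len(hidden)):
        for k in range(len(output)):
            w_hid_out[j][k] += rate * out_err[k] * hidden[j]
    for i in range(len(inputs)):
        for j in range(len(hidden)):
            w_in_hid[i][j] += rate * hid_err[j] * inputs[i]
```
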
7
Q

steer()

A

sends the correct commands to the robot motors to steer the robot and drive it forward.

8
Q

normalize()

A

adjusts the input values to produce consistent contrast, reducing the differences caused by different light levels.

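One simple way to get consistent contrast (a sketch only; the cards do not specify the actual normalization method) is to rescale each frame so its darkest pixel maps to 0 and its brightest to 1:

```python
def normalize(pixels):
    # rescale the frame so the darkest pixel becomes 0.0 and the brightest 1.0,
    # which gives the same contrast whether the room is bright or dim
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0] * len(pixels)   # completely flat frame
    return [(p - lo) / (hi - lo) for p in pixels]
```
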
9
Q

sigmoid() brief

A

our “threshold” function; it squashes every hidden and output node firing value into the range 0 to 1.

10
Q

Capture functions

A

normalize() and casewrite()

11
Q

capture

A

camera on, robot off; record a training case together with the supervisor’s steering move.

12
Q

Initialize functions

A

writeweights()

13
Q

Initialize

A

builds the starting weights file; creates a “brain” that knows nothing.

14
Q

Train functions

A

applynet(), backpropagate(), readweights(), writeweights(), caseread(), sigmoid(), normalize()

15
Q

train

A

read in the current weights and a training case, use applynet() to find the firing values at the hidden and output nodes, then use the supervisor’s correct move with backpropagate() to correct all the weights.

16
Q

RobotRun functions

A

readweights(), normalize(), applynet(), sigmoid(), steer()

17
Q

RobotRun

A

runs the robot; the neural network does the driving.

18
Q

how to apply steering move

A

after applynet() finds all the hidden and then the output node values, we choose the output node with the highest firing value; each output node corresponds to one steering move (-2, -1, 0, 1, 2).
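
In code this is just an argmax over the output nodes, with each node mapped to one of the five steering moves; the names and the left/right meaning of the numbers are assumptions.

```python
MOVES = (-2, -1, 0, 1, 2)   # hard left .. straight .. hard right (assumed meaning)

def choose_move(output):
    # pick the steering move whose output node fired most strongly
    best = max(range(len(output)), key=lambda k: output[k])
    return MOVES[best]

print(choose_move([0.1, 0.7, 0.3, 0.2, 0.1]))   # -1
```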

19
Q

tic tac toe recursion

A

minimax is double recursion, with the min and max functions calling each other.
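
A sketch of that double recursion for tic-tac-toe: maxplay() and minplay() call each other until the game ends. The function names and the +1/0/-1 scoring are assumptions, not the course’s actual code.

```python
def winner(board):
    # board is a tuple of 9 cells: 'X', 'O', or ' '
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def maxplay(board):
    # X (the maximizer) to move: take the largest value minplay can force
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0                      # draw
    return max(minplay(board[:i] + ('X',) + board[i+1:])
               for i in range(9) if board[i] == ' ')

def minplay(board):
    # O (the minimizer) to move: take the smallest value maxplay can force
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    return min(maxplay(board[:i] + ('O',) + board[i+1:])
               for i in range(9) if board[i] == ' ')

print(maxplay((' ',) * 9))   # 0: perfect play from an empty board is a draw
```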

20
Q

supervised learning

A

the computer learns by comparing its calculated output with the supervisor’s correct answer; the hardest part is getting the training cases.

21
Q

sigmoid funct and x

A

sigmoid(x) = 1 / (1 + e^(-x))
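
A quick numeric check of the formula:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))    # 0.5       exactly halfway
print(sigmoid(5))    # 0.993...  large positive x gives a value close to 1
print(sigmoid(-5))   # 0.0066... large negative x gives a value close to 0
```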

22
Q

sigmoid role

A

used instead of a square wave (step function) because it is differentiable; it is applied at every layer as each layer is connected to the next. For each layer-to-layer connection, the system needs an array of real-number weights.

The final layer is called the output layer. Each output node represents an action for the agent, and the agent chooses the action with the largest firing value.