Chapter 14 Keep The Best Models During Training With Checkpointing Flashcards

1
Q

What’s application check-pointing? P 104

A

Application check-pointing is a fault-tolerance technique for long-running processes: a snapshot of the system's state is taken periodically so that, after a failure, work can resume from the last snapshot instead of starting over.

2
Q

What do check-points do? P 104

A

When training deep learning models, a checkpoint captures the weights of the model. These weights can be used to make predictions as-is, or as the basis for ongoing training.

3
Q

The Keras library provides checkpointing capability via a …. The … callback class allows you to define where to checkpoint the model weights, how the file should be named, and under what circumstances to make a checkpoint of the model. P 104

A

callback API, ModelCheckpoint

4
Q

The ModelCheckpoint callback allows you to specify the metric to monitor, such as loss or accuracy on the training or validation dataset. True/False? P 104

A

True

5
Q

What does the below code do? P 104

# Checkpoint the weights when validation loss improves
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
from sklearn.datasets import load_breast_cancer

dataset = load_breast_cancer(as_frame=True)
X = dataset.data
Y = dataset.target
model = Sequential()
model.add(Dense(12, input_dim=30, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
# Compile model
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Checkpoint: the file name encodes the epoch and the validation loss
filepath = "weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor="val_loss", verbose=1,
                             save_best_only=True, mode="min")
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10,
          callbacks=callbacks_list, verbose=0)
A

This is a very simple check-pointing strategy. It may create a lot of unnecessary checkpoint files if the validation loss moves up and down over training epochs. Nevertheless, it will ensure that you have a snapshot of the best model discovered during your run.


6
Q

How can we keep just one checkpoint file for a neural network, containing only the best weights? P 105

A
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint
from sklearn.datasets import load_breast_cancer

dataset = load_breast_cancer(as_frame=True)
X = dataset.data
Y = dataset.target
model = Sequential()
model.add(Dense(12, input_dim=30, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
# Compile model
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Checkpoint: a single fixed file name, overwritten whenever val_accuracy improves
filepath = "weights_best.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor="val_accuracy", verbose=1,
                             save_best_only=True, mode="max")
callbacks_list = [checkpoint]
# Fit the model
model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10,
          callbacks=callbacks_list, verbose=0)

A simpler checkpoint strategy is to save the model weights to the same file, if and only if validation accuracy improves. This uses the same code as above, changing the output filename to a fixed name (no score or epoch information).

7
Q

What’s a callback in a neural network library? External

A

A callback is an object that can perform actions at various stages of training (e.g. at the start or end of an epoch, before or after a single batch, etc.). You can use callbacks to: write TensorBoard logs after every batch of training to monitor your metrics; periodically save your model to disk.
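For instance, a custom callback can be written by subclassing Callback and overriding the stage hooks. This sketch (class and variable names are our own) records the training loss at the end of each epoch:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import Callback

# A custom callback: record the training loss at the end of each epoch
class LossHistory(Callback):
    def on_train_begin(self, logs=None):
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        self.losses.append(logs["loss"])

# Tiny model and toy data, just to exercise the callback
model = Sequential()
model.add(Dense(1, input_dim=2))
model.compile(loss="mse", optimizer="adam")
X = np.random.rand(16, 2)
y = np.random.rand(16, 1)

history_cb = LossHistory()
model.fit(X, y, epochs=3, verbose=0, callbacks=[history_cb])
print(len(history_cb.losses))  # one recorded loss per epoch
```

ModelCheckpoint is simply one such callback that Keras ships with, whose on-epoch action is "save the weights to disk".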

8
Q

How can we load and use a check-pointed model weights? P 106

A
from keras.models import Sequential
from keras.layers import Dense
from sklearn.datasets import load_breast_cancer

dataset = load_breast_cancer(as_frame=True)
X = dataset.data
Y = dataset.target
# Rebuild the same architecture the weights were saved from
model = Sequential()
model.add(Dense(12, input_dim=30, activation="relu"))
model.add(Dense(8, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
# Compile model
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Load the weights saved by the checkpoint callback
model.load_weights("weights_best.hdf5")
print("Created model and loaded weights from file")

scores = model.evaluate(X, Y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))

Note the absence of fit(): the checkpointed weights are loaded into a freshly built model and evaluated directly, without retraining.
