Autoencoders Flashcards
Examples of unsupervised tasks
Clustering
Dimensionality reduction
Feature learning
Data density estimation
What is an autoencoder network?
A feedforward neural network that aims to learn a compressed, distributed representation of a dataset
The autoencoder network learns a mapping between the training data and its labels. T or F?
False
The autoencoder network learns the internal structure of the data itself. T or F?
True
The network structure of an autoencoder should force the network to learn only the most important features T or F?
True
Autoencoder scheme
Input -> NN encoder -> code -> NN decoder -> output, which should be as close as possible to the input
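The scheme above can be sketched as a tiny forward pass. This is a minimal illustration in plain Python; the 3-2-3 layer sizes and all weight values are made up for the example, not taken from the cards:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(W, b, x):
    # One fully connected layer: out_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

# Hypothetical 3-2-3 autoencoder: 3 inputs, a 2-unit code, 3 outputs.
W_enc = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
b_enc = [0.0, 0.0]
W_dec = [[0.9, 0.1], [-0.4, 0.7], [0.2, 0.2]]
b_dec = [0.0, 0.0, 0.0]

x = [0.2, 0.7, 0.1]                                        # input
code = [sigmoid(z) for z in dense(W_enc, b_enc, x)]        # compressed code
x_hat = [sigmoid(z) for z in dense(W_dec, b_dec, code)]    # reconstruction

# Training would adjust the weights to minimize the reconstruction error.
mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
```

The 2-unit code is the bottleneck: the decoder must rebuild the 3-dimensional input from it, which is what pushes the network to keep only the most important features.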
Training an autoencoder corresponds to approximating which function?
The identity function
Why is the sparsity paradigm required in training autoencoders?
Because when the number of hidden units is not lower than the number of inputs, learning the identity function becomes trivial and ill-posed; the sparsity constraint forces the network to learn meaningful features anyway
The sparsity paradigm does not constrain weight optimization. T or F?
False: it adds a penalty term to the objective being optimized
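The constrained objective can be sketched as reconstruction error plus a weighted sparsity penalty. The target sparsity rho = 0.05 and weight beta = 3.0 below are illustrative values, not taken from the cards:

```python
import math

def kl(rho, rho_hat):
    # KL divergence between Bernoulli(rho) and Bernoulli(rho_hat)
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

def sparse_objective(recon_error, rho_hat_units, rho=0.05, beta=3.0):
    # Total cost = reconstruction error + beta * sum of per-unit KL penalties.
    penalty = sum(kl(rho, r) for r in rho_hat_units)
    return recon_error + beta * penalty

# Units whose average activation sits at the target incur no penalty;
# units that fire too often are penalized, constraining the weights.
loose = sparse_objective(0.1, [0.5, 0.4])    # far from rho -> large penalty
tight = sparse_objective(0.1, [0.05, 0.05])  # at rho -> penalty is zero
```

Minimizing this objective is what couples the sparsity constraint to weight optimization: gradients of the penalty flow back into the encoder weights.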
Neural coding
The patterns of electrical activity of neurons induced by a stimulus
Sparse coding paradigm
A stimulus activates only a relatively small number of neurons, which together represent it in a sparse way
Average activation of a neuron
The average of the neuron's output over the training set
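That average can be computed directly. A minimal sketch in plain Python, with a hypothetical single sigmoid unit and a made-up toy training set:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def average_activation(weights, bias, training_set):
    # rho_hat = (1/m) * sum over the m training examples of the unit's output
    outputs = [sigmoid(sum(w * x for w, x in zip(weights, xs)) + bias)
               for xs in training_set]
    return sum(outputs) / len(outputs)

# Hypothetical hidden unit with 2 inputs, averaged over 4 toy examples.
rho_hat = average_activation([1.0, -1.0], 0.0,
                             [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
```

In sparse autoencoder training this per-unit average is compared against the sparsity parameter to compute the penalty.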
The regularization weight can have any sign. T or F?
False: it is always positive
Penalty factor formula (and inputs)
Inputs: the sparsity parameter ρ and the average activation ρ̂_j of each hidden unit. The formula is the KL divergence KL(ρ || ρ̂_j) = ρ log(ρ/ρ̂_j) + (1 − ρ) log((1 − ρ)/(1 − ρ̂_j)), summed over the hidden units
What is the Kullback-Leibler divergence?
A standard function for measuring how different two probability distributions are
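For the Bernoulli distributions used in the sparsity penalty, the divergence is zero when the two means coincide and grows as they drift apart. A minimal sketch with illustrative values:

```python
import math

def kl_bernoulli(rho, rho_hat):
    # KL(Bernoulli(rho) || Bernoulli(rho_hat)): zero iff the means match
    return (rho * math.log(rho / rho_hat)
            + (1 - rho) * math.log((1 - rho) / (1 - rho_hat)))

same = kl_bernoulli(0.05, 0.05)  # identical distributions -> 0
near = kl_bernoulli(0.05, 0.10)  # small mismatch -> small divergence
far = kl_bernoulli(0.05, 0.50)   # large mismatch -> large divergence
```

Note the asymmetry: KL(p || q) is generally not equal to KL(q || p), so it is a divergence rather than a true distance.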