Sparse Autoencoders
Sparse Autoencoders (SAEs) are unsupervised neural networks trained by computing the reconstruction error between the autoencoder's output and the original input and iteratively adjusting the autoencoder's parameters to reduce that error. Autoencoders can be used to compress input information and to extract useful features from the input.
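The training loop described above can be sketched in a few lines of numpy. This is a minimal illustration, not a reference implementation: the toy data, layer sizes, learning rate, and manually derived gradients are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))      # toy data: 200 samples, 8 features (illustrative)

n_in, n_hidden = 8, 4              # hidden layer smaller than input: compression
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)       # encoder: compress input to n_hidden features
    return H, H @ W2 + b2          # decoder: reconstruct the input

def mse(Y, X):
    return np.mean((Y - X) ** 2)   # reconstruction error

lr = 0.1
loss0 = mse(forward(X)[1], X)      # error before training
for _ in range(1000):
    H, Y = forward(X)
    dY = 2 * (Y - X) / X.size          # gradient of the reconstruction error
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dZ = (dY @ W2.T) * (1 - H ** 2)    # backpropagate through tanh
    dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1     # adjust parameters to reduce the error
    W2 -= lr * dW2; b2 -= lr * db2
loss1 = mse(forward(X)[1], X)      # error after training
print(loss0, loss1)
```

Because the hidden layer is smaller than the input, the network must learn a compressed representation; the reconstruction error falls as the parameters are adjusted.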
The autoencoder was originally proposed as a dimensionality-reduction technique. When the number of hidden units exceeds the number of input units, however, a plain autoencoder can simply copy the input through and loses the ability to automatically learn useful features, so some constraint must be imposed on the hidden units. Just as the denoising autoencoder starts from the idea that representations should be robust, the sparse autoencoder starts from the idea that high-dimensional, sparse representations are desirable, and therefore imposes a sparsity constraint on the hidden units. A sparse autoencoder is thus a traditional autoencoder with an added sparsity constraint on its hidden neurons: by suppressing the outputs of most hidden neurons, the network achieves a sparse representation.
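One concrete way to suppress most hidden activations is to add a penalty on their magnitude to the training loss. The sketch below uses an L1 penalty with a ReLU encoder, a simpler stand-in for the classic KL-divergence sparsity constraint; the penalty weight, layer sizes, and training schedule are illustrative assumptions.

```python
import numpy as np

def train_sae(X, n_hidden, lam, lr=0.05, steps=2000, seed=0):
    """Train a one-hidden-layer autoencoder; lam weights an L1 sparsity
    penalty on the hidden activations (lam=0 disables the constraint)."""
    rng = np.random.default_rng(seed)
    n_in, n = X.shape[1], len(X)
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b2 = np.zeros(n_in)
    for _ in range(steps):
        H = np.maximum(0.0, X @ W1 + b1)        # ReLU encoder
        Y = H @ W2 + b2                         # linear decoder
        dY = 2 * (Y - X) / n                    # reconstruction-error gradient
        dW2, db2 = H.T @ dY, dY.sum(axis=0)
        dH = dY @ W2.T + lam * np.sign(H) / n   # L1 term pushes activations to zero
        dZ = dH * (H > 0)                       # backpropagate through ReLU
        dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return np.maximum(0.0, X @ W1 + b1)         # final hidden activations

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                   # toy data (illustrative)

H_plain  = train_sae(X, n_hidden=16, lam=0.0)   # overcomplete, no constraint
H_sparse = train_sae(X, n_hidden=16, lam=0.1)   # overcomplete, with constraint

# With the penalty, most hidden neurons are driven toward zero output.
print(np.mean(np.abs(H_plain)), np.mean(np.abs(H_sparse)))
```

Even though the hidden layer is larger than the input (16 units for 8 inputs), the penalty forces only a few neurons to respond to each sample, which is the sparse effect the constraint is designed to produce.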