An autoencoder is an unsupervised neural network algorithm that uses backpropagation to learn a function mapping input features to an output that is similar to the input. A simple form of autoencoder is a shallow artificial neural network with three layers: an input layer, a hidden encoding layer that forms the latent space, and an output decoding layer. An autoencoder is made up of two parts, the encoder and the decoder. Its operating principle is to learn the function that best maps the input to an output closely resembling that input. Some related models, such as the Restricted Boltzmann Machine, can be stacked to form Deep Belief Networks, which are deep neural network architectures. Autoencoders have been widely used in machine learning, and especially in deep learning, for feature learning, generative modelling and data compression. Their applications include dimensionality reduction, semantic hashing and feature extraction.

Autoencoders

[Image: Autoencoder]

Autoencoders, in their simplest form, are shallow artificial neural networks that learn data representations in an unsupervised manner. Instead of labeling data manually, we can train an autoencoder to extract the important features in the data, which is very useful in information retrieval tasks. The input layer of the autoencoder takes in the input, while the hidden layer transforms (compresses/encodes) the input signals. The output layer decodes/decompresses the encoded data into an output that is similar to the input. This property makes autoencoders useful in tasks such as data compression, denoising and natural language processing, among others. In image processing, stacked sparse autoencoders are used to learn the features of an object from the input.

[Image: Vanilla autoencoder]

As the image above shows, an autoencoder is made up of two parts: the encoder and the decoder. The encoder reads the input and transforms (encodes) it into a compressed format. The decoder converts the encoded data back into the original format. The hidden layer is the latent space of the autoencoder and sits between the encoding and decoding functions. Autoencoders are simply artificial neural networks, and they can be trained in the same manner as any other neural network.
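
Since autoencoders are trained like any other neural network, a small sketch helps. Below is a minimal vanilla autoencoder in Keras (the library used in the follow-up post); the 784-dimensional input (a flattened 28x28 image), the 32-unit latent space, and the x_train array are all assumptions for illustration, not fixed choices.

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    input_dim = 784    # assumed: a flattened 28x28 image
    encoding_dim = 32  # assumed size of the latent space (hidden layer)

    inputs = keras.Input(shape=(input_dim,))
    encoded = layers.Dense(encoding_dim, activation="relu")(inputs)   # encoder
    decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)  # decoder

    autoencoder = keras.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")

    # Trained like any other network, except the input is also the target.
    # x_train is a hypothetical array of shape (num_samples, 784) in [0, 1]:
    # autoencoder.fit(x_train, x_train, epochs=50, batch_size=256)

Because the target is the input itself, no labels are needed, which is why autoencoders are often described as self-supervised.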

Autoencoders are evaluated using similarity measures, such as the cosine similarity between the input and the output (a short sketch follows the figures below). Autoencoders are closely related to Principal Component Analysis (PCA), but they are more flexible and can be combined into deeper networks that learn more detailed features than PCA can. Unlike PCA, which uses a linear transformation to map data from a higher-dimensional space to a lower-dimensional one, autoencoders use non-linear transformations. An autoencoder can be either undercomplete, where the hidden layer has fewer dimensions than the input layer, as shown below

[Image: Undercomplete autoencoder]

or overcomplete, where the hidden layer has more dimensions than the input layer, as shown below.

[Image: Overcomplete autoencoder]
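
To make the evaluation idea concrete, here is a small sketch that scores one reconstruction with cosine similarity. It is written in plain NumPy; the x_test array and the autoencoder model from the earlier sketch are assumed.

    import numpy as np

    def cosine_similarity(x, x_hat):
        # Cosine similarity between an input vector and its reconstruction.
        return np.dot(x, x_hat) / (np.linalg.norm(x) * np.linalg.norm(x_hat))

    # Hypothetical usage with the model from the earlier sketch:
    # x_hat = autoencoder.predict(x_test[:1])[0]
    # print(cosine_similarity(x_test[0], x_hat))  # near 1.0 = faithful reconstruction

A value close to 1.0 indicates the decoder reproduced the input faithfully; in practice a reconstruction loss such as mean squared error is also commonly tracked.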

Types of Autoencoders

  1. Denoising autoencoder. Given corrupted data, a denoising autoencoder reconstructs the original, uncorrupted data. Autoencoders of this type are trained to learn important features from distorted data (see the sketch after this list).
  2. Sparse autoencoder. A sparse autoencoder has more hidden units than input units, but only a few of the hidden units are allowed to activate at the same time. It is useful for learning sparse representations of the input and is mostly used in feature extraction.
  3. Variational autoencoder (VAE). Uses a variational approach to latent representation learning by incorporating Stochastic Gradient Variational Bayes (SGVB).
  4. Convolutional autoencoder. Uses convolutional layers to learn the representational details that make up an image.
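
The denoising setup from item 1 can be sketched by corrupting the training data and keeping the clean data as the target. This reuses the autoencoder model from the earlier sketch; the noise level and the random stand-in data are assumptions for illustration.

    import numpy as np

    # Random stand-in for real training data, for illustration only.
    x_train = np.random.rand(1000, 784).astype("float32")

    # Corrupt the inputs with Gaussian noise, then clip back to [0, 1].
    noise_factor = 0.3
    x_train_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
    x_train_noisy = np.clip(x_train_noisy, 0.0, 1.0).astype("float32")

    # The model sees corrupted inputs but is scored against the clean originals,
    # so it must learn features that are robust to the distortion:
    # autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=256)

Note that the targets are the clean inputs, not the noisy ones; that asymmetry is what forces the network to denoise.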

Applications of Autoencoders

1. Data compression
2. Reconstructing corrupted data
3. Feature extraction (a sketch follows this list)
4. Text generation and information retrieval
5. Image processing
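
As a sketch of the feature-extraction application above, the trained encoder half of the earlier model can be wrapped as its own model, and its low-dimensional codes used as features for a downstream task. The inputs and encoded tensors come from the earlier Keras sketch and are assumptions, not a fixed API.

    from tensorflow import keras

    # Reuse the tensors from the earlier sketch to expose just the encoder.
    encoder = keras.Model(inputs, encoded)

    # Hypothetical usage: 32-dimensional codes for a classifier or for retrieval.
    # features = encoder.predict(x_test)  # shape: (num_samples, 32)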

Advantages of Autoencoders

  1. Autoencoders can automatically learn features from a given data set without human intervention.

Challenges with Autoencoders

  1. Data specific. An autoencoder can only perform well on data similar to what it was trained on. If an autoencoder is trained on images of humans, it will perform well when presented with human-related images but poorly when given different images, such as vehicles.
  2. Lossy technique. Autoencoders are a lossy approach: the resulting output has degraded quality compared to the original input.

Conclusion

Identifying important patterns in data by hand is still a major challenge, and one that autoencoders readily solve. Despite being lossy and data specific, they are extremely powerful algorithms for feature extraction, learning important representations in data without labels. Autoencoders are not true unsupervised learning algorithms but rather self-supervised learning algorithms. Some related models, such as the Restricted Boltzmann Machine (RBM), have been widely used to overcome the vanishing gradient problem in deep learning models.

What’s Next

In this post we have looked at autoencoders; in the next post we will look at how autoencoders can be implemented in Keras.
