Autoencoders are neural network models that encode and decode data, with the goal of making the output look like the input. They have wide applications in machine learning tasks such as dimensionality reduction, generative modelling and feature extraction, and have had their greatest impact in computer vision and natural language processing. In the previous post we looked at what autoencoders are, their various types, their strengths, challenges and applications. In this post we are going to develop a simple autoencoder with Keras to reconstruct handwritten digits from the MNIST data set.

Simple Autoencoder with Keras

Autoencoders can be implemented with different tools such as TensorFlow, Keras, Theano and PyTorch, among others. In this post we are going to use the Keras framework with the TensorFlow back-end. You can install TensorFlow and Keras by following my Working With TensorFlow And Keras post. An autoencoder is simply an artificial neural network with two parts: an encoder and a decoder. The encoder compresses the input data, while the decoder decompresses the encoded data back to the original format. The objective is to train the model to reproduce an output that looks like the input.

A simple autoencoder is a neural network made up of three layers: the input layer, one hidden layer and an output layer. However, autoencoders can be stacked to form a deep autoencoder that can learn better representations.

Let’s implement our simple three layer neural network autoencoder and train it on the MNIST data set.

Import required libraries
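A minimal set of imports for this walkthrough, assuming the `tensorflow.keras` API (Keras bundled with TensorFlow 2):

```python
# NumPy for array handling, Matplotlib for plotting,
# and the Keras pieces used to build the autoencoder.
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
```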

Load MNIST data
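Keras ships MNIST as a built-in data set, so loading it is a single call (the data is downloaded and cached on first use):

```python
from tensorflow.keras.datasets import mnist

# Returns 60,000 training and 10,000 test images of 28x28 grayscale digits.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)
```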

Scaling our data
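The raw pixel values are integers from 0 to 255; scaling them to the [0, 1] range works well with the sigmoid output layer we use later. A sketch:

```python
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()

# Convert to floats and scale pixel intensities from [0, 255] to [0, 1].
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
print(x_train.min(), x_train.max())  # 0.0 1.0
```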

Let’s inspect our data set
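To get a feel for the data, we can plot a few sample digits with their labels (the figure layout below is just one way to do it):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

(x_train, y_train), _ = mnist.load_data()

# Show the first ten digits with their labels above them.
fig, axes = plt.subplots(1, 10, figsize=(12, 2))
for i, ax in enumerate(axes):
    ax.imshow(x_train[i], cmap="gray")
    ax.set_title(int(y_train[i]))
    ax.axis("off")
plt.show()
```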

Reshaping our images data into vectors of length 784
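Since our simple autoencoder uses fully connected (Dense) layers, each 28×28 image is flattened into a 784-dimensional vector:

```python
from tensorflow.keras.datasets import mnist

(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# 28 * 28 = 784 features per image.
x_train = x_train.reshape((len(x_train), 784))
x_test = x_test.reshape((len(x_test), 784))
print(x_train.shape, x_test.shape)  # (60000, 784) (10000, 784)
```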

Creating our autoencoder model
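The model itself: a 784-unit input, one hidden (encoding) layer, and a 784-unit output. The hidden size of 32, the Adam optimizer, and the epoch count below are my assumptions for this sketch, but they are common defaults for this setup; binary cross-entropy pairs naturally with a sigmoid output on [0, 1] pixels.

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Load, scale and flatten the data as in the previous steps.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = (x_train.astype("float32") / 255.0).reshape((len(x_train), 784))
x_test = (x_test.astype("float32") / 255.0).reshape((len(x_test), 784))

encoding_dim = 32  # assumed size of the compressed representation

# Encoder compresses 784 -> 32; decoder reconstructs 32 -> 784.
inputs = Input(shape=(784,))
encoded = Dense(encoding_dim, activation="relu")(inputs)
decoded = Dense(784, activation="sigmoid")(encoded)

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# The inputs are also the targets: the model learns to reproduce
# its own input. Epoch count is a modest assumption; train longer
# for sharper reconstructions.
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))
```

Note that `fit` is called with `x_train` as both the input and the target; that single detail is what makes this network an autoencoder rather than a classifier.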

Making Predictions on Test Data
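With a trained model, predicting on the test set yields the reconstructed digits. The snippet below rebuilds and briefly trains the same model so it runs on its own (one epoch only, for speed; reconstructions improve with longer training):

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

(x_train, _), (x_test, _) = mnist.load_data()
x_train = (x_train.astype("float32") / 255.0).reshape((len(x_train), 784))
x_test = (x_test.astype("float32") / 255.0).reshape((len(x_test), 784))

# Same 784 -> 32 -> 784 architecture as before.
inputs = Input(shape=(784,))
encoded = Dense(32, activation="relu")(inputs)
decoded = Dense(784, activation="sigmoid")(encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=1, batch_size=256, shuffle=True)

# Each row of decoded_imgs is a reconstructed 784-pixel digit.
decoded_imgs = autoencoder.predict(x_test)
print(decoded_imgs.shape)  # (10000, 784)
```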

Visualizing model predictions
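A common way to judge an autoencoder visually is to plot originals and reconstructions side by side. This sketch assumes `x_test` (originals) and `decoded_imgs` (reconstructions) as 784-dimensional vectors from the previous steps; random arrays stand in for them here only so the snippet runs on its own.

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder data so this snippet is self-contained; substitute the
# real x_test and decoded_imgs from the prediction step.
rng = np.random.default_rng(0)
x_test = rng.random((10000, 784)).astype("float32")
decoded_imgs = rng.random((10000, 784)).astype("float32")

n = 10  # number of digits to display
plt.figure(figsize=(20, 4))
for i in range(n):
    # Top row: original test images.
    ax = plt.subplot(2, n, i + 1)
    ax.imshow(x_test[i].reshape(28, 28), cmap="gray")
    ax.axis("off")
    # Bottom row: autoencoder reconstructions.
    ax = plt.subplot(2, n, i + 1 + n)
    ax.imshow(decoded_imgs[i].reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```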

[Image: MNIST prediction with autoencoder]

Complete Code
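Putting the steps together, a sketch of the full script (the layer sizes, optimizer, and epoch count are the assumptions noted above, not fixed requirements):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Load, scale to [0, 1] and flatten to 784-dimensional vectors.
(x_train, _), (x_test, _) = mnist.load_data()
x_train = (x_train.astype("float32") / 255.0).reshape((len(x_train), 784))
x_test = (x_test.astype("float32") / 255.0).reshape((len(x_test), 784))

# Three-layer autoencoder: 784 -> 32 -> 784.
inputs = Input(shape=(784,))
encoded = Dense(32, activation="relu")(inputs)
decoded = Dense(784, activation="sigmoid")(encoded)
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train the model to reproduce its own input.
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                shuffle=True, validation_data=(x_test, x_test))

# Reconstruct the test digits and compare with the originals.
decoded_imgs = autoencoder.predict(x_test)

n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)          # original
    ax.imshow(x_test[i].reshape(28, 28), cmap="gray")
    ax.axis("off")
    ax = plt.subplot(2, n, i + 1 + n)      # reconstruction
    ax.imshow(decoded_imgs[i].reshape(28, 28), cmap="gray")
    ax.axis("off")
plt.show()
```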



Autoencoders have pushed the limits of deep learning further with their great power to learn important features in data. Many complex deep learning models incorporate autoencoders in one way or another. There are different types of autoencoders, such as denoising, sparse, variational and convolutional autoencoders, each suited to different tasks. In this post we have only scratched the surface of a wider class of neural network algorithms. We will be using more autoencoders in different deep learning models in the coming posts.

What’s Next

In this post we have seen how to develop a simple autoencoder model to reconstruct handwritten digits. In the next post we will introduce ourselves to text analytics using natural language processing concepts.

