Principal component analysis (PCA) is an unsupervised learning algorithm used for identifying patterns in data. It finds the directions in which the data is most dispersed (the eigenvectors) and the magnitude of that dispersion (the eigenvalues). PCA is widely used in dimensionality reduction and feature extraction: it transforms complex high dimensional data through linear combinations of the original features and creates a new set of features called principal components, which are orthogonal (uncorrelated) while preserving as much of the variance in the data as possible. With PCA we can analyze and easily visualize complex data. In this post we are going to look at what PCA is, how it works, and its strengths and limitations.

Principal Component Analysis

Principal component analysis is an orthogonal linear transformation that projects the data onto new axes ordered by the variance of the data along them: the projection with the highest variance comes first, followed by the next highest, and so on. Using principal component analysis we can transform complex high dimensional data into simple to interpret low dimensional data by creating principal components. Principal component analysis was developed by Karl Pearson in 1901. PCA has been widely used in many fields and is often renamed according to the field, e.g. the eigenvalue decomposition (EVD) of XᵀX in linear algebra, empirical orthogonal functions (EOF) in meteorological science, and empirical modal analysis in structural dynamics, among others. In machine learning PCA is used in feature extraction, where it creates a new representation of the data that keeps only the important features and leaves the unimportant ("bad") features behind. This is referred to as dimensionality reduction, and done this way very little information is lost.

Principal Component Analysis Terminologies

Before we look at how PCA works let’s define some of the fundamental terminologies that are commonly used in PCA.

  • Matrix: A rectangular array of numbers arranged in rows and columns.
  • Variance: A measure of how spread out the data is.
  • Covariance: A measure of how two variables vary together, indicating the direction in which they tend to move.
  • Eigenvector: A vector whose direction remains unchanged after a linear transformation.
  • Eigenvalue: A scalar λ such that det(A − λI) = 0, where A is the matrix and I is the identity matrix. Eigenvalues are also referred to as characteristic roots.
  • Dimensionality: The number of features in the data set.
  • Orthogonal: Perpendicular; for variables, this implies a lack of correlation between them.

How Principal Component Analysis Works

Principal component analysis creates a new set of features from the old set of features. The new set of features has the following properties, which we verify in the code sketch after this list;

  • The new features have zero correlation with one another.
  • The new features are linear combinations of the old features.
  • The axes of these new features are called the principal components.
  • The principal components are orthogonal.
  • The variance decreases from the first principal component to the last: the first principal component has the largest variance, followed by the second, and so on.
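
A quick sketch with scikit-learn confirms these properties; the data here is made up (the shapes and random seed are arbitrary):

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 200 samples with 3 features
rng = np.random.RandomState(0)
X = rng.rand(200, 3)

# Project the data onto its principal components
X_pca = PCA(n_components=3).fit_transform(X)

# Off-diagonal entries are ~0: the new features are uncorrelated
print(np.round(np.cov(X_pca, rowvar=False), 6))

# The variances decrease from the first component to the last
print(np.var(X_pca, axis=0))
```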

When creating the new set of features from the old features we look for the direction that yields the highest variance in the data; this forms the first principal component. The process continues for the second, third and subsequent principal components. There are various approaches we can use to come up with the principal components, which include maximizing the variance and minimizing the reconstruction error. Below is a summary of the approaches for finding the principal components;

  • Maximizing the variance
  • Minimizing the reconstruction error
  • Eigen-decomposition.
  • Singular Value Decomposition.

Below are the basic steps for implementing PCA; a minimal NumPy sketch of these steps follows the list.

– Data standardization.
– Computing the covariance matrix.
– Computing the eigenvalues and eigenvectors of the covariance matrix.
– Sorting the components from the largest to the smallest eigenvalue.
– Creating the principal components.
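
As a sketch of the steps above, here is a manual implementation using only NumPy; the data matrix X is a small made-up example:

```python
import numpy as np

# Made-up data matrix: 5 samples, 3 features
X = np.array([[2.5, 2.4, 0.5],
              [0.5, 0.7, 1.9],
              [2.2, 2.9, 0.8],
              [1.9, 2.2, 1.1],
              [3.1, 3.0, 0.4]])

# 1. Standardize the data (zero mean, unit variance per feature)
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# 2. Compute the covariance matrix of the features
cov = np.cov(X_std, rowvar=False)

# 3. Compute the eigenvalues and eigenvectors of the covariance matrix
eig_vals, eig_vecs = np.linalg.eigh(cov)

# 4. Sort the components from the largest to the smallest eigenvalue
order = np.argsort(eig_vals)[::-1]
eig_vals, eig_vecs = eig_vals[order], eig_vecs[:, order]

# 5. Create the principal components by projecting onto the top 2 eigenvectors
X_pca = X_std @ eig_vecs[:, :2]
print(X_pca)
```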

When To Use Principal Component Analysis

  • When reducing the dimensionality of data.
  • For feature extraction.
  • When determining the linear combinations of variables.
  • When understanding the structure of the data set.
  • When visualizing high dimensional data.

Below is a diagram showing three principal components. Note the distribution of the data: the direction with the highest variance forms the first principal component, followed by the second, then the third.

[Figure: three principal components of a data distribution]

Principal Component Analysis Example in Scikit-Learn

There are different ways of performing principal component analysis. One way is to compute the PCA manually from the data matrix: computing the covariance matrix, getting the eigenvectors and eigenvalues of the covariance matrix and finally creating the principal components. However, we also have many software tools and libraries that make PCA easy to use. In this post we are going to leverage the scikit-learn library. Scikit-learn comes with a sklearn.decomposition.PCA(n_components=None, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None) class which is used for principal component analysis of the data.

Principal Components
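
Below is a minimal sketch of fitting the PCA class on made-up random data (the shapes and seed are arbitrary); the fitted components_ attribute holds the principal axes, one per row:

```python
import numpy as np
from sklearn.decomposition import PCA

# Made-up data: 100 samples with 4 features
rng = np.random.RandomState(0)
X = rng.rand(100, 4)

# Fit PCA, keeping all 4 components
pca = PCA(n_components=4)
pca.fit(X)

# Each row of components_ is a principal axis (a unit-length eigenvector)
print(pca.components_)
```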

Explained Variance
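
The fitted model exposes the variance captured by each component through explained_variance_ and explained_variance_ratio_; here is a sketch on the same kind of made-up data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
pca = PCA(n_components=4).fit(X)

# Variance captured by each principal component, largest first
print(pca.explained_variance_)

# The same values expressed as fractions of the total variance
print(pca.explained_variance_ratio_)
```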

Explained Variance Plot
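
A sketch of plotting the explained variance with matplotlib, again on made-up data; the cumulative curve helps in choosing how many components to keep:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(100, 4)
pca = PCA(n_components=4).fit(X)

# Bar chart of the variance explained by each component,
# with the cumulative total drawn as a step curve
ratios = pca.explained_variance_ratio_
plt.bar(range(1, 5), ratios, alpha=0.6, label='individual')
plt.step(range(1, 5), np.cumsum(ratios), where='mid', label='cumulative')
plt.xlabel('Principal component')
plt.ylabel('Explained variance ratio')
plt.legend()
plt.show()
```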

[Figure: explained variance per principal component]

Principal Components Plot
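
A sketch of projecting made-up data onto its first two principal components and plotting the result:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(100, 4)

# Project the data onto the first two principal components
X_pca = PCA(n_components=2).fit_transform(X)

plt.scatter(X_pca[:, 0], X_pca[:, 1])
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()
```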

Output

[Figure: scatter plot of the first two principal components]
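
PCA also provides an inverse_transform method that maps the reduced data back into the original feature space; only the variance along the discarded components is lost. Below is a sketch on made-up 2-D data reduced to one component:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Made-up 2-D data reduced to a single component
rng = np.random.RandomState(0)
X = rng.rand(100, 2)
pca = PCA(n_components=1).fit(X)
X_reduced = pca.transform(X)

# Map the 1-D projection back into the original 2-D space;
# the reconstruction keeps only the variance along the first component
X_restored = pca.inverse_transform(X_reduced)

plt.scatter(X[:, 0], X[:, 1], alpha=0.4, label='original')
plt.scatter(X_restored[:, 0], X_restored[:, 1], alpha=0.9, label='restored')
plt.legend()
plt.show()
```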

[Figure: data reconstructed with the PCA inverse transform]

PCA With Iris Data Set
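
A sketch of PCA on the classic Iris data set, reducing its four features to two principal components; the points are colored by species:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

iris = load_iris()

# Standardize the features before applying PCA
X_scaled = StandardScaler().fit_transform(iris.data)

# Reduce the four features to two principal components
X_pca = PCA(n_components=2).fit_transform(X_scaled)

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=iris.target)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.show()
```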

Output

[Figure: Iris data projected onto the first two principal components]

Pros

  • Simple to implement.
  • Makes complex data easy to visualize.
  • Lets us focus on the most informative features of the data.
  • Reduces the size of the data.

Cons

  • It is sensitive to outliers.
  • It can be inefficient on some very high dimensional data sets compared to other methods such as singular value decomposition (SVD).

Applications Of Principal Component Analysis

Principal component analysis has a vast number of applications in different domains. It is mostly used for finding hidden patterns in data and reducing the dimensionality of data. Below are a few domains where PCA is very useful.

  • Data mining.
  • Image processing.
  • Financial analysis.
  • Statistical quality control.
  • Computer vision.
  • Stock market prediction.

Conclusion

Principal component analysis is an unsupervised learning algorithm that is used for reducing the dimension of a data set and finding hidden patterns in the data. Before starting the data modeling process, PCA is often a good first exploratory analysis to run on the data to understand it. PCA is a linear transformation technique that transforms high dimensional data into a lower dimensional representation whose components are orthogonal. PCA has many applications in different domains; in machine learning it is commonly used for feature extraction, and it is also very useful in data mining tasks. PCA is not the only method for dimensionality reduction; other methods include singular value decomposition, which we will cover in the next post.

What’s Next

In this post we have looked at principal component analysis; in the next post we will look at singular value decomposition (SVD).
