Kernel PCA

What's that?

  • a non-linear variant of standard linear PCA
  • intuitively, we map our data via some mapping f to a higher-dimensional feature space, in which it is better linearly separable
  • but we never actually specify this mapping f; instead we use the kernel trick, i.e., we specify a function K such that K(x,y) = <f(x),f(y)>, where <.,.> denotes the inner product
  • so if our algorithm only needs inner products of vectors, and never f(x) explicitly for any point x, we can specify a corresponding kernel function K and avoid specifying the mapping f
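To make the kernel trick concrete, here is a small numpy check (an illustrative sketch, not part of the original notes): for the homogeneous degree-2 polynomial kernel on R², the explicit feature map is f(x) = (x1², √2·x1·x2, x2²), and K(x,y) = (x·y)² gives the same number as the explicit inner product <f(x),f(y)>.

```python
import numpy as np

def phi(x):
    # explicit feature map for the degree-2 polynomial kernel on R^2:
    # f(x) = (x1^2, sqrt(2)*x1*x2, x2^2)
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def K(x, y):
    # kernel function: K(x, y) = (<x, y>)^2
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

# the kernel computes the inner product in feature space
# without ever evaluating phi explicitly
print(K(x, y))                 # 121.0
print(np.dot(phi(x), phi(y)))  # 121.0
```

For kernels like the RBF kernel, the corresponding feature space is infinite-dimensional, so the kernel function is not just a convenience but the only practical way to work there.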

Wait... Is it about dimension reduction or dimension increasing?

  • we can do dimension reduction with this technique!
  • we only map the data to a higher-dimensional space to find a better projection / representation there, using a few of the principal components of that space
  • i.e.: original space –> feature space –> space of a few principal components
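If scikit-learn is available, this whole pipeline is a few lines (a usage sketch; the data, the RBF kernel, and gamma=0.5 are arbitrary example choices, not from the original notes):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 points in a 5-dimensional original space

# implicitly map into an (infinite-dimensional) RBF feature space,
# then keep only the first 2 principal components there
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
X_reduced = kpca.fit_transform(X)

print(X_reduced.shape)  # (100, 2)
```

So although the intermediate feature space is higher-dimensional (here even infinite-dimensional), the final output has fewer dimensions than the input.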

How does it differ from linear PCA?

We do not compute the principal components themselves, but only the projections of our data onto those components! The components are vectors in the (possibly infinite-dimensional) feature space, so we cannot write them down explicitly; the projections, however, can be expressed purely in terms of kernel evaluations.
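In formulas (a sketch using the usual kernel-PCA notation, not from the original notes): each principal component v in feature space is a linear combination of the mapped training points, so the projection of a mapped point needs only kernel evaluations:

```latex
v = \sum_{i=1}^{n} \alpha_i \, f(x_i)
\quad\Rightarrow\quad
\langle v, f(x) \rangle
  = \sum_{i=1}^{n} \alpha_i \, \langle f(x_i), f(x) \rangle
  = \sum_{i=1}^{n} \alpha_i \, K(x_i, x)
```

This is exactly why the kernel trick suffices: the coefficients α are obtained from an eigenproblem on the kernel matrix, and f never appears explicitly.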

Steps of the algorithm

  • compute the kernel matrix K with entries K_ij = K(x_i, x_j) for all pairs of data points
  • center K in feature space: K' = K − 1_n K − K 1_n + 1_n K 1_n, where 1_n is the n×n matrix with all entries 1/n
  • solve the eigenproblem K' α = λ α and keep the eigenvectors with the largest eigenvalues
  • the projection of data point x_i onto the j-th component is then √λ_j · α_ij

The following slide, from this video by Pratik Prabhanjan Brahma, perfectly summarizes the steps for Kernel PCA:
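The steps above can be sketched in plain numpy (a minimal illustration under assumed example choices — the RBF kernel, gamma, and the random data are not from the original notes):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq_norms = np.sum(X**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq_dists)

def kernel_pca(X, n_components=2, gamma=1.0):
    n = X.shape[0]

    # 1. compute the kernel matrix
    K = rbf_kernel(X, gamma)

    # 2. center it in feature space: K' = K - 1n K - K 1n + 1n K 1n
    one_n = np.full((n, n), 1.0 / n)
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # 3. eigendecomposition (eigh returns eigenvalues in ascending
    #    order, so flip to get the largest first)
    eigvals, eigvecs = np.linalg.eigh(K_centered)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

    # 4. projections of the training points onto the top components:
    #    column j is sqrt(lambda_j) * alpha_j
    return eigvecs[:, :n_components] * np.sqrt(np.maximum(eigvals[:n_components], 0))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Y = kernel_pca(X, n_components=2, gamma=0.5)
print(Y.shape)  # (50, 2)
```

Note that the function returns the projections directly, never the components themselves, matching the point made above.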

public/kernel_pca.txt · Last modified: 2014/01/11 14:53 (external edit)