- a GP is a prior over functions
- a Gaussian process is a collection of random variables {X_i}, such that any finite subset of these random variables is jointly Gaussian distributed
- simple example #1: if the {X_i} are independent Gaussian random variables, then any vector Y=(X_t1, …, X_tn) is jointly Gaussian distributed as well
- simple example #2: {X_i = i*W}, where W is a normally distributed random variable, is a GP as well (every finite vector (X_t1, …, X_tn) is a linear function of the single Gaussian W)
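Example #2 can be checked numerically: since Cov(X_s, X_t) = s*t*Var(W), the empirical covariance of many sampled paths should match the outer product of the index points. A minimal sketch (the index points and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# X_i = i * W with W ~ N(0, 1); every finite vector (X_t1, ..., X_tn)
# is Gaussian because it is a linear transform of the single Gaussian W.
W = rng.standard_normal(200_000)        # many independent draws of W
t = np.array([1.0, 2.0, 3.0])           # index points t1, t2, t3
X = t[:, None] * W[None, :]             # each column is one sample path

emp_cov = np.cov(X)                     # empirical covariance matrix
theo_cov = np.outer(t, t)               # Cov(X_s, X_t) = s * t * Var(W)

print(np.max(np.abs(emp_cov - theo_cov)))  # small Monte Carlo error
```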

- you can define a family of functions using GPs indirectly, by specifying a mean function and a covariance function
- you can model a stochastic process using a GP and draw random samples from that model (e.g., Brownian motion)
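For the Brownian motion example, the GP has mean 0 and covariance k(s, t) = min(s, t); a sample path on a finite grid is then just one draw from a multivariate normal. A minimal sketch (grid size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Brownian motion as a GP: mean 0, covariance k(s, t) = min(s, t).
t = np.linspace(0.01, 1.0, 100)         # avoid t = 0 (zero-variance point)
K = np.minimum(t[:, None], t[None, :])  # covariance matrix on the grid

# One sample path = one draw from the 100-dimensional Gaussian N(0, K).
path = rng.multivariate_normal(np.zeros(len(t)), K)
print(path.shape)                       # (100,)
```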

Short video only presenting the theorem, but not the proof of the theorem:

Given some mean function and covariance function, the theorem states that there exists a GP whose mean is the given mean function, and whose covariance between any two of its random variables is the value specified by the covariance function.

Given different covariance functions (GP kernels), Matlab plots of samples from the resulting GPs are shown.

Especially part II gives you a good intuition about GPs:

- GPs allow you to draw random samples of functions (evaluated at finitely many points, i.e., as high-dimensional random vectors)
- where you specify the functions indirectly
- by the mean function & the covariance function
- while the mean function is nearly always set to 0 for all points
- the covariance function allows you to define the covariance between nearby (and far away) function values
- e.g., such that the function values change smoothly
- by taking the squared exponential kernel for the covariance function

Part I

Part II

A function k is a covariance function (or: a positive semi-definite kernel) if, for any collection of random variables of the GP, the corresponding covariance matrix is positive semi-definite.

public/gaussian_process_gp.txt · Last modified: 2014/01/19 13:07 (external edit)