reduce the dimensionality of a multidimensional Gaussian process


Say my training data is $(\mathbf{x_i},y_i), i=1..n$ where $\mathbf{x_i}$ are the inputs and $y_i$ are the labels. Also, each $\mathbf{x_i}$ is a vector of two numbers $\mathbf{x_i}=(x_{i,1},x_{i,2})$.

Now, let us consider two ways of training a Gaussian process to compute the quantity of interest, $E(y_{n+1}\mid x_{n+1,1},(\mathbf{x_i},y_i), i=1..n)$:

  1. Do the "full" analysis: train a bivariate function $f_{x_{i,1},x_{i,2}}$ from $(\mathbf{x_i},y_i), i=1..n$, then compute the marginal function $f_{x_{i,1}}$. The result is $f_{x_{i,1}}(x_{n+1,1})$.
  2. Do a "cheating" analysis (e.g., to save cost): simply train a 1-d function $f_{x_{i,1}}$ from $(x_{i,1},y_i), i=1..n$, discarding the second coordinates $x_{i,2}, i=1..n$.

I would like to know how bad the "cheating" analysis can be as an approximation to the "full" analysis. Any insights or links to papers? Thanks.