Do Gaussian Processes not generalize well outside the training range?


I'm a beginner with Gaussian Processes (GPs), and I would like to understand how well they generalize. I trained a GP on data generated from an objective function over the range $-5<x<5$ (10 sample points). I then tested the GP on data from the same objective over the range $-10<x<10$ (2000 sample points). Here is the resulting plot:

[figure: GP posterior mean plotted against the test data]

As you can see, the GP estimates the objective well only within the training range ($-5<x<5$) and is way off outside it ($-10<x<-5$ and $5<x<10$). Is this the nature of GPs, or might there be a bug in my algorithm?
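One thing I noticed while writing this up: with my kernel parameters (same squared-exponential kernel as in my code below), the covariance between a test point and the nearest training point dies off very quickly with distance, so maybe the posterior simply has nothing to condition on outside $-5<x<5$:

```python
import numpy as np

# Same squared-exponential kernel as in my code below:
# k(x, x') = t1 * exp(-(x - x')^2 / t2), with t1=1, t2=0.4
def k(x, xp, t1=1, t2=0.4):
    return t1 * np.exp(-(x - xp) ** 2 / t2)

print(k(4.0, 4.0))  # 1.0      (on top of a training point)
print(k(5.0, 4.0))  # ~0.082   (one unit from the nearest training point)
print(k(7.0, 4.0))  # ~1.7e-10 (three units away: effectively uncorrelated)
```

So beyond about two length-scales from the data, every covariance with the training set is numerically zero, which would explain the prediction collapsing there.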

For your information, here is my code:

import numpy as np
import matplotlib.pyplot as plt

# Objective function the GP should learn
def objective(x):
    return 2 * np.sin(x) + 3 * np.cos(2 * x) + 5 * np.sin(2 / 3 * x)

# 10 training points on [-5, 5) with Gaussian observation noise
x_train = np.arange(-5, 5, 1)
y_train = objective(x_train) + np.random.normal(0, 0.5, size=x_train.shape)

plt.plot(x_train, y_train, "x", label="$y_{train}$")
plt.legend()
plt.show()
    
    
def gaussian_kernel(x, x_train, t1=1, t2=0.4):
    # Squared-exponential kernel: k(x, x') = t1 * exp(-(x - x')^2 / t2)
    xx = (x_train - x) ** 2
    return t1 * np.exp(-xx / t2)

def gpr(x_test, x_train, y_train, kernel, t3=0.1):
    # Work with zero-mean targets; the mean is added back to the predictions
    mean = y_train.mean()

    # Gram matrix of the training points, plus noise variance t3 on the diagonal
    K = []
    for x in x_train:
        K.append(kernel(x, x_train))
    K = np.array(K) + np.eye(len(x_train)) * t3
    A = np.linalg.inv(K)
    yy = np.dot(A, y_train - mean)

    # Posterior mean and variance at each test point
    mu = []
    var = []
    for x in x_test:
        k = kernel(x, x_train)              # covariances with the training points
        s = kernel(x, np.array([x])) + t3   # prior variance at x, plus noise
        mu.append(np.dot(k, yy))
        var.append(s[0] - np.dot(k, np.dot(A, k)))

    mu = np.array(mu) + mean
    var = np.array(var)

    return mu, var
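In case it matters: I also tried computing the same posterior with `np.linalg.solve` instead of explicitly inverting `K` (which I understand is the numerically safer route), and the predictions were identical, so I don't think the inverse is the problem:

```python
import numpy as np

def gaussian_kernel(x, x_train, t1=1, t2=0.4):
    # Same squared-exponential kernel as above
    return t1 * np.exp(-(x_train - x) ** 2 / t2)

def gpr_solve(x_test, x_train, y_train, kernel, t3=0.1):
    mean = y_train.mean()
    # Gram matrix with noise variance t3 on the diagonal
    K = np.array([kernel(x, x_train) for x in x_train]) + t3 * np.eye(len(x_train))
    # Solve K alpha = (y - mean) instead of forming K^{-1} explicitly
    alpha = np.linalg.solve(K, y_train - mean)
    mu, var = [], []
    for x in x_test:
        kv = kernel(x, x_train)
        mu.append(kv @ alpha + mean)
        var.append(kernel(x, np.array([x]))[0] + t3 - kv @ np.linalg.solve(K, kv))
    return np.array(mu), np.array(var)
```

Far from the data, `kv` is all zeros, so this version also predicts exactly `y_train.mean()` with the full prior variance.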
        

x_test = np.arange(-10, 10, 0.01)
mu, var = gpr(x_test, x_train, y_train, gaussian_kernel)
plt.plot(x_test, mu, label="GP posterior mean")
plt.plot(x_train, y_train, "x", c="C0", label="$y_{train}$")
plt.legend()
plt.show()
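To rule out a bug in my own implementation, I also cross-checked against scikit-learn's `GaussianProcessRegressor` with what I believe are equivalent hyperparameters (my `t2=0.4` should correspond to an RBF length scale of $\sqrt{0.2}$, and `t3=0.1` to the white-noise level). It shows the same fallback to the training mean outside $-5<x<5$:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def objective(x):
    return 2 * np.sin(x) + 3 * np.cos(2 * x) + 5 * np.sin(2 / 3 * x)

rng = np.random.default_rng(0)
x_train = np.arange(-5, 5, 1.0)
y_train = objective(x_train) + rng.normal(0, 0.5, x_train.shape)

# t2=0.4 -> RBF length_scale = sqrt(0.2); t3=0.1 -> WhiteKernel noise level
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=np.sqrt(0.2)) + WhiteKernel(noise_level=0.1),
    optimizer=None,    # keep my fixed hyperparameters, no marginal-likelihood fit
    normalize_y=True,  # like subtracting the mean in my own code
)
gp.fit(x_train.reshape(-1, 1), y_train)

mu = gp.predict(np.arange(-10, 10, 0.01).reshape(-1, 1))
print(mu[0], y_train.mean())  # far-left prediction reverts to the training mean
```

So scikit-learn behaves the same way as my GP, which makes me think this is expected behavior rather than a bug.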