This question concerns recovering a function $f(x)$ from noisy measurements $y = f(x) + \xi$ (where $\xi$ is IID Gaussian noise) using Gaussian process (GP) regression, with the hyperparameters set by marginal likelihood maximisation.
As the noise standard deviation increases, the amount of data required to determine both the noise level and the underlying function will increase. My questions are:
1) For a given noise standard deviation, is there any way of estimating the number of data points needed for the function to be 'seen' by the GP (perhaps based on some measure of how quickly the function varies over the input space, such as its characteristic length-scale)?
2) Is there a limit on the noise SD beyond which the GP will be unable to identify the function, even given infinite data?
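To make the setting concrete, here is a minimal sketch of the kind of experiment I have in mind (all values here, including the test function $\sin(x)$, the RBF kernel, and the fixed length-scale and signal SD, are illustrative assumptions, not part of the question): generate noisy samples, then maximise the GP log marginal likelihood over the noise SD and see whether the true value is recovered.

```python
import numpy as np

# Toy setting: f(x) = sin(x) observed with IID Gaussian noise.
rng = np.random.default_rng(0)
n, noise_sd = 50, 0.3                  # illustrative sample size and true noise SD
X = np.linspace(0.0, 10.0, n)
y = np.sin(X) + noise_sd * rng.normal(size=n)

def log_marginal_likelihood(X, y, length_scale, signal_sd, noise_sd):
    """GP log marginal likelihood for an RBF kernel plus Gaussian noise."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = signal_sd**2 * np.exp(-0.5 * d2 / length_scale**2)
    K += noise_sd**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha                        # data-fit term
            - np.log(np.diag(L)).sum()              # complexity term: 0.5*log|K|
            - 0.5 * len(X) * np.log(2.0 * np.pi))

# Crude grid search over the noise SD, other hyperparameters held fixed
# (a real run would optimise the length-scale and signal SD jointly).
grid = np.linspace(0.05, 1.0, 40)
best = max(grid, key=lambda s: log_marginal_likelihood(X, y, 1.0, 1.0, s))
print(f"maximum-likelihood noise SD: {best:.2f}")
```

Repeating this while sweeping `n` and `noise_sd` is how I would probe question 1 empirically; the question is whether the required `n` can be predicted in advance.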