Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ and $q:\mathbb{R}^m\rightarrow \mathbb{R}$ be analytic (well-behaved) functions. Let $n>m$, and suppose $\int_{\mathbb{R}^{n-m}}f\,d\lambda = q$, i.e., $q$ is obtained from $f$ by integrating out $n-m$ of its coordinates. Now, if I try to find an estimator (such as a neural network, or something else) of either one by gradient descent on the error, I would expect that finding an estimator of $q$ is "easier" than finding one of $f$. By "easier" I mean that I can reach a smaller error, or the error decreases faster during training, or my estimator needs fewer parameters, or perhaps some other notion of "easier" that is natural to humans.
Is there a theoretical result or a body of theory that supports this intuition? Or is it something we simply can't know in general? Or do I first need to make my definition of "easier" more precise?
In simple words: given $f(x,y)$ and $q(y)=\int f(x,y)\,dx$, which of $f$ and $q$ can I approximate more "easily"?
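To make the comparison concrete, here is a minimal numerical sketch of the setup (my own illustration, not a general proof of anything): I take $f(x,y)$ to be the standard 2D Gaussian density, so that its marginal $q(y)=\int f(x,y)\,dx$ is the 1D Gaussian density, and I fit each function by least squares with polynomial features as a stand-in for "an estimator trained on the error". The degree, grid, and choice of polynomial features are arbitrary assumptions of the sketch.

```python
import numpy as np

# f(x, y): standard 2D Gaussian density; integrating out x gives the 1D density q(y).
f = lambda x, y: np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)
q = lambda y: np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)

xs = np.linspace(-4, 4, 81)
ys = np.linspace(-4, 4, 81)

# Estimator of q: degree-8 polynomial in y (9 parameters), fit by least squares.
cq = np.polyfit(ys, q(ys), 8)
err_q = np.sqrt(np.mean((np.polyval(cq, ys) - q(ys)) ** 2))

# Estimator of f: all monomials x^i y^j with i + j <= 8 (45 parameters),
# fit by least squares on the same grid.
X, Y = np.meshgrid(xs, ys)
terms = [(X**i * Y**j).ravel() for i in range(9) for j in range(9 - i)]
A = np.stack(terms, axis=1)
coef, *_ = np.linalg.lstsq(A, f(X, Y).ravel(), rcond=None)
err_f = np.sqrt(np.mean((A @ coef - f(X, Y).ravel()) ** 2))

print("RMS error fitting q:", err_q)
print("RMS error fitting f:", err_f)
```

Of course this conflates several things (model class, parameter count, dimension of the domain), which is exactly why I suspect my notion of "easier" needs to be pinned down.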