I've been studying the Fundamental Lemma of the Calculus of Variations and the concept of completeness in statistical inference, and I've noticed that both involve the idea that an integral (or expectation) being zero forces a function to be zero. In the Fundamental Lemma, if a continuous function $f(x)$ on a compact real interval $I$ integrates to zero against every smooth test function $\varphi(x)$ with compact support in the interior of $I$, then $f(x)$ must vanish on $I$:
$$\int_I f(x)\,\varphi(x)\,dx = 0 \quad \text{for all } \varphi \text{ with compact support} \implies f(x) = 0 \text{ for all } x \in I.$$
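For context, the standard proof idea behind this lemma (just a sketch I've seen in textbooks, included to show where the conclusion comes from) is a bump-function argument: if $f(x_0) > 0$ at some interior point $x_0$, continuity gives a neighborhood where $f$ stays positive, and a nonnegative bump $\varphi$ supported there makes the integral strictly positive:

```latex
% Sketch: suppose f(x_0) > 0 at an interior point x_0. By continuity, choose
% \delta > 0 with f > f(x_0)/2 on (x_0 - \delta, x_0 + \delta). Take a smooth
% bump \varphi \ge 0 supported in that interval with \int \varphi > 0. Then
\int_I f(x)\,\varphi(x)\,dx
  \;\ge\; \frac{f(x_0)}{2} \int_{x_0-\delta}^{x_0+\delta} \varphi(x)\,dx
  \;>\; 0,
% contradicting the hypothesis that the integral vanishes for every test
% function; the case f(x_0) < 0 is symmetric, so f \equiv 0 on I.
```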
On the other hand, in statistical inference, a statistic $T$ is complete if for every measurable function $g$:
$$E_\theta[g(T)] = \int g(t)\,f_T(t;\theta)\,dt = 0 \ \text{ for all } \theta \implies P_\theta\big(g(T) = 0\big) = 1 \ \text{ for all } \theta,$$
where $f_T(t;\theta)$ is the density of $T$ under the parameter $\theta$.
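To make the definition concrete, here is the textbook Bernoulli example (a standard illustration, not specific to my question): with $X_1,\dots,X_n \stackrel{iid}{\sim} \mathrm{Bernoulli}(\theta)$ and $T = \sum_i X_i$,

```latex
E_\theta[g(T)]
  = \sum_{t=0}^{n} g(t)\binom{n}{t}\theta^{t}(1-\theta)^{n-t}
  = (1-\theta)^{n} \sum_{t=0}^{n} g(t)\binom{n}{t}
      \left(\frac{\theta}{1-\theta}\right)^{t}.
% If this vanishes for all \theta \in (0,1), then the polynomial in
% r = \theta/(1-\theta) is identically zero, so every coefficient
% g(t)\binom{n}{t} is zero, hence g(t) = 0 for t = 0, \dots, n:
% T is a complete statistic for the Bernoulli family.
```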
My question is: Is there any underlying connection between these two concepts, despite their being applied in different fields? Or is the similarity in their hypotheses and conclusions merely coincidental?
Thank you in advance