Conditions for integrals to be equal


Suppose that $f, g,h:[0,1]\rightarrow \mathbb{R}$ are functions that satisfy $f,g\geq 0$ and $$\int_0^1 f(t)h(t)dt = \int_{0}^1 g(t)h(t)dt.$$ What are necessary and sufficient conditions on $h$ to ensure that $$\int_0^1f(t)dt = \int_0^1g(t)dt?$$

EDIT: (Some thoughts:) I thought maybe if $h>0$, this would be enough. However, I was shown that, for example, if $ f = 1/h $ and $g = 1/\left(\int h\right)$, then $$\int fh = \int gh.$$ In this case, for $\int f = \int g$ to be satisfied, $h$ must satisfy $$\int \frac{1}{h} = \frac{1}{\int h}$$ and of course many $h>0$ do not satisfy that condition (e.g. $h(t) = 1+t$). Since, as user251257 points out, $h(t) \equiv1$ is sufficient, I was curious what the answer to the above might be.
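For anyone who wants to sanity-check this numerically, here is a quick Python/NumPy sketch (plain midpoint-rule quadrature; the grid size is arbitrary) confirming that for $h(t)=1+t$, $f=1/h$, $g=1/\int h$ the products integrate to the same value while $f$ and $g$ do not:

```python
import numpy as np

# Midpoint-rule quadrature on [0, 1]; fine for these smooth integrands.
N = 200_000
tm = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dt = 1.0 / N

def integral(values):
    return float(np.sum(values) * dt)

h = 1.0 + tm
f = 1.0 / h
int_h = integral(h)               # ∫₀¹ (1+t) dt = 3/2
g = np.full_like(tm, 1.0 / int_h) # g is the constant 1/∫h

int_fh = integral(f * h)  # = ∫₀¹ 1 dt = 1
int_gh = integral(g * h)  # = (1/∫h)·∫h = 1
int_f  = integral(f)      # = ln 2 ≈ 0.6931
int_g  = integral(g)      # = 2/3 ≈ 0.6667
```

So $\int fh=\int gh$ holds, while $\int f=\ln 2\neq 2/3=\int g$, exactly as claimed.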

(Where this question came from) I originally came across this question when looking at what can be said when you have solutions $y_1$ and $y_2$ to the Riccati equations $y_i' + y^2_i + r_i =0$ ($i=1,2$) on the interval $[a,b]$ that satisfy $y_1(a) = y_2(a)$ and $y_1(b) = y_2(b)$. In this case, you can write $$0 = g(y_2(t)-y_1(t))\big|_a^b = \int_a^b g(r_1(t)-r_2(t))dt,$$ where $g$ is a function that satisfies $g' = (y_1 + y_2)g$. So I wanted to claim that this means $\int_a^b r_1(t)dt = \int_a^b r_2(t)dt$, and in the process I realized I didn't know the answer to the question above.
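The identity $\frac{d}{dt}\big[g\,(y_2-y_1)\big] = g\,(r_1-r_2)$ used above can be verified symbolically; here is a small SymPy sketch (the function names are just placeholders for the unknowns):

```python
import sympy as sp

t = sp.symbols('t')
y1, y2, g, r1, r2 = (sp.Function(n)(t) for n in ('y1', 'y2', 'g', 'r1', 'r2'))

# Differentiate g·(y2 - y1), then substitute the Riccati equations
# y_i' = -(y_i^2 + r_i) and the defining ODE g' = (y1 + y2)·g.
expr = sp.diff(g * (y2 - y1), t).subs({
    sp.Derivative(y1, t): -(y1**2 + r1),
    sp.Derivative(y2, t): -(y2**2 + r2),
    sp.Derivative(g, t): (y1 + y2) * g,
})

# The quadratic terms cancel, leaving exactly g·(r1 - r2).
difference = sp.expand(expr - g * (r1 - r2))  # identically 0
```

Integrating the identity from $a$ to $b$ and using $y_1(a)=y_2(a)$, $y_1(b)=y_2(b)$ gives the displayed equation.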

3 Answers

---

Here's a somewhat heuristic justification for why I don't expect anything more useful than $h$ being constant to work: think of the set of integrable functions $[0,1]\to\mathbb{R}$ as a vector space. The hypothesis says that the element $f-g$ of that space satisfies $\langle f-g, h\rangle = 0$, i.e. $f-g\in h^\perp$, and you're asking for conditions on $h$ which would imply that $\langle f-g, 1\rangle = 0$, i.e. $f-g\in 1^\perp$. Now if $h$ and $1$ are not proportional to each other, there's no way for that implication to hold for every such $f-g$.

Note that $f,g\geq 0$ doesn't change the generality of the above argument, since any element of our vector space can be written as a difference of nonnegative elements.

---

Nothing will work except $h(x)=c$ for a nonzero constant $c$.

Here's a sketch of the argument: suppose $h(a) \neq h(b)$ for some $a,b\in (0,1)$. Then you can pick $f$ to be a small bump near $x=a$ with height $1/h(a)$, and similarly $g$ a bump of the same width near $x=b$ with height $1/h(b)$: both products $fh$ and $gh$ integrate to (approximately) the width of the bump, while $\int f$ and $\int g$ differ whenever $h(a)\neq h(b)$.
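Here is a numerical sketch of this bump construction (Python/NumPy; a hypothetical piecewise-constant $h$ is used so that the bump integrals come out exactly rather than just approximately):

```python
import numpy as np

# A concrete instance of the bump argument, with h locally constant so the
# equalities are exact: h = 1 on [0, 1/2) and h = 2 on [1/2, 1].
# Take f a bump of width 0.1 near a = 0.15 with height 1/h(a) = 1, and
# g a bump of the same width near b = 0.65 with height 1/h(b) = 1/2.
N = 100_000
tm = (np.arange(N) + 0.5) / N   # midpoints for midpoint-rule quadrature
dt = 1.0 / N

h = np.where(tm < 0.5, 1.0, 2.0)
f = np.where((tm >= 0.1) & (tm < 0.2), 1.0, 0.0)  # height 1/h(a) = 1
g = np.where((tm >= 0.6) & (tm < 0.7), 0.5, 0.0)  # height 1/h(b) = 1/2

int_fh = float(np.sum(f * h) * dt)  # 0.1  (width of the bump)
int_gh = float(np.sum(g * h) * dt)  # 0.1  (same)
int_f  = float(np.sum(f) * dt)      # 0.1
int_g  = float(np.sum(g) * dt)      # 0.05 — differs from ∫f
```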

It may help to take a step back and build intuition in finite dimensions. If $u,v,h$ are vectors in $\mathbb{R}^n$ (and $n>1$) and you have $$u\cdot h=v\cdot h,$$ there is no reason to believe that $$u\cdot\mathbf{1} = v\cdot \mathbf{1},$$ and the situation doesn't change in infinite dimensions.
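The finite-dimensional picture is easy to check directly; here is a tiny NumPy sketch with ad-hoc example vectors:

```python
import numpy as np

# Finite-dimensional analogue: u·h = v·h says nothing about u·1 = v·1
# unless h is proportional to the all-ones vector.
h = np.array([1.0, 2.0])
u = np.array([2.0, 0.0])
v = np.array([0.0, 1.0])
ones = np.ones(2)

dot_uh = float(np.dot(u, h))     # 2.0
dot_vh = float(np.dot(v, h))     # 2.0 — equal to u·h
sum_u = float(np.dot(u, ones))   # 2.0
sum_v = float(np.dot(v, ones))   # 1.0 — yet the coordinate sums differ
```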

---

There has to be a constant $c\neq 0$ such that $h(x)=c$ for almost all $x\in [0,1]$. Sufficiency is obvious, as is the necessity of $c\neq 0$. We prove the necessity of $h$ being essentially constant. In the following, let $I=[0,1]$ and let $\lambda$ be Lebesgue measure; the argument works for every probability space, though.

We first show that if there is no $c\in\mathbb{R}$ such that $h(x)=c$ for almost all $x$, then there exists $k\in\mathbb{R}$ such that $U_k=\{x\in I:h(x)>k\}$ and $L_k=\{x\in I:h(x)<k\}$ both have positive measure. To see this, pick any $c\in\mathbb{R}$. If both $U_c$ and $L_c$ have positive measure, we are done, and if both have measure zero, then $h(x)=c$ for almost all $x\in I$. For the remaining cases, assume that $U_c$ has positive measure and $L_c$ does not (the other case works essentially the same way). Let $$s=\inf\Big\{w\in\mathbb{R}: \lambda\Big(h^{-1}\big((-\infty,w)\big)\Big)>0\Big\}.$$ The set over which the infimum is taken is clearly nonempty and bounded below by $c$, so the infimum exists. For any natural number $n$, we must have $\lambda\Big(h^{-1}\big((-\infty,s+1/n)\big)\Big)>0$ by the definition of the infimum. However, we must also have $\lambda\Big(h^{-1}\big((s+1/n^*,\infty)\big)\Big)>0$ for some $n^*$: otherwise $h\leq s$ almost everywhere, and since $\lambda\Big(h^{-1}\big((-\infty,w)\big)\Big)=0$ for every $w<s$, also $h\geq s$ almost everywhere, so $h(x)=s$ for almost all $x\in I$, contrary to our assumption. So we can take $k=s+1/n^*$: then $L_k$ has positive measure by the first observation and $U_k$ by the second.

So assume there is no $c\in\mathbb{R}$ such that $h(x)=c$ for almost all $x\in I$. Pick some $k\in\mathbb{R}$ such that both $U_k$ and $L_k$ have positive measure. Let $$\alpha_u=\frac{1}{\lambda(U_k)}\int_{U_k} h~\mathrm d\lambda\quad\text{and}\quad \alpha_l=\frac{1}{\lambda(L_k)}\int_{L_k} h~\mathrm d\lambda,$$ and note that $\alpha_u>k>\alpha_l$. Indeed, $\alpha_u$ is the average value of $h$ over $U_k$ and $\alpha_l$ is the average value of $h$ over $L_k$.

Assume for now that $\alpha_u\neq0\neq\alpha_l$. Let $$f=1_{U_k} \frac{\alpha_l}{\alpha_u}\frac{\lambda(L_k)}{\lambda(U_k)}$$ and $g=1_{L_k}$. Then $$\int fh~\mathrm d\lambda=\alpha_l\lambda(L_k)=\int gh~\mathrm d\lambda,$$ but, since $\alpha_l\neq\alpha_u$, $$\int f~\mathrm d\lambda=\frac{\alpha_l}{\alpha_u}\lambda(L_k)\neq \lambda(L_k)=\int g~\mathrm d\lambda.$$
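To make this concrete, here is a numerical sketch of the construction for the particular choice $h(t)=t$ and $k=1/2$, so that $\alpha_u=3/4$, $\alpha_l=1/4$, $f=\tfrac13\,1_{U_k}$ and $g=1_{L_k}$ (all integrals approximated by a midpoint rule):

```python
import numpy as np

# The construction above for h(t) = t and k = 1/2: U_k = (1/2, 1],
# L_k = [0, 1/2), α_u = 3/4, α_l = 1/4, and
# f = (α_l/α_u)(λ(L_k)/λ(U_k))·1_{U_k} = (1/3)·1_{U_k}, g = 1_{L_k}.
N = 100_000
tm = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dt = 1.0 / N

h = tm
Uk = tm > 0.5
Lk = tm < 0.5

alpha_u = float(np.sum(h[Uk]) * dt) / float(np.sum(Uk) * dt)  # ≈ 3/4
alpha_l = float(np.sum(h[Lk]) * dt) / float(np.sum(Lk) * dt)  # ≈ 1/4

f = np.where(Uk, (alpha_l / alpha_u) * (np.sum(Lk) / np.sum(Uk)), 0.0)
g = np.where(Lk, 1.0, 0.0)

int_fh = float(np.sum(f * h) * dt)  # ≈ 1/8 = α_l·λ(L_k)
int_gh = float(np.sum(g * h) * dt)  # ≈ 1/8 as well
int_f  = float(np.sum(f) * dt)      # ≈ 1/6
int_g  = float(np.sum(g) * dt)      # ≈ 1/2 ≠ ∫f
```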

It remains to dispose of the cases $\alpha_u=0$ and $\alpha_l=0$. We treat the first; the second can be dealt with in the same manner. If $\alpha_u=0$, let $f=1_{U_k}$ and let $g$ be the constant function with value $0$. We then get $$\int fh~\mathrm d\lambda=\alpha_u\lambda(U_k)=0=\int gh~\mathrm d\lambda,$$ but $$\int f~\mathrm d\lambda=\lambda(U_k)>0=\int g~\mathrm d\lambda.$$
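A concrete $h$ realizing the case $\alpha_u=0$ is $h(t)=\sin(2\pi t)$ with $k=-1$: then $U_k$ is all of $[0,1]$ up to a null set, and the average of $h$ over $U_k$ is $\int_0^1\sin(2\pi t)\,dt=0$. A numerical sketch:

```python
import numpy as np

# h(t) = sin(2πt) with k = -1: U_k = {h > -1} is (up to a null set)
# all of [0, 1], and the average of h over U_k is ∫₀¹ sin(2πt) dt = 0,
# i.e. α_u = 0.  Then take f = 1_{U_k} and g ≡ 0 as in the text.
N = 100_000
tm = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dt = 1.0 / N

h = np.sin(2 * np.pi * tm)
f = np.where(h > -1.0, 1.0, 0.0)  # ≡ 1 except on a null set
g = np.zeros_like(tm)

int_fh = float(np.sum(f * h) * dt)  # ≈ 0 = ∫gh
int_f  = float(np.sum(f) * dt)      # ≈ 1 > 0 = ∫g
```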

Edit: The proof above does not take account of $f$ and $g$ having to be nonnegative. Indeed, it can happen that $\alpha_l<0<\alpha_u$, so that $\alpha_l/\alpha_u<0$ (that is the only thing that can go wrong). To take care of this case, let $f$ take the value $1$ on $U_k$, the value $-\frac{\alpha_u}{\alpha_l}\frac{\lambda(U_k)}{\lambda(L_k)}$ (which is positive, since $\alpha_l<0<\alpha_u$) on $L_k$, and $0$ everywhere else. Also, let $g$ be the constant function with value zero. Then $$\int fh~\mathrm d\lambda=\alpha_u\lambda(U_k)-\frac{\alpha_u}{\alpha_l}\frac{\lambda(U_k)}{\lambda(L_k)}\,\alpha_l\lambda(L_k)=0=\int gh~\mathrm d\lambda,$$ but $\int f>0=\int g$.
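Here is a numerical sketch of this last case for the particular choice $h(t)=t-\tfrac14$ and $k=0$, where $\alpha_l=-\tfrac18<0<\tfrac38=\alpha_u$; the value of $f$ on $L_k$ is taken to be $-\frac{\alpha_u}{\alpha_l}\frac{\lambda(U_k)}{\lambda(L_k)}=9$, which is positive, so $f\geq 0$ as required:

```python
import numpy as np

# h(t) = t - 1/4 with k = 0: U_k = (1/4, 1], L_k = [0, 1/4),
# α_u = 3/8, α_l = -1/8.  With the minus sign, f takes the positive
# value -(α_u/α_l)(λ(U_k)/λ(L_k)) = 9 on L_k, and 1 on U_k; g ≡ 0.
N = 100_000
tm = (np.arange(N) + 0.5) / N   # midpoints of N equal subintervals
dt = 1.0 / N

h = tm - 0.25
Uk = h > 0.0
Lk = h < 0.0  # together with Uk this covers all midpoints here

lam_U = float(np.sum(Uk) * dt)               # 3/4
lam_L = float(np.sum(Lk) * dt)               # 1/4
alpha_u = float(np.sum(h[Uk]) * dt) / lam_U  # 3/8
alpha_l = float(np.sum(h[Lk]) * dt) / lam_L  # -1/8

f = np.where(Uk, 1.0, -(alpha_u / alpha_l) * (lam_U / lam_L))  # 9 on L_k
g = np.zeros_like(tm)

int_fh = float(np.sum(f * h) * dt)  # ≈ 0 = ∫gh
int_f  = float(np.sum(f) * dt)      # ≈ 3 > 0 = ∫g
```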