Here's the question from the book:
Here's my response:
But I don't think I answered the actual question asked. I proved that if $f(x)=0$, then $\int_a^b fg=0$, but I think they want me to prove the converse: if $\int_a^b fg=0$ for every integrable $g$, then $f(x)=0$, which I can't figure out how to prove.
------------------------EDIT---------------------------------
Here's the solution from the solution manual, but I have a question about it: (Analysis With an Introduction to Proof, by Steven R. Lay)
Does this still hold when $g$ doesn't equal $f$?


You are correct that you proved the wrong thing. As you said, you want to prove that if $\displaystyle \int_a^b fg =0$ for all integrable $g$, then $f\equiv 0$.
Here's how we can do that:
For $\varepsilon>0$ take an integrable function $g$ such that $|f(x)-g(x)|<\varepsilon$ for all $x \in[a,b]$. One way you could obtain such a $g$ is to just perturb $f$ continuously on some tiny interval, by less than $\varepsilon$.
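To make the perturbation idea concrete, here is a small numerical sketch (purely illustrative, not part of the proof): $f$, the interval $[a,b]$, and the bump shape are arbitrary choices; the point is only that $g$ differs from $f$ while staying within $\varepsilon$ of it everywhere.

```python
import numpy as np

a, b = 0.0, 1.0
eps = 1e-3
x = np.linspace(a, b, 10_001)

f = np.sin(np.pi * x)                 # an example continuous f on [a, b]

# A continuous "bump" supported on the tiny interval (0.4, 0.6),
# then rescaled so its sup-norm is eps/2 < eps.
bump = np.where((x > 0.4) & (x < 0.6), (x - 0.4) * (0.6 - x), 0.0)
bump *= (eps / 2) / bump.max()

g = f + bump                          # g differs from f on (0.4, 0.6) ...
assert np.max(np.abs(f - g)) < eps    # ... yet |f(x) - g(x)| < eps everywhere
```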
Then $$\int_a^b f^2 = \int_a^b f(f-g) + \int_a^b fg$$ $$= \int_a^b f(f-g).$$
But since $f$ is continuous on $[a,b]$, it is bounded by some $M$, so $$\left|\int_a^b f(f-g)\right| \le M\int_a^b |f-g| < \varepsilon(b-a)M.$$
Since we can do this for every $\varepsilon >0$, we have that $$\int_a^b f^2 =0.$$ Since $f$ is continuous, $f^2$ is a nonnegative, continuous function whose integral is zero. This implies that $f^2 \equiv0$ on $[a,b]$, which implies that $f \equiv 0$ on $[a,b]$.
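For completeness, the last step (a nonnegative continuous function with zero integral must vanish) can be sketched as follows; this is a standard argument filling in a detail the answer leaves implicit:

```latex
Let $h = f^2$, so $h \ge 0$ is continuous on $[a,b]$ and $\int_a^b h = 0$.
Suppose $h(x_0) = c > 0$ for some $x_0 \in [a,b]$. By continuity there is a
$\delta > 0$ such that $h(x) > c/2$ on the interval
$I = [x_0 - \delta,\, x_0 + \delta] \cap [a,b]$, which has some positive
length $\ell > 0$. Then
\[
  \int_a^b h \;\ge\; \int_I h \;\ge\; \frac{c}{2}\,\ell \;>\; 0,
\]
contradicting $\int_a^b h = 0$. Hence $h \equiv 0$ on $[a,b]$, and so $f \equiv 0$.
```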