Suppose we have two functions $f(x)$ and $g(x)$.
We know that $f' \in AC[a, b]$, $f(x) > 0$ (absolutely continuous and positive), and $g \in L^p(a, b)$.
Is it true that $\int_a^b f(x) g(x)\, dx = 0$ implies $\int_a^b g(x)\, dx = 0$?
The result is false in general. The mapping $\phi: g \mapsto \int_a^b f(x) g(x)\, dx$ is a continuous linear form on $L^p(a,b)$, and so is $\psi: g\mapsto \int_a^b g(x)\,dx$. If your property were true, it would mean that $\ker \phi \subset \ker \psi$. It follows easily that $\psi = c \phi$ for some scalar $c$; since $\psi \neq 0$ (take $g \equiv 1$), we have $c \neq 0$, so $\phi = \lambda \psi$ with $\lambda = 1/c$, which means that $$\forall g\in L^p(a,b),\quad \int_a^b(f(x)-\lambda)g(x)\, dx = 0$$ Taking $g(x) = f(x) - \lambda$ we get $$\int_a^b(f(x)-\lambda)^2\, dx = 0$$ hence $f(x)=\lambda$ is constant. Conversely, if $f$ is a nonzero constant, the result is true.
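A concrete instance (not in the original answer, but easy to check by hand) is $f(x) = x$ on $(0,1)$ with $g(x) = x - 2/3$: then $\int_0^1 f(x)g(x)\,dx = 1/3 - 1/3 = 0$ while $\int_0^1 g(x)\,dx = -1/6 \neq 0$. A quick numerical sanity check using a plain midpoint rule:

```python
# Midpoint-rule check that ∫ f·g = 0 while ∫ g ≠ 0
# for f(x) = x and g(x) = x - 2/3 on (0, 1).
N = 100_000
h = 1.0 / N
xs = [(k + 0.5) * h for k in range(N)]  # midpoints of the N subintervals

f = lambda x: x
g = lambda x: x - 2.0 / 3.0

int_fg = sum(f(x) * g(x) for x in xs) * h
int_g = sum(g(x) for x in xs) * h

print(abs(int_fg) < 1e-9)          # True: ∫ f g dx = 0
print(abs(int_g + 1.0 / 6.0) < 1e-9)  # True: ∫ g dx = -1/6 ≠ 0
```

So the hypothesis $\int fg = 0$ alone puts no constraint on $\int g$ unless $f$ is constant.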
Edit: In the comments, you ask whether we can get information about $g$ when $\int_a^b f(x) g(x)\, dx=0$, and about its positive and negative parts. The difficult part of the question is that there are many directions in which one could go from $$\int_a^b f(x) g(x)\, dx = 0\qquad(R)$$ and probably many valid conclusions. You could decompose as usual $$g = g^+ - g^-$$ with $g^+ = \max(0, g)$ and $g^- = \max(0, -g)$.
I first thought that for $p \in (1, +\infty)$ one could prove a relation such as
$$(R) \Rightarrow \|g^+\|_p \leqslant \alpha \|g\|_p$$
with $\alpha \in (0, 1)$, but this turns out to be false in general, as the following counterexample shows:
Let $\epsilon > 0$ and let $\tau_\epsilon = \frac{1}{1+\epsilon^2}$, so that $1-\tau_\epsilon = \epsilon^2 \tau_\epsilon$. Let
$$g_\epsilon(x) = \begin{cases} -\epsilon & x \in (0, \tau_\epsilon)\\[4pt] \dfrac{1}{\epsilon} & x \in (\tau_\epsilon, 1) \end{cases}$$
Then one has
$$\int_0^1 g_\epsilon(x)\, dx = 0, \quad \|g_\epsilon\|_2 = 1, \quad \|g_\epsilon^-\|_2 = \frac{\epsilon}{\sqrt{1+\epsilon^2}} \xrightarrow[\epsilon \to 0]{} 0$$
which proves that the $L^2$ norm of the negative part can be arbitrarily small when $\|g\|_2 = 1$ and $\int_0^1 g = 0$.
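Since $g_\epsilon$ is piecewise constant, the three quantities above can be checked with elementary arithmetic. A short script (not part of the original answer) verifying them for several values of $\epsilon$:

```python
import math

# g_ε = -ε on (0, τ_ε) and 1/ε on (τ_ε, 1), with τ_ε = 1/(1 + ε²).
# For a piecewise-constant function the integrals are just value × length.
for eps in (0.5, 0.1, 0.01):
    tau = 1.0 / (1.0 + eps**2)
    integral = -eps * tau + (1.0 / eps) * (1.0 - tau)        # ∫ g_ε dx
    norm2_sq = eps**2 * tau + (1.0 / eps**2) * (1.0 - tau)   # ‖g_ε‖₂²
    neg_norm2 = math.sqrt(eps**2 * tau)                      # ‖g_ε⁻‖₂
    print(abs(integral) < 1e-10,                             # True: mean zero
          abs(norm2_sq - 1.0) < 1e-10,                       # True: unit L² norm
          abs(neg_norm2 - eps / math.sqrt(1 + eps**2)) < 1e-12)  # True
```

As $\epsilon$ shrinks, `neg_norm2` goes to $0$ while the total norm stays pinned at $1$, exactly as claimed.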
Similar counterexamples would presumably show that $\|g^-\|_p$ can be arbitrarily small when $\|g\|_p = 1$ and $(R)$ is satisfied.
I think one can only expect a lower bound in $L^1$ for $\|g^+\|$ and $\|g^-\|$ when $(R)$ is satisfied.
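Here is a sketch of such an $L^1$ bound (my own addition, assuming additionally that $f$ is continuous on $[a,b]$, so that $0 < m \leqslant f \leqslant M$ for some constants $m, M$). Splitting $g = g^+ - g^-$ in $(R)$ gives
$$\int_a^b f\, g^+ \, dx = \int_a^b f\, g^- \, dx,$$
and bounding $f$ on each side,
$$m \|g^+\|_1 \leqslant \int_a^b f\, g^+ \, dx = \int_a^b f\, g^- \, dx \leqslant M \|g^-\|_1,$$
so $\|g^-\|_1 \geqslant \frac{m}{M} \|g^+\|_1$, and symmetrically $\|g^+\|_1 \geqslant \frac{m}{M} \|g^-\|_1$. Since $\|g\|_1 = \|g^+\|_1 + \|g^-\|_1$, this yields
$$\|g^+\|_1 \geqslant \frac{m}{m+M}\,\|g\|_1 \quad\text{and}\quad \|g^-\|_1 \geqslant \frac{m}{m+M}\,\|g\|_1,$$
so under $(R)$ neither part can be small in $L^1$ relative to $\|g\|_1$.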