My question concerns relaxations in optimization problems, and specifically the proof of the following straightforward proposition. Let $z^P$ be the optimal value of the problem $P$ and $z^R$ the optimal value of its relaxation $R$ (e.g., an LP relaxation). More generally: $z^{P} = \min\limits_{x\in X} f( x)$ and $z^{R} = \min\limits_{x\in Y} g( x)$, where a relaxation satisfies $X \subseteq Y$ and $g(x) \le f(x)$ for all $x \in X$.
**Proposition.** If an optimal solution $x^R$ of the relaxation $R$ is feasible for the original problem, i.e. $x^R\in X$, and if $g(x^R) = f(x^R)$, then $x^R$ is also optimal for $P$.
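For context, here is the standard one-line proof of the proposition, which makes visible exactly where the hypothesis $g(x^R) = f(x^R)$ enters (the first inequality uses $x^R \in X$, the middle equality is the hypothesis in question, and the last inequality holds because $R$ is a relaxation):

$$
z^P \;\le\; f(x^R) \;=\; g(x^R) \;=\; z^R \;\le\; z^P,
$$

so all terms are equal and $x^R$ attains the optimal value of $P$.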
The proposition makes sense to me; however, I do not see why $g(x^R) = f(x^R)$ needs to be stated explicitly. Is it necessary to mention this in the proof? Once we have shown that $x^R$ is feasible for the original problem, isn't it implied that $g(x^R) = f(x^R)$?
To reformulate my question from another perspective: "Could there be a case where there exists an $x^R$ that is optimal for the relaxation $R$ and is feasible for the original problem, BUT where $g(x^R) \neq f(x^R)$?"
The answer to your question is YES: this can happen whenever $g$ is a strict lower estimator of $f$ at $x^R$ rather than $f$ itself (as in, e.g., a Lagrangian relaxation). Feasibility of $x^R$ for $P$ only tells you $x^R \in X$; it says nothing about the objective values agreeing, so $g(x^R) < f(x^R)$ is entirely possible, and then the proof breaks down and $x^R$ need not be optimal for $P$.
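A tiny concrete counterexample may help; the sets and functions below are my own illustrative choices, not from the question. Take $X = \{0, 1\}$, $Y = [0,1]$, $f(x) = x$, and the lower estimator $g(x) = -x$ (so $g \le f$ on $X$, as a relaxation requires). The relaxation's optimum $x^R = 1$ lies in $X$, yet $g(x^R) = -1 \neq 1 = f(x^R)$, and indeed $x^R$ is not optimal for $P$ (the true optimum is $x = 0$):

```python
# Counterexample sketch: relaxation optimum is feasible for P,
# but g(x_R) != f(x_R), and x_R is NOT optimal for P.
X = [0, 1]                        # feasible set of the original problem P
Y = [i / 10 for i in range(11)]   # relaxed feasible set, X subset of Y
f = lambda x: x                   # objective of P
g = lambda x: -x                  # lower estimator: g(x) <= f(x) on X

x_R = min(Y, key=g)               # optimal solution of the relaxation R
assert x_R in X                   # x_R = 1.0 is feasible for P ...
assert g(x_R) != f(x_R)           # ... but the objectives disagree: -1 vs 1

x_P = min(X, key=f)               # true optimum of P is x = 0, not x_R
assert f(x_P) < f(x_R)            # so x_R is not optimal for P
```

This shows that feasibility alone does not force $g(x^R) = f(x^R)$: the hypothesis really is needed. (If, instead, $g$ agrees with $f$ on $X$, as with the LP relaxation of an integer program where only the feasible set is enlarged, then $x^R \in X$ does imply $g(x^R) = f(x^R)$, which may be the situation you had in mind.)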