Why can we replace $|f(x_1, \cdots, x_j)|$ with $r+s$ subject to the constraints $f(x_1, \cdots, x_j) = r-s$ and $r, s \geq 0$ in a linear program (LP)? Here $f(x_1, \cdots, x_j)$ may appear in the objective function, in the constraints, or in both.
I have seen many problems where this technique is used, but I have never come across a proof that it is valid. How do I prove that it works in general (for both maximization and minimization problems)?
Crucially, it only works when the absolute value is used in a convex way: in a minimization objective where it enters with a positive coefficient, or on the left-hand side of a $\leq$ constraint with a positive coefficient. The general case is nonconvex and is not LP-representable (although it is MILP-representable).
Let $x = a - b$ with $a, b \geq 0$, and replace $|x|$ with $a + b$, where the intention is that optimality implies $a = \max(x, 0)$ and $b = \max(-x, 0)$.
Now assume you instead obtain a solution with $a = \max(x,0) + \epsilon$ and $b = \max(-x,0) + \epsilon$ for some $\epsilon \geq 0$, and study what happens if you assume this is an optimal solution (hint: if $|x|$ enters the minimization with a positive sign, how does $\epsilon$ enter the objective, and what does a non-zero $\epsilon$ tell you about optimality?).
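The $\epsilon$ argument can be checked numerically: any such perturbed pair still satisfies $a - b = x$, but contributes $a + b = |x| + 2\epsilon$ to the objective, so minimization forces $\epsilon = 0$. A small sketch in plain Python (the `split` helper and the sample values are illustrative, not part of any library):

```python
def split(x, eps=0.0):
    """Perturbed split: a = max(x,0) + eps, b = max(-x,0) + eps."""
    return max(x, 0.0) + eps, max(-x, 0.0) + eps

for x in (-2.5, 0.0, 4.0):
    for eps in (0.0, 0.3, 1.0):
        a, b = split(x, eps)
        # x is represented exactly regardless of eps ...
        assert abs((a - b) - x) < 1e-12
        # ... but the objective term a + b pays |x| + 2*eps,
        # so any eps > 0 is strictly suboptimal in a minimization.
        assert abs((a + b) - (abs(x) + 2 * eps)) < 1e-12
```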
If the term appears only in the constraints and the constraint is inactive at optimality, it does not matter whether $a + b$ equals $|x|$ exactly. If the constraint is tight at optimality, the same principle as for the objective applies: $\epsilon$ is driven to 0.
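To see the whole technique end to end, here is a minimal sketch using `scipy.optimize.linprog` on a made-up instance: minimize $|x|$ subject to $x + y \geq 4$ and $0 \leq y \leq 1$. Substituting $x = a - b$, $|x| \to a + b$ with $a, b \geq 0$ gives a pure LP; at the optimum $y = 1$, $x = 3$, and the solver returns a complementary pair ($\min(a,b) = 0$):

```python
from scipy.optimize import linprog

# Variables z = [a, b, y]; objective a + b stands in for |x|.
c = [1, 1, 0]
# x + y >= 4  <=>  -(a - b) - y <= -4
A_ub = [[-1, 1, -1]]
b_ub = [-4]
bounds = [(0, None), (0, None), (0, 1)]  # a, b >= 0 and 0 <= y <= 1

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
a, b, y = res.x
print(res.fun)    # optimal |x| = 3 (with y at its bound 1, x must be >= 3)
print(min(a, b))  # 0: any overlap eps = min(a, b) > 0 would cost 2*eps extra
```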