I have a problem related to the use of the "add and subtract" strategy in optimization problems. This is related to a question I asked on Cross Validated. I got a really helpful answer to that question, but I still have not cracked the crux of the problem.
Take a function $f : \mathbb{R} \rightarrow \mathbb{R}$ and suppose it has a minimum on $\mathbb{R}$. It seems obvious that
${\arg \min}_{x\in \mathbb{R}} f(x) = {\arg \min}_{x\in \mathbb{R}} f(x + a - a)$ for any $a\in \mathbb{R}$
However, there are many counterexamples to this. Consider for instance $f(x) = (2-x)^2$. We have
${\arg \min}_{x\in \mathbb{R}} (2-x)^2 = \{2\}$
But
\begin{align*} & {\arg \min}_{x\in \mathbb{R}} (2 - 4 + 4-x)^2\\ = & {\arg \min}_{x\in \mathbb{R}} (2 - 4)^2 + 2(2-4)(4-x) + (4-x)^2\\ = & \{4\} \end{align*}
So something must be wrong with using "add and subtract" in optimization problems. However, I have seen the technique used in some proofs (in particular the one related to the aforementioned question on Cross Validated). Are all these proofs wrong, or are there cases in which ${\arg \min}_{x\in \mathbb{R}} f(x) = {\arg \min}_{x\in \mathbb{R}} f(x + a - a)$ for any $a\in \mathbb{R}$ holds and others in which it does not?
How did you get $4$ as the minimizer of the second expression? Dropping the constant term $(2-4)^2$, which does not affect the minimizer, you have $$\arg \min_x \; -4(4-x)+(4-x)^2,$$ which is a convex function whose global minimum occurs where its derivative is zero: $$4-2(4-x)=0 \Rightarrow 2x=4 \Rightarrow x=2.$$

Adding $0$ to the argument of any function obviously does not change that function, or its extrema, in any way.
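As a quick numerical sanity check (a sketch, not from the thread; the grid-search approach and the interval $[-10, 10]$ are my own choices for illustration), minimizing $(2-x)^2$ directly and minimizing the "add and subtract" rewrite $(2-4+4-x)^2$ give the same minimizer, since the two expressions are identical as functions of $x$:

```python
def f(x):
    # the original objective
    return (2 - x) ** 2

def f_rewritten(x):
    # "add and subtract" with a = 4: the argument is unchanged,
    # because 2 - 4 + 4 - x == 2 - x for every x
    return (2 - 4 + 4 - x) ** 2

# coarse grid search over [-10, 10] in steps of 0.01
xs = [i / 100 for i in range(-1000, 1001)]
xmin_direct = min(xs, key=f)
xmin_rewritten = min(xs, key=f_rewritten)

print(xmin_direct, xmin_rewritten)  # both print 2.0
```

Both searches land on $x = 2$, as expected: adding and subtracting the same constant inside the argument leaves the function, and hence its argmin, untouched.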