Consider a linear or non-linear optimization problem of the form:
$$\min x_1$$
$$x_1 \ge x_2 \ge x_3$$ $$f_1(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123}) \ge 0$$ $$\ldots$$ $$f_n(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123}) \ge 0$$
where the functions $f_i$ are symmetric under an arbitrary permutation of the indices $1,2,3$ of the $x$ variables (the permutation acting on the pair indices accordingly); for example $f_i(x_3,x_2,x_1,x_{23},x_{13},x_{12},x_{123})=f_i(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$, where we exchange $1$ and $3$.
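To make the permutation action on the pair indices concrete, here is a small numerical check. This is only a sketch: the function `f` is an arbitrarily chosen symmetric example, and the helper names are mine.

```python
from itertools import permutations

# A sample symmetric function (an arbitrary choice, just to illustrate invariance):
def f(x1, x2, x3, x12, x13, x23, x123):
    return x1 * x2 * x3 + x12 + x13 + x23 - x123

def permuted_args(sigma, x):
    # sigma permutes the indices {1,2,3}; singleton variables move with their
    # index, and the pair variable x_{ij} moves to x_{sigma(i)sigma(j)}.
    x1, x2, x3, x12, x13, x23, x123 = x
    singles = {1: x1, 2: x2, 3: x3}
    pairs = {frozenset({1, 2}): x12, frozenset({1, 3}): x13, frozenset({2, 3}): x23}
    s = dict(zip((1, 2, 3), sigma))
    return (singles[s[1]], singles[s[2]], singles[s[3]],
            pairs[frozenset({s[1], s[2]})], pairs[frozenset({s[1], s[3]})],
            pairs[frozenset({s[2], s[3]})], x123)

# Exactly representable binary fractions, so all arithmetic below is exact.
x = (0.5, -1.0, 2.0, 3.0, -0.5, 1.5, 0.25)
vals = {f(*permuted_args(sigma, x)) for sigma in permutations((1, 2, 3))}
print(len(vals) == 1)  # every permutation of the indices gives the same value
```

For $\sigma$ exchanging $1$ and $3$, `permuted_args` reproduces exactly the argument order $(x_3,x_2,x_1,x_{23},x_{13},x_{12},x_{123})$ from the example above.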
If we now consider the problem:
$$\min x$$
$$f_1(x,x,x,y,y,y,z) \ge 0$$ $$\ldots$$ $$f_n(x,x,x,y,y,y,z) \ge 0$$
Is a solution of the second problem necessarily a solution of the first problem, once we set $x_1=x_2=x_3=x$, $x_{12}=x_{13}=x_{23}=y$, $x_{123}=z$, due to symmetry? Or can it happen that $\min x_1 \lt \min x$? How can one justify the answer formally?
I also suppose that the natural extension to indices drawn from $1,2,3,\ldots,m$ (where the $x$ variables carry all subsets of them as indices) admits the same answer as the $1,2,3$ case. Is that true?
Sure, that's possible. A simple example would be $f_i\equiv 0$ for $i\ge 2$, and $$f_1(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})=\mathbf{1}\{x_1=x_2=x_3= 0\}+\mathbf{1}\{\{x_1,x_2,x_3\}=\{-1,-2,-3\}\}-1,$$ where the $-1$ ensures that $f_1\ge 0$ holds exactly on the two intended sets. Then the minimum for the first problem is $-1$, and $0$ for the second.
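This counterexample can be verified numerically. The sketch below writes the constraint in the offset form $\mathbf{1}\{\cdot\}+\mathbf{1}\{\cdot\}-1\ge 0$, so that feasibility really restricts to the two intended sets, and brute-forces integer points in a small box:

```python
from itertools import product

def f1(x1, x2, x3):
    # Indicator-based constraint (with the -1 offset so that f1 >= 0
    # holds only on the two intended feasible sets).
    a = 1 if (x1, x2, x3) == (0, 0, 0) else 0
    b = 1 if sorted([x1, x2, x3]) == [-3, -2, -1] else 0
    return a + b - 1

# Problem 1: minimize x1 subject to x1 >= x2 >= x3 and f1 >= 0,
# searched over integer points in [-4, 4]^3.
feas1 = [(x1, x2, x3) for x1, x2, x3 in product(range(-4, 5), repeat=3)
         if x1 >= x2 >= x3 and f1(x1, x2, x3) >= 0]
min1 = min(p[0] for p in feas1)

# Problem 2: minimize x subject to f1(x, x, x) >= 0.
feas2 = [x for x in range(-4, 5) if f1(x, x, x) >= 0]
min2 = min(feas2)

print(min1, min2)  # -1 0
```

The only feasible points of the first problem are $(0,0,0)$ and $(-1,-2,-3)$ (the other permutations violate $x_1\ge x_2\ge x_3$), giving minima $-1$ and $0$ respectively.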
You can also choose $f_1$ to be smooth. Without giving a concrete example, there are clearly choices with $f_1^{-1}(\mathbb R_{\ge 0})=\mathcal B_r(1,1,1)\cup\bigcup_i\mathcal B_{r}(p_i)$, where $p_i$ for $i=1,\dots,6$ are the $6$ permutations of $(-1,-2,-3)$, $r$ is a small radius, say $r=\frac{1}{4}$, and $\mathcal B_r(p)=\{y\in\mathbb R^3:\|y-p\|_2\le r\}$. Take $p_1=(-1,-2,-3)$; then by construction we have $\|y-p_1\|_\infty\le\|y-p_1\|_2\le \frac{1}{4}$ for all $y\in\mathcal B_r(p_1)$, which ensures that every such $y=(y_1,y_2,y_3)$ has negative and pairwise distinct coordinates.
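The radius argument can be sanity-checked numerically. This is only a sketch; the rejection sampler and the sample count are arbitrary choices:

```python
import itertools
import random

random.seed(0)
p = (-1.0, -2.0, -3.0)
r = 0.25

def sample_ball(center, radius):
    # Rejection-sample a point from the L2-ball around `center`.
    while True:
        y = [c + random.uniform(-radius, radius) for c in center]
        if sum((yi - c) ** 2 for yi, c in zip(y, center)) <= radius ** 2:
            return y

samples = [sample_ball(p, r) for _ in range(5000)]
# ||y - p||_inf <= ||y - p||_2 <= 1/4, so all coordinates stay negative, and
# since the coordinates of p are 1 apart, they stay at least 1/2 apart.
all_negative = all(all(yi < 0 for yi in y) for y in samples)
all_distinct = all(abs(a - b) >= 0.5 for y in samples
                   for a, b in itertools.combinations(y, 2))
print(all_negative, all_distinct)  # True True
```

Each coordinate is perturbed by at most $r=\frac14$, so the pairwise gaps of $1$ in $(-1,-2,-3)$ shrink to at least $\frac12$, which is what the second check confirms.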
Constructing a linear optimization problem is also not an issue. Just take the line segment between $(-1,-1,-1,-1,0,0,0)$ and $(1,1,1,1,0,0,0)$, in the coordinate order $(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$. Clearly, we can choose linear $f_i$ such that exactly these are the feasible points, since this is a polytope. In the second optimization problem, there is exactly one feasible point, the origin, since there we must have $x_{12}=x_{13}=x_{23}=y=0$. Hence, in the second problem the minimum is $0$, while in the first it is $-1$. (Strictly speaking, linear $f_i$ carving out exactly this segment cannot all be symmetric in the sense above, since the segment is not invariant under the index permutations; the example is meant to illustrate how the coupling $x_{12}=x_{13}=x_{23}$ shrinks the feasible set.)
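The two minima over this segment can be checked directly. A sketch, using the coordinate order $(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$ from the question:

```python
# Points on the segment between (-1,-1,-1,-1,0,0,0) and (1,1,1,1,0,0,0),
# parametrized as (t, t, t, t, 0, 0, 0) for t in [-1, 1].
ts = [i / 100 for i in range(-100, 101)]
segment = [(t, t, t, t, 0.0, 0.0, 0.0) for t in ts]

# Problem 1: minimize x1 over the segment (x1 >= x2 >= x3 holds trivially).
min1 = min(pt[0] for pt in segment)

# Problem 2: only points with x12 = x13 = x23 (= y) survive the substitution;
# on this segment that forces t = 0.
sym = [pt for pt in segment if pt[3] == pt[4] == pt[5]]
min2 = min(pt[0] for pt in sym)

print(min1, min2)  # -1.0 0.0
```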
A very simple way to see that these two problems are fundamentally different is to notice that the set of feasible points of the second problem can be empty while it is non-empty in the first: just take the line segment between $(-1,-1,-1,-1,0,0,0)$ and $(1,1,1,-1,0,0,0)$, on which $x_{12}=-1$ can never equal $x_{13}=x_{23}=0$. Also, I would recommend starting with the simplest possible examples to capture the problem; here that would be $\mathbb R^3\times\mathbb R^3$, even omitting the last coordinate $x_{123}$.
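That the second problem's feasible set is empty on this segment is also easy to check numerically. A sketch, again in the coordinate order $(x_1,x_2,x_3,x_{12},x_{13},x_{23},x_{123})$:

```python
# Segment between (-1,-1,-1,-1,0,0,0) and (1,1,1,-1,0,0,0):
# the first three coordinates interpolate, while x12 is frozen at -1.
ts = [i / 100 for i in range(-100, 101)]
segment = [(t, t, t, -1.0, 0.0, 0.0, 0.0) for t in ts]

# The symmetrized problem needs x12 = x13 = x23, but -1 = 0 never holds:
sym = [pt for pt in segment if pt[3] == pt[4] == pt[5]]
print(len(sym))  # 0: the second problem has no feasible point
```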
Finally, I think at this point the extension to arbitrary dimensions is straightforward.