In Theorem 10.38 of Principles of Mathematical Analysis by W. Rudin, it is stated without explanation that convexity of the open domain $E \subset \mathbb{R}^n$ of $f$ suffices for the following claim to hold:
If $1\le p\le n$ and $D_jf(x) = 0$ for all $p < j \le n$ and all $x \in E$, then $f$ does not depend on $x_{p+1}, \dots, x_n$.
The book and the comments here give examples showing that $D_jf(x) = 0$ alone is not enough to make $f$ independent of $x_j$. But how does imposing *convexity* on $E$ make $f$ independent of $x_j$? If the answer is "convexity lets you take partial derivatives along line segments", how does that hint apply to my question, rigorously?
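To see why some hypothesis on $E$ is needed, here is a numerical sketch of the standard kind of counterexample (the specific function and slit domain are my construction, not Rudin's): a smooth $f$ on a *non-convex* open set $E \subset \mathbb{R}^2$ with $D_1 f = 0$ everywhere on $E$, yet $f$ still depends on $x_1$.

```python
import math

# E = R^2 minus the closed vertical ray {(0, x2) : x2 >= 0} -- open but NOT convex.
# (This slit domain and the bump function below are illustrative choices of mine.)

def phi(t):
    # Smooth one-variable bump: 0 for t <= 0, positive for t > 0.
    return math.exp(-1.0 / t) if t > 0 else 0.0

def f(x1, x2):
    # On the right half-plane f depends only on x2; elsewhere f = 0.
    # The two pieces agree (both are 0) near every point of E with x1 = 0,
    # since such points have x2 < 0, where phi vanishes identically.
    return phi(x2) if x1 > 0 else 0.0

def d1f(x1, x2, h=1e-6):
    # Central-difference approximation of the partial derivative D_1 f.
    return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)

# D_1 f = 0 at sample points throughout E ...
for p in [(1.0, 1.0), (-1.0, 1.0), (2.0, -3.0), (-0.5, 0.5)]:
    assert abs(d1f(*p)) < 1e-9

# ... yet f is NOT independent of x1: these two points differ only in x1,
# but the segment joining them crosses the slit, so it leaves E and the
# mean-value argument below cannot run.
assert f(1.0, 1.0) > 0.0
assert f(-1.0, 1.0) == 0.0
```

The obstruction is exactly the one convexity removes: the segment between $(1,1)$ and $(-1,1)$ passes through the deleted ray, so there is no line inside $E$ along which to apply the one-variable mean-value theorem.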
Suppose $D_1f = 0$ on $E$. If $(a, x_2, \dots, x_n)$ and $(b, x_2, \dots, x_n)$ both lie in $E$, then, since $E$ is convex, the segment from $(a, x_2, \dots, x_n)$ to $(b, x_2, \dots, x_n)$ is contained in $E$. Now, by the mean-value theorem, $$f(b, x_2, \dots, x_n) - f(a, x_2, \dots, x_n) = D_1f(\theta, x_2, \dots, x_n)(b - a) = 0$$ for some $\theta$ between $a$ and $b$; hence $f$ does not depend on $x_1$. (Here, after fixing $x_2, \dots, x_n$, we regard $f$ as a function of the single variable $x_1$. We need $E$ to be convex so that this function is defined on the whole segment, which is what allows us to apply the mean-value theorem.)
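The same idea gives the general statement, where $D_jf = 0$ for all $j > p$ at once: instead of varying one coordinate, run the mean-value argument along the straight segment joining the two points. A sketch, in my notation (assuming $f$ is differentiable on $E$, as in Rudin's hypotheses):

```latex
Let $x, y \in E$ with $x_i = y_i$ for $1 \le i \le p$. By convexity,
$g(t) = f\bigl((1-t)x + ty\bigr)$ is defined for all $t \in [0,1]$, and
by the chain rule
\[
  g'(t) \;=\; \sum_{j=1}^{n} D_j f\bigl((1-t)x + ty\bigr)\,(y_j - x_j)
        \;=\; \sum_{j=p+1}^{n} D_j f\bigl((1-t)x + ty\bigr)\,(y_j - x_j)
        \;=\; 0,
\]
since $y_j - x_j = 0$ for $j \le p$ and $D_j f = 0$ on $E$ for $j > p$.
The mean-value theorem gives $g(1) - g(0) = g'(\theta) = 0$ for some
$\theta \in (0,1)$, i.e.\ $f(x) = f(y)$. Hence $f$ takes the same value
at any two points of $E$ agreeing in their first $p$ coordinates, which
is precisely what ``$f$ depends only on $x_1, \dots, x_p$'' means.
```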