Using the method of Lagrange multipliers, I am looking to minimise the function
$f(x_1,...,x_n)=\prod_{i=1}^n(1+x_i)$ with the side condition $g(x_1,...,x_n)=\prod_{i=1}^nx_i-q^n=0$.
My goal is to show that $f$ is minimal when $x_i=q$ for all $i$.
I have $$\nabla g=(\prod_{i=1, i \neq j}^nx_i)_{1\leq j \leq n}$$ $$\nabla f=(\prod_{i=1, i \neq j}^n(1+x_i))_{1\leq j \leq n}$$
Now I know that $\nabla f = \lambda \nabla g$, so for all $j$:
$$\prod_{i=1, i \neq j}^n(1+x_i)=\lambda\prod_{i=1, i \neq j}^nx_i$$
and since all $x_i>0$
$$\prod_{i=1, i \neq j}^n(1+\frac{1}{x_i})=\lambda$$
Since the products are equal for all $j$, dividing the product for index $j$ by the product for index $k$ gives $\bigl(1+\frac{1}{x_k}\bigr)/\bigl(1+\frac{1}{x_j}\bigr)=1$, so $x_j=x_k$ for all $j,k$. Hence $x_1=x_2=\dots=x_n$, and thus $x_1\cdots x_n=x_1^n=q^n \iff x_i=q$ for all $i$ (using $x_i>0$). So far so good.
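As a sanity check, the stationarity condition can be verified numerically; here is a small sketch (the values of $n$ and $q$ are arbitrary sample choices):

```python
import numpy as np

# Check that x_i = q satisfies grad f = lambda * grad g
# for arbitrarily chosen sample values of n and q.
n, q = 4, 2.0
x = np.full(n, q)

# grad f_j = prod_{i != j} (1 + x_i);  grad g_j = prod_{i != j} x_i
grad_f = np.array([np.prod(1 + np.delete(x, j)) for j in range(n)])
grad_g = np.array([np.prod(np.delete(x, j)) for j in range(n)])

lam = grad_f[0] / grad_g[0]                 # candidate multiplier
print(np.allclose(grad_f, lam * grad_g))    # True: gradients are proportional
```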
My problem is to show that it is a minimum: The entries of the Hessian are
$$\partial_{x_i} ^2f=0$$
$$\partial_{x_j} \partial_{x_k}f=\prod_{i=1, i \neq j,i \neq k}^n(1+x_i)>0$$
So I have a matrix with zero diagonal and positive entries everywhere else. This matrix is not positive definite, as is easily seen in the $2 \times 2$ case.
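The indefiniteness in the $2\times 2$ case is easy to confirm numerically: there the off-diagonal entry is the empty product $1$, so the Hessian is $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ with eigenvalues $\pm 1$.

```python
import numpy as np

# The n = 2 case: the Hessian of f has zero diagonal and the
# off-diagonal entry prod_{i != 1,2}(1+x_i) = 1 (empty product).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eig = np.linalg.eigvalsh(H)
print(eig)  # one negative, one positive eigenvalue -> indefinite
```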
Did I make any mistakes? How do I argue that it is a minimum?
The associated bordered Hessian matrix is
$$ \left[ \begin{array}{c|ccc} 0 & (1+q)^{n-1} & (1+q)^{n-1} & (1+q)^{n-1} &\dots \\\hline (1+q)^{n-1} & 0 & (1+q)^{n-2} & (1+q)^{n-2} &\dots \\ (1+q)^{n-1} & (1+q)^{n-2} & 0 & (1+q)^{n-2} &\dots \\ (1+q)^{n-1} & (1+q)^{n-2} & (1+q)^{n-2} & 0 &\dots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{array} \right] $$
Up to a multiplicative constant, the above has the shape
$$ A = \left[ \begin{array}{c|cccc} 0 & a & a & a &\dots & a \\\hline a & 0 & 1 & 1 &\dots & 1\\ a & 1 & 0 & 1 &\dots & 1\\ a & 1 & 1 & 0 &\dots & 1\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ a &1 & 1 & 1 & \dots & 0 \end{array} \right]\ . $$
It is an $(n+1)\times(n+1)$ matrix, and its characteristic polynomial is
$$ P_A(x) = (x+1)^{n-1}(x^2-(n-1)x-na^2)\ . $$
The eigenvalue $-1$ has multiplicity $n-1$, and the quadratic factor has root product $-na^2<0$, so exactly one eigenvalue is positive.

Let $C$ be the matrix
$$ C= \left[\begin{array}{c|cccc} 1 & 0 & 0 & 0 & \dots & 0 \\\hline 0 & 1 & 0 & 0 & \ddots & 0 \\ 0 & -1 & 1 & 0 & \ddots & 0 \\ 0 & 0 & -1 & 1 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & -1 & 1 \end{array}\right]\ . $$
($C$ has ones on the diagonal, "minus ones" immediately below the diagonal in the $n\times n$ block, and zeros elsewhere.) Then conjugation with $C$ gives
$$ CAC^{-1} = \left[\begin{array}{c|c|ccc} 0 & na & (n-1)a & (n-2)a & \dots & a \\\hline a & n-1 & (n-1) & (n-2) & \dots & 1 \\\hline 0 & 0 & -1 & 0 & \dots & 0 \\ 0 & 0 & 0 & -1 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \dots & -1 \end{array}\right]\ . $$
As a quadratic form, $A$ changes under the base change action of $C$ into
$$ CAC^t = \left[ \begin{array}{cc|cccc} 0 & a & 0 & 0 & 0 &\dots & 0\\ a & 0 & 1 & 0 & 0 &\ddots & 0\\\hline 0 & 1 & -2 & 1 & 0 &\ddots & 0\\ 0 & 0 & 1 & -2 & 1 &\ddots & 0\\ 0 & 0 & 0 & 1 & -2 &\ddots & 0\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots\\ 0 & 0 & 0 & 0 & 0 & \dots & -2 \end{array} \right]\ . $$
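The eigenvalue claim can be checked numerically for sample values of $n$ and $a$ (both chosen arbitrarily here):

```python
import numpy as np

# Build the (n+1) x (n+1) matrix A and verify the claimed spectrum
# for arbitrarily chosen sample values of n and a.
n, a = 5, 1.7
A = np.ones((n + 1, n + 1)) - np.eye(n + 1)   # zero diagonal, ones elsewhere
A[0, 1:] = a                                  # first row / column scaled by a
A[1:, 0] = a

eig = np.sort(np.linalg.eigvalsh(A))
# n - 1 eigenvalues equal -1; the remaining two are the roots of
# x^2 - (n-1)x - n a^2, whose product -n a^2 < 0: one positive, one negative.
print(np.allclose(eig[1:n], -1.0))            # the (x+1)^{n-1} factor
print(np.sum(eig > 0))                        # exactly 1 positive eigenvalue
```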
When Lagrange multipliers are used, the optimization is not with respect to the "full function $f+\lambda g$": the Hessian of $f+\lambda g$ need only satisfy the sufficient positivity/negativity conditions on the tangential null space of the constraint. The idea is, roughly, to have a "conditional Taylor polynomial of order two in the $x$-variables" that still allows one to deduce a local extremal value.
In our case, the condition is $x_1x_2\dots x_n=q^n$. Writing $x_1=q+h_1$, and analogously for the other variables, we get the relation $$ \prod_{1\le k\le n}(q+h_k)=q^n\ , $$ so to first order $q^{n-1}(h_1+h_2+\dots+h_n)+O(|h|^2)=0$; the tangential directions are those with $h_1+h_2+\dots+h_n=0$.
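This tangential condition makes a direct second-order check possible. The following sketch (assuming the convention $L=f-\lambda g$, with arbitrary sample $n$ and $q$) restricts the Hessian of the Lagrangian at $x_i=q$ to the subspace $h_1+\dots+h_n=0$ and confirms it is positive definite there, which is the sufficient condition for a local minimum:

```python
import numpy as np

# Sample values; any n >= 2, q > 0 should behave the same way.
n, q = 4, 2.0
lam = ((1 + q) / q) ** (n - 1)    # multiplier from grad f = lam * grad g

off_f = (1 + q) ** (n - 2)        # mixed second derivatives of f at x_i = q
off_g = q ** (n - 2)              # mixed second derivatives of g at x_i = q
c = off_f - lam * off_g           # off-diagonal entry of Hess(L); diagonal is 0
H = c * (np.ones((n, n)) - np.eye(n))

# Basis of the tangent space {h : sum(h) = 0}: columns e_j - e_{j+1}.
B = np.eye(n)[:, : n - 1] - np.eye(n)[:, 1:]
restricted = B.T @ H @ B
print(np.all(np.linalg.eigvalsh(restricted) > 0))   # True: local minimum
```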
In the given situation, the matrix $C$ provides an $(n-1)\times (n-1)$ block built from vectors in this null space. One can work out this beginning into a complete proof.
The simplest solution to the minimum problem is to observe that if two component values in $x=(x_1,x_2,x_3,\dots,x_n)$ are not equal, say $x_1\ne x_2$ without loss of generality, then we can redistribute $x_1x_2=c^2$ into a new point $x_c:=(c,c,x_3,\dots,x_n)$ and get a smaller value, because $$ (1+x_1)(1+x_2)-(1+c)^2=x_1+x_2-2c=(\sqrt{x_1}-\sqrt{x_2})^2>0\ . $$ (Because of this simpler argument I did not insist on completing the proof via Lagrange multipliers. Please search the net for "bordered Hessian" to see many explicit examples.)