This proof is from a book about linear programming; it is about interior point methods, but I think that might not be important for my question.
We have:
$$f_{\mu}(\mathbf{x}) = \mathbf{c}^T\mathbf{x} + \mu\sum_{i=1}^{m}\log(b_{i} - \mathbf{a}_{i}\cdot\mathbf{x}) $$
Here $\mathbf{c},\mathbf{x},\mathbf{a}_i$ are vectors for $i = 1,2,\dots,m$, and $\mu > 0$. The book claims that $f_{\mu}$ has a unique maximum inside the polytope defined by $A\mathbf{x} < \mathbf{b}$, where the $\mathbf{a}_i$ are the rows (not the columns) of $A$; we also assume that this polytope is bounded. $A$ is an $m \times n$ real matrix of rank $m$, so the polytope lives in $\mathbb{R}^n$.
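For concreteness, here is a small numerical sketch of $f_\mu$ (the box constraints, the objective $\mathbf{c}$, and $\mu$ below are all made-up choices, not from the book) that checks the concavity inequality along one segment:

```python
import numpy as np

# Hypothetical 2-D example: the unit box 0 < x1 < 1, 0 < x2 < 1,
# written as A x < b with the a_i as the rows of A.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
c = np.array([1.0, 2.0])   # made-up objective vector
mu = 0.5                   # made-up barrier parameter

def f_mu(x):
    slack = b - A @ x      # b_i - a_i . x; positive at interior points
    return c @ x + mu * np.sum(np.log(slack))

# Concavity along a segment: the midpoint value is at least the average.
x, y = np.array([0.2, 0.3]), np.array([0.8, 0.6])
mid = 0.5 * (x + y)
print(f_mu(mid) >= 0.5 * f_mu(x) + 0.5 * f_mu(y))  # prints True
```

Since $A\mathbf{x} \neq A\mathbf{y}$ here, the inequality is in fact strict, which is exactly the point of the uniqueness proof below.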
I don't understand a line in the proof of the uniqueness claim. The book states that if there were two different maxima $\mathbf{x}$ and $\mathbf{y}$, then $f_{\mu}$ must be constant along the line segment $\mathbf{xy}$, because the polytope is convex (so the whole segment lies inside it) and $f_{\mu}$ is concave.
The book then states that "since the logarithm is strictly concave, this can happen only if $A\mathbf{x} = A\mathbf{y}.$" This is the part I don't understand: why does the logarithm being strictly concave imply that $A\mathbf{x} = A\mathbf{y}$? I don't have much experience with convexity.
Suppose $a_i \cdot x \neq a_i \cdot y$ for some $i$. Then you get a contradiction with $x$ and $y$ being global maximizers:
\begin{align} f_\mu(0.5x + 0.5y) &= c^T(0.5x+0.5y)+\mu \sum_{i=1}^m \log\bigl(b_i-a_i\cdot(0.5x+0.5y)\bigr) \\ &= c^T(0.5x+0.5y)+\mu \sum_{i=1}^m \log\bigl(0.5(b_i-a_i\cdot x) + 0.5(b_i - a_i\cdot y)\bigr) \\ &> 0.5c^Tx+0.5c^Ty+0.5\mu\sum_{i=1}^m \log(b_i-a_i\cdot x)+0.5\mu\sum_{i=1}^m \log(b_i-a_i\cdot y) \\ & = 0.5f_\mu(x) + 0.5 f_\mu(y) \\ & = \max_z f_\mu(z). \end{align}
The strict inequality comes from the strict concavity of the logarithm: terms with $b_i-a_i\cdot x = b_i-a_i\cdot y$ contribute with equality, but for the index $i$ with $b_i-a_i\cdot x \neq b_i-a_i\cdot y$ we have $\log(0.5u + 0.5v) > 0.5\log u + 0.5\log v$, i.e. the value at the midpoint is strictly *greater* than the average of the endpoint values. So the midpoint would beat the maximum, a contradiction.
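The strict-concavity step is easy to check numerically. In this sketch, $u$ and $v$ play the roles of the slacks $b_i - a_i\cdot x$ and $b_i - a_i\cdot y$ (the particular values are made up):

```python
import math

def midpoint_gap(u, v):
    # log(0.5*u + 0.5*v) - (0.5*log(u) + 0.5*log(v)):
    # strictly positive for u != v, zero exactly when u == v.
    return math.log(0.5 * u + 0.5 * v) - 0.5 * (math.log(u) + math.log(v))

print(midpoint_gap(1.0, 4.0))  # strictly positive: unequal slacks
print(midpoint_gap(3.0, 3.0))  # zero: equal slacks contribute with equality
```

This is why the sum over $i$ is strictly larger as soon as a single slack pair differs, i.e. as soon as $A x \neq A y$.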