Show that the quadratic function $W(x_1,x_2,\ldots,x_n)=A\sum_{i} x_i^2+ \sum_{i\neq j} x_ix_j$ is quasi-concave


I have a quadratic function $W(x_1,x_2,\ldots,x_n)=A\sum_{i} x_i^2+ \sum_{i\neq j} x_ix_j$, with each $x_i$ nonnegative and $A \in[0,1)$. Without loss of generality we can normalize the $x_i$'s to lie between 0 and 1. In quadratic form, $W(x_1,x_2,\ldots,x_n)=X^TMX$, where $$M=\begin{bmatrix} A & 1 & 1 & \dots & 1 \\ 1 & A & 1 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \dots & A \end{bmatrix}.$$ I want to show that $W$ is quasi-concave. First I checked that $W$ is neither concave nor convex, since $M$ is indefinite. Then I looked at the bordered Hessian, but the computation of the determinants of its leading principal minors gets messy. There is a known fact: if a function is quasi-concave and homogeneous, then the ratio of its partial derivatives is non-increasing in the ratio of the corresponding variables. The function $W$ here satisfies that condition, but it is only a necessary condition for quasi-concavity. Is there a corresponding sufficient condition of this fashion, in the other direction? More generally, is there any other way to check quasi-concavity?
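As a quick numerical sanity check of the indefiniteness claim (with illustrative values $n=4$ and $A=0.5$ of my own choosing, not from the question), one can compute the eigenvalues of $M$: they are $A-1$ with multiplicity $n-1$ and $A+n-1$, so $M$ has eigenvalues of both signs.

```python
import numpy as np

# Build M for illustrative values n = 4, A = 0.5 (assumed, not from the post):
# diagonal entries A, off-diagonal entries 1.
n, A = 4, 0.5
M = np.full((n, n), 1.0)
np.fill_diagonal(M, A)

# Eigenvalues of M = (A - 1) I + uu^T: A - 1 (multiplicity n - 1) and A + n - 1.
eigvals = np.linalg.eigvalsh(M)
print(eigvals)  # n - 1 negative and one positive eigenvalue, so M is indefinite
```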

A monotonic function of a single variable is quasi-concave. However, there is no standard definition of monotonicity for multivariable functions, because $\mathbb{R}^n$ is not totally ordered for $n\ge 2$. Suppose I use the following definition, provided by Clement C. in another post (Monotonicity of function of two variables): $f$ is said to be monotone (non-decreasing) if for all $(x, y),(x', y') \in \mathbb{R}^2$, $$(x \leq x' \text{ and } y \leq y' ) \Rightarrow f(x,y) \leq f(x',y')$$ (i.e., monotonicity with respect to the coordinatewise partial order on $\mathbb{R}^2$).

Then a monotonic multivariable function would be quasi-concave, and since my $W$ is monotonic (non-decreasing) in each $x_i$, it would be quasi-concave. But this definition of monotonicity for multivariable functions seems hard to reconcile with the definition of quasi-concavity: quasi-concavity constrains the convex combination of any two points, and when one point (vector) is higher in some coordinates and lower in others, the definition of monotonicity above simply does not apply.
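Independently of the monotonicity question, the quasi-concavity definition itself is easy to spot-check by brute force. The sketch below samples random nonnegative points and convex combinations (the parameters $n=4$, $A=0.5$ are illustrative choices of mine, and random sampling of course proves nothing):

```python
import numpy as np

# Brute-force spot check of quasi-concavity on the unit cube:
# W(t x + (1 - t) y) >= min(W(x), W(y)) for sampled x, y, t.
# n = 4, A = 0.5 are illustrative; sampling is not a proof.
rng = np.random.default_rng(0)
n, A = 4, 0.5
M = np.full((n, n), 1.0)
np.fill_diagonal(M, A)
W = lambda x: x @ M @ x

violations = 0
for _ in range(10_000):
    x, y, t = rng.random(n), rng.random(n), rng.random()
    z = t * x + (1 - t) * y
    if W(z) < min(W(x), W(y)) - 1e-12:
        violations += 1
print(violations)  # 0 expected if W is quasi-concave on this domain
```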

Answer:

A quadratic function of one variable fails to be quasiconcave on an interval if and only if it has an interior minimum point there. It follows that a quadratic form $W$ fails to be quasiconcave on a convex domain $\Omega\subset \mathbb{R}^n$ if and only if there exist $x\in \Omega$ and $h\in \mathbb{R}^n$ such that the first derivative of $W$ in direction $h$ is zero but the second derivative in the same direction is positive. In terms of the matrix $M$ that defines the quadratic form $W$, we can state: $W$ is not quasiconcave if and only if there exist $x\in \Omega$ and $h\in \mathbb{R}^n$ such that $h^TMx=0$ and $h^TMh>0$.
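This criterion can be probed numerically. The sketch below (with illustrative parameters $n=5$, $A=0.3$, not taken from the answer) samples positive $x$, projects random directions $h$ onto the hyperplane $h^TMx=0$, and records the largest value of $h^TMh$ seen:

```python
import numpy as np

# Probe the criterion: for x with positive entries and h with h^T M x = 0,
# h^T M h should never be positive.  n = 5, A = 0.3 are illustrative values.
rng = np.random.default_rng(1)
n, A = 5, 0.3
M = np.full((n, n), 1.0)
np.fill_diagonal(M, A)

worst = -np.inf
for _ in range(10_000):
    x = rng.random(n) + 0.01           # positive entries
    h = rng.standard_normal(n)
    g = M @ x                          # h^T M x = h . g since M is symmetric
    h = h - (h @ g) / (g @ g) * g      # project h so that h^T M x = 0
    worst = max(worst, h @ M @ h)
print(worst)  # should not exceed 0 (up to rounding)
```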

In your case, $x$ is constrained to have positive entries. For which $h$ can we find such an $x$ with $h^TMx=0$ (equivalently, $x^TMh=0$)? Only when the vector $Mh$ has components of different signs (if $Mh=0$, then $h^TMh=0$ and there is nothing to violate). So the problem reduces to the following claim.

Claim 1: If $h\in\mathbb{R}^n$ is such that $Mh$ has components of different signs, then $h^TMh < 0$.
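A quick random-sampling check of Claim 1 (with illustrative values $n=5$, $A=0.7$ of my own choosing):

```python
import numpy as np

# Random-sampling check of Claim 1: if M h has components of both signs,
# then h^T M h < 0.  n = 5, A = 0.7 are illustrative choices.
rng = np.random.default_rng(2)
n, A = 5, 0.7
M = np.full((n, n), 1.0)
np.fill_diagonal(M, A)

checked = 0
for _ in range(10_000):
    h = rng.standard_normal(n)
    y = M @ h
    if y.min() < 0 < y.max():          # M h has mixed signs
        assert h @ M @ h < 0
        checked += 1
print(checked)  # number of sampled directions that actually had mixed signs
```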

Let $u$ be the vector $(1,1,\dots,1)^T$. The matrix $M$ can be written as $uu^T+(A-1)I$. The Sherman–Morrison formula tells us that for $A\ne 1$, $$ M^{-1} = (1-A)^{-1}\left(\frac{uu^T}{n-1+A} - I\right). $$ Let's focus on the matrix $B=\dfrac{uu^T}{n-1+A} - I$. Letting $y=Mh$, we get $h^TMh = y^TM^{-1}y = (1-A)^{-1}\,y^TBy$, and since $1-A>0$, the sign of $h^TMh$ is the sign of $y^TBy$. So Claim 1 can be restated as follows.
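The quoted inverse can be verified numerically (with illustrative values $n=6$, $A=0.25$):

```python
import numpy as np

# Verify the Sherman-Morrison inverse for illustrative n = 6, A = 0.25:
# M = uu^T + (A - 1) I  and  M^{-1} = (1 - A)^{-1} (uu^T / (n - 1 + A) - I).
n, A = 6, 0.25
u = np.ones((n, 1))
M = u @ u.T + (A - 1) * np.eye(n)
B = u @ u.T / (n - 1 + A) - np.eye(n)
M_inv = B / (1 - A)
print(np.allclose(M @ M_inv, np.eye(n)))  # True
```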

Claim 2: If a vector $y\in\mathbb{R}^n$ has components of different signs, then $y^TBy < 0$.

Proof of Claim 2. Given a vector $y$ as above, write $y=p+q$, where $p$ keeps the positive components of $y$ and replaces the negative ones by $0$, and $q$ keeps the negative components and replaces the positive ones by $0$. E.g., if $y=(1,-2,-3,4)^T$ then $p = (1,0,0,4)^T$ and $q=(0,-2,-3,0)^T$. We have $$y^TBy = p^TBp + q^TBq + 2 p^TBq < p^TBp + q^TBq, $$ where the inequality is strict because $p$ and $q$ are both nonzero with disjoint supports and the off-diagonal entries of $B$ (all equal to $\tfrac{1}{n-1+A}$) are positive.
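The split and the strict inequality can be checked on a concrete vector (the parameters $n=6$, $A=0.4$ and the vector $y$ below are my own illustration):

```python
import numpy as np

# Check the split y = p + q and the strict inequality for an example vector.
n, A = 6, 0.4
B = np.full((n, n), 1.0 / (n - 1 + A)) - np.eye(n)
y = np.array([1.0, -2.0, -3.0, 4.0, 0.5, -1.0])
p = np.maximum(y, 0.0)                             # positive part of y
q = np.minimum(y, 0.0)                             # negative part of y
print(p @ B @ q < 0)                               # True: cross term is negative
print(y @ B @ y < p @ B @ p + q @ B @ q)           # True: the strict inequality
```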

I claim that $p^TBp\le 0$ and $q^TBq\le 0$; Claim 2 follows from this. Indeed, each of these vectors has at least one zero component (since $y$ has components of both signs), so each quadratic form effectively involves a $k\times k$ principal submatrix of $B$ with $k\le n-1$. Any such submatrix is negative semidefinite, as one sees by computing its eigenvalues: the submatrix is $\frac{J_k}{n-1+A} - I_k$, where the all-ones matrix $J_k$ comes from the truncation of $uu^T$ to $k$ components, and its largest eigenvalue is $$ \frac{k}{n-1+A} - 1 \le \frac{k}{n-1} - 1 \le 0 $$ since $A\ge 0$ and $k\le n-1$.
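The eigenvalue bound can be checked for every submatrix size $k\le n-1$ (with illustrative values $n=6$, $A=0.4$):

```python
import numpy as np

# Check the eigenvalue claim for k x k principal submatrices of B,
# k = 1, ..., n - 1 (illustrative n = 6, A = 0.4).  By symmetry of B, any
# choice of k indices gives the same submatrix J_k/(n-1+A) - I_k.
n, A = 6, 0.4
B = np.full((n, n), 1.0 / (n - 1 + A)) - np.eye(n)

for k in range(1, n):
    sub = B[:k, :k]                    # leading k x k principal submatrix
    top = np.linalg.eigvalsh(sub).max()
    assert np.isclose(top, k / (n - 1 + A) - 1)
    assert top <= 0                    # negative semidefinite
print("all principal submatrices of size <= n - 1 are negative semidefinite")
```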