Let $a_1,\ldots,a_m$ be elements of $\mathbb{R}^n.$ Then the convex cone $K_{\Omega}$


I am having a problem with one aspect of the following proof I came across in "An Easy Path to Convex Analysis and Applications" by Mordukhovich and Nam.

It is Proposition 3.9, and it is the line marked by * that is giving me trouble:

Proposition 3.9 $\;\;\;\;\;\;\;$Let $a_1,\ldots,a_m$ be elements of $\mathbb{R}^n.$ Then the convex cone $K_{\Omega}$ generated by $\Omega:=\{a_1,\ldots,a_m\}$ is closed in $\mathbb{R}^n$.

Proof $\;\;\;\;\;$ Assume first that the elements $a_1,\ldots,a_m$ are linearly independent and take any sequence $\{x_k\}\subset K_{\Omega}$ converging to $x.$ By construction of $K_{\Omega}$ find numbers $\alpha_{ki}\geq0$ for each $i=1\ldots m$ and $k\in\mathbb{N}$ such that $$x_k=\sum_{i=1}^{m}\alpha_{ki}a_i$$ *Letting $\alpha_{k}:=(\alpha_{k1},\ldots,\alpha_{km})\in\mathbb{R}^m$ and arguing by contradiction, it is easy to check that the sequence $\{\alpha_{k}\}$ is bounded$\ldots$

I am not convinced it is easy to check that the sequence is bounded. My initial thoughts are as follows:

Assume the sequence $\{\alpha_k\}$ is not bounded. It then follows that $\{\alpha_k\}$ diverges to infinity, i.e., $\{\alpha_k\}\rightarrow\infty$ as $k\rightarrow\infty$. But then $\sum_{i=1}^{m}\alpha_{ki}a_i\rightarrow\infty$, which means $\{x_k\}\rightarrow\infty$. However, this contradicts our choice of $\{x_k\}$, which converges to $x$. Thus, $\{\alpha_k\}$ is bounded.

Does this seem like I am on the correct path?


I have two methods for proving the boundedness. The first uses a contradiction, as the authors propose. The second seems more elegant and even yields the convergence of the coefficients.

Method 1: Assume $\{\alpha_k\}$ is not bounded. Passing to a subsequence, we may assume $\max_j \alpha_{kj} \to \infty$. Since $\{x_k\}$ converges, $$\frac{\sum_{i=1}^m \alpha_{ki} \, a_i}{\max_j \alpha_{kj}} = \sum_{i=1}^m \frac{\alpha_{ki}}{\max_j \alpha_{kj}} \, a_i \to 0.$$ Now, the sequences $\{\frac{\alpha_{ki}}{\max_j \alpha_{kj}}\}_k$ are bounded (each term lies in $[0,1]$, and for each $k$ at least one equals $1$), so you can extract convergent subsequences whose limits $\alpha_i$ are not all zero. Hence, $$\sum_{i=1}^m \alpha_i \, a_i = 0$$ and at least one $\alpha_i$ is not zero. This is a contradiction to the linear independence.
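The contradiction in Method 1 can be made quantitative: for linearly independent $a_i$, the smallest singular value of the matrix $A = [a_1 \cdots a_m]$ is positive, so $\|A\alpha\| \ge \sigma_{\min}(A)\,\|\alpha\|$, and a convergent (hence bounded) $\{x_k\}$ forces $\{\alpha_k\}$ to be bounded directly. A minimal numerical sketch, with illustrative vectors of my own choosing:

```python
import numpy as np

# Columns of A are linearly independent generators a_1, a_2 in R^3
# (illustrative example, not from the book).
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

# svd returns singular values in descending order; the last is sigma_min.
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
assert sigma_min > 0  # positive precisely because the columns are independent

# For any coefficient vector alpha: ||A @ alpha|| >= sigma_min * ||alpha||,
# so a bounded sequence x_k = A @ alpha_k forces the alpha_k to be bounded.
rng = np.random.default_rng(0)
for _ in range(100):
    alpha = rng.uniform(0, 10, size=2)  # nonnegative coefficients
    x = A @ alpha
    assert np.linalg.norm(x) >= sigma_min * np.linalg.norm(alpha) - 1e-12
```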

Method 2: Since the $a_i$ are linearly independent, there are dual elements $b_j$ with $$b_j^\top a_i = \delta_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j.\end{cases}$$ Then, $$b_j^\top x \leftarrow b_j^\top x_k = b_j^\top \sum_{i=1}^m \alpha_{ki} \, a_i = \alpha_{kj},$$ so each coefficient sequence $\{\alpha_{kj}\}_k$ converges to $b_j^\top x$ and is in particular bounded. Moreover, $b_j^\top x = \lim_k \alpha_{kj} \ge 0$, so $x = \sum_{j=1}^m (b_j^\top x)\, a_j \in K_{\Omega}$.
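When the $a_i$ are linearly independent, one concrete choice of the dual elements $b_j$ in Method 2 is the rows of the Moore–Penrose pseudoinverse of $A = [a_1 \cdots a_m]$, since then $BA = I$. A small sketch of this, again with illustrative vectors:

```python
import numpy as np

# Columns of A are the linearly independent a_i (illustrative example).
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

# Rows of pinv(A) serve as dual elements b_j: b_j^T a_i = delta_ij,
# because pinv(A) @ A = I when A has full column rank.
B = np.linalg.pinv(A)
assert np.allclose(B @ A, np.eye(2))

# Recover the coefficients of x = sum_i alpha_i a_i via alpha_j = b_j^T x.
alpha = np.array([3.0, 0.5])
x = A @ alpha
recovered = B @ x
assert np.allclose(recovered, alpha)
```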