Existence and Uniqueness of the Solution to an NLP


I need to solve the following optimisation problem:

\begin{aligned}\max_{\alpha_{1},\ldots,\alpha_{n}} & \sum_{k=1}^{n}\binom{n-1}{k-1}p\left(\alpha_{1},\ldots,\alpha_{n}\right)^{k-1}\left(1-p\left(\alpha_{1},\ldots,\alpha_{n}\right)\right)^{n-k}\sum_{r=1}^{k}\frac{\alpha_{r}}{k}\\ \text{s.t. } & \alpha_{1}\geq\alpha_{2}\geq\ldots\geq\alpha_{n}\geq0\\ & \sum_{k=1}^{n}\alpha_{k}=1\\ & \varDelta\left(p;\,\alpha_{1},\ldots,\alpha_{n}\right)=0 \end{aligned}

where at least two of the allocation "weights" $\alpha_j$ differ, and $\varDelta\left(\cdot\right)$ is a function of $p$ and $\alpha_{1},\ldots,\alpha_{n}$. To clarify, the probability $p=p\left(\alpha_{1},\ldots,\alpha_{n}\right)$ is the solution to the equation $\varDelta\left(p;\,\alpha_{1},\ldots,\alpha_{n}\right)=0$. We know that $p\left(\alpha_{1},\ldots,\alpha_{n}\right)$ is uniquely defined, since for every feasible allocation $\varDelta\left(\cdot;\,\alpha_{1},\ldots,\alpha_{n}\right)$ is continuous and strictly decreasing in $p\in\left[0,1\right]$ with $\varDelta\left(0;\,\alpha_{1},\ldots,\alpha_{n}\right)>0$ and $\varDelta\left(1;\,\alpha_{1},\ldots,\alpha_{n}\right)<0$. Also, $\varDelta\left(\cdot\right)$ is non-linear in $p$.
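
Because $\varDelta(\cdot;\,\alpha)$ is continuous and strictly decreasing on $[0,1]$ with a sign change, $p(\alpha)$ can always be computed by bisection. A minimal sketch, where `Delta` is a hypothetical stand-in (the question does not specify the actual function), chosen only so that it is strictly decreasing in $p$ with the required signs at the endpoints:

```python
# Bisection for the unique root p(alpha) of Delta(p; alpha) = 0.
# Delta below is a hypothetical stand-in: any function continuous and
# strictly decreasing in p with Delta(0) > 0 and Delta(1) < 0 works.

def Delta(p, alpha):
    # Stand-in: Delta(0) = s > 0 and Delta(1) = s - 1 < 0 for s in (0, 1).
    s = sum(a * a for a in alpha)
    return s - p

def p_of_alpha(alpha, tol=1e-12):
    """Unique root of Delta(., alpha) on [0, 1], found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Delta(mid, alpha) > 0:
            lo = mid    # Delta still positive: root lies to the right
        else:
            hi = mid    # Delta non-positive: root lies to the left
    return 0.5 * (lo + hi)
```

For the stand-in above, `p_of_alpha` simply recovers the sum of squared weights; for the real $\varDelta$, only the bracketing signs matter.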

I wish to show that this problem has a unique solution and also find it. I struggle with the fact that $p$ is a function of the weights $\alpha_j$ which complicates the objective function.

Here is my attempt to solve it: I relax the above (original) problem and solve for a given $p\in\left[0,1\right]$ the following problem:

\begin{aligned}\max_{\alpha_{1},\ldots,\alpha_{n}} & \sum_{k=1}^{n}\binom{n-1}{k-1}p^{k-1}\left(1-p\right)^{n-k}\sum_{r=1}^{k}\frac{\alpha_{r}}{k}\\ \text{s.t. } & \alpha_{1}\geq\alpha_{2}\geq\ldots\geq\alpha_{n}\geq0\\ & \sum_{k=1}^{n}\alpha_{k}=1 \end{aligned}

The objective function of this relaxed problem is a binomial expectation over $K$. For every realization $k=1,\ldots,n$, the partial mean $\sum_{r=1}^{k}\frac{\alpha_{r}}{k}$ is maximized at $\alpha^{*}_{1}=1$, $\alpha^{*}_{2}=\ldots=\alpha^{*}_{n}=0$: since the weights are non-negative and sum to one, $\sum_{r=1}^{k}\alpha_{r}\leq1$ for every $k$, with equality under $\alpha^{*}$. Since this holds for every realization, it holds in expectation as well. That is, the solution to the relaxed problem is $\alpha^{*}_{1}=1$, $\alpha^{*}_{2}=\ldots=\alpha^{*}_{n}=0$, independently of $p$. Plugging $\alpha^{*}$ into the eliminated feasibility constraint $\varDelta\left(p;\,\alpha^{*}\right)=0$ then pins down a unique $p^{*}$, and the pair $\left(\alpha^{*},p^{*}\right)$ should be optimal for the original problem.
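
This per-realization dominance is easy to sanity-check numerically: for any fixed $p$ and any feasible $\alpha$, the relaxed objective should not exceed its value at $\alpha^{*}=(1,0,\ldots,0)$. A minimal sketch (random feasible points only, not a proof):

```python
import math
import random

def relaxed_objective(alpha, p):
    """E over K ~ 1 + Binomial(n-1, p) of the partial mean of alpha."""
    n = len(alpha)
    total = 0.0
    for k in range(1, n + 1):
        weight = math.comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)
        total += weight * sum(alpha[:k]) / k
    return total

def random_feasible_alpha(n):
    """Nonincreasing, nonnegative weights summing to one."""
    a = sorted((random.random() for _ in range(n)), reverse=True)
    s = sum(a)
    return [x / s for x in a]

random.seed(0)
n = 5
alpha_star = [1.0] + [0.0] * (n - 1)
for _ in range(1000):
    p = random.random()
    alpha = random_feasible_alpha(n)
    assert relaxed_objective(alpha, p) <= relaxed_objective(alpha_star, p) + 1e-12
```

None of the random draws beats $\alpha^{*}$, for any sampled $p$, which is exactly the "holds for every realization, hence in expectation" claim.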

Is this proof correct? Since there is a unique solution to the relaxed problem, and $\varDelta(p)=0$ has a unique solution $p$, is this sufficient to prove that the solution to the original problem is unique? If not, how can I show uniqueness rigorously?

Accepted answer:

Based on our conversation in the comments, I think $p$ should be considered as an optimization variable in the original problem, to be formal about things.

Under this setting, and under the assumption that your relaxed problem has a unique solution as you mentioned, I believe your argument for the uniqueness of the unrelaxed solution holds true. Here's my argument as to why:

Consider the more general problem of maximizing $f\colon\mathbb{R}^n\times\mathbb{R}\to\mathbb{R}$, with optimal value $F^* = \sup\mathcal{F}$, where $\mathcal{F}=\{f(\alpha,p) : g(\alpha,p)=0, ~ \alpha \in A, ~ p\in P\}$ and $A$ and $P$ are fixed subsets of $\mathbb{R}^n$ and $\mathbb{R}$, respectively. (In the context of your problem, $f$ is your objective function, $g(\alpha,p)=\Delta(p;\alpha)$, $P=[0,1]$, and $A=\{\alpha\in\mathbb{R}^n:\alpha_1\ge\alpha_2\ge\cdots\ge\alpha_n\ge 0, ~ \sum_{i=1}^n\alpha_i = 1\}$.) Define $\bar{\mathcal{F}} = \{f(\alpha,p) : \alpha\in A, ~p\in P\}$. Then clearly $\mathcal{F}\subseteq \bar{\mathcal{F}}$, so we obtain the following relaxation: \begin{equation*} F^* = \sup\mathcal{F} \le \sup\bar{\mathcal{F}} = \bar{F}^*. \end{equation*} Note that we can rewrite the relaxed problem as a family of subproblems over $\alpha$ alone, one for each $p$: \begin{equation*} \bar{F}^* = \sup_{p\in P,~\alpha \in A}f(\alpha,p) = \sup_{p\in P}\sup_{\alpha\in A}f(\alpha,p). \end{equation*}
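
The containment $\mathcal{F}\subseteq\bar{\mathcal{F}}$ and the $\sup_{p}\sup_{\alpha}$ decomposition can be illustrated numerically. The sketch below uses the objective from the question, a hypothetical stand-in for $g$ (the true $\varDelta$ is not specified), and takes the inner supremum at $\alpha^*=(1,0,\ldots,0)$ as argued in the question:

```python
import math
import random

def f(alpha, p):
    """The objective: a binomial expectation of the partial means of alpha."""
    n = len(alpha)
    return sum(math.comb(n - 1, k - 1) * p**(k - 1) * (1 - p)**(n - k)
               * sum(alpha[:k]) / k for k in range(1, n + 1))

def g(alpha, p):
    # Hypothetical stand-in for Delta: strictly decreasing in p,
    # with unique root p = sum of squared weights.
    return sum(a * a for a in alpha) - p

def random_alpha(n):
    """A random point of A: nonincreasing, nonnegative, summing to one."""
    a = sorted((random.random() for _ in range(n)), reverse=True)
    s = sum(a)
    return [x / s for x in a]

random.seed(1)
n = 4
alpha_star = [1.0] + [0.0] * (n - 1)

# F*: feasible pairs only -- p is pinned down by g(alpha, p) = 0.
F_star = max(f(a, sum(x * x for x in a))
             for a in (random_alpha(n) for _ in range(2000)))

# F-bar*: the relaxed set, decomposed as sup over p of sup over alpha;
# the inner sup is attained at alpha_star for every p, so only p remains.
F_bar = max(f(alpha_star, i / 100) for i in range(101))

assert F_star <= F_bar + 1e-12
```

As expected, the constrained value never exceeds the relaxed one, which is the inequality $F^*\le\bar{F}^*$ above.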

We will now show that the existence and uniqueness assumptions you provided imply the existence and uniqueness of the unrelaxed solution. In particular, suppose that for all $\alpha\in A$ there exists a unique $p\in P$ satisfying $g(\alpha,p)=0$. Furthermore, assume that $\alpha^*\in A$ is the unique maximizer of $f(\cdot,p)$ for all $p\in P$, i.e. $\arg\max_{\alpha \in A}f(\alpha,p) = \{\alpha^*\}$ for all $p\in P$, and that the resulting optimal value $f(\alpha^*,p)$ does not depend on $p$ (this constancy is what lets the supremum over $p$ collapse in the next display; without it the relaxation need not be tight at the feasible $p$). Then \begin{equation} \bar{F}^* = \sup_{p\in P}f(\alpha^*,p) = f(\alpha^*,p) ~ \text{for all $p\in P$}. \tag{1}\label{eq1} \end{equation}

By our assumption on $g$, there exists a unique $p^*\in P$ satisfying $g({\alpha}^*,p^*) = 0$. Hence, $(\alpha^*,p^*)$ is feasible for the unrelaxed problem since ${\alpha}^*\in A$, $p^*\in P$, and $g(\alpha^*,p^*)=0$. Therefore, \begin{equation} f({\alpha}^*,p^*) \le \sup\mathcal{F} = F^* \le \bar{F}^*. \tag{2}\label{eq2} \end{equation} On the other hand, by (\ref{eq1}) we know that $\bar{F}^* = f({\alpha}^*,p^*)$, and therefore all inequalities in (\ref{eq2}) become equalities, i.e. \begin{equation*} F^* = \bar{F}^* = f({\alpha}^*,p^*), \end{equation*} so $(\alpha^*,p^*)$ is a maximizer of the unrelaxed problem. This shows the existence of the unrelaxed solution. To show uniqueness, note that, since $\alpha^*\in A$ is the unique maximizer of $f(\cdot,p)$ for all $p\in P$ by assumption, it holds that $f(\alpha,p)<f(\alpha^*,p)$ for all $\alpha\in A\setminus \{\alpha^*\}$, for all $p\in P$. Therefore, since $F^* = f(\alpha^*,p^*)$, all maximizers of the unrelaxed problem must be of the form $(\alpha^*,p)$ for some $p\in P$. By the uniqueness assumption on $g$, this implies that $(\alpha^*,p^*)$ is the only maximizer of the unrelaxed problem.

Hopefully I understood your setup and this proves what you were looking for!