Here is the objective function to be maximized: $$ \mathbb{E}_{v}\left[\log(1+v^{\mathsf T} \Lambda v)\right] $$ where $v \sim \mathcal{CN}(M, I)$ is a Gaussian random vector with mean vector $M$ and identity covariance matrix $I$, and $\Lambda$ is a given diagonal matrix whose diagonal entries are non-negative and in decreasing order.
I want to find the $M$ that maximizes the objective function, subject to the power constraint $$ \Vert M \Vert^2 \le P\ . $$
I have run some simulations on this problem, and they suggest that $M=[\sqrt{P},0,0,\ldots,0]$ is optimal. However, a proof seems difficult because the objective is not a concave function with respect to $M$.
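For concreteness, here is a minimal sketch of such a simulation. It treats $v$ as a real Gaussian vector with identity covariance (so each $v_i \sim \mathcal{N}(M_i, 1)$), and the particular values of $\Lambda$ and $P$ are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(M, lam, n_samples=200_000):
    """Monte Carlo estimate of E[log(1 + v^T diag(lam) v)] for v ~ N(M, I)."""
    v = M + rng.standard_normal((n_samples, len(M)))
    return np.log1p((v ** 2) @ lam).mean()

P = 4.0                          # made-up power budget
lam = np.array([3.0, 2.0, 1.0])  # made-up diagonal of Lambda, decreasing

M_all_first = np.array([np.sqrt(P), 0.0, 0.0])  # all power on the first coordinate
M_spread = np.sqrt(P / 3) * np.ones(3)          # power spread evenly

print(objective(M_all_first, lam))
print(objective(M_spread, lam))
```

In runs like this, putting all the power on the first coordinate consistently gives a larger estimate than spreading it.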
I wonder whether a rigorous proof can be given, or whether anyone can give me some hints on it?
Thank you!
You can relax the $\log$: since it is concave, Jensen's inequality gives $\mathbb{E}[\log(1+v^{\mathsf T}\Lambda v)] \le \log \mathbb{E}[1+v^{\mathsf T}\Lambda v]$, and since $\log$ is monotonically increasing, maximizing the upper bound amounts to maximizing the expectation inside it. Then you have
$$f = \mathbb{E}\left[1 + \sum_{i=1}^n\lambda_iv_i^2\right] = 1 + \sum_{i=1}^n\lambda_i\mathbb{E}[v_i^2] = 1 + \sum_{i=1}^n\lambda_i (1+M_i^2),$$
since $\mathbb{E}[v_i^2] = \operatorname{Var}(v_i) + M_i^2 = 1 + M_i^2$.
Removing all constant terms, you get:
$$f = \sum_{i=1}^n \lambda_iM_i^2$$
At this point, you can transform your problem into a linear program and solve it with any standard method (e.g., the simplex method). Just impose that:
$$x_i = M_i^2$$
and you have the following:
$$\left\{\begin{array}{l} \max_{x} \sum_{i=1}^n\lambda_i x_i\\ \text{s.t.}\\ \sum_{i=1}^n x_i \leq P \\ x_{i} \geq 0 ~~\forall i \in \{1, \ldots, n\} \end{array} \right. $$
Remember that each solution in $x$ produces solutions in $M$, since $M_i = \pm \sqrt{x_i}$. Because the $\lambda_i$ are in decreasing order, the LP is solved by putting all the power on the first coordinate, $x = [P, 0, \ldots, 0]$, i.e. $M = [\pm\sqrt{P}, 0, \ldots, 0]$, which matches your simulation.
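As a quick numerical check of the relaxed problem, here is a sketch using `scipy.optimize.linprog` (the $\lambda$ and $P$ values are made up; `linprog` minimizes, so the objective $\sum_i \lambda_i x_i$ is negated):

```python
import numpy as np
from scipy.optimize import linprog

lam = np.array([3.0, 2.0, 1.0])  # made-up diagonal of Lambda, decreasing
P = 4.0                          # made-up power budget

# Maximize sum(lam * x) by minimizing -sum(lam * x),
# subject to sum(x) <= P and x >= 0.
res = linprog(c=-lam,
              A_ub=np.ones((1, len(lam))),  # sum_i x_i <= P
              b_ub=[P],
              bounds=[(0, None)] * len(lam))

x = res.x          # all power lands on the largest lambda
M = np.sqrt(x)     # one valid sign choice: M_i = +sqrt(x_i)
print(x, M)
```

Since $\lambda_1$ is strictly the largest coefficient here, the solver returns $x = [P, 0, 0]$.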