Consider the cost function J:
$J=|P_1-\beta P_2|^2+\lambda(\pmb{q}^H\pmb{q}-E)$
where $P_1$, $P_2$ and $\beta$ are complex scalars, $\lambda$ is the Lagrange multiplier, and $E$ is the constraint on the squared norm of the solution $\pmb{q}$, which is an n-by-1 vector. The first term of $J$ is the error that needs to be minimized; $\beta$ is a fixed complex scalar. $P_1$ and $P_2$ depend on the solution as follows:
$P_1=\pmb{H}_1\pmb{q}$ and $P_2=\pmb{H}_2\pmb{q}$
where $\pmb{H}_1$ and $\pmb{H}_2$ are 1-by-n row vectors.
Substituting these expressions, $J$ becomes:
$J=\pmb{q}^H((\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)+\lambda \pmb{I})\pmb{q} -\lambda E$
where $\pmb{I}$ is the n-by-n identity matrix. Note that $(\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)$ is an n-by-n positive semidefinite matrix of rank one.
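As a sanity check, the algebra above can be verified numerically. The sketch below (with arbitrary illustrative values for $\pmb{H}_1$, $\pmb{H}_2$, $\beta$, $\lambda$ and $E$, not taken from the problem) confirms that the expanded quadratic form agrees with the original cost function:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Arbitrary complex test data (illustrative only)
H1 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
H2 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
q = rng.standard_normal((n, 1)) + 1j * rng.standard_normal((n, 1))
beta, lam, E = 0.7 + 0.2j, 0.5, 2.0

# Original form: J = |P1 - beta*P2|^2 + lambda*(q^H q - E)
P1, P2 = (H1 @ q).item(), (H2 @ q).item()
J_orig = abs(P1 - beta * P2) ** 2 + lam * ((q.conj().T @ q).real.item() - E)

# Expanded form: J = q^H ((H1 - beta*H2)^H (H1 - beta*H2) + lambda*I) q - lambda*E
D = H1 - beta * H2
A = D.conj().T @ D                      # n-by-n, rank one, PSD
J_exp = (q.conj().T @ (A + lam * np.eye(n)) @ q).real.item() - lam * E

print(np.isclose(J_orig, J_exp))        # the two forms agree
```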
I'm not sure about the next step, but I would say that, assuming $\lambda$ is positive, the matrix $((\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)+\lambda \pmb{I})$ is positive definite, and therefore a global minimum can be found by setting:
$\frac{\partial J}{\partial \pmb{q}}=0$
which leads to
$2((\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)+\lambda \pmb{I})\pmb{q}=\pmb{0}$, i.e., $(\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)\pmb{q}=-\lambda \pmb{q}$
The solution $\pmb{q}$ that minimizes the cost function should therefore be the eigenvector corresponding to the minimum eigenvalue of $(\pmb{H}_1-\beta \pmb{H}_2)^H(\pmb{H}_1-\beta \pmb{H}_2)$, scaled so that $\pmb{q}^H\pmb{q}=E$. Since that matrix is rank one, there is only one non-zero eigenvalue, so I took the solution to be its corresponding eigenvector.
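The rank-one structure is easy to inspect numerically. With arbitrary illustrative values (not from the problem), the eigendecomposition shows a single non-zero eigenvalue equal to $\|\pmb{H}_1-\beta \pmb{H}_2\|^2$, with the remaining $n-1$ eigenvalues numerically zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Arbitrary complex test data (illustrative only)
H1 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
H2 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
beta = 0.7 + 0.2j

D = H1 - beta * H2
A = D.conj().T @ D                      # rank-one Hermitian PSD matrix

# eigh returns eigenvalues of a Hermitian matrix in ascending order
w, V = np.linalg.eigh(A)
print(np.round(w, 6))
# One non-zero eigenvalue (the largest, equal to ||H1 - beta*H2||^2);
# the other n-1 eigenvalues are numerically zero.
```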
I tried this, and it turns out that the resulting solution minimizes $J$ but does not minimize the error $|P_1-\beta P_2|^2$. What am I missing? Thank you very much for your help!
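For reference, here is a small sketch (again with arbitrary illustrative data) that reproduces the observation: the eigenvector of the non-zero eigenvalue gives a large error, while an eigenvector of a zero eigenvalue drives the error to (numerically) zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
# Arbitrary complex test data (illustrative only)
H1 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
H2 = rng.standard_normal((1, n)) + 1j * rng.standard_normal((1, n))
beta, E = 0.7 + 0.2j, 2.0

D = H1 - beta * H2
A = D.conj().T @ D
w, V = np.linalg.eigh(A)                # eigenvalues in ascending order

def error(q):
    """The error term |P1 - beta*P2|^2 for a given q."""
    return abs((H1 @ q).item() - beta * (H2 @ q).item()) ** 2

q_top = np.sqrt(E) * V[:, [-1]]         # eigenvector of the non-zero eigenvalue
q_null = np.sqrt(E) * V[:, [0]]         # eigenvector of a zero eigenvalue

print(error(q_top))                     # equals E * ||H1 - beta*H2||^2
print(error(q_null))                    # numerically zero
```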