I'm trying to solve an optimization problem where I want to minimize a function with respect to $\mathbf{n}$. However, after differentiating with respect to $\mathbf{n}$ and setting the result to zero, I get stuck. The equation is:
$ \sum_{i=1}^N 2\theta_i (\mathbf{n}\cdot(\mathbf{p_i} - \mathbf{r})) (\mathbf{p_i} - \mathbf{r}) = \mathbf{0} $
Explanation of terms:

- $\theta_i$: a weight between $0$ and $1$ depending on the distance between $\mathbf{p_i}$ and $\mathbf{r}$
- $\mathbf{n}$: a $3$-dimensional vector
- $\mathbf{p_i}$: a point in $3$-dimensional space (there are $N$ such points)
- $\mathbf{r}$: a point in $3$-dimensional space
My thought is to write this in matrix/vector form, but I can't seem to figure it out. Any tips on how to reformulate the equation?
$ \def\o{{\tt1}} \def\smn{{\small N}} \def\t{\theta} \def\qiq{\quad\implies\quad} $Collect all of the $p_k$ vectors into a matrix, i.e. $$P=\Big[\,p_1\;p_2\:\cdots\:p_\smn\Big]\qiq p_k = Pe_k$$ where $e_k$ is the $k^{th}$ Cartesian basis vector.
Then using the all-ones vector $\big(\o=\sum_je_j\big)$ construct the matrix $$Q = \big(P - r\o^T\big) \qiq q_k=Qe_k = \big(p_k-r\big)$$ In terms of these matrix variables, the summation becomes $$\eqalign{ v &= \sum_{k=1}^{N}2\t_k \big(p_k-r\big)\big(p_k-r\big)^Tn \\ &= \sum_{k=1}^{N}2\t_k\big(Qe_k\big)\:\big(e_k^TQ^T\big)\:n \\ &= 2Q\left(\sum_{k=1}^N\t_ke_ke_k^T\right)Q^Tn \\ &= 2QWQ^Tn \\ }$$ where $W$ is a diagonal matrix whose diagonal elements are the $\t_k$ weights.
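The identity above is easy to check numerically. Here is a small NumPy sketch (the variable names and random data are my own) that compares the original summation, term by term, against the matrix form $2QWQ^Tn$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
P = rng.normal(size=(3, N))      # columns are the points p_k
r = rng.normal(size=(3, 1))
theta = rng.uniform(size=N)      # weights theta_k in (0, 1)
n = rng.normal(size=(3, 1))

Q = P - r                        # broadcasting computes P - r 1^T
W = np.diag(theta)               # diagonal matrix of weights

# Loop form: sum_k 2 theta_k (p_k - r)(p_k - r)^T n
v_loop = sum(2 * theta[k] * (Q[:, [k]] @ Q[:, [k]].T) @ n for k in range(N))

# Matrix form: 2 Q W Q^T n
v_mat = 2 * Q @ W @ Q.T @ n

assert np.allclose(v_loop, v_mat)   # the two forms agree
```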
Therefore $v=0$ precisely when $n$ lies in the nullspace of the matrix $QWQ^T$.
Using the pseudoinverse $(+)$, every such $n$ can be expressed by the formula $$ n = \left[I-\left(QWQ^T\right)^{\large+} \left(QWQ^T\right)\right]a $$ where $a$ is an arbitrary vector; the bracketed matrix is the orthogonal projector onto the nullspace of $QWQ^T$.
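A NumPy sketch of this projector formula (my own construction, not from the question): to make the nullspace nontrivial I place the points in a plane through $r$, so the vectors $p_k-r$ span only two dimensions and the solution $n$ is the plane normal.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
r = rng.normal(size=(3, 1))
theta = rng.uniform(size=N)

# Points constrained to the xy-plane through r, so Q has rank 2
u1 = np.array([[1.0], [0.0], [0.0]])
u2 = np.array([[0.0], [1.0], [0.0]])
coeffs = rng.normal(size=(2, N))
P = r + u1 @ coeffs[[0]] + u2 @ coeffs[[1]]

Q = P - r
M = Q @ np.diag(theta) @ Q.T                 # M = Q W Q^T, rank-deficient

# n = [I - M^+ M] a for an arbitrary vector a
proj = np.eye(3) - np.linalg.pinv(M) @ M     # projector onto null(M)
a = np.ones((3, 1))
n = proj @ a

assert np.allclose(M @ n, 0)                 # n solves the equation
assert not np.allclose(n, 0)                 # and is a nontrivial solution
```

With random points in general position $QWQ^T$ is full rank, the projector is zero, and only $n=0$ solves the equation, which is why the degenerate (coplanar) configuration is the interesting case here.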