The goal is to find $\frac{\partial f}{\partial \mathbf{A}} = 0$, where $f\left( \mathbf{A}, \boldsymbol{\alpha} \right) = \left( \mathbf{p}^T \mathbf{A}^T \boldsymbol{\alpha} + \eta \right)^2$. Here $\mathbf{A}$ is a matrix and $\mathbf{p}^T \mathbf{A}^T \boldsymbol{\alpha}$ is a scalar:
$$\mathbf{p} = \begin{bmatrix} p_1 \\ p_2 \end{bmatrix} \;\Rightarrow\; \mathbf{p}^T = \begin{bmatrix} p_1 & p_2 \end{bmatrix}, \qquad \mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \;\Rightarrow\; \mathbf{A}^T = \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix}, \qquad \boldsymbol{\alpha} = \begin{bmatrix} \alpha_1 \\ \alpha_2 \end{bmatrix}$$
The derivative of $f$ with respect to the matrix $\mathbf{A}$ is
$$\frac{\partial f}{\partial \mathbf{A}} = 2\boldsymbol{\alpha}\left( \mathbf{p}^T \mathbf{A}^T \boldsymbol{\alpha} + \eta \right)\mathbf{p}^T,$$
and setting it to zero gives
$$\frac{\partial f}{\partial \mathbf{A}} = 0 \;\Rightarrow\; 2\boldsymbol{\alpha}\left( \mathbf{p}^T \mathbf{A}^T \boldsymbol{\alpha} + \eta \right)\mathbf{p}^T = 0 \;\Rightarrow\; \mathbf{A}^T = -\eta\left( \boldsymbol{\alpha}\mathbf{p}^T \right)^{-1} \;\Rightarrow\; \mathbf{A} = -\eta\left( \mathbf{p}\boldsymbol{\alpha}^T \right)^{-1}.$$
But the problem is that the rank of $\mathbf{p}\boldsymbol{\alpha}^T$ is always one. I need to feed the optimal $\mathbf{A}$ into an iterative algorithm, so a regularization technique is not useful: after a few iterations the matrix entries blow up to infinity. Can SVD decomposition solve this problem? I used the pinv function in MATLAB to compute the pseudoinverse via SVD, but the results are not correct. I think the pseudoinverse solution is not unique in my case, because the rank of the matrix is always 1. Can anyone give me good hints to solve this problem, please?
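As a quick numerical sanity check, here is a NumPy sketch (standing in for MATLAB's pinv; the values of $\mathbf p$, $\boldsymbol\alpha$, $\eta$ below are arbitrary). It confirms that $\mathbf p \boldsymbol\alpha^T$ always has rank 1, that the SVD-based pseudoinverse still yields one stationary $\mathbf A$, and that this $\mathbf A$ is not unique: any $\mathbf B$ with $\boldsymbol\alpha^T \mathbf B \mathbf p = 0$ can be added to it.

```python
import numpy as np

# Arbitrary example values; any nonzero p, alpha show the same behaviour.
p = np.array([[1.0], [2.0]])
alpha = np.array([[3.0], [4.0]])
eta = 0.5

M = p @ alpha.T                       # outer product p * alpha^T
print(np.linalg.matrix_rank(M))       # 1: an outer product of vectors is rank 1
# np.linalg.inv(M) would raise LinAlgError here; pinv instead returns the
# Moore-Penrose pseudoinverse, computed from the SVD of M.
A = -eta * np.linalg.pinv(M)

# This particular A is stationary: p^T A^T alpha + eta vanishes ...
print(float(p.T @ A.T @ alpha + eta))   # ≈ 0 (up to floating-point rounding)

# ... but it is not unique. Add any B with alpha^T B p = 0, e.g. B = q r^T
# with q orthogonal to alpha and r an arbitrary row vector:
q = np.array([[alpha[1, 0]], [-alpha[0, 0]]])  # alpha^T q = 0 by construction
r = np.array([[5.0, 6.0]])                     # arbitrary choice
A2 = A + q @ r
print(float(p.T @ A2.T @ alpha + eta))  # also ≈ 0: a different stationary A
```

So the pseudoinverse picks out one solution of the underdetermined stationarity condition (the minimum-norm one), which may explain why an iterative algorithm that expects a unique $\mathbf A$ behaves unexpectedly.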
I don't see how you can infer from $2{\boldsymbol{\alpha }}\left( \mathbf p^T \mathbf A^T {\boldsymbol{\alpha}} + \eta \right) \mathbf p^T = 0$ that $\mathbf A = -\eta \left( \mathbf p {\boldsymbol{\alpha }}^T\right)^{-1}$. The equation $2{\boldsymbol{\alpha }}\left( \mathbf p^T \mathbf A^T {\boldsymbol{\alpha}} + \eta \right) \mathbf p^T = 0$ can be rewritten as $\left( {\boldsymbol{\alpha}}^T \mathbf A \mathbf p + \eta \right) {\boldsymbol{\alpha }}\mathbf p^T = 0$. Now: