I am trying to solve the following minimization with a constraint on $x$, \begin{align} \min_{x}\,\,&\frac{x^TAx}{x^Tx},\\ \mbox{s.t.}\,\,& c^Tx=0, \end{align} where $A$ is symmetric but not necessarily positive definite.
I used Lagrange multipliers to attack the problem, as described here. Following that wiki page on equality constraints, I formed the matrix equation below. The right-hand side is just zero, but I cannot find a solution of the matrix equation:
$$\begin{bmatrix}2A & c \\ c^T&0\end{bmatrix}\begin{bmatrix}x\\\lambda\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}$$
It seems this matrix never has a nontrivial null space?
This answer is mostly a summary of the comments to the question. You want to solve
\begin{align*} \min_{x\in\mathbb{R}^{n}}\frac{x^T A x}{x^T x} \end{align*}
subject to the constraint
\begin{align*} c^Tx &= 0. \end{align*}
If $x$ is a solution of this problem, then $\lambda x$ with $\lambda\in\mathbb{R}\setminus\{0\}$ is also a solution. Only the direction of $x$ is relevant, not its length. Thus we can restrict ourselves to the representative $x$ with unit length, $x^Tx=1$.
User copper.hat proposed to project the matrix $A$ onto the orthogonal complement of $c$ and to solve the unconstrained minimization of the Rayleigh quotient for the projected matrix. The solution of this minimization problem is just an eigenvector corresponding to the smallest eigenvalue of the projected matrix. This method is implemented in the following Octave routine cstrEigen1. The code is quite short and self-descriptive; beneath the code for cstrEigen1 there is some additional code for testing. The projection method is certainly the method of choice for small and medium-sized problems.
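Since the Octave listing for cstrEigen1 is not reproduced here, the following Python/NumPy sketch shows the same projection idea (function name and details are mine, not the author's code): build an orthonormal basis of the orthogonal complement of $c$, project $A$ onto it, and take the eigenvector of the smallest eigenvalue.

```python
import numpy as np

def projected_min_rayleigh(A, c):
    # Complete QR of c: the first column of Q is c/|c|, and the
    # remaining columns B form an orthonormal basis of the
    # orthogonal complement of c.
    Q, _ = np.linalg.qr(c.reshape(-1, 1), mode='complete')
    B = Q[:, 1:]
    # Smallest eigenpair of the projected (symmetric) matrix B^T A B.
    w, V = np.linalg.eigh(B.T @ A @ B)
    x = B @ V[:, 0]        # lift back to R^n; c^T x = 0 automatically
    return x, w[0]
```

The returned `x` has unit length (orthonormal `B` applied to a unit eigenvector) and `w[0]` is the minimal constrained Rayleigh quotient.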
For large problems iterative schemes might be better. The normal equations for the problem are
\begin{align*} F(x,\lambda_1,\lambda_2) &= 0 \end{align*}
with
\begin{align*} F(x,\lambda_1,\lambda_2) &= \begin{pmatrix} 2Ax+ c\lambda_1 + 2x\lambda_2\\ c^Tx\\ x^Tx-1 \end{pmatrix} \end{align*}
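As a quick sanity check, $F$ can be coded directly. Multiplying the first block by $x^T$ and by $c^T$ (using $c^Tx=0$, $x^Tx=1$) gives $\lambda_2=-x^TAx$ and $\lambda_1=-2c^TAx/(c^Tc)$ at a solution, where $F$ then vanishes. A Python/NumPy sketch (the author's code is Octave; this is illustrative):

```python
import numpy as np

def F(A, c, x, lam1, lam2):
    # Residual of the normal equations: gradient of the Lagrangian,
    # the linear constraint c^T x, and the normalization x^T x - 1.
    return np.concatenate([2 * A @ x + c * lam1 + 2 * x * lam2,
                           [c @ x],
                           [x @ x - 1.0]])
```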
Newton's algorithm gives a first formula for how the new iterates $\bar x,\bar \lambda_1, \bar\lambda_2$ are derived from the old iterates $x,\lambda_1,\lambda_2$ (note that the term $2\lambda_2\mathbf 1$ is omitted from the $x$-derivative of the first block, so this is a simplified Newton step):
\begin{align*} 0 &= F(x,\lambda_1,\lambda_2) + F'(x,\lambda_1,\lambda_2) \begin{pmatrix} \bar x - x\\ \bar \lambda_1-\lambda_1\\ \bar \lambda_2-\lambda_2 \end{pmatrix}\\ &= \begin{pmatrix} 2Ax+ c\lambda_1 + 2x\lambda_2\\ c^Tx\\ x^Tx-1 \end{pmatrix} + \begin{pmatrix} 2A& c& 2x\\ c^T& 0& 0\\ 2x^T& 0& 0 \end{pmatrix} \begin{pmatrix} \bar x - x\\ \bar \lambda_1-\lambda_1\\ \bar \lambda_2-\lambda_2 \end{pmatrix}\\ &= \begin{pmatrix} 2A\bar x + c\bar\lambda_1 + 2x\bar\lambda_2\\ c^T\bar x\\ 2x^T\bar x - 1 - x^T x \end{pmatrix} \end{align*}
This can be written as a linear system for the new iterates.
\begin{align*} \begin{pmatrix} 2A& c& 2x\\ c^T& 0& 0\\ 2x^T& 0& 0 \end{pmatrix} \begin{pmatrix} \bar x\\ \bar\lambda_1\\ \bar\lambda_2 \end{pmatrix} &= \begin{pmatrix} 0\\ 0\\ 1+x^Tx \end{pmatrix} \end{align*}
A slight modification is to use the information $x^T x=1$ in advance and normalize $x$ before using it in the matrix equation, i.e., $x:=\frac{x}{|x|}$.
With this normalized $x$ the above equation reads
\begin{align*} \begin{pmatrix} 2A& c& 2x\\ c^T& 0& 0\\ 2x^T& 0& 0 \end{pmatrix} \begin{pmatrix} \bar x\\ \bar\lambda_1\\ \bar\lambda_2 \end{pmatrix} &= \begin{pmatrix} 0\\ 0\\ 2 \end{pmatrix} \end{align*}
(The only difference is the $2$ in the last entry of the right-hand side, since $1+x^Tx=2$ for normalized $x$.)
The following pseudo-program describes the algorithm with precedental re-normalization and step-size limitation.
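Since the pseudo-program itself is not reproduced here, the following Python/NumPy sketch shows one plausible realization of the normalized scheme above: re-normalize $x$, solve the bordered system with right-hand side $(0,0,2)$, and damp the update. The damping rule, defaults, and the initial projection of the starting guess are my assumptions, not the author's code.

```python
import numpy as np

def newton_cstr_eigen(A, c, x0, iters=50, max_step=1.0):
    # Newton-type iteration from the normalized system above.
    n = len(c)
    P = np.eye(n) - np.outer(c, c) / (c @ c)
    x = P @ x0                         # start feasible: c^T x = 0
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        x = x / np.linalg.norm(x)      # precedental re-normalization
        K = np.block([[2 * A,                c.reshape(-1, 1),  2 * x.reshape(-1, 1)],
                      [c.reshape(1, -1),     np.zeros((1, 1)),  np.zeros((1, 1))],
                      [2 * x.reshape(1, -1), np.zeros((1, 1)),  np.zeros((1, 1))]])
        rhs = np.zeros(n + 2)
        rhs[-1] = 2.0                  # right-hand side (0, 0, 2)
        x_new = np.linalg.solve(K, rhs)[:n]
        step = x_new - x               # step-size limitation (illustrative)
        s = np.linalg.norm(step)
        if s > max_step:
            step *= max_step / s
        x = x + step
    x = x / np.linalg.norm(x)
    return x, x @ A @ x
```

Note that the solve enforces $c^T\bar x=0$, so the iterates stay feasible even with damping.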
Newton's algorithm with and without precedental re-normalization are implemented in the following Octave/Matlab script together with some code for testing. EDIT: I have also included projected inverse iteration into the code. Further below there is an outline of this algorithm.
Test result generated by Newton's method with precedental re-normalization (and step-size limitation):
Test result with Newton's method with step-size limitation only; this result is generated with the same seed for the random number generator:
As one can see, Newton's algorithm performs better with precedental re-normalization than without.
Other methods mentioned in the comments are:
In my opinion, projected inverse matrix iteration with shift looks most promising. Maybe I will try it when I find some spare time.
EDIT: Now projected inverse matrix iteration is also included in the code. This method is reliable and performs best for larger systems. The inverse iteration with shift exploits that the system becomes singular. In the singular case we do not need the minimum-norm solution (which is returned by most Octave solvers) but the kernel of the system matrix. I've included a linear solver luRook which returns the kernel if the system becomes singular.
Output for the test from the above script with method in test_cstrEigen set to 3 (projected inverse iteration):
There follow some notes on how this works.
The inverse matrix iteration for a matrix $M$ with shift starts with some approximation $\lambda$ for the smallest eigenvalue of $M$ and some initial guess $y$ for the corresponding eigenvector. The system \begin{align*} (M-\mathbf1\lambda)\cdot \bar y = y \end{align*} is solved for $\bar y$. The new iterate $\bar\lambda$ for the eigenvalue can be calculated by exploiting the previous equation: \begin{align*} \frac{\bar y^T y}{\bar y^T \bar y} = \frac{\bar y^T M \bar y}{\bar y^T \bar y} - \lambda \end{align*} The Rayleigh quotient in this equation is used as the new estimate for the eigenvalue, i.e.: \begin{align*} \lambda := \lambda + \frac{\bar y^T y}{\bar y^T \bar y} \end{align*} The new estimate for the eigenvector is obtained by re-normalizing $\bar y$: \begin{align*} y := \frac{\bar y}{|\bar y|} \end{align*} The special aspect of our task is that we use a projection of $A$ onto $\operatorname{ker}c^T$ as the matrix $M$. Instead of building an orthonormal base transformation $Q$ with $Q(:,1)=\frac{c}{|c|}$ and using $B:=Q(:,2:\text{end})$ such that $M = B^TAB$, we can also iterate with the original matrix by projecting onto the orthogonal complement of $c$. That means we iterate with $x := By$ instead of $y$. In the (inverse) power iteration there is the projector $P:=BB^T=\mathbf1-\frac{cc^T}{c^Tc}$ in between two applications of $M$: \begin{align*} M^k y_k &= y_0\\ B^T A \underline{B\cdot B^T} A B\cdot \ldots \cdot B^T A B y_k &= y_0 \end{align*} We multiply by $B$, substitute $P=BB^T$ as well as $x = By$, and obtain \begin{align*} B\cdot M^k y_k = BB^T A B\cdot B^T A B\cdot \ldots \cdot B^T A B y_k &= B y_0\\ P A\cdot \ldots\cdot P A x_k &= x_0. \end{align*} This system has to be solved under the restriction that $x_k$ lies within the image of $P$, i.e. $c^T x_k=0$, because of $x_k = PAx_{k+1}$.
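The plain inverse iteration with shift described above can be sketched as follows in Python/NumPy (illustrative, not the author's Octave code; the early-exit tolerance is my addition to avoid solving an exactly singular system after convergence):

```python
import numpy as np

def inverse_iteration_shift(M, lam, y, iters=30, tol=1e-12):
    # Inverse iteration with shift for a symmetric matrix M:
    # repeatedly solve (M - lam*I) y_bar = y and update the shift
    # with the Rayleigh-quotient-style correction derived above.
    n = M.shape[0]
    y = y / np.linalg.norm(y)
    for _ in range(iters):
        if np.linalg.norm(M @ y - lam * y) < tol:
            break                      # converged; system would be singular
        y_bar = np.linalg.solve(M - lam * np.eye(n), y)
        lam = lam + (y_bar @ y) / (y_bar @ y_bar)  # new eigenvalue estimate
        y = y_bar / np.linalg.norm(y_bar)
    return lam, y
```

If $y$ is already an eigenvector with eigenvalue $\mu$, one step gives exactly $\lambda:=\mu$, which illustrates the shift-update formula.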
To reach that goal we have one additional degree of freedom, for which we introduce a new variable $\gamma_k$, namely the $c$-component of $Ax_k$ which is nullified by $P$. This leaves us with the following system \begin{align*} \begin{pmatrix} A&-c\\ -c^T&0 \end{pmatrix} \begin{pmatrix} x_k\\ \gamma_k \end{pmatrix} &= \begin{pmatrix} x_{k-1}\\ 0 \end{pmatrix} \end{align*} to be solved for the iterate $x_k$.
A shift in the system for $y_k$ translates directly into a shift in the system for $x_k$: \begin{align*} (B^T A B - \lambda\mathbf1)y_k &= y_{k-1}\\ P A x_k -\lambda\mathbf1 x_k &= x_{k-1} \end{align*} The estimate for the new shift in the projected system can also be translated directly into a shift in the original system: \begin{align*} \lambda_{k+1} &= \lambda_k + \frac{y_k^T y_{k-1}}{y_k^T y_k}\\ &=\lambda_k + \frac{x_k^TB B^Tx_{k-1}}{x_k^TB B^T x_k}\\ &=\lambda_k + \frac{x_k^T P x_{k-1}}{x_k^T P x_k}\\ &=\lambda_k + \frac{x_k^T x_{k-1}}{x_k^T x_k} \end{align*}
The last identity holds since the $x_k$ lie in the image of the projector $P$.
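Putting the bordered system, the shift, and the shift update together gives the following Python/NumPy sketch of projected inverse iteration (again illustrative; the exactly singular case, which the luRook solver handles in the original Octave code, is only caught here by a residual test before the solve):

```python
import numpy as np

def projected_inverse_iteration(A, c, x0, lam=0.0, iters=50, tol=1e-12):
    # Projected inverse iteration with shift via the bordered system;
    # every iterate stays in ker(c^T) by construction.
    n = len(c)
    P = np.eye(n) - np.outer(c, c) / (c @ c)
    x = P @ x0                         # start in the image of P
    x = x / np.linalg.norm(x)
    for _ in range(iters):
        if np.linalg.norm(P @ (A @ x) - lam * x) < tol:
            break                      # converged; system would be singular
        # Shifted bordered system: (A - lam*I) x_bar - c*gamma = x, c^T x_bar = 0.
        K = np.block([[A - lam * np.eye(n), -c.reshape(-1, 1)],
                      [-c.reshape(1, -1),    np.zeros((1, 1))]])
        x_bar = np.linalg.solve(K, np.append(x, 0.0))[:n]
        lam = lam + (x_bar @ x) / (x_bar @ x_bar)   # shift update from above
        x = x_bar / np.linalg.norm(x_bar)
    return lam, x
```

The second block row of the solve enforces $c^Tx_k=0$ exactly, so no explicit re-projection of the iterates is needed.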