Suppose we have a depth image in which objects have estimated surface normals $v \in \Bbb R^3$: normals on walls point horizontally, and normals on flat surfaces point either up toward the sky or down toward the ground.
A computer vision algorithm I am reading about, available here (page $2$, right column), uses an iterative procedure to estimate a gravity vector. The paper is titled Perceptual Organization and Recognition of Indoor Scenes from RGB-D Images.
At step $i$, given the current estimate $g_i$ of the gravity vector, we classify the local surface normal estimates from the image into two categories:
$N_{||}$, the set of vectors within an angle-error $d$ of being parallel to $g_i$
$N_{\perp}$, the set of vectors within an angle-error $d$ of being perpendicular to $g_i$,
where initially $d$ is broad enough that almost all surface normals belong to one of the two categories ($d$ decreases at each iteration).
The algorithm stacks each category of surface normals as the columns of a corresponding matrix, $M_{||}$ and $M_{\perp}$.
Next, we compute a new estimate $g_i^{*}$. The authors state without proof that solving
$\min_{ \{g:\|g\|_2=1 \}}\left(\sum\limits_{n \in N_\perp} \cos^2(\theta(n,g)) + \sum\limits_{n \in N_{\|}}\sin^2(\theta(n,g))\right)$ is equivalent to finding the eigenvector with the smallest eigenvalue of the $3 \times 3$ matrix:
$M_\perp M_\perp^T- M_{\|}M_{\|}^T$, where $\theta(n,g)$ is the angle between vectors $n,g \in \Bbb R^3$.
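To make the setup concrete, one iteration of this procedure can be sketched in numpy. This is my own illustration, not the paper's code: the function name `update_gravity` is hypothetical, and I assume the normals are unit vectors and that a normal and its negation are treated as equivalent when classifying.

```python
import numpy as np

def update_gravity(normals, g, d):
    """One iteration of the gravity-estimation sketch (hypothetical helper).

    normals: (k, 3) array of unit surface normals (stored as rows).
    g: current unit gravity estimate, shape (3,).
    d: angle threshold in radians.
    """
    # Angle between each normal and g; abs() treats n and -n as equivalent
    # (an assumption on my part).
    cosines = normals @ g
    angles = np.arccos(np.clip(np.abs(cosines), 0.0, 1.0))

    # N_parallel: within d of parallel (angle near 0);
    # N_perp: within d of perpendicular (angle near pi/2).
    par = normals[angles < d]
    perp = normals[np.abs(angles - np.pi / 2) < d]

    # Since the normals are rows here, par.T @ par equals M_par M_par^T, etc.
    A = perp.T @ perp - par.T @ par  # 3x3 matrix M_perp M_perp^T - M_par M_par^T

    # Eigenvector of the smallest eigenvalue (eigh returns ascending eigenvalues).
    w, V = np.linalg.eigh(A)
    return V[:, 0]
```

On clean synthetic data (floor normals along $e_z$, wall normals in the $xy$-plane), a single call already recovers the vertical direction up to sign.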
The first expression makes perfect sense: we are computing an optimal unit vector that is perpendicular to the horizontal surface normals and parallel to the upward-facing ones.
I am not sure why the eigenvector with minimal eigenvalue solves this problem, and after some manual attempts at small examples I have not made any progress. Any insights appreciated.
The missing piece of the puzzle here is the Courant–Fischer theorem, a variational characterization of the eigenvectors and eigenvalues of a symmetric matrix. It's probably my favorite theorem and it's what makes eigenvectors make sense to me. There are several very general forms of the Courant–Fischer theorem, but the simplest is:
Theorem (Courant–Fischer): Let $A$ be a symmetric matrix with eigenvalues $\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n$ and corresponding normalized eigenvectors $\nu_1, \nu_2, \dots, \nu_n$. Then the program $$ \min_{\|v\| = 1} v^T A v $$ attains its minimum at $v = \nu_1$, and the minimal objective value is $\lambda_1$.
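A quick numerical sanity check of the theorem: build a symmetric matrix with a known spectrum, and confirm that the smallest-eigenvalue eigenvector attains $\lambda_1$ while no random unit vector does better. (This is only a spot check of the statement above, not a proof.)

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal basis
A = Q @ np.diag([1.0, 2.0, 5.0]) @ Q.T         # symmetric, eigenvalues 1, 2, 5

w, V = np.linalg.eigh(A)                        # eigenvalues in ascending order
v1 = V[:, 0]                                    # eigenvector for lambda_1 = 1

# The quadratic form at v1 equals the smallest eigenvalue...
assert np.isclose(v1 @ A @ v1, w[0])

# ...and no random unit vector achieves a smaller value.
samples = rng.normal(size=(10000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
forms = np.einsum('ij,jk,ik->i', samples, A, samples)
assert np.all(forms >= w[0] - 1e-9)
```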
In your case, the relevant matrix is $A = M_\perp M_\perp^T- M_{\|}M_{\|}^T$, and (assuming the normals $n$ and the candidate $g$ are unit vectors, which I presume the paper intends) your objective function is a quadratic form in $A$ up to an additive constant. Indeed, $\cos^2(\theta(n,g)) = (n^T g)^2 = g^T n n^T g$ and $\sin^2(\theta(n,g)) = 1 - g^T n n^T g$, so $$ \sum\limits_{n \in N_\perp} \cos^2(\theta(n,g)) + \sum\limits_{n \in N_{\|}}\sin^2(\theta(n,g)) = g^T \left( M_\perp M_\perp^T- M_{\|}M_{\|}^T \right) g + |N_{\|}|, $$ and the constant $|N_{\|}|$ does not affect the minimizer.
It then follows from the Courant–Fischer theorem that the minimizer $g^*$ is the eigenvector corresponding to the smallest eigenvalue of $M_\perp M_\perp^T- M_{\|}M_{\|}^T$.
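You can also check this equivalence numerically: evaluate the trigonometric objective directly on noisy synthetic normals and confirm that the smallest-eigenvalue eigenvector of $M_\perp M_\perp^T - M_{\|}M_{\|}^T$ beats a large sample of random unit vectors. The data and the helper `objective` are my own constructions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy synthetic normals: "floor" normals near e_z, "wall" normals near the xy-plane.
up = np.array([0.0, 0.0, 1.0]) + 0.05 * rng.normal(size=(40, 3))
theta = rng.uniform(0, 2 * np.pi, 60)
walls = np.stack([np.cos(theta), np.sin(theta), 0.05 * rng.normal(size=60)], axis=1)
N_par = up / np.linalg.norm(up, axis=1, keepdims=True)       # rows = columns of M_par
N_perp = walls / np.linalg.norm(walls, axis=1, keepdims=True)

def objective(g):
    # sum of cos^2 over N_perp plus sum of sin^2 over N_par,
    # using sin^2 = 1 - cos^2 and cos(theta(n, g)) = n . g for unit vectors.
    return np.sum((N_perp @ g) ** 2) + np.sum(1.0 - (N_par @ g) ** 2)

# Smallest-eigenvalue eigenvector of M_perp M_perp^T - M_par M_par^T.
A = N_perp.T @ N_perp - N_par.T @ N_par
w, V = np.linalg.eigh(A)
g_star = V[:, 0]

# The eigenvector beats (or ties) every random unit vector in the sample.
samples = rng.normal(size=(5000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
assert all(objective(g_star) <= objective(s) + 1e-9 for s in samples)
```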