For an assignment, we've been asked to compute a ranking of basketball teams by finding the eigenvector of a matrix $A$ corresponding to $A$'s maximal eigenvalue (i.e., using the power iteration method). The matrix $A$ is given to us, and all we have to do is implement the power method, but I'm curious why what we're computing would yield a sensible ranking.
The matrix $A$ is constructed like so: All relevant teams are mapped to $\{1,\ldots,n\}$. If team $i$ played team $j$ and won by $6$ or less, then $A_{i,j} = 0.4$ and $A_{j,i} = 0.5$. If team $i$ won by more than $6$, then $$A_{j,i} = \frac{1}{1+\exp(-d)}$$ where $d$ is the score difference.
$A_{i,j} = 0$ for all other $1 \le i,j \le n$.
Then, if $v$ is the eigenvector of $A$ with maximal eigenvalue, the entries of $v$ correspond to the rankings of the team. Why would this be a sensible ranking scheme?
edit: In the case where a team wins by more than $6$, I suspect the intent was to also set $A_{i,j} = 1 - A_{j,i}$, as this value was computed in the code given to us, but never actually assigned to the matrix. I'm unsure if this was a mistake or if $A_{i,j}$ should remain $0$ in that case.
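For concreteness, here is a minimal sketch of the construction and the power method in Python. The game data are made up for illustration, and the commented-out line is the possibly-intended assignment mentioned in the edit above.

```python
import numpy as np

def build_matrix(n, games):
    """Build A from (winner, loser, margin) triples, following the
    construction in the question: team i beat team j by d points."""
    A = np.zeros((n, n))
    for i, j, d in games:
        if d <= 6:
            A[i, j] = 0.4
            A[j, i] = 0.5
        else:
            A[j, i] = 1.0 / (1.0 + np.exp(-d))
            # A[i, j] = 1.0 - A[j, i]  # computed in the given code but never assigned
    return A

def power_iteration(A, iters=1000, tol=1e-12):
    """Dominant eigenvector of A; its entries give the ranking scores."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        w = A @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            return w
        v = w
    return v

# Made-up results: (winner, loser, score difference)
games = [(0, 1, 3), (1, 2, 10), (0, 2, 8), (2, 3, 2), (0, 3, 12), (3, 1, 5)]
A = build_matrix(4, games)
v = power_iteration(A)
ranking = np.argsort(-v)  # team with the largest entry first
```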
The basic idea of ranking by eigenvectors is a translation from a "plus-max arithmetic" (or max-plus algebra) to normal arithmetic.
The idea in "plus-max arithmetic" is that the rank of a player should be bigger than the rank of all other players over which they have won, adding some distance for the decisiveness of the win. Adding a degree of freedom for the separation of the ranks (resp. to uniformly adjust the distances) gives the formula $$ r_i=s+\max_{j:p_i>p_j}(r_j+d_{i,j}) $$
where $p_i>p_j$ means "player $i$ played and won against player $j$". Setting $d_{i,j}=-\infty$ for games not played or lost by player $i$ gives the generalized formula $$ r_i=s+\max_{j}(r_j+d_{i,j}). $$
An approximate translation from "plus-max" to normal arithmetic is via exponentiation $$ \exp(r_i)\approx\exp(s)\sum_{j}\exp(r_j)\cdot \exp(d_{i,j}) $$ which is now in eigenvector form. Set $v_i=\exp(r_i)$, $A_{i,j}=\exp(d_{i,j})$ if $p_i>p_j$, with $\exp(-\infty)=0$, and $\lambda=\exp(-s)$, the usual form of an eigenvalue equation follows, $$\lambda v=Av.$$ A maximal eigenvalue that is bigger than one (i.e., $s<0$) would need some explaining, more precisely, $s+d_{ij}$ should be positive for games won. If this is not the case, the winner in some games might get a lower rank than the loser. Which might be a sensible outcome in more complex situations.
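The translation is only approximate because the sum stands in for the max; the error is the usual log-sum-exp bound, $\max_j x_j \le \log\sum_j \exp(x_j) \le \max_j x_j + \log n$, and it shrinks as the summands become well separated. A quick numeric check (the values are arbitrary):

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum(exp(x)))."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.array([1.0, 5.0, 12.0])
# The sum-based value sits between max(x) and max(x) + log(len(x)):
print(np.max(x), logsumexp(x))
# Scaling the entries up (well-separated ranks) tightens the approximation:
print(logsumexp(10 * x) / 10)
```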
Changing $\exp(d)$ to $\frac{\exp(d)}{1+\exp(d)}=\frac1{1+\exp(-d)}$ changes the distance in the "plus-max" picture for games won, but since the transformation from $x$ to $\frac{x}{1+x}=1-\frac1{1+x}$ is a monotonically increasing function, it only changes the quantities, not the quality ("changing quantity" may also result in different rankings; "quality" here just means the applicability of the plus-max interpretation).
However, since $$ h(d)=\log\left(\frac{1}{1+\exp(-d)}\right)=-\log(1+\exp(-d))<0, $$ the original ranking distances are negative, which then has to be compensated by $s$. One could also multiply all entries by $2$ to make the factors for games won larger than $1$; this only changes the eigenvalue, or equivalently $s$.
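That a uniform factor only shifts the eigenvalue (equivalently $s$) is easy to verify: $2A$ has the same eigenvectors as $A$, with every eigenvalue doubled. A small check via power iteration, using a random nonnegative matrix as a stand-in for a ranking matrix:

```python
import numpy as np

def dominant(A, iters=500):
    """Power iteration: dominant eigenvalue and unit eigenvector of A."""
    v = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)
    return v @ (A @ v), v  # Rayleigh quotient and eigenvector

rng = np.random.default_rng(0)
A = rng.random((5, 5))  # stand-in for a nonnegative ranking matrix

lam1, v1 = dominant(A)
lam2, v2 = dominant(2 * A)
# v1 and v2 agree (the ranking is unchanged); lam2 is 2 * lam1.
```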
Inclusion of the lost games could have the reasoning that tightly losing against a strong player is comparable to, or better than, winning against a weak player, which could be interpreted via the "plus-max" formula $$ r_i=s+\max\left(\max_{j:p_i>p_j}(r_j+h(d_{i,j})),\max_{j:p_i<p_j}(r_j+h(-d_{j,i}))\right). $$ One could of course also use a different formula than $d_{i,j}=-d_{j,i}$ for games lost, or have $h(-x)$ unrelated to $h(x)$ in $$ r_i=s+\max_{j}(r_j+h(d_{i,j})), $$ so that games lost have a weaker influence on the rank than games won.
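A sketch of this symmetric variant, in which every game contributes twice: $\exp(h(d))=\sigma(d)>\frac12$ in the winner's row and $\exp(h(-d))=\sigma(-d)<\frac12$ in the loser's row (the game data are made up):

```python
import numpy as np

def sigma(d):
    """Logistic function: exp(h(d)) with h(d) = -log(1 + exp(-d))."""
    return 1.0 / (1.0 + np.exp(-d))

def build_symmetric(n, games):
    """Each game (winner i, loser j, margin d) fills both entries:
    the winner's weight sigma(d) > 1/2, the loser's sigma(-d) < 1/2."""
    A = np.zeros((n, n))
    for i, j, d in games:
        A[i, j] = sigma(d)
        A[j, i] = sigma(-d)
    return A

games = [(0, 1, 3), (1, 2, 10), (0, 2, 8), (2, 3, 2), (0, 3, 12)]
A = build_symmetric(4, games)
v = np.ones(4)
for _ in range(2000):
    v = A @ v
    v /= np.linalg.norm(v)
# Team 0, undefeated here, ends up with the largest entry.
```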