If I can find a solution to the following set of equations then, with a bit of luck, I should be able to derive all sorts of nifty new results in non-equilibrium statistical mechanics. However, I'm not sure if there is a general solution.
The equations are as follows. There is one equation for every $k \in \mathbb{Z}^\star$ (the positive integers). $$ e^{-\eta_k}\sum_{j=0}^\infty e^{\eta_j-\lambda f_{kj}} = e^{\eta_k}\sum_{i=0}^\infty e^{-\eta_i-\lambda f_{ik}}. $$ What I want is an expression for $\eta_k$ in terms of $k$, $\lambda$ and $\{f_{ij}\}$, so this is an infinite number of equations in an infinite number of unknowns, with an infinite number of parameters. (It could also be seen as a single functional equation.)
The numbers $f_{ij}$ are arbitrary real parameters. We can assume they are bounded below (i.e. there exists $g$ such that for every $i$ and every $j$, $f_{ij}>g$) but not necessarily bounded above. $\lambda$ is also a real parameter; if necessary we can assume that it is sufficiently large that the infinite sums in the equations converge.
Equivalently, if we let $x_k = e^{-\eta_k}$ and $A_{ij}=e^{-\lambda f_{ij}}>0$ then we're trying to find positive solutions (for $\{x_k\}$) of $$ \sum_{j=0}^\infty A_{kj} \frac{x_k}{x_j} = \sum_{i=0}^\infty A_{ik} \frac{x_i}{x_k}. $$ I've tried everything I can think of to solve this, but I'm not much of a mathematician and haven't really got very far. Can anyone else see how to do it?
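To make the second equation concrete: in the $2\times 2$ case it can actually be solved in closed form, since the diagonal terms cancel and the off-diagonal terms give $(x_0/x_1)^2 = A_{10}/A_{01}$. Here is a quick numerical check of that (the matrix values are just made up for illustration; this isn't part of the original question):

```python
import math

# Hypothetical 2x2 example: the balance condition
#   A00 + A01*x0/x1 = A00 + A10*x1/x0
# reduces to (x0/x1)^2 = A10/A01, so we can take x0 = sqrt(A10), x1 = sqrt(A01).
A = [[2.0, 3.0],
     [12.0, 5.0]]
x = [math.sqrt(A[1][0]), math.sqrt(A[0][1])]  # any positive multiple also works

# Check the equation sum_j A[k][j]*x_k/x_j = sum_i A[i][k]*x_i/x_k for k = 0:
row0 = sum(A[0][j] * x[0] / x[j] for j in range(2))
col0 = sum(A[i][0] * x[i] / x[0] for i in range(2))
print(row0, col0)  # both sides equal A00 + sqrt(A01*A10)
```

Note that the solution is only determined up to an overall positive factor (equivalently, a constant shift of all the $\eta_k$), which is presumably why no uniqueness can be expected without a normalisation.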
I often find myself answering my own questions; this time it's a negative answer.
The question as I wrote it is equivalent to a known problem called "matrix scaling", for which there are efficient numerical algorithms (because it's used as an important preprocessing step before numerically solving eigenproblems) but no analytical solution, at least not in the general case.
The matrix scaling problem is as follows: given a matrix $A$, find a diagonal matrix $D$ such that the row sums of $DAD^{-1}$ are equal to the corresponding column sums. If the diagonal elements of $D$ are $\{x_i\}$ then expanding this out gives exactly the second equation in my post.
The simplest algorithm for solving this numerically is to scale one row/column pair at a time, sweeping over all the indices, and to repeat until it converges. This will probably suffice for my purposes, though if it doesn't I believe I can find more sophisticated algorithms implemented as part of LAPACK.
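For a finite truncation of the problem, the one-index-at-a-time sweep can be sketched as follows. For fixed $k$ the diagonal term cancels from both sides, so setting $R = \sum_{j\neq k} A_{kj}/x_j$ and $C = \sum_{i\neq k} A_{ik}x_i$, the value balancing index $k$ is $x_k = \sqrt{C/R}$. (This is my own illustration of the scheme, not LAPACK's routine, and it assumes a strictly positive matrix so that $R, C > 0$.)

```python
import math

def balance(A, sweeps=200):
    """Iteratively choose each x_k so that row sum k of the scaled matrix
    equals column sum k, then repeat until (hopefully) converged.
    The update x_k = sqrt(C/R) exactly balances index k given the
    current values of the other x's."""
    n = len(A)
    x = [1.0] * n
    for _ in range(sweeps):
        for k in range(n):
            R = sum(A[k][j] / x[j] for j in range(n) if j != k)
            C = sum(A[i][k] * x[i] for i in range(n) if i != k)
            if R > 0 and C > 0:  # guard; always true for positive A
                x[k] = math.sqrt(C / R)
    return x

# A made-up positive 3x3 test matrix:
A = [[1.0, 0.5, 2.0],
     [4.0, 1.0, 0.1],
     [0.3, 7.0, 1.0]]
x = balance(A)

# After convergence, sum_j A[k][j]*x_k/x_j should match sum_i A[i][k]*x_i/x_k:
rows = [sum(A[k][j] * x[k] / x[j] for j in range(3)) for k in range(3)]
cols = [sum(A[i][k] * x[i] / x[k] for i in range(3)) for k in range(3)]
```

Whether this kind of sweep converges for the original infinite system (with the stated bounds on $f_{ij}$ and large $\lambda$) is a separate question; the sketch only shows the finite-dimensional mechanics.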