Consider the set of coordinates $x_i$ and $y_i$ for $i = \pm 1, \pm 2 \dots \pm N$ and $N \geq 2$. Consider the change of coordinates from $\mathbf{x}$ to $\mathbf{y}$ defined by $$ y_i(\mathbf{x}) = c_i + x_i \sum_{j \neq i} c_j x_{-j} \tag{1} $$ where the $c_i$ are any strictly positive real numbers constrained by $\sum_{i} c_i = K$ for a constant $K$. Let $\mathbf{J}(\mathbf{x}) = \partial \mathbf{y}/\partial \mathbf{x}$ be the Jacobian matrix of the transformation. Let $\mathbf{1}$ be a vector of length $2N$ consisting of all $1$'s.
Conjecture 1: The constant $K$ is an eigenvalue of $\mathbf{J}(\mathbf{1})$.
Conjecture 2: Assuming Conjecture $1$ is true, any vector $\mathbf{v}$ satisfying $\mathbf{J}(\mathbf{1}) \mathbf{v} = K \mathbf{v}$ has the property that $v_{j} + v_{-j} = 0$, where $v_{\ell}$ is the entry of $\mathbf{v}$ corresponding to coordinate $x_\ell$.
I've verified both conjectures by explicit (computer-assisted) calculation for $N = 2, 3, 4$. One observation is that Eq. $(1)$ can be written as
\begin{align} y_i &= c_i + x_i \sum_{j} c_j x_{-j} - c_i x_{i} x_{-i} \\ &= c_i + \alpha x_i - c_i x_{i} x_{-i} \end{align} where $\alpha$ is a constant that doesn't depend on $i$. In particular, swapping $i \to -i$ shows that $y_i$ and $y_{-i}$ both (sort of) only depend on $x_i$ and $x_{-i}$ which is suggestive of Conjecture $2$. It really seems like there should be a simple reason for the conjectures but I can't see it.
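A verification of this kind is easy to script. Below is a sympy sketch (the specific rational values for the $c_i$ are an arbitrary choice of mine): it builds $\mathbf J(\mathbf 1)$ directly from Eq. $(1)$ and inspects the eigenspace for $K$.

```python
# Symbolic check of both conjectures for N = 3 (sketch; the rational c_i
# are arbitrary nonzero values, chosen here so that all c_i are distinct).
import sympy as sp

N = 3
idx = [i for i in range(-N, N + 1) if i != 0]               # -3,...,-1,1,...,3
x = {i: sp.Symbol(f"x_{i}") for i in idx}
c = {i: sp.Rational(k + 1, 7) for k, i in enumerate(idx)}   # strictly positive c_i
K = sum(c.values())

# Eq. (1): y_i = c_i + x_i * sum_{j != i} c_j x_{-j}
y = {i: c[i] + x[i] * sum(c[j] * x[-j] for j in idx if j != i) for i in idx}
J = sp.Matrix([[sp.diff(y[i], x[j]) for j in idx] for i in idx])
J1 = J.subs({x[i]: 1 for i in idx})                         # Jacobian at x = 1

null = (J1 - K * sp.eye(2 * N)).nullspace()                 # eigenspace for K
print(len(null) > 0)                                        # Conjecture 1: True
# Conjecture 2: every basis vector v satisfies v_i + v_{-i} = 0
print(all(sp.simplify(v[a] + v[2 * N - 1 - a]) == 0
          for v in null for a in range(2 * N)))             # True
```

With the index ordering above, the entry for $x_{-i}$ sits at the mirrored position, which is why the antisymmetry check reverses the vector.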
I made a little computational mistake earlier; the underlying argument remains the same. Here is now a complete proof of both conjectures. In fact a stronger statement holds: for $n\ge2$ and $c_i\ne0,\,\forall i\in\{\pm1,\pm 2,\cdots,\pm n\}$, a vector $v=[v_i]_{{i=-n}\atop{i\ne0}}^n$ satisfies $\mathbf J(\mathbf1)v=Kv$ if and only if $$v_{-j}=-v_j,\ \forall j\ge1 \quad\text{and}\quad \sum_{j=1}^n (c_{-j}-c_j)v_j=0.$$ Whenever $c_j\ne c_{-j}$ for some $j$, the solution space of these conditions is $(n-1)$-dimensional. Note that the $c_i$ do not have to be positive, only nonzero.
Proof:
Using the Kronecker delta $\delta_{k,i}$, write $$y_i = c_i+x_i\sum_k(1-\delta_{k,i})c_kx_{-k}.$$ Compute the $(i,j)$'th entry of the Jacobian $\mathbf J(x)$: \begin{align} \mathbf J(x)_{i,j} &=\frac{\partial y_i}{\partial x_j} \\ &= \delta_{i,j}\sum_k(1-\delta_{k,i})c_kx_{-k}+x_i\sum_k(1-\delta_{k,i})c_k\delta_{-k,j} \\ &=\delta_{i,j}\sum_kc_kx_{-k}-c_i(\delta_{i,j}x_{-i}+\delta_{i,-j}x_i)+c_{-j}x_i. \end{align} At $\mathbf x=\mathbf 1$ this gives $$\mathbf J(\mathbf 1)_{i,j} = K\delta_{i,j}-c_i(\delta_{i,j}+\delta_{i,-j})+c_{-j}. \tag1$$ In explicit matrix form, $$\mathbf J(\mathbf 1)=KI-\text{diag}(c)-\text{adiag}(c)+\mathbf 1\tilde c^T,$$ where $c=[c_{-n},\cdots,c_{-1},c_1,\cdots,c_n]^T$ while $\tilde c=[c_n,\cdots,c_1,c_{-1},\cdots,c_{-n}]^T$, diag$(c)$ is the diagonal matrix with $c$ lying on its main diagonal, and adiag$(c)$ is the matrix with $c$ lying on its main anti-diagonal.
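As a sanity check that the entrywise and matrix forms agree, one can build $\mathbf J(\mathbf 1)$ both ways and compare. A numpy sketch (the index ordering, variable names, and random positive $c_i$ are my own choices):

```python
# Check that K*I - diag(c) - adiag(c) + 1 c~^T matches the entrywise formula
# J(1)_{i,j} = K d_{ij} - c_i (d_{ij} + d_{i,-j}) + c_{-j}.
import numpy as np

n = 4
rng = np.random.default_rng(1)
idx = [i for i in range(-n, n + 1) if i != 0]          # -n,...,-1,1,...,n
cdict = {i: rng.uniform(0.1, 1.0) for i in idx}
K = sum(cdict.values())

c = np.array([cdict[i] for i in idx])                  # [c_{-n}, ..., c_n]
c_tilde = c[::-1]                                      # (c~)_j = c_{-j}
matrix_form = (K * np.eye(2 * n) - np.diag(c)
               - np.diag(c)[:, ::-1]                   # adiag(c): c on the anti-diagonal
               + np.outer(np.ones(2 * n), c_tilde))

entrywise = np.array([[K * (i == j) - cdict[i] * ((i == j) + (i == -j)) + cdict[-j]
                       for j in idx] for i in idx])
print(np.allclose(matrix_form, entrywise))             # True
```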
It is, however, more convenient to use the entrywise expression $(1)$.
For $v$ to be an eigenvector of $\mathbf J(\mathbf 1)$ with eigenvalue $K$, we need $$Kv_i=\sum_j\mathbf J(\mathbf 1)_{i,j}v_j=Kv_i-c_i(v_i+v_{-i})+\sum_jc_{-j}v_j,$$ which is equivalent to $$c_i(v_i+v_{-i}) = \sum_{j=1}^n (c_{-j}v_j+c_jv_{-j}), \,\forall i. \tag2$$ In particular, the left-hand side of this equation is independent of $i$.
For sufficiency, take any nonzero column matrix $[v_j]_{j=1}^n$ with $\sum_{j=1}^n (c_{-j}-c_j)v_j=0$ and set $v_{-j}=-v_j,\,\forall j\ge1$. Then both sides of $(2)$ vanish, so $\mathbf J(\mathbf 1)v=Kv$. The space of such $v$ is $n-1$ dimensional, or $n$ dimensional if $c_{-j}=c_j,\,\forall j$; since $n\ge2$ it contains a nonzero vector, which already proves Conjecture 1.
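This sufficiency direction is easy to spot-check numerically. A numpy sketch (variable names and the random choice of $c_i$ are mine): build $v$ from $n-1$ free components, solve the single linear constraint for $v_1$, and verify $\mathbf J(\mathbf 1)v=Kv$.

```python
# Sufficiency check: pick v with v_{-j} = -v_j and sum_j (c_{-j} - c_j) v_j = 0,
# then confirm J(1) v = K v.
import numpy as np

n = 3
rng = np.random.default_rng(2)
idx = [i for i in range(-n, n + 1) if i != 0]          # -n,...,-1,1,...,n
c = {i: rng.uniform(0.1, 1.0) for i in idx}            # generic positive c_i
K = sum(c.values())
J1 = np.array([[K * (i == j) - c[i] * ((i == j) + (i == -j)) + c[-j]
                for j in idx] for i in idx])

v_pos = np.zeros(n)                                    # v_1, ..., v_n
v_pos[1:] = rng.standard_normal(n - 1)                 # v_2, ..., v_n free
# solve sum_{j=1}^n (c_{-j} - c_j) v_j = 0 for v_1 (generically c_{-1} != c_1)
v_pos[0] = -sum((c[-j] - c[j]) * v_pos[j - 1] for j in range(2, n + 1)) / (c[-1] - c[1])
v = np.concatenate([-v_pos[::-1], v_pos])              # v_{-n},...,v_{-1},v_1,...,v_n

print(np.allclose(J1 @ v, K * v))                      # True
```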
From Equation (2), comparing the left-hand sides for $i$ and $-i$, $$c_i(v_i+v_{-i}) = c_{-i}(v_{-i}+v_i) \, \Longleftrightarrow\, (c_i-c_{-i})(v_i+v_{-i})=0,\,\forall i.$$ Then by Equation (2) again, $$\exists k\ni c_k\ne c_{-k} \implies v_k+v_{-k}=0 \implies \big(v_i+v_{-i}=0,\forall i \bigwedge \sum_{j=1}^n (c_{-j}-c_j)v_j=0\big),$$ since the common value of the left-hand side of $(2)$ is then $0$ and each $c_i\ne0$. In the remaining case, $$c_i=c_{-i}\ne0\,\forall i \implies \sum_{j=1}^n (c_{-j}v_j+c_jv_{-j})=\sum_{j=1}^n c_j(v_j+v_{-j}).$$ But by Equation $(2)$, the terms $c_j(v_j+v_{-j})$ are all equal to the common value $c_i(v_i+v_{-i})$, so Equation $(2)$ becomes $$c_i(v_i+v_{-i}) = nc_i(v_i+v_{-i}),\,\forall i\Longleftrightarrow (n-1)\,c_i(v_i+v_{-i})=0,\,\forall i.$$ Since $n>1$ and $c_i\ne0$, again $v_i+v_{-i}=0,\,\forall i$.
In conclusion, $v_i+v_{-i}=0,\,\forall i$ so long as $n>1$ and $c_i\ne0,\,\forall i$. Then by Equation $(2)$, $$\sum_{j=1}^n (c_{-j}-c_j)v_j=0,$$ which establishes necessity and, in particular, Conjecture 2.
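The dimension count can also be cross-checked numerically: when $c_j\ne c_{-j}$ for some $j$, the nullity of $\mathbf J(\mathbf 1)-K\mathbf I$ should be exactly $n-1$. A numpy sketch (the random $c_i$ are my own choice and are generically asymmetric):

```python
# Nullity of J(1) - K*I should equal n - 1 for generic c (some c_j != c_{-j}).
import numpy as np

n = 5
rng = np.random.default_rng(3)
idx = [i for i in range(-n, n + 1) if i != 0]          # -n,...,-1,1,...,n
c = {i: rng.uniform(0.1, 1.0) for i in idx}
K = sum(c.values())
J1 = np.array([[K * (i == j) - c[i] * ((i == j) + (i == -j)) + c[-j]
                for j in idx] for i in idx])

rank = np.linalg.matrix_rank(J1 - K * np.eye(2 * n))
print(2 * n - rank)                                    # expect n - 1 = 4
```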
For $n=1$ and $c_1=c_{-1}$, $\mathbf J(\mathbf 1)=K\mathbf I$: the only eigenvalue is $K$ and every $2$-dimensional vector is an eigenvector, so the assumption $n\ge2$ cannot be dropped.