Consider a simple random walk $S_n$ on $\Bbb Z^d$. We write $$ g(x,y) = \Bbb E^x \left[ \sum_{n=0}^\infty \mathbf{1}_{\{S_n=y\}}\right]$$ for the expected number of visits to $y$ starting from $x$. This function satisfies the following property: $$ g(x,y) = \delta_x(y) + \sum_{z\in\Bbb Z^d} p(x,z)g(z,y) = \delta_x(y) + \sum_{z\sim x} p(x,z)g(z,y).$$ Intuitively, this property takes out the first step of the random walk: if $x=y$, we already have one visit at time $0$; the sum then counts the visits from the first step onwards, averaging over the possible first steps. Rewriting this equation a bit yields $$ \sum_{z\sim x} p(x,z) (g(z,y) - g(x,y)) = \Delta g(x,y) = -\delta_x(y), $$ where $\Delta$ is the graph Laplacian.
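On $\Bbb Z^d$ itself the Green's function is finite only in the transient case $d \ge 3$ and has no simple closed form, but the first-step identity can be sanity-checked on a finite analogue. Here is a sketch with a toy model of my own choosing (not from the question): a walk on $\{1,\dots,N-1\}$ killed at $0$ and $N$, for which $g = (I-Q)^{-1} = \sum_n Q^n$ is finite.

```python
import numpy as np

# Finite analogue (my assumption: SRW on {1,...,N-1} killed at 0 and N,
# standing in for a transient walk so that g is finite).
N = 10
n = N - 1
Q = np.zeros((n, n))                 # transitions among interior states
for i, s in enumerate(range(1, N)):
    for t in (s - 1, s + 1):
        if 1 <= t <= N - 1:
            Q[i, t - 1] += 0.5
g = np.linalg.inv(np.eye(n) - Q)     # g[x, y] = E_x[# visits to y]

# First-step identity: g(x, y) = delta_x(y) + sum_z p(x, z) g(z, y).
print(np.allclose(g, np.eye(n) + Q @ g))   # True
```

This is just the matrix form of "take out the first step": $g = \sum_n Q^n = I + Q \sum_n Q^n = I + Qg$.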
Consider now a simple random walk $S_n$ on the discrete torus $(\Bbb Z/N\Bbb Z)^d$, and write $g^z(x,y)$ for the expected number of visits to $y$ starting from $x$ before hitting $z$. Now, a paper I'm reading states (very matter-of-factly) that in this case we have $$ \Delta \frac{g^z(x,y)}{2d} = \delta_z - \delta_x. $$
I'm not really seeing how this is true. If we rewrite this (reading $\Delta$ here as the unnormalized Laplacian $\Delta f(y) = \sum_{w \sim y} (f(w) - f(y))$, acting in the $y$-variable), we get:
$$ g^z(x,y) = \delta_x(y) - \delta_z(y) + \sum_{w\sim y} \frac{g^z(x,w)}{2d}. $$
The first $\delta$-function is intuitively in the right place, as before. The second $\delta$-function only plays a role when $z = y$. In that case $g^z(x,y) = g^y(x,y) = 0$, and the formula above should reflect this, meaning (assuming $x\neq y$) that $$ 1 = \sum_{w\sim y} \frac{g^y(x,w)}{2d}. $$
And I don't see why this has to be true. I calculated it explicitly for a very simple example and it worked, but I honestly do not see how to generalize it. Can someone help me? Maybe I'm stuck in the wrong thought pattern. Thanks.
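For what it's worth, the identity in question can be checked numerically by linear algebra on a small torus (the sizes $N = 5$, $d = 2$ below are arbitrary choices of mine): kill the walk at $y$, so that $(I - Q)^{-1}$ collects the expected number of visits before hitting $y$.

```python
import numpy as np
from itertools import product

# Sanity check on a small torus (Z/NZ)^d (assumed sizes N = 5, d = 2).
N, d = 5, 2
states = list(product(range(N), repeat=d))
idx = {s: i for i, s in enumerate(states)}
n = len(states)

def neighbors(s):
    for k in range(d):
        for e in (1, -1):
            t = list(s)
            t[k] = (t[k] + e) % N
            yield tuple(t)

# Transition matrix of the simple random walk (each neighbor w.p. 1/2d).
P = np.zeros((n, n))
for s in states:
    for t in neighbors(s):
        P[idx[s], idx[t]] += 1 / (2 * d)

y = (0, 0)                           # taboo point
keep = [i for i in range(n) if i != idx[y]]
Q = P[np.ix_(keep, keep)]            # walk killed at y
G = np.linalg.inv(np.eye(n - 1) - Q) # G[x, w] = E_x[# visits to w before hitting y]
pos = {i: j for j, i in enumerate(keep)}

x = (2, 1)                           # an arbitrary starting point != y
total = sum(G[pos[idx[x]], pos[idx[w]]] for w in neighbors(y))
print(total / (2 * d))               # ≈ 1.0, as the identity predicts
```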
Let $(M_n)$ be an ergodic, recurrent Markov chain on a discrete state space $\Sigma$, with stationary measure $\mu$. Let $A \subset \Sigma$ be finite. Then, for all $B \subset \Sigma$,
$$\mathbb{E}_A \left( \# \{ \text{times } (M_n) \text{ hits } B \text{ before going back to } A \} \right) = \frac{\mu(B)}{\mu(A)},$$
where the expectation is taken with respect to $\mu_{|A} / \mu(A)$. This is a consequence of the stationarity of the measure $\mu$. Note that this also holds for $B \subset A$, as then $M_0 \in B$ with probability $\mu(B) / \mu(A)$.
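A quick numerical sketch of this identity, for a generic (randomly generated) irreducible chain and an arbitrary choice of disjoint sets $A$ and $B$ on my part: compute $\mu$ as the left Perron eigenvector, and count the expected hits of $B$ during an excursion from $A$ via the fundamental matrix $(I - Q)^{-1}$ on $\Sigma \setminus A$.

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic irreducible, aperiodic chain on 6 states (random stochastic matrix).
n = 6
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, V = np.linalg.eig(P.T)
mu = np.real(V[:, np.argmax(np.real(w))])
mu /= mu.sum()

A, B = [0, 1], [3, 4]                # disjoint subsets (my assumption here)
comp = [i for i in range(n) if i not in A]
Q = P[np.ix_(comp, comp)]
G = np.linalg.inv(np.eye(len(comp)) - Q)   # visits before hitting A
pos = {s: j for j, s in enumerate(comp)}

# E_a[# hits of B before returning to A], decomposed over the first step out of A.
def hits(a):
    return sum(P[a, s] * G[pos[s], pos[b]] for s in comp for b in B)

lhs = sum(mu[a] / sum(mu[A]) * hits(a) for a in A)
rhs = sum(mu[b] for b in B) / sum(mu[A])
print(lhs, rhs)                      # the two agree
```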
For reference: see this pdf (although $\mu$ doesn't need to be a probability measure).
Now, let's consider your problem: $\mu$ is the uniform measure on $\Sigma = (\mathbb{Z}/N\mathbb{Z})^d$. The above can be reframed as:
$$g^A (A,B) = \frac{\mu (B)}{\mu(A)}.$$
Taking $A := \{y\}$ and $B := \{w: w \sim y\}$, we get:
$$g^y (y,B) = 2d.$$
But, starting from $y$, the first step is to $B$, and it hits $B$ uniformly, so:
$$g^y (y,B) = \frac{1}{2d} \sum_{w \sim y} g^y (w, B) = g^y (B, B) = 2d.$$
By symmetry, for all $w \in B$, we have $g^y (B, B) = g^y (w, B) = 2d$.
Finally, starting from $x \neq y$, let $W$ be the point where the random walk first hits $B$. Then:
$$g^y (x,B) = \sum_{w \sim y} g^y(x,w) = \sum_{w \sim y} \mathbb{P}_x (W = w) g^y (w,B) = 2d.$$
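Each of these equalities can be verified numerically on a small torus (again with arbitrary sizes $N = 5$, $d = 2$ of my choosing, so $2d = 4$), computing $g^y(\cdot, B)$ from the walk killed at $y$:

```python
import numpy as np
from itertools import product

# Check the chain of equalities above on (Z/NZ)^d (assumed N = 5, d = 2).
N, d = 5, 2
states = list(product(range(N), repeat=d))
idx = {s: i for i, s in enumerate(states)}
n = len(states)

def neighbors(s):
    for k in range(d):
        for e in (1, -1):
            t = list(s)
            t[k] = (t[k] + e) % N
            yield tuple(t)

P = np.zeros((n, n))
for s in states:
    for t in neighbors(s):
        P[idx[s], idx[t]] += 1 / (2 * d)

y = (0, 0)
B = list(neighbors(y))
keep = [i for i in range(n) if i != idx[y]]
pos = {i: j for j, i in enumerate(keep)}
Q = P[np.ix_(keep, keep)]
G = np.linalg.inv(np.eye(n - 1) - Q)     # visits before hitting y

def gB(x):                               # g^y(x, B) for x != y
    return sum(G[pos[idx[x]], pos[idx[b]]] for b in B)

print([round(gB(w), 6) for w in B])      # each neighbor w: 4.0
print(round(gB((2, 2)), 6))              # generic x != y: 4.0
# g^y(y, B): one forced step into B, then average over w ~ y.
print(round(sum(P[idx[y], idx[w]] * gB(w) for w in B), 6))   # 4.0
```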