Suppose I have a Markov chain as depicted in the following figure:

where $N$ is even. States $0$ and $N$ are the two sinks of the chain. The transition probabilities have the following symmetry: $g_{N/2}=r_{N/2}$ and $g_{N/2+\delta}=r_{N/2-\delta}$, where $\delta$ is an integer ranging from $1-N/2$ to $N/2-1$. For a chain starting at state $n$, let $\pi_{N,n}$ denote the probability that it eventually reaches state $N$, and $\pi_{0,n}$ the probability that it eventually reaches state $0$; let $\tau_{N,n}$ denote the conditional mean first passage time given that it eventually reaches $N$, and $\tau_{0,n}$ the conditional mean first passage time given that it eventually reaches state $0$.
I'm interested in the quantity $\tau_{n}\equiv\pi_{0,n}\tau_{0,n}+\pi_{N,n}\tau_{N,n}$, which is the mean exit time of the chain, that is, the average time for the chain to reach either one of the sinks. My conjecture is that $\tau_{N/2}>\tau_{N/2+1}>\tau_{N/2+2}>\cdots>\tau_{N-1}$. Question: Do you think my conjecture is true? If yes, can you provide a proof? Or can you provide a counter-example to prove my conjecture wrong?
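For what it's worth, here is a quick numerical sanity check on a random instance (the instance, seed, and variable names below are my own, not part of the problem statement). Since $\pi_{0,n}+\pi_{N,n}=1$, the quantity $\tau_n$ is just the unconditional mean absorption time, which solves the standard linear system $(g_n+r_n)\tau_n=1+g_n\tau_{n+1}+r_n\tau_{n-1}$ with $\tau_0=\tau_N=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10  # even; states 0..N, with 0 and N absorbing

# Random up-probabilities g_1..g_{N-1}, then impose the symmetry r_n = g_{N-n}
g = np.zeros(N + 1)
g[1:N] = rng.uniform(0.05, 0.45, N - 1)
r = np.zeros(N + 1)
r[1:N] = g[N - 1:0:-1]  # r_n = g_{N-n}

# Mean exit time: (g_n + r_n) tau_n = 1 + g_n tau_{n+1} + r_n tau_{n-1},
# with tau_0 = tau_N = 0 (one linear equation per interior state)
A = (np.diag(g[1:N] + r[1:N])
     - np.diag(g[1:N - 1], 1)
     - np.diag(r[2:N], -1))
tau = np.zeros(N + 1)
tau[1:N] = np.linalg.solve(A, np.ones(N - 1))

half = N // 2
print(np.all(np.diff(tau[half:N]) < 0))  # conjectured strict decrease
```

On every random instance I tried, the sequence $\tau_{N/2},\tau_{N/2+1},\ldots,\tau_{N-1}$ is strictly decreasing (and $\tau_n=\tau_{N-n}$, as the symmetry suggests).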
A direct proof is to note that, if $(X_k)$ denotes the Markov chain on $\{0,1,\ldots,N\}$, then the process $(Y_k)$ defined by $$Y_k=u(X_k),\qquad u(x)=\max\{x,N-x\},$$ is a Markov chain on $\{N/2,N/2+1,\ldots,N\}$ with transition probabilities $2g_{N/2}=2r_{N/2}$ for $N/2\to N/2+1$, $g_n=r_{N-n}$ for $n\to n+1$ and $r_n=g_{N-n}$ for $n\to n-1$ for every $N/2\lt n\lt N$, and $1$ for $N\to N$.
In general, a function of a Markov process is not Markov, but in your case the symmetry $g_{n}=r_{N-n}$ for every $n$ is exactly what this lumping operation needs to preserve the Markov property.
Now, each $\tau_n$ with $N/2\leqslant n\leqslant N$ is also the mean hitting time of $N$ by the Markov chain $(Y_k)$ starting from $n$, and the monotonicity $\tau_n\gt\tau_{n+1}$ follows from the tree structure of the linear graph on $\{N/2,N/2+1,\ldots,N\}$, since $\tau_n=\theta_{n}+\tau_{n+1}$, where $\theta_n\geqslant1$ is the mean hitting time of $n+1$ by $(Y_k)$ starting from $n$.
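To make the lumping argument concrete, here is a sketch (my own code, on a random symmetric instance of my own) that builds the chain $(Y_k)$ with the transition probabilities above and checks that its mean hitting times of $N$ coincide with the mean exit times $\tau_n$ of the original chain:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8  # even; a random symmetric instance
g = np.zeros(N + 1)
g[1:N] = rng.uniform(0.05, 0.45, N - 1)
r = np.zeros(N + 1)
r[1:N] = g[N - 1:0:-1]  # symmetry r_n = g_{N-n}

# Mean exit times tau_n of the original chain X (interior states 1..N-1)
A = (np.diag(g[1:N] + r[1:N])
     - np.diag(g[1:N - 1], 1)
     - np.diag(r[2:N], -1))
tau = np.zeros(N + 1)
tau[1:N] = np.linalg.solve(A, np.ones(N - 1))

# Lumped chain Y on {N/2,...,N}: up-probability 2*g[N/2] at N/2,
# otherwise g_n up and r_n down; N is absorbing
half = N // 2
states = list(range(half, N))  # unknowns h_{N/2},...,h_{N-1}; h_N = 0
B = np.zeros((len(states), len(states)))
for i, n in enumerate(states):
    up = 2 * g[half] if n == half else g[n]
    down = 0.0 if n == half else r[n]
    B[i, i] = up + down
    if n + 1 < N:
        B[i, i + 1] = -up
    if i > 0:
        B[i, i - 1] = -down
h = np.linalg.solve(B, np.ones(len(states)))

print(np.allclose(h, tau[half:N]))  # hitting times of N by Y equal tau_n
```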
Edit: In general, if $(X_k)$ is a Markov chain with transition probabilities $q$ and state space $\mathfrak X$, the lumped process $Y=u(X)$ is a Markov chain on the state space $\mathfrak Y=u(\mathfrak X)$ when the following condition holds: for every $y'$ in $\mathfrak Y$, the sum $\sum_{z:u(z)=y'}q(x,z)$ depends on $x$ only through $u(x)$.
Then the transition probabilities $r$ of $Y$ are such that, for every $y\ne y'$ in $\mathfrak Y$, $$r(y,y')=\sum_{z:u(z)=y'}q(x,z),$$ for any $x$ in $\mathfrak X$ such that $u(x)=y$ (since the sum on the RHS does not depend on the choice of such a state $x$).
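As a sketch of this general statement, the hypothetical helper `lump` below (all names and the example chain are mine) verifies the lumpability condition for a transition matrix `Q` and map `u`, and returns the lumped transition matrix when the condition holds:

```python
import numpy as np

def lump(Q, u):
    """Return the lumped transition matrix of Q under the map u (given as a
    list of labels), or raise ValueError if the lumpability condition fails:
    for each block y', the sums sum_{z: u(z)=y'} Q[x, z] must agree for
    every x in the same block."""
    labels = sorted(set(u))
    idx = {y: [i for i, v in enumerate(u) if v == y] for y in labels}
    R = np.zeros((len(labels), len(labels)))
    for a, y in enumerate(labels):
        for b, yp in enumerate(labels):
            sums = Q[np.ix_(idx[y], idx[yp])].sum(axis=1)
            if not np.allclose(sums, sums[0]):
                raise ValueError(f"not lumpable: block {y} -> {yp}")
            R[a, b] = sums[0]
    return R

# Example: symmetric birth-death chain on {0,...,4}, u(x) = max(x, 4-x)
N = 4
g = np.array([0.0, 0.3, 0.2, 0.4, 0.0])   # hypothetical up-probabilities
r = g[::-1]                                # symmetry r_n = g_{N-n}
Q = np.zeros((N + 1, N + 1))
Q[0, 0] = Q[N, N] = 1.0
for n in range(1, N):
    Q[n, n + 1], Q[n, n - 1] = g[n], r[n]
    Q[n, n] = 1 - g[n] - r[n]
R = lump(Q, [max(x, N - x) for x in range(N + 1)])
print(R)
```

In this example the lumped chain lives on $\{2,3,4\}$, and the entry for $2\to3$ comes out as $g_2+r_2=2g_{N/2}$, matching the doubled probability at $N/2$ above.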