Consider random variables $n_1$ and $n_2$ that are independent, zero-mean, and Gaussian, with $n_2$ having variance $\sigma^2$.
Let $r_1= s+n_1$ and $r_2=s+n_1+n_2$ where $s$ is a constant.
Then,
$$f(r_2|r_1,s) = f(r_2=s+n_1+n_2|\ r_1=s+n_1,s)$$
Since we are given the observation $r_1 = s+n_1$ and the value of $s$, we can just plug in these "parameters" and get
$$f(r_2|r_1,s) = f(r_2=r_1+n_2)=N(r_1,\sigma^2),$$
because, given $r_1$, the only remaining randomness in $r_2 = r_1 + n_2$ is $n_2$.
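This is easy to check numerically. Below is a minimal Monte Carlo sketch (the values of $s$, the variances, and the conditioning point are all arbitrary choices for illustration): draw many samples, keep those whose $r_1$ lands in a thin slab around a target value, and compare the conditional mean and standard deviation of $r_2$ against $N(r_1, \sigma^2)$.

```python
import numpy as np

# Monte Carlo sketch of the independent case: check that r2 | r1
# is approximately N(r1, sigma2^2). All numeric values are arbitrary.
rng = np.random.default_rng(0)
s, sigma1, sigma2 = 2.0, 1.0, 0.5
N = 2_000_000

n1 = rng.normal(0.0, sigma1, N)
n2 = rng.normal(0.0, sigma2, N)  # independent of n1
r1 = s + n1
r2 = s + n1 + n2

# Approximate conditioning on r1 by keeping samples in a thin slab.
target = 2.3
mask = np.abs(r1 - target) < 0.01
cond_mean = r2[mask].mean()
cond_std = r2[mask].std()

print(cond_mean, target)  # conditional mean should be close to r1
print(cond_std, sigma2)   # conditional std should be close to sigma2
```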
Now, if $n_1$ and $n_2$ are dependent, how does this change the problem? We are still given an observation of $r_1$, so why can't we just plug it in and get the same result?
In the dependent case, the observation of $r_1$ can actually tell you something about $n_2$, so you cannot just plug in the observation like that. ("Plugging in" is only valid when $n_1$ and $n_2$ are independent.)
The most extreme counterexample is complete dependence, $n_1 = n_2$. Then $r_1 = s + n_1$ and $r_2 = s + 2 n_1$, so knowing $r_1$ and $s$ forces $r_2 = 2r_1 - s$: the conditional distribution is a point mass, not a Gaussian with variance $\sigma^2$.
In the general case, suppose $(n_1, n_2)$ is bivariate normal with standard Gaussian marginals and correlation $a$. Then $n_2$ can be written as $n_2 = a n_1 + b z$, where $b = \sqrt{1-a^2}$ and $z$ is standard normal and independent of $n_1$. Substituting into the definitions gives $r_2 = s + (1+a) n_1 + b z$ and $r_1 = s + n_1$, so $r_2 = (1+a) r_1 - a s + b z$. Since $z$ and $r_1$ are independent, you can now plug in the observation of $r_1$ and look at the remaining randomness in $z$ to obtain the conditional distribution: $f(r_2 \mid r_1, s) = N\big((1+a) r_1 - a s,\ b^2\big)$. Setting $a = 1$ (so $b = 0$) recovers the complete-dependence case above.
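The same slab-conditioning check works for the dependent case. This sketch uses the setup from the question ($r_1 = s + n_1$, $r_2 = s + n_1 + n_2$) with $n_2 = a n_1 + b z$; the values of $s$, $a$, and the conditioning point are arbitrary choices. Plugging $n_1 = r_1 - s$ into $r_2$ gives $r_2 = (1+a) r_1 - a s + b z$, which the simulation compares against.

```python
import numpy as np

# Monte Carlo sketch of the dependent case, using the setup above:
# r1 = s + n1, r2 = s + n1 + n2, with n2 = a*n1 + b*z.
# The values of s, a, and the conditioning point are arbitrary.
rng = np.random.default_rng(1)
s, a = 2.0, 0.6
b = np.sqrt(1 - a**2)  # so n2 is also standard normal
N = 2_000_000

n1 = rng.normal(0.0, 1.0, N)
z = rng.normal(0.0, 1.0, N)  # independent of n1
n2 = a * n1 + b * z

r1 = s + n1
r2 = s + n1 + n2

# Approximate conditioning on r1 by keeping samples in a thin slab.
target = 2.5
mask = np.abs(r1 - target) < 0.01
cond_mean = r2[mask].mean()
cond_std = r2[mask].std()

# Plugging n1 = r1 - s into r2 gives r2 = (1+a)*r1 - a*s + b*z,
# so the conditional distribution should be N((1+a)*r1 - a*s, b^2).
predicted_mean = (1 + a) * target - a * s
print(cond_mean, predicted_mean)
print(cond_std, b)
```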