Consider the scalar system \begin{cases} x(t+1)=ax(t)+b\eta(t)\\ y(t)=cx(t)+d\xi(t) \end{cases} where $\eta(t)\sim\text{WN}(0,1)$ and $\xi(t)\sim\text{WN}(0,1)$ are uncorrelated white noises.
Why, if $d=0$ and $c\neq0$, is the variance of the Kalman filtering error zero?
After the prediction step the variance of the estimate of $x(t)$ is non-zero, assuming $b\neq0$. However, during the correction step the measurement $y(t)$ allows one to reconstruct $x(t)$ exactly, since when $d=0$ it holds that $x(t)=y(t)/c$. The Kalman filter exploits this information fully, which drives the variance of the estimate of $x(t)$ to zero in the correction step (only for it to grow again in the next prediction step, etc.).
In order to show this one can use the standard Kalman filter calculations. Namely, the prediction step is given by
\begin{align} \hat{x}_{k|k-1} &= F_k\,\hat{x}_{k-1|k-1} + B_k\,u_k, \\ P_{k|k-1} &= F_k\,P_{k-1|k-1}\,F_k^\top + Q_k, \end{align}
with, in your case, $F_k = a$, $B_k=0$ (there are no deterministic inputs in your system) and $Q_k = b^2$. Substituting these values into the prediction equations (in the scalar case all matrices reduce to scalars and therefore commute) yields
\begin{align} \hat{x}_{k|k-1} &= a\,\hat{x}_{k-1|k-1}, \\ P_{k|k-1} &= a^2\,P_{k-1|k-1} + b^2. \end{align}
So even if $P_{k-1|k-1} = 0$, the prediction step makes $P_{k|k-1}$ at least equal to $b^2$.
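The prediction step can be sketched numerically; the values of $a$ and $b$ below are hypothetical, chosen only for illustration:

```python
# Hypothetical scalar system parameters (not from the question).
a, b = 0.9, 0.5

def predict(x_post, P_post):
    """Scalar Kalman prediction: x_{k|k-1} = a*x_{k-1|k-1}, P_{k|k-1} = a^2*P_{k-1|k-1} + b^2."""
    return a * x_post, a**2 * P_post + b**2

# Even starting from a perfect estimate (P_{k-1|k-1} = 0), the process
# noise re-inflates the prediction variance to b^2.
x_pred, P_pred = predict(1.0, 0.0)
```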
Now, the correction step is given by
\begin{align} \tilde{y}_k &= y_k - H_k\,\hat{x}_{k|k-1}, \\ S_k &= H_k\,P_{k|k-1}\,H_k^\top + R_k, \\ K_k &= P_{k|k-1}\,H_k^\top S_k^{-1}, \\ \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\,\tilde{y}_k, \\ P_{k|k} &= (I - K_k\,H_k)\,P_{k|k-1}, \end{align}
with, in your case, $H_k = c$ and $R_k = d^2 = 0$. Substituting these values into the correction equations (again, the scalars commute) yields
\begin{align} \tilde{y}_k &= y_k - c\,\hat{x}_{k|k-1}, \\ S_k &= c^2\,P_{k|k-1}, \\ K_k &= P_{k|k-1}\,\frac{c}{c^2\,P_{k|k-1}} = \frac{1}{c}, \\ \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k\,\tilde{y}_k = \hat{x}_{k|k-1} + \frac{1}{c}\,(y_k - c\,\hat{x}_{k|k-1}) = \frac{y_k}{c}, \\ P_{k|k} &= (1 - K_k\,c)\,P_{k|k-1} = \left(1 - \frac{1}{c}\,c\right) P_{k|k-1} = 0. \end{align}
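The correction step above can be checked with a short sketch; the values of $b$, $c$, the prediction, and the measurement are hypothetical:

```python
# Hypothetical parameters; d = 0 as in the question, so R = d^2 = 0.
b, c = 0.5, 2.0

def correct(x_pred, P_pred, y):
    """Scalar Kalman correction with R = 0."""
    S = c**2 * P_pred           # innovation variance
    K = P_pred * c / S          # Kalman gain, equals 1/c whenever P_pred > 0
    x_post = x_pred + K * (y - c * x_pred)
    P_post = (1.0 - K * c) * P_pred
    return x_post, P_post

# Regardless of the prediction, the posterior estimate is y/c and the
# posterior variance collapses to exactly zero.
x_post, P_post = correct(x_pred=0.3, P_pred=b**2, y=1.4)
```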
Note that $P_{k|k} = 0$ is also attainable in the non-scalar case, namely when $R_k = 0$ and $H_k$ is a square, full-rank $n \times n$ matrix (with $n$ the number of states in $x_k$): then $K_k = H_k^{-1}$, so $I - K_k H_k = 0$.
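The non-scalar claim can be checked numerically with an arbitrary invertible $H_k$ and positive-definite prior covariance; the matrices below are hypothetical examples:

```python
import numpy as np

# Hypothetical 2-state example: H square and full rank, R = 0.
H = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # invertible measurement matrix
P_pred = np.array([[1.0, 0.2],
                   [0.2, 0.5]])   # any positive-definite prior covariance
R = np.zeros((2, 2))              # noise-free measurement

S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)     # reduces to H^{-1} when R = 0
P_post = (np.eye(2) - K @ H) @ P_pred   # collapses to the zero matrix
```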