I am trying to understand an example in the book "Introduction to Probability" by Bertsekas and Tsitsiklis.
The example is as follows:
The speed of a typical vehicle that drives past a police radar is modeled as an exponentially distributed random variable $X$ with mean $50$ mph. The police radar's measurement $Y$ of the vehicle's speed has an error which is modeled as a normal random variable with zero mean and standard deviation equal to one tenth of the vehicle's speed. What is the joint pdf of $X$ and $Y$?
The joint pdf is found from the marginal density $f_X$ and the conditional density $f_{Y\vert X}(y\vert x)$, since $f_{X,Y}(x,y)=f_X(x)\,f_{Y\vert X}(y\vert x)$. My question pertains to how we find $f_{Y\vert X}(y\vert x)$.
My thought would be the following:
We would have $f_X(x) = \frac{1}{50}e^{-x/50}$ for $x \geq 0$, and $f_X(x)=0$ otherwise. Then, conditioning $Y$ on $X=x$, we would have
$$f_{Y\vert X}(y\vert x)=\frac{1}{\sqrt{2\pi}(x/10)}\exp(-y^2/(2x^2/100)).$$
However, the book says that conditioned on $X=x$ the mean of $Y$ should be $x$. But if the mean of $Y$ is given as $0$ how does it depend on $X$? I understand how the variance depends on $X$.
The conditional formula given by the authors is below, and it is explicitly stated that the mean of $Y$ (given $X=x$) is $x$.
$$f_{Y\vert X}(y\vert x)=\frac{1}{\sqrt{2\pi}(x/10)}\exp(-(y-x)^2/(2x^2/100)).$$
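As a sanity check on the book's formula, here is a short numerical sketch (in Python, with $x = 50$ chosen arbitrarily for illustration) showing that this conditional density integrates to $1$ and has mean $x$, i.e. it is the density of a $\mathcal{N}(x, (x/10)^2)$ random variable:

```python
import math

def f_y_given_x(y, x):
    """Book's conditional pdf: Y | X = x ~ Normal(mean=x, std=x/10)."""
    sigma = x / 10.0
    return math.exp(-(y - x) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Riemann-sum check over x +/- 10 standard deviations at x = 50:
# the density should integrate to (approximately) 1 and have mean x.
x = 50.0
sigma = x / 10.0
step = 0.01
n = int(20 * sigma / step)
ys = [x - 10 * sigma + i * step for i in range(n + 1)]
total = sum(f_y_given_x(y, x) * step for y in ys)       # ~ 1.0
mean = sum(y * f_y_given_x(y, x) * step for y in ys)    # ~ 50.0
```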
The error (the measurement $Y$ minus the true speed $x$) is modeled as a zero-mean normal random variable. In other words, conditioned on $X=x$ we have $Y = x + \text{error}$, so it is the error that has mean $0$, while $Y$ itself has conditional mean $x$.
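This can be seen with a quick simulation sketch (Python, with an arbitrarily fixed $x = 50$): drawing many measurements $Y = x + \text{error}$, where the error is $\mathcal{N}(0, x/10)$, the sample mean of $Y$ lands near $x$, not near $0$.

```python
import random

random.seed(0)

# Model: conditioned on X = x, the measurement is Y = x + error,
# with error ~ Normal(0, x/10). Only the *error* has mean 0;
# Y itself is centered at the true speed x.
x = 50.0  # arbitrary fixed speed for illustration
samples = [x + random.gauss(0.0, x / 10.0) for _ in range(100_000)]
sample_mean = sum(samples) / len(samples)  # close to 50, not 0
```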