Let $T$ and $\epsilon$ be two independent random variables, and let $R=T+\epsilon$. Given this information, we can derive the posterior density $f(T=t|R=r)$. My question: I haven't figured out how to write down mathematically the posterior distribution that I am actually interested in, so I will describe it from the perspective of a simulation.
- Generate many realizations of $T$, i.e., $\mathcal{T}=\{t_i\}_{i=1}^N$ with $N$ being very large
- For each $t_i$, generate one $\epsilon_i$. So we have $\mathcal{E}=\{\epsilon_i\}_{i=1}^N$
- Construct $r_i=t_i+\epsilon_i$. So we have $\mathcal{R}=\{r_i\}_{i=1}^N$
- Remove all $i$'s such that $r_i<\bar{r}$, where $\bar{r}$ is some pre-specified threshold. Hence, we have the truncated sets $\mathcal{T}',~\mathcal{E}',~\mathcal{R}'$.
- Then, when we observe $R=r'_i\in\mathcal{R}'$, what is the density that this $r'_i$ was generated from $t'_i$? That is, something like $f(T'=t'_i|R'=r'_i)$.
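The simulation steps above can be sketched in code. A minimal example, assuming for concreteness that $T$ and $\epsilon$ are standard normal (the actual distributions are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed for illustration only: T ~ N(0, 1) and eps ~ N(0, 1).
N = 1_000_000        # number of realizations (large)
r_bar = 0.5          # pre-specified threshold

t = rng.normal(size=N)        # realizations of T
eps = rng.normal(size=N)      # one eps_i per t_i, independent of T
r = t + eps                   # r_i = t_i + eps_i

# Remove all i with r_i < r_bar, giving the truncated sets T', E', R'.
keep = r >= r_bar
t_trunc, eps_trunc, r_trunc = t[keep], eps[keep], r[keep]

print(f"kept {keep.sum()} of {N} draws")
```

The question is then: given an $r'_i$ from `r_trunc`, what is the density of the $t'_i$ that produced it?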
I think the simulation process above is clear, but I don't know how to express this last posterior distribution mathematically, nor how to derive it.
I believe the distribution I have described is $f(T=t|R=r,R\geq \bar{r})$. Is that correct? And how can this distribution be derived?
I attempted the following derivation:
\begin{align*}
f(T=t|R=r,R\geq \bar{r})
&=\frac{f(T=t,R=r,R\geq \bar{r})}{f(R=r,R\geq \bar{r})}\\
&=\frac{f(R=r,R\geq \bar{r}|T=t)f(T=t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\
&=\frac{f(\epsilon=r-t,\epsilon\geq \bar{r}-t|T=t)f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\
&=\frac{f(\epsilon=r-t,\epsilon\geq \bar{r}-t)f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}},~\because~\epsilon\perp T\\
&=\frac{f_\epsilon(r-t)1_{\{r\geq\bar{r}\}}f_T(t)}{f_R(r)1_{\{r\geq\bar{r}\}}}\\
&=\frac{f_\epsilon(r-t)f_T(t)}{f_R(r)}1_{\{r\geq \bar{r}\}}
\end{align*}
Is this derivation correct?
I am suspicious of my derivation, because without truncation we have
\begin{equation}
f(T=t|R=r)=\frac{f_\epsilon(r-t)f_T(t)}{f_R(r)}.
\end{equation}
I expected truncation to matter a great deal, but according to my derivation it merely attaches an indicator function to the original "untruncated" posterior. Hence, I suspect my derivation is wrong.
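One way to probe this doubt empirically is to compare, within the truncated sample, the conditional distribution of $t$ given $r$ in a narrow window against what the candidate formula predicts. A minimal sketch, again assuming for illustration that $T$ and $\epsilon$ are standard normal, so that $R\sim N(0,2)$ and the untruncated posterior $T|R=r$ is $N(r/2,\,1/2)$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed for illustration only: T ~ N(0, 1), eps ~ N(0, 1), hence R ~ N(0, 2)
# and the *untruncated* posterior T | R=r is N(r/2, 1/2).
N = 2_000_000
r_bar = 0.5

t = rng.normal(size=N)
eps = rng.normal(size=N)
r = t + eps

keep = r >= r_bar                    # truncation step
t_trunc, r_trunc = t[keep], r[keep]

# Condition on r landing in a narrow window around some r0 >= r_bar and
# compare the empirical moments of t with the untruncated posterior N(r0/2, 1/2).
r0 = 1.5
window = np.abs(r_trunc - r0) < 0.05
print(t_trunc[window].mean())   # untruncated posterior mean would be r0/2 = 0.75
print(t_trunc[window].var())    # untruncated posterior variance would be 1/2
```

If the derivation above is right, the empirical moments under truncation should agree with the untruncated posterior for any $r_0\geq\bar{r}$.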