I want to find the probability density function (pdf) of a statistic $T:=t(X_1,\dots, X_n)$ given the samples $(X_1,\dots, X_n)=: \mathbf{X}$, where $t(\cdot)$ is a function such as the sample mean $t(x_1,\dots, x_n) := \frac{1}{n}\sum_{i=1}^n x_i$.
In other words, I want to find the pdf $p_{T \mid \mathbf{X}}$.
When $\mathbf{X} = (X_1,\dots, X_n)$ are discrete random variables, I think finding the counterpart in terms of the probability mass function (pmf) is easy. The answer is as follows: $$ p_{T\mid \mathbf{X}}(t\mid x_1,\dots, x_n) = \begin{cases} 1, & \text{if } t = t(x_1,\dots, x_n), \\ 0, & \text{otherwise}. \end{cases} $$
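As a quick sanity check of this degenerate pmf (a sketch of my own; the function names `t` and `pmf_T_given_X` are just illustrative):

```python
def t(xs):
    # the statistic: sample mean
    return sum(xs) / len(xs)

def pmf_T_given_X(value, xs):
    # degenerate conditional pmf: all mass sits at t(xs)
    return 1.0 if value == t(xs) else 0.0

x = (1, 2, 3)                  # observed samples, so t(x) = 2.0
print(pmf_T_given_X(2.0, x))   # mass 1 at the sample mean
print(pmf_T_given_X(2.5, x))   # mass 0 elsewhere
```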
What about when the samples are continuous?
Motivation:
I want to find the pdf $p_{Y\mid \mathbf{X}}$ given the following Markov chain:
$$ \mathbf{X} \to T\to Y, $$ where $T:= t(\mathbf{X})$.
In other words, I want to represent $p_{Y\mid \mathbf{X}}$ by $p_\mathbf{X}, t(\cdot)$ and $p_{Y\mid T}$.
I have tried the following calculation: \begin{align} p_{Y\mid \mathbf{X}} (y\mid \mathbf{x}) &= \frac{p_{\mathbf{X}, Y}(\mathbf{x}, y)}{p_{\mathbf{X}}(\mathbf{x})}\\ &= \frac{\int p_{\mathbf{X}}(\mathbf{x})p_{T\mid \mathbf{X}}(t\mid \mathbf{x})p_{Y\mid T}(y\mid t)\, dt}{\iint p_{\mathbf{X}}(\mathbf{x})p_{T\mid \mathbf{X}}(t\mid \mathbf{x})p_{Y\mid T}(y\mid t)\, dt\, dy}. \end{align} Then I realized that I need the pdf $p_{T\mid \mathbf{X}}$.
When $Y = T + Z$ and $Z\sim N(0, \sigma^2)$, i.e., $p_{Y\mid T}$ is the pdf of the distribution $N(t, \sigma^2)$, I suppose $p_{Y\mid \mathbf{X}}(y\mid \mathbf{x})$ is equal to $p_{Y\mid T}(y \mid t(\mathbf{x}))$.
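A Monte Carlo sketch supports this guess (the concrete samples `x` and `sigma` below are made up for illustration): conditioning on $\mathbf{X}=\mathbf{x}$ makes $T = t(\mathbf{x})$ deterministic, so $Y = t(\mathbf{x}) + Z \sim N(t(\mathbf{x}), \sigma^2)$, and the empirical mean and variance of simulated $Y$ should match $t(\mathbf{x})$ and $\sigma^2$:

```python
import random
import statistics

random.seed(0)

def t(xs):
    # the statistic: sample mean
    return sum(xs) / len(xs)

x = (1.0, 2.0, 6.0)   # a fixed realization of X, so t(x) = 3.0
sigma = 0.5

# Given X = x, draw Y = t(x) + Z with Z ~ N(0, sigma^2).
ys = [t(x) + random.gauss(0.0, sigma) for _ in range(100_000)]

print(abs(statistics.mean(ys) - t(x)))          # should be close to 0
print(abs(statistics.variance(ys) - sigma**2))  # should be close to 0
```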
I'm not sure whether the OP is familiar with the measure-theoretic definition of conditional expectation and probability, but I'll present it anyway. (This is written very roughly, assuming little background in measure theory.) If $X,Y$ are random variables such that $E[|X|]<\infty$, then the conditional expectation of $X$ given $Y$ is defined as the unique random variable $ E[X|Y] = \Phi(Y)$ (almost surely) for some (Borel-measurable) function $\Phi$ such that $$E[Xf(Y)] = E[\Phi(Y)f(Y)]\tag{*} $$ holds for all bounded (Borel-measurable) functions $f$.
We often write $\Phi(y)$ as $E[X|Y=y]$. If $Y$ is a discrete random variable, then the conditional expectation calculated by $$ E[X|Y=y] = \frac{E[X1_{\{Y=y\}}]}{P(Y=y)},\tag{**} $$ coincides with the $\Phi(y)$ in the measure-theoretic definition $(*)$ (you may be able to check this). If $X,Y$ are random variables whose joint p.d.f. exists, it can also be calculated explicitly by $$ E[X|Y=y]=\frac{\int xf_{XY}(x,y)\,dx}{f_Y(y)}.\tag{***} $$ One can also prove that the $\Phi$ obtained from $(***)$ satisfies the definition $(*)$. In this sense, $(*)$ is a generalization of those elementary definitions. However, if neither case applies, it is hard to obtain an explicit formula for $\Phi$, and one should rely on the (abstract) definition $(*)$.
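In the discrete case, $(**)$ can be checked against $(*)$ directly on a small joint pmf. Here is a toy example of my own: compute $\Phi(y)=E[X\mid Y=y]$ from $(**)$ and verify $E[Xf(Y)] = E[\Phi(Y)f(Y)]$ for a bounded test function $f$:

```python
# Toy joint pmf p(x, y) on a finite grid (an illustrative example).
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

def p_Y(y):
    # marginal pmf of Y
    return sum(q for (xv, yv), q in p.items() if yv == y)

def cond_exp(y):
    # (**): E[X | Y = y] = E[X 1_{Y=y}] / P(Y = y)
    return sum(xv * q for (xv, yv), q in p.items() if yv == y) / p_Y(y)

def f(y):
    # an arbitrary bounded test function
    return (-1) ** y

lhs = sum(xv * f(yv) * q for (xv, yv), q in p.items())            # E[X f(Y)]
rhs = sum(cond_exp(yv) * f(yv) * q for (xv, yv), q in p.items())  # E[Phi(Y) f(Y)]
print(abs(lhs - rhs) < 1e-12)  # (*) holds for this f
```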
Now, the following seemingly obvious statement about conditional expectation can be shown directly from the definition: If $E|F(X)|<\infty$, then (almost surely) $$ E[F(X)|X] = F(X). $$Proof: Obviously, $F(X)$ is a function of $X$, so take $\Phi = F$. By uniqueness it is sufficient to check that $(*)$ holds, and indeed $$ E[F(X) f(X)] = E[\Phi(X)f(X)] $$ holds trivially for all bounded $f$, since $\Phi(X) = F(X)$. $\blacksquare$
This is the formalization of "uncertainty disappears". It remains to check that $$P(T(X)\in B|X) = 1_{\{T(X)\in B\}}=\begin{cases} 1\quad\text{if }\; T(X)\in B,\\0\quad\text{otherwise}.\end{cases}$$ (Here, $1_{A}(x) = 1_{x\in A}$ denotes the indicator function.) It follows from the above statement applied to $F = 1_{\{T(\cdot)\in B\}}$: $$ P(T(X)\in B|X)=E[1_{\{T(X)\in B\}}|X]= E[1_{\{T(\cdot)\in B\}}(X)|X]=1_{\{T(\cdot)\in B\}}(X)=1_{\{T(X)\in B\}}. $$ This shows $P(T(X)\in B|X=x) = 1_{\{T(x)\in B\}}$, i.e., $T(X)|_{X=x}$ has a degenerate distribution at $T(x)$.