Let $X:(\Omega,\mathcal{A},\mathbb{P}) \to (E,\mathcal{E})$ and $Y:(\Omega,\mathcal{A},\mathbb{P}) \to (F,\mathcal{F})$ be random variables.
We suppose $Y$ is discrete (i.e. $F$ is a countable set) and $X$ is real-valued ($E\subset\mathbb{R}$). In this setting, the conditional expectation of $X$ given $\{Y=y\}$, where $y\in F$ is such that $\mathbb{P}(Y=y)>0$, is defined by $$\mathbb{E}[X|Y=y]=\dfrac{\mathbb{E}[X1_{\{Y=y\}}]}{\mathbb{P}(Y=y)}$$
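As a quick sanity check of this formula, here is a toy computation on a hypothetical finite space (the uniform four-point space and the names `X`, `Y`, `cond_exp_given` are mine, not from the definition above): it evaluates $\mathbb{E}[X1_{\{Y=y\}}]/\mathbb{P}(Y=y)$ exactly with rational arithmetic.

```python
from fractions import Fraction

# Hypothetical toy space: omega in {0,1,2,3}, each with probability 1/4.
# X(omega) = omega, Y(omega) = omega mod 2 (so Y is discrete, F = {0,1}).
omegas = [0, 1, 2, 3]
p = {w: Fraction(1, 4) for w in omegas}
X = lambda w: w
Y = lambda w: w % 2

def cond_exp_given(y):
    """E[X | Y = y] via the discrete definition E[X 1_{Y=y}] / P(Y=y)."""
    prob_y = sum(p[w] for w in omegas if Y(w) == y)      # P(Y = y)
    num = sum(X(w) * p[w] for w in omegas if Y(w) == y)  # E[X 1_{Y=y}]
    return num / prob_y

print(cond_exp_given(0))  # (0 + 2)/2 = 1: average of X over {Y = 0}
print(cond_exp_given(1))  # (1 + 3)/2 = 2: average of X over {Y = 1}
```

Note that, as expected, the formula returns the average of $X$ over the event $\{Y=y\}$, reweighted by the conditional probabilities.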
The conditional expectation of $X$ with respect to $Y$, denoted $\mathbb{E}[X|Y]$, is then defined as $$\mathbb{E}[X|Y]=\varphi \circ Y,\quad\text{where}\quad \varphi : F\to\mathbb{R},\ \varphi(y)=\begin{cases}\mathbb{E}[X|Y=y]&\text{if }\mathbb{P}(Y=y)>0,\\[2pt]0&\text{otherwise.}\end{cases}$$
It is thus obvious that for all $\omega \in \Omega$ and all $y\in F$ with $\mathbb{P}(Y=y)>0$, $Y(\omega)=y \Longrightarrow \mathbb{E}[X|Y](\omega)=\mathbb{E}[X|Y=y]$.
Can we say the same in the more general setting where $X$ and $Y$ are real-valued random variables and $X$ is either nonnegative or in $L^1$? In this setting, $\mathbb{E}[X|Y]$ is not given by an explicit formula: it is instead defined as the unique (up to almost-sure equality) $\sigma(Y)$-measurable random variable satisfying, for every bounded measurable function $g:\mathbb{R}\to\mathbb{R}$, $$\mathbb{E}[Xg(Y)]=\mathbb{E}[\mathbb{E}[X|Y]g(Y)].$$
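In the discrete case, the explicit construction $\varphi\circ Y$ does satisfy this defining identity, which one can verify by hand or, as a hypothetical finite sketch (the four-point space, the choice of $X$, $Y$, and the test functions $g$ are mine), numerically with exact arithmetic:

```python
from fractions import Fraction

# Hypothetical finite check of E[X g(Y)] = E[ E[X|Y] g(Y) ] for several
# bounded g, where E[X|Y] = phi(Y) is built from the discrete formula.
# Space {0,1,2,3}, uniform; X(w) = w^2, Y(w) = w mod 2.
omegas = [0, 1, 2, 3]
p = {w: Fraction(1, 4) for w in omegas}
X = lambda w: Fraction(w * w)
Y = lambda w: w % 2

def phi(y):
    """E[X | Y = y] from the discrete definition."""
    prob_y = sum(p[w] for w in omegas if Y(w) == y)
    return sum(X(w) * p[w] for w in omegas if Y(w) == y) / prob_y

def E(f):
    """Expectation of f(omega) over the finite space."""
    return sum(f(w) * p[w] for w in omegas)

# Check the defining identity for a few bounded measurable g.
for g in (lambda t: 1, lambda t: t, lambda t: (-1) ** t):
    lhs = E(lambda w: X(w) * g(Y(w)))           # E[X g(Y)]
    rhs = E(lambda w: phi(Y(w)) * g(Y(w)))      # E[phi(Y) g(Y)]
    assert lhs == rhs
```

Here $\varphi(0)=(0+4)/2=2$ and $\varphi(1)=(1+9)/2=5$, and the identity holds for each test function; the point of the general definition is that this identity, together with $\sigma(Y)$-measurability, pins $\mathbb{E}[X|Y]$ down even when no explicit formula is available.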