Suppose $(\Omega, \mathcal{A}, \mathbb{P})$ is a probability space and $(E, \mathcal{E})$ is a nice measurable space. Let $X_i \colon \Omega \to E$ for $1 \le i \le n$ be i.i.d. random elements, and let $f \colon E^n \to \mathbb{R}$ be a measurable function such that $\mathbb{E}[|f(X_1, \ldots, X_n)|] < \infty$.
Let $\varphi \colon E^k \to \mathbb{R}$ be a measurable function such that $\varphi(X_1, \ldots, X_k) = \mathbb{E}[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k]$ a.s. I want to show that $$ \varphi(x_1, \ldots, x_k) = \mathbb{E}[f(x_1, \ldots, x_k, X_{k+1}, \ldots, X_n)] $$ is a valid choice for $\varphi$.
For reference, this comes up in the proof of McDiarmid's inequality. I have never seen it proved; it is instead assumed to be obvious.
What I have tried
By the definition of $\varphi$, for $\mathbb{P}_{(X_1, \ldots, X_k)}$-a.e. $(x_1, \ldots, x_k)$ we have $\varphi(x_1, \ldots, x_k) = \mathbb{E}[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k](\omega)$ whenever $\omega \in A := (X_1, \ldots, X_k)^{-1}(\{(x_1, \ldots, x_k)\})$. By the definition of conditional expectation, $$ \int_A \mathbb{E}[f(X_1, \ldots, X_n) \mid X_1, \ldots, X_k] \, \mathrm{d}\mathbb{P} = \int_A f(X_1, \ldots, X_n) \, \mathrm{d}\mathbb{P}. $$ The left-hand side simplifies to $\int_A \varphi(x_1, \ldots, x_k) \, \mathrm{d}\mathbb{P} = \mathbb{P}(A)\, \varphi(x_1, \ldots, x_k)$. For the right-hand side, note that on $A$ we have $(X_1, \ldots, X_k) = (x_1, \ldots, x_k)$, and that $\mathbb{1}_A$ is $\sigma(X_1, \ldots, X_k)$-measurable and hence independent of $(X_{k+1}, \ldots, X_n)$; therefore $$\int_\Omega \mathbb{1}_A(\omega)\, f(x_1, \ldots, x_k, X_{k+1}(\omega), \ldots, X_n(\omega)) \, \mathrm{d}\mathbb{P}(\omega) = \mathbb{P}(A) \int_\Omega f(x_1, \ldots, x_k, X_{k+1}(\omega), \ldots, X_n(\omega)) \, \mathrm{d}\mathbb{P}(\omega). $$ This can be written as $\mathbb{P}(A)\, \mathbb{E}[f(x_1, \ldots, x_k, X_{k+1}, \ldots, X_n)]$. Therefore, $$\mathbb{P}(A)\, \varphi(x_1, \ldots, x_k) = \mathbb{P}(A)\, \mathbb{E}[f(x_1, \ldots, x_k, X_{k+1}, \ldots, X_n)].$$ The problem is that $\mathbb{P}(A)$ could be $0$.
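This is of course not a proof, but as a sanity check one can verify the claimed identity numerically in a special case where the conditional expectation is known in closed form. The choices below (real-valued uniform $X_i$, $f = $ sum, $n = 5$, $k = 2$) are my own, not part of the question; for this $f$ one has $\mathbb{E}[f(X_1, \ldots, X_n) \mid X_1 = x_1, X_2 = x_2] = x_1 + x_2 + (n-k)/2$.

```python
import random

# Sanity check (not a proof) of the substitution identity
#   phi(x_1, ..., x_k) = E[f(x_1, ..., x_k, X_{k+1}, ..., X_n)]
# in a case where the conditional expectation has a closed form.
# Assumptions (mine, for illustration): E = R, X_i ~ Uniform(0, 1) i.i.d.,
# f = sum, n = 5, k = 2, so the analytic value is x_1 + x_2 + (n - k) * 1/2.

random.seed(0)
n, k = 5, 2
x = [0.3, 0.8]                       # a fixed point (x_1, ..., x_k)

# Monte Carlo estimate of E[f(x_1, ..., x_k, X_{k+1}, ..., X_n)]
trials = 200_000
acc = 0.0
for _ in range(trials):
    tail = [random.random() for _ in range(n - k)]
    acc += sum(x) + sum(tail)
mc = acc / trials

analytic = sum(x) + (n - k) * 0.5    # closed-form conditional expectation
print(mc, analytic)                  # the two should agree to ~2 decimals
assert abs(mc - analytic) < 0.01
```

With $2 \times 10^5$ samples the Monte Carlo standard error is about $0.001$, so agreement within $0.01$ is well within tolerance.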