I am trying to understand the following computation of the conditional expectation: $$ \mathbb{E}\exp(\lambda s_i g_i g_i') = \mathbb{E}(\mathbb{E}(\exp(\lambda s_i g_i g_i')|g_i)) = \mathbb{E}\exp(\lambda^2s_i^2 g_i^2/2), $$ where $g_i, g_i'$ are independent standard normal random variables, $s_i \in \mathbb{R}$. I understand that if we treat $g_i$ as a given constant, the last equality simply follows from the MGF of a Gaussian variable. However, why exactly can we treat $g_i$ as a constant in the second term of the equality?
My definition of conditional expectation is the measure theoretical definition, where the conditional expectation is a random variable such that the integral of the conditional expectation is equal to the integration of the variable we are taking the conditional expectation for on all $\sigma(g_i)$-measurable sets. How does this "substituting $g_i$ as a constant" intuition follow from this definition?
I believe I might just be missing some basic rules of conditional expectations here.
For any non-negative measurable function $f$ and independent random variables $X$ and $Z$ we have
$$Ef(X,Z)=\int Ef(X,t)dF_Z(t)$$
This means we can first treat $Z$ as a constant $t$, compute the expectation, and then integrate w.r.t. the distribution of $Z$. In terms of conditional expectation this says that $E(f(X,Z)\mid Z) = h(Z)$, where $h(t) = Ef(X,t)$ — which is precisely the "substitute $Z$ as a constant" rule — and hence $$ Ef(X,Z) =EE(f(X,Z)|Z).$$ In your case $X = g_i'$, $Z = g_i$, and $h(t) = E\exp(\lambda s_i t\, g_i') = \exp(\lambda^2 s_i^2 t^2/2)$.
The proof of the above formula is a simple application of the Fubini–Tonelli theorem, since the joint distribution of $(X,Z)$ is a product measure.
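As a quick sanity check (not part of the proof), here is a Monte Carlo sketch of the identity in the question; the values $\lambda = 0.5$, $s_i = 1$ are arbitrary, chosen so that $\lambda^2 s_i^2 < 1$ and both expectations are finite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000_000
lam, s = 0.5, 1.0  # arbitrary; need lam**2 * s**2 < 1 for finiteness

g = rng.standard_normal(n)   # g_i
gp = rng.standard_normal(n)  # g_i', independent of g_i

# Left side: E exp(lambda * s * g * g')
lhs = np.exp(lam * s * g * gp).mean()
# Right side after conditioning on g: E exp(lambda^2 * s^2 * g^2 / 2)
rhs = np.exp(lam**2 * s**2 * g**2 / 2).mean()
# Closed form: E exp(c * g^2 / 2) = 1 / sqrt(1 - c) for c < 1
exact = 1 / np.sqrt(1 - lam**2 * s**2)

print(lhs, rhs, exact)  # all three should be close
```

Both empirical averages should agree with each other and with the closed-form value $1/\sqrt{1-\lambda^2 s_i^2}$ up to Monte Carlo error.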