Given that Y and L are normally distributed, the expectation of L given Y is $\mu (Y)$ and the variance of L given Y is $\sigma ^2 (Y)$, why is the conditional probability $P(L > x| Y) = \Phi \left(\dfrac{ \mu (Y) - x } {\sigma^2(Y)} \right)$ ?
Also, when finding the unconditional probability $P(L > x)$, why is it sufficient to say that it's just $\mathbb{E} \left(\Phi \left(\dfrac{ \mu (Y) - x } {\sigma^2(Y)} \right) \right)$?
First, you need to be careful about the statement of $L$. If we say "$L$ is normally distributed," that means the unconditional distribution of $L$ is normal. But what the question actually suggests is that $L$ conditioned on $Y$, that is, $L \mid Y$, is normal. Under this interpretation, $L$ itself may not be normal.
That said, suppose $Y \sim \mathrm{Normal}(\mu, \sigma^2)$ and $L \mid Y \sim \mathrm{Normal}(\mu(Y), \sigma^2(Y))$. Then $$\Pr[L > x \mid Y] = \Pr\left[\frac{L - \mu(Y)}{\sigma(Y)} > \frac{x - \mu(Y)}{\sigma(Y)} \,\middle|\, Y \right] = \Pr\left[ Z > \frac{x - \mu(Y)}{\sigma(Y)} \,\middle|\, Y\right],$$ since $L \mid Y \sim \mathrm{Normal}(\mu(Y), \sigma^2(Y))$ implies $\dfrac{L - \mu(Y)}{\sigma(Y)} \,\Big|\, Y \sim \mathrm{Normal}(0,1)$. Noting that $\Pr[Z > z] = \Pr[Z \le -z] = \Phi(-z)$, we conclude $$\Pr[L > x \mid Y] = \Phi\!\left(\frac{\mu(Y) - x}{\sigma(Y)}\right).$$ Note that the denominator is the standard deviation $\sigma(Y)$, not the variance $\sigma^2(Y)$ as written in the question.
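As a sanity check, here is a quick Monte Carlo verification of the conditional formula for a fixed value $Y = y$. The functions $\mu(y) = 2y + 1$ and $\sigma(y) = \sqrt{1 + y^2}$ are arbitrary choices for illustration; any measurable functions would do:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical conditional mean and standard deviation (arbitrary choices).
mu = lambda y: 2.0 * y + 1.0
sigma = lambda y: np.sqrt(1.0 + y**2)

rng = np.random.default_rng(0)
y, x, n = 0.5, 3.0, 200_000

# Simulate L | Y = y ~ Normal(mu(y), sigma(y)^2) and estimate P(L > x | Y = y).
L = rng.normal(mu(y), sigma(y), size=n)
empirical = np.mean(L > x)

# Closed form: Phi((mu(y) - x) / sigma(y)), with the standard deviation in the denominator.
theoretical = norm.cdf((mu(y) - x) / sigma(y))

print(empirical, theoretical)  # the two should agree up to Monte Carlo error
```

Using $\sigma^2(y)$ in the denominator instead would make the two numbers visibly disagree whenever $\sigma(y) \ne 1$.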
For the second part, this is simply the law of total probability: $$\Pr[L > x] = \int_{y = -\infty}^\infty \Pr[L > x \mid Y = y] f_Y(y) \, dy = \mathrm{E}_Y[\Pr[L > x \mid Y]].$$
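The unconditional statement can be checked the same way: simulate $Y$, then $L$ given $Y$, and compare the empirical $\Pr[L > x]$ against the sample mean of $\Phi\big((\mu(Y) - x)/\sigma(Y)\big)$. A sketch, reusing the same hypothetical $\mu(\cdot)$ and $\sigma(\cdot)$ as above:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical conditional mean and standard deviation (arbitrary choices).
mu = lambda y: 2.0 * y + 1.0
sigma = lambda y: np.sqrt(1.0 + y**2)

rng = np.random.default_rng(1)
x, n = 3.0, 500_000

# Two-stage simulation: draw Y, then L | Y ~ Normal(mu(Y), sigma(Y)^2).
Y = rng.normal(0.0, 1.0, size=n)
L = rng.normal(mu(Y), sigma(Y))

empirical = np.mean(L > x)                         # direct estimate of P(L > x)
tower = np.mean(norm.cdf((mu(Y) - x) / sigma(Y)))  # E_Y[ P(L > x | Y) ]

print(empirical, tower)  # the two should agree up to Monte Carlo error
```

Note that the second estimator averages the conditional probabilities rather than indicator variables, so it is exactly the law-of-total-probability expectation evaluated by Monte Carlo (and typically has lower variance than the direct estimate).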