Let $X_1,\dots,X_N$ be a random sample from $N(\mu,\sigma^2)$. To each sample we assign the value one if $X_i > p$ and zero otherwise. Now we take all $N$ samples and compute
$$\sum_{i=1}^N 1\cdot\Pr(X_i > p)$$
Does the above give the expected number of the $N$ samples with value one?
The samples are independent and identically distributed.
Can we evaluate this as below, where $X$ is a random variable with distribution $N(\mu,\sigma^2)$?
$$\sum_{i=1}^N 1\cdot\Pr(X_i >p)=N \Pr(X >p) = N\left(1-\Phi\left(\frac{p-\mu}{\sigma}\right)\right)$$
I would rather not speak of random samples but of random variables $X_1,\dots,X_N$ that are defined on the same probability space.
Then we can define $B_i:=1_{(p,\infty)}(X_i)$ for each $i\in\{1,\dots,N\}$; defined this way, the $B_i$ are also random variables on that space.
Further, since $B_i$ only takes values in $\{0,1\}$, we have: $$\mathbb EB_i=0\cdot P(B_i=0)+1\cdot P(B_i=1)=P(X_i>p)$$The last equality follows from: $$\{B_i=1\}=\{\omega\in\Omega\mid B_i(\omega)=1\}=\{\omega\in\Omega\mid X_i(\omega)>p\}=\{X_i>p\}$$
If $B:=B_1+\cdots+B_N$ then with linearity of expectation we find:$$\mathbb EB=\mathbb E\sum_{i=1}^NB_i=\sum_{i=1}^N\mathbb EB_i=\sum_{i=1}^NP(X_i>p)\tag1$$
This equality also holds if the $B_i$ are not independent.
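As a quick numerical sanity check of this point (not part of the argument), here is a Monte Carlo sketch with strongly *correlated* normals; the parameter values ($N=3$, $\rho=0.9$, etc.) are arbitrary choices for illustration. The sample mean of $B$ should still track $N\,P(X_1>p)$ despite the dependence.

```python
import numpy as np
from math import erf, sqrt

# Illustrative (assumed) parameters: 3 jointly normal variables with
# pairwise correlation rho = 0.9, common mean mu and variance sigma^2.
mu, sigma, p = 0.0, 1.0, 0.5
N, trials, rho = 3, 300_000, 0.9
cov = sigma**2 * ((1 - rho) * np.eye(N) + rho * np.ones((N, N)))

rng = np.random.default_rng(1)
X = rng.multivariate_normal(np.full(N, mu), cov, size=trials)
B = (X > p).sum(axis=1)  # B = B_1 + ... + B_N per trial

# Standard normal CDF via the error function
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
theory = N * (1 - Phi((p - mu) / sigma))

print(B.mean(), theory)  # the two values should be close
```

The variance of $B$ does change under correlation; only the expectation is insensitive to it, which is exactly what linearity of expectation asserts.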
If the $X_i$ have a common distribution then so do the $B_i$ and $(1)$ simplifies to: $$\mathbb EB=NP(X_1>p)\tag2$$
If this distribution is the normal distribution with parameters $\mu$ and $\sigma^2$ then we can write $X_1=\sigma U+\mu$ where $U$ has standard normal distribution so that $$P(X_1>p)=P(\sigma U+\mu>p)=P\left(U>\frac{p-\mu}{\sigma}\right)=1-\Phi\left(\frac{p-\mu}{\sigma}\right)$$ and we end up with:$$\mathbb EB=N\left(1-\Phi\left(\frac{p-\mu}{\sigma}\right)\right)\tag3$$
We have now used that the $X_i$ are defined on the same probability space and all have the same distribution, namely the normal distribution with mean $\mu$ and variance $\sigma^2$. Independence is not needed for this result.
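The final formula $(3)$ can also be checked by simulation; a minimal Python sketch follows, with $\mu$, $\sigma$, $p$ and $N$ chosen arbitrarily for illustration (here the $X_i$ are drawn independently, but as noted above, only the common distribution matters for the mean).

```python
import numpy as np
from math import erf, sqrt

# Illustrative (assumed) parameters
mu, sigma, p, N = 2.0, 1.5, 2.5, 10
trials = 200_000

rng = np.random.default_rng(0)
# Each row: one realisation of X_1, ..., X_N; B counts exceedances of p
X = rng.normal(mu, sigma, size=(trials, N))
B = (X > p).sum(axis=1)

# Theoretical value from (3): E[B] = N * (1 - Phi((p - mu) / sigma))
Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
expected = N * (1.0 - Phi((p - mu) / sigma))

print(B.mean(), expected)  # sample mean should be close to expected
```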