Given an i.i.d. sample $X_1, \dots, X_n \sim \mathscr{N}(\theta, 1)$ with $\theta$ unknown, how can one express the probability that $X_i > 0$? (Preferably without the normal distribution's CDF, which involves the error function and is therefore unhandy.)
Some thoughts: let a derived random variable $Y = I(X > 0)$; it is Bernoulli-distributed with parameter $p = P(X > 0)$. This probability can be expressed through the CDF of the original normal distribution: $P(X > 0) = 1 - \mathrm{cdf}(0)$.
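A quick numerical sanity check of that identity, using only the standard library (`statistics.NormalDist` supplies the normal CDF; $\theta = 0.7$ is an arbitrary example value, not from the question):

```python
import random
from statistics import NormalDist

theta = 0.7  # arbitrary example value; any theta works

# Analytic: P(X > 0) = 1 - cdf(0) for N(theta, 1)
analytic = 1 - NormalDist(mu=theta, sigma=1).cdf(0)

# Monte Carlo estimate of the same probability
random.seed(42)
n = 200_000
hits = sum(random.gauss(theta, 1) > 0 for _ in range(n))
mc = hits / n

print(round(analytic, 3), round(mc, 3))
```

The two numbers agree to a few decimal places, as expected.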
So the log-likelihood in the said Bernoulli parameter would be:

$\ln L = \ln(P(X > 0)) + \ln(P(X \le 0)) = \ln(1 - \mathrm{cdf}(0)) + \ln(\mathrm{cdf}(0))$

To maximize the right-hand side, following Clement C.'s comment, replace $\ln(1 - \mathrm{cdf}(0)) + \ln(\mathrm{cdf}(0))$ by $(1 - \mathrm{cdf}(0))\,\mathrm{cdf}(0) = \mathrm{cdf}(0) - \mathrm{cdf}^2(0)$; since the logarithm is monotone, both have the same maximizer. Substituting $x = \mathrm{cdf}(0)$ (note that $\mathrm{cdf}(0)$ means the CDF of $\mathscr{N}(\theta, 1)$ at $0$) leaves a trivial quadratic $x - x^2$, whose derivative $1 - 2x$ vanishes at $x = \tfrac{1}{2}$; given the leading coefficient $-1 < 0$, we conclude the maximum is at $x = \tfrac{1}{2}$.
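As a brute-force double-check that $x - x^2$ peaks at $\tfrac{1}{2}$, a one-line grid search (a sketch, pure standard library):

```python
# Maximize f(x) = x - x^2 over a fine grid on [0, 1]
xs = [i / 10_000 for i in range(10_001)]
best_x = max(xs, key=lambda x: x - x * x)
print(best_x)  # -> 0.5
```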
Now there is a contradiction: how can the maximum not depend on the unknown $\theta$? Somewhere here I need to bring in the $\theta$ of the original normal distribution.
Consider $Y_{1},\dotsc, Y_n$, where $Y_i = I(X_i > 0)$ is the indicator of the event that $X_i$ is positive. Then the $Y_i$ are i.i.d. Bernoulli random variables with $p = P(X_1 > 0)$. Let $N = \sum Y_i$, which is a binomially distributed random variable with $n$ trials and success probability $p$. Using maximum likelihood one can show that the MLE of $p$ is $$\hat{p} = N/n.$$
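This also resolves the contradiction above: $\hat{p} = N/n$ estimates $p = P(X_1 > 0) = \Phi(\theta)$, so $\theta$ itself can be recovered as $\hat{\theta} = \Phi^{-1}(\hat{p})$. A minimal simulation sketch, assuming an example value $\theta = 0.7$ and using only the standard library:

```python
import random
from statistics import NormalDist

std_normal = NormalDist()   # N(0, 1), gives Phi and its inverse
theta_true = 0.7            # example value, assumed for the demo
random.seed(0)
n = 100_000
xs = [random.gauss(theta_true, 1) for _ in range(n)]

# MLE of p from the indicators Y_i = I(X_i > 0)
N = sum(x > 0 for x in xs)
p_hat = N / n

# Since p = P(X_1 > 0) = Phi(theta), invert Phi to estimate theta
theta_hat = std_normal.inv_cdf(p_hat)
print(p_hat, round(theta_hat, 2))
```

Note that inverting $\Phi$ reintroduces the error function, so this does not escape the CDF entirely; it only moves it from the estimand to the final back-transformation.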