I'm trying to estimate $m$ in $y = mx+e$ from a single $(x,y)$ observation.
$m,x,e$ are all $N(0,1)$ and independent.
If I do a Monte Carlo estimation on, say, $(1,3)$, I get about $1.5$. This seems about right.
What I'm struggling to do is estimate the MLE of $m$ analytically.
(I'm only interested in the single point case)
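For concreteness, here is a minimal sketch of one way the Monte Carlo estimate could be done (an assumption about the questioner's procedure, not a quote of it): draw $(m,\epsilon)$ from the prior, keep only draws whose implied $y = mx + \epsilon$ lands near the observed $y$, and average the retained $m$ values — a simple rejection (ABC-style) sampler.

```python
import random

def mc_estimate(x_obs, y_obs, n=1_000_000, tol=0.05, seed=0):
    """Rejection sampler: draw (m, eps) from independent N(0,1) priors,
    keep draws whose implied y = m*x + eps is within tol of the observed y,
    and return the average of the retained m values."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        m = rng.gauss(0, 1)
        eps = rng.gauss(0, 1)
        if abs(m * x_obs + eps - y_obs) < tol:
            kept.append(m)
    return sum(kept) / len(kept)

print(mc_estimate(1, 3))  # close to 1.5
```

Because the conditional distribution of $m$ given $(x,y)$ turns out to be Gaussian (see the answer below), its mean and mode coincide, so this sample average lands near the same $1.5$ as the analytic maximizer.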
Since you haven't observed $\epsilon$ or $m$, you have the following joint distribution of $m$ and $\epsilon$:
$$f(m,\epsilon) = \frac{1}{2\pi}e^{-\frac{\epsilon^2+m^2}{2}}$$
However, you've also observed $x,y$, so we know that $m,\epsilon$ took values consistent with the linear relationship between $y$ and $x$.
To get a likelihood function for $m$, let's consider $m=k$ for some $k\in\mathbb{R}$.
For the equation $y = mx + \epsilon$ to hold with $m=k$, we need:
$$\epsilon = y-kx$$
Therefore, the likelihood of $m=k$ depends jointly on how likely it is that $m=k$ and that $\epsilon = y-kx$, so we get the following likelihood function for $m$:
$$L(m|x,y)=\frac{1}{2\pi}e^{-\frac{(y-mx)^2+m^2}{2}}$$
Since $e^{-u/2}$ is a decreasing function of $u$, $L(m|x,y)$ will be maximized when $(y-mx)^2+m^2$ is minimized. Using some basic calculus:
$$\frac{d}{dm}\left[(y-mx)^2+m^2\right] = -2x(y-mx)+2m = 2(mx^2+m-xy)$$
Since $(y-mx)^2+m^2$ is convex in $m$, the point where its derivative is $0$ is the global minimum:
$$2(mx^2+m-xy) = 0 \implies m = \frac{xy}{x^2+1}$$
Therefore, this is your MLE. I.e.,
$$\hat{m} = \frac{xy}{x^2+1}$$
For example, for the point $(1,3)$ from your Monte Carlo study, the formula gives $\hat{m}=\frac{(1)(3)}{1^2+1} = \frac32$, which is in agreement with your results.