Let $x_1$ and $x_2$ be observed random variables, and define $y$ as: \begin{align} y = \theta_1 x_1 + \theta_2 x_2 \label{1} \end{align} where $\theta_1$ and $\theta_2$ are real-valued coefficients. I have the prior knowledge that $y \sim N(0,\sigma^2)$, i.e. that $y$ is Gaussian with zero mean and variance $\sigma^2$. How do I define a maximum likelihood estimator for $\theta_1$ and $\theta_2$, assuming $\sigma$ is known?
SOLUTION ATTEMPT \begin{align} \log{L(\theta_1, \theta_2|y)} = \log{\prod_{i=1}^{N} p(y_i|\theta_1,\theta_2)} = -\frac{N}{2} \log{(2 \pi \sigma^2)} -\frac{1}{2\sigma^2} \sum_{i=1}^{N} (\theta_1 x_{1,i}+\theta_2 x_{2,i})^2 \label{2} \end{align} Maximizing this expression clearly gives $\theta_1 = \theta_2 = 0$, but I would expect a solution that depends on $x_1$, $x_2$, and $\sigma$.
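A quick numerical check (a sketch with made-up standard-normal data) confirms the problem: the expression above is maximized at $\theta_1 = \theta_2 = 0$ no matter what the data are, because only the non-negative quadratic penalty depends on $\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)  # made-up observations of x_1
x2 = rng.normal(size=n)  # made-up observations of x_2
sigma = 1.0

def loglik(t1, t2):
    # The attempted log-likelihood: each y_i = t1*x1_i + t2*x2_i treated as N(0, sigma^2)
    r = t1 * x1 + t2 * x2
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(r**2) / sigma**2

# The quadratic penalty vanishes only at (0, 0), so (0, 0) always wins:
print(loglik(0.0, 0.0) >= loglik(0.5, -0.3))  # True
print(loglik(0.0, 0.0) >= loglik(1.0, 1.0))   # True
```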
To solve for the MLE in the usual way, you need the joint distribution of $Y$, $X_1$, and $X_2$. However, since we know nothing about the distributions of $X_1$ and $X_2$, we cannot obtain the exact joint distribution or the likelihood function. The expression you wrote is not the likelihood function.
If $X_1$ and $X_2$ are independent, we can at least factor it as $$ f(x_1,x_2,y|\theta_1,\theta_2) = f_Y(y|x_1,x_2,\theta_1,\theta_2)f_{X_1}(x_1)f_{X_2}(x_2). $$ Fortunately, $$ \begin{aligned} EY &= \theta_1EX_1 +\theta_2EX_2 = 0, \\ Var(Y) &= \theta_1^2Var(X_1) + \theta_2^2Var(X_2)=\sigma^2, \end{aligned} $$ which must hold regardless of the distributions of $X_1$ and $X_2$ (provided they are independent). Therefore, we can solve for $\theta_1$ and $\theta_2$ in terms of $EX_1$, $EX_2$, $Var(X_1)$, and $Var(X_2)$.
Now, by the invariance property of MLEs, the MLEs for $\theta_1$ and $\theta_2$ are obtained by replacing $EX_1$, $EX_2$, $Var(X_1)$, and $Var(X_2)$ with their respective MLEs.
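Concretely, here is a minimal sketch of that moment-based recipe on made-up data. It assumes $EX_2 \neq 0$ so we can eliminate $\theta_2$, and it picks the positive root for $\theta_1$, since $(-\theta_1, -\theta_2)$ satisfies the same two moment equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Made-up data: E[X1]=2, Var(X1)=1; E[X2]=1, Var(X2)=4
x1 = rng.normal(loc=2.0, scale=1.0, size=n)
x2 = rng.normal(loc=1.0, scale=2.0, size=n)
sigma = 1.0

# Sample moments standing in for the MLEs of the moments:
m1, m2 = x1.mean(), x2.mean()
v1 = ((x1 - m1) ** 2).mean()  # variance with divisor n, not n-1
v2 = ((x2 - m2) ** 2).mean()

# Solve  theta1*m1 + theta2*m2 = 0   and
#        theta1^2*v1 + theta2^2*v2 = sigma^2.
# From the first equation, theta2 = -theta1*m1/m2 (assumes m2 != 0);
# substituting into the second and taking the positive root:
theta1 = sigma / np.sqrt(v1 + (m1 / m2) ** 2 * v2)
theta2 = -theta1 * m1 / m2

# Both moment constraints hold by construction:
print(theta1 * m1 + theta2 * m2)               # ~ 0
print(theta1**2 * v1 + theta2**2 * v2)         # ~ sigma^2
```

Note that the data above are only illustrative; the two closed-form lines are the whole estimator.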
Edit: I have to correct my statement. You can still solve for the MLE the normal way, since $f_{X_1}$ and $f_{X_2}$ do not depend on $\theta_1$ and $\theta_2$. However, if you just differentiate w.r.t. $\theta_1$ or $\theta_2$, you will get the same equation twice. To pin down a solution, we need the additional constraint $Var(Y) = \sigma^2$. This method is equivalent to the one above, as we are implicitly computing the MLEs of $\theta_1$ and $\theta_2$ by computing the MLEs of $EY$ and $Var(Y)$.
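As a sanity check on this constrained view, here is a sketch on made-up data using `scipy.optimize.minimize` with an equality constraint enforcing the sample version of $Var(Y)=\sigma^2$ (maximizing the log-likelihood in $\theta$ is equivalent to minimizing $\sum_i y_i^2$, since the other terms do not depend on $\theta$):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5000
x1 = rng.normal(loc=2.0, scale=1.0, size=n)  # made-up observations
x2 = rng.normal(loc=1.0, scale=2.0, size=n)
sigma = 1.0

v1, v2 = x1.var(), x2.var()  # sample variances with divisor n

# Objective: minimizing sum_i (theta1*x1_i + theta2*x2_i)^2 is the same as
# maximizing the log-likelihood, sigma being known.
def neg_loglik(t):
    return np.sum((t[0] * x1 + t[1] * x2) ** 2)

# Equality constraint: theta1^2 Var(X1) + theta2^2 Var(X2) = sigma^2
constraint = {"type": "eq",
              "fun": lambda t: t[0] ** 2 * v1 + t[1] ** 2 * v2 - sigma**2}

res = minimize(neg_loglik, x0=[0.5, 0.5], constraints=[constraint])
theta1, theta2 = res.x
# The returned theta satisfies the variance constraint (up to solver tolerance).
```

The constraint rules out the degenerate solution $\theta_1 = \theta_2 = 0$ that the unconstrained maximization produces.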