Let's say that we have three random samples:
$$ X_1, \dots, X_n \sim N(\mu_1, \sigma^2)$$
$$ Y_1, \dots, Y_n \sim N(\mu_2, \sigma^2)$$
$$ W_1, \dots, W_n \sim N(\mu_1 + \mu_2, \sigma^2)$$
If the samples were mutually independent, we would find the MLEs of $\mu_1$ and $\mu_2$ by maximizing the product of their density functions (assuming $\sigma^2$ is known):
$$ L(\mu_1, \mu_2) = \prod_{i=1}^n f(x_i) \prod_{i=1}^n f(y_i) \prod_{i=1}^n f(w_i)$$
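As a concrete illustration of the independent case, here is a minimal numerical sketch. The true means, the common sample size $n$, and $\sigma^2 = 1$ are made-up values for the simulation; the MLEs are found by minimizing the negative log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sigma = 1.0                       # assumed known
mu1_true, mu2_true = 2.0, -1.0    # illustrative true values
n = 500                           # assumed common sample size

x = rng.normal(mu1_true, sigma, n)
y = rng.normal(mu2_true, sigma, n)
w = rng.normal(mu1_true + mu2_true, sigma, n)

def neg_log_lik(params):
    mu1, mu2 = params
    # Sum of the three Gaussian log-densities, with additive
    # constants dropped since they don't affect the argmax.
    return (np.sum((x - mu1) ** 2)
            + np.sum((y - mu2) ** 2)
            + np.sum((w - mu1 - mu2) ** 2)) / (2 * sigma ** 2)

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu1_hat, mu2_hat = res.x
```

Because the objective is quadratic in $(\mu_1, \mu_2)$, the numerical optimum matches the closed-form solution of the normal equations, e.g. $\hat{\mu}_1 = (2\bar{x} - \bar{y} + \bar{w})/3$.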
How would we approach this if the samples weren't mutually independent? What would $L(\mu_1, \mu_2)$ look like?
You need to know their joint distribution. The likelihood is still just the PDF, viewed as a function of the parameters; it simply no longer factors that way. For instance, if the triples $(X_i, Y_i, W_i)$ are i.i.d. across $i$ but each triple has joint density $f_{X,Y,W}(x,y,w),$ the likelihood would be $$ L(\mu_1, \mu_2) = \prod_{i=1}^n f_{X,Y,W}(x_i, y_i, w_i).$$
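To make the dependent case concrete, here is a sketch under an assumed joint distribution: each triple $(X_i, Y_i, W_i)$ is trivariate normal with the stated means, common variance $\sigma^2 = 1$, and an illustrative pairwise correlation $\rho = 0.5$ (all of these are assumptions for the demo, not part of the question):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
sigma, rho = 1.0, 0.5             # assumed known variance and correlation
mu1_true, mu2_true = 2.0, -1.0    # illustrative true values
n = 500

# Assumed equicorrelated covariance for (X_i, Y_i, W_i);
# the n triples are i.i.d. across i.
cov = sigma ** 2 * np.array([[1.0, rho, rho],
                             [rho, 1.0, rho],
                             [rho, rho, 1.0]])
mean_true = [mu1_true, mu2_true, mu1_true + mu2_true]
data = rng.multivariate_normal(mean_true, cov, size=n)  # shape (n, 3)

def neg_log_lik(params):
    mu1, mu2 = params
    mean = [mu1, mu2, mu1 + mu2]
    # Joint log-density of each triple, summed over the n i.i.d. triples.
    return -np.sum(multivariate_normal.logpdf(data, mean=mean, cov=cov))

res = minimize(neg_log_lik, x0=[0.0, 0.0])
mu1_hat, mu2_hat = res.x
```

The only change from the independent case is that the per-observation term is now the joint density of the triple rather than a product of three univariate densities.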