Let $(X_1,...,X_n)$ denote an i.i.d. random sample of size $n$ from the following distribution $f_\theta(x)$:
$$f_\theta(x)=(1-\theta)\mathbf1_{[-1/2;0]}(x)+(1+\theta)\mathbf1_{]0;1/2]}(x)$$
Let $x=(x_1,\ldots,x_n)\in [-1/2;1/2]^n$, $u_n=\sum_{i=1}^n\mathbf1_{]-\infty;0]}(x_i)$ and $v_n=\sum_{i=1}^n\mathbf1_{]0;\infty[}(x_i)$.
I don't understand why the likelihood is:
$$\mathcal{L}(x_1,...,x_n;\theta)=(1-\theta)^{u_n}(1+\theta)^{v_n}$$
And not:
$$\mathcal{L}(x_1,...,x_n;\theta)=(1-\theta)^{u_n}+(1+\theta)^{v_n}$$
Thanks.
The joint density of an iid sample is the product of the individual marginal densities, not the sum. So your second expression cannot possibly be correct: it is a sum, not a product.
To understand what happened, we write things out step by step. Instead of using indicator functions to specify the density, we will use piecewise notation:
$$f(x \mid \theta) = \begin{cases} 1-\theta, & -1/2 \le x \le 0 \\ 1+\theta, & 0 < x \le 1/2 \\ 0, & \text{otherwise}. \end{cases}$$
This way, we have made explicit the idea that for any given observation $x_i$ in the support $[-1/2,1/2]$, the value $f(x_i \mid \theta)$ is only ever $1-\theta$ or $1+\theta$; it is never some combination of both. Moreover, the only pertinent information about an observation $x_i$ is whether it is positive or non-positive. We don't need to know its actual value, because once we know the sign, the value of $f(x_i \mid \theta)$ is uniquely determined.
As such, it is natural to define auxiliary statistics that count the observations of each sign. This is the purpose of introducing $u_n$ and $v_n$: $v_n$ counts the positive observations, and $u_n$ counts the non-positive ones. Of course, $u_n + v_n = n$ for all $n$, so we really only need to define one of these, say $v_n$.
Then our joint likelihood for the iid sample $(x_1, \ldots, x_n)$ can be written as
$$\mathcal L(\theta \mid x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i \mid \theta) = (1 - \theta)^{u_n} (1 + \theta)^{v_n} = (1-\theta)^{n - v_n} (1+\theta)^{v_n},$$ because if $u_n$ represents the number of non-positive observations, each one of these contributes a multiplicative factor of $1-\theta$ to the likelihood; similarly, there are $v_n$ positive observations, each of which contributes $1+\theta$.
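If you like, you can verify this identity numerically. Here is a minimal sketch in Python (the function names are my own, not from the question) that computes the likelihood directly as a product of marginal densities and compares it with the closed form:

```python
def f(x, theta):
    """Density from the question, written piecewise."""
    if -0.5 <= x <= 0:
        return 1 - theta
    if 0 < x <= 0.5:
        return 1 + theta
    return 0.0

def likelihood_product(xs, theta):
    """Product of f(x_i | theta) over the iid sample."""
    result = 1.0
    for x in xs:
        result *= f(x, theta)
    return result

def likelihood_closed_form(xs, theta):
    """(1 - theta)^{u_n} * (1 + theta)^{v_n}."""
    v_n = sum(1 for x in xs if x > 0)  # positive observations
    u_n = len(xs) - v_n                # non-positive observations
    return (1 - theta) ** u_n * (1 + theta) ** v_n
```

For any sample in the support the two functions agree: e.g. with the sample $(0.2, -0.3)$ and $\theta = 0.25$, both return $(1-0.25)(1+0.25) = 0.9375$.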
If this is still confusing, it is helpful to consider an example with a specified sample size. Say $n = 7$, and the sample is $$(x_1, \ldots, x_7) = (0.2, 0.1, -0.3, -0.4, 0.3, 0.4, 0.1).$$ Then for $i \in \{1, 2, 5, 6, 7\}$, the $x_i$ are positive, and for $i \in \{3, 4\}$, the $x_i$ are negative. The positive ones each satisfy $f(x_i \mid \theta) = 1+\theta$. The negative ones each satisfy $f(x_i \mid \theta) = 1-\theta$. So for example, $f(0.2 \mid \theta) = 1+\theta$. This is just using the definition we are given.
Now, how many of the observations are positive? $v_7 = 5$. And $u_7 = 2$. So the total likelihood is just $(1-\theta)^2 (1+\theta)^5$. As we can see, it doesn't matter which of the $x_i$ are positive, nor does it matter what their values are, only their sign.
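This example is easy to check numerically as well; here is a short Python sketch ($\theta = 0.25$ is an arbitrary test value, not part of the question):

```python
theta = 0.25  # arbitrary test value
sample = [0.2, 0.1, -0.3, -0.4, 0.3, 0.4, 0.1]

# each positive observation contributes 1+theta, each non-positive one 1-theta
factors = [(1 + theta) if x > 0 else (1 - theta) for x in sample]

product = 1.0
for factor in factors:
    product *= factor

v7 = sum(x > 0 for x in sample)  # number of positive observations
u7 = len(sample) - v7            # number of non-positive observations
closed_form = (1 - theta) ** u7 * (1 + theta) ** v7

print(v7, u7)                              # 5 2
print(abs(product - closed_form) < 1e-12)  # True
```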