I was reviewing the probability theory I took last year, and even though this question shouldn't be hard, it is somehow confusing me immensely.
Clearly, if $X_1, X_2 \sim U[0, 1]$, then $Y = X_1 + X_2$ must have support $[0, 2]$, since both $X_1$ and $X_2$ take values in $[0,1]$. By definition, the pdf of $U[0, 1]$ is simply $1$ on $[0, 1]$. Now, if I try to find the cdf of $Y$, I find that:
$$F_Y(y) = \int_0^y\int_0^{y-x_2} f(x_1, x_2)\,dx_1\,dx_2 = \int_0^y\int_0^{y-x_2} 1\,dx_1\,dx_2 = \int_0^y (y-x_2)\,dx_2 = \frac{y^2}{2}.$$
For a distribution function, $F_Y(y)$ should tend to $1$ as $y$ tends to the upper endpoint of the support. However, if I let $y = 2$, then $F_Y(2) = 2$, which can't be.
I must be doing something incredibly stupid. Please tell me what it is.
You've got (in your outer integral) $x_2$ going from $0$ up to $y$, which can't be right, as $x_2$ never gets bigger than $1$.
You haven't mentioned whether $X_1$ and $X_2$ are independent, but if they are, then you can break the integral into two cases,
case 1) $0 < y < 1$
case 2) $1 < y < 2$
and be more careful about the limits of integration you use. Remember, $f(x_1, x_2) = 0$ outside of $[0,1] \times [0,1]$.
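For concreteness, carrying out those two cases under the independence assumption gives (worked out here as a check rather than as part of the hint):

$$F_Y(y) = \begin{cases} \dfrac{y^2}{2}, & 0 \le y \le 1, \\[4pt] 1 - \dfrac{(2-y)^2}{2}, & 1 < y \le 2, \end{cases}$$

where the second branch comes from $F_Y(y) = 1 - P(X_1 + X_2 > y)$, and $P(X_1 + X_2 > y)$ is the area of the corner triangle of $[0,1]^2$ with legs of length $2-y$. The two branches agree (both equal $1/2$) at $y = 1$, and $F_Y(2) = 1$ as required.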
(If you can't assume that $X_1$ and $X_2$ are independent then you probably won't be able to get any nice closed-form expression for $F_Y$.)
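If you want a quick sanity check on whatever limits you end up with, a Monte Carlo estimate is easy. This is a minimal sketch assuming independence; `empirical_cdf` is a name I'm introducing here, not anything from the post:

```python
import random

random.seed(0)  # reproducible draws

def empirical_cdf(y, n=200_000):
    """Estimate P(X1 + X2 <= y) for independent X1, X2 ~ U[0, 1]."""
    hits = sum(random.random() + random.random() <= y for _ in range(n))
    return hits / n

# Your y^2/2 formula is actually right on [0, 1], so these should land
# near 0.125 and 0.5; by the symmetry of the density about y = 1,
# F_Y(1.5) should be near 1 - 0.125 = 0.875.
print(empirical_cdf(0.5))
print(empirical_cdf(1.0))
print(empirical_cdf(1.5))
```

With $n = 200{,}000$ samples the sampling error is on the order of $10^{-3}$, small enough to distinguish the correct piecewise CDF from the unbounded $y^2/2$ extrapolation.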