I'm having trouble understanding the following.
$(T_n)_{n\in\mathbb{N}}$ is a sequence of random variables. Then:
$$ P(T_n \leq t)=\int_0^t P(y+(T_n-T_1) \leq t) \text{d} F_{T_1} (y) $$
I don't know whether it matters, but $T_n=\sum_{i=1}^n W_i$, where the $W_i$ are i.i.d.
In short, what is going on here? It looks as though we integrate the distribution with respect to itself, and what comes out is that same distribution function?
This is the law of total probability for continuous random variables. In fact,
\begin{align}
\mathbb{P}(T_n\le t)&=\mathbb{E}(\mathbb{1}_{T_n\le t})\\
&=\mathbb{E}\bigl(\mathbb{E}(\mathbb{1}_{T_n\le t}\mid T_1)\bigr)\\
&=\int_{\mathbb{R}}\mathbb{E}(\mathbb{1}_{T_n\le t}\mid T_1=y)\,{\rm d}F_{T_1}(y)\\
&=\int_{\mathbb{R}}\mathbb{P}(T_n\le t\mid T_1=y)\,{\rm d}F_{T_1}(y),
\end{align}
where the second equality is the tower property and the third writes the conditional expectation as an integral against the distribution of $T_1$.
Independence of the $W_i$'s makes $T_n-T_1=\sum_{i=2}^n W_i$ independent of $T_1=W_1$, so
$$
\mathbb{P}(T_n\le t\mid T_1=y)=\mathbb{P}\biggl(y+\sum_{i=2}^nW_i\le t\biggr)=\mathbb{P}\bigl(y+(T_n-T_1)\le t\bigr).
$$
Therefore,
$$
\mathbb{P}(T_n\le t)=\int_{\mathbb{R}}\mathbb{P}\bigl(y+(T_n-T_1)\le t\bigr)\,{\rm d}F_{T_1}(y).
$$
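As a sanity check on this conditional identity (my addition, not part of the argument), here is a minimal Monte Carlo sketch assuming $W_i\sim\text{Exp}(1)$, so that $T_n-T_1$ is Erlang$(n-1)$ with a known closed-form CDF. The parameters `n`, `t`, `y`, and the bin half-width `eps` are illustrative choices; the bin approximates conditioning on the event $\{T_1=y\}$:

```python
import math
import random

def erlang_cdf(k, t):
    # CDF of a sum of k iid Exp(1) variables (Erlang distribution)
    if t <= 0:
        return 0.0
    return 1.0 - math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k))

random.seed(0)
n, t, y, eps = 3, 2.0, 0.5, 0.05  # illustrative parameters (assumptions)

# Empirical estimate of P(T_n <= t | T_1 in [y - eps, y + eps])
hits = trials = 0
for _ in range(200_000):
    w = [random.expovariate(1.0) for _ in range(n)]
    if abs(w[0] - y) <= eps:
        trials += 1
        hits += sum(w) <= t

empirical = hits / trials
exact = erlang_cdf(n - 1, t - y)  # P(y + (T_n - T_1) <= t)
print(empirical, exact)
```

The two printed numbers should agree up to Monte Carlo error, illustrating that conditioning on $T_1=y$ simply shifts the remaining sum by $y$.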
Finally, I suppose $W_i\ge 0$ for all $i=1,2,\dots,n$. Then $T_1\ge 0$ and $T_n-T_1\ge 0$. The first fact means $F_{T_1}$ puts no mass on $(-\infty,0)$, and the second means the integrand vanishes for $y>t$, since then $y+(T_n-T_1)\ge y>t$. Together they reduce the domain of integration from $\mathbb{R}$ to $[0,t]$, i.e.,
$$
\mathbb{P}(T_n\le t)=\int_0^t\mathbb{P}\bigl(y+(T_n-T_1)\le t\bigr)\,{\rm d}F_{T_1}(y).
$$
This completes the proof.
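To see the full identity in action, here is a quick numerical check (again my addition, assuming $W_i\sim\text{Exp}(1)$, so ${\rm d}F_{T_1}(y)=e^{-y}\,{\rm d}y$ and $T_n-T_1$ is Erlang$(n-1)$) comparing $\mathbb{P}(T_n\le t)$ directly with the integral over $[0,t]$:

```python
import math

def erlang_cdf(k, t):
    # CDF of a sum of k iid Exp(1) variables (Erlang distribution)
    if t <= 0:
        return 0.0
    return 1.0 - math.exp(-t) * sum(t**j / math.factorial(j) for j in range(k))

def integral_rhs(n, t, steps=10_000):
    # Trapezoidal approximation of
    #   int_0^t P(y + (T_n - T_1) <= t) dF_{T_1}(y)
    # with dF_{T_1}(y) = e^{-y} dy and T_n - T_1 ~ Erlang(n - 1)
    h = t / steps
    f = lambda y: erlang_cdf(n - 1, t - y) * math.exp(-y)
    return h * (0.5 * (f(0.0) + f(t)) + sum(f(i * h) for i in range(1, steps)))

n, t = 3, 2.0  # illustrative parameters (assumptions)
lhs = erlang_cdf(n, t)    # P(T_n <= t) directly
rhs = integral_rhs(n, t)  # the conditioning integral over [0, t]
print(lhs, rhs)
```

The two values should agree to within the quadrature error, which answers the original puzzlement: the integrand is not $F_{T_n}$ itself but the shifted CDF of $T_n-T_1$, and averaging it against $F_{T_1}$ reassembles $F_{T_n}$.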