I'm teaching myself continuous-time Markov chains out of J. R. Norris's book Markov Chains. He gives the transition-probability definition of a Poisson process as follows: $(X_t)_{t\geq 0}$ has stationary, independent increments, and for each $t$, $X_t$ has the Poisson distribution with parameter $\lambda t$.
Now, using this definition (and not the others), it easily follows that the first jump time $J_1$ is exponentially distributed with parameter $\lambda$, which I've shown by noting that $$P(J_1 > t) = P(X_t = 0) = e^{-\lambda t} $$ And, writing the event in terms of increments and using independence and stationarity, I've shown that $$P(t_1 < J_1 \leq t_2 < J_2) = P(X_{t_1} = 0,\ X_{t_2} - X_{t_1} = 1) = P(X_{t_1} = 0)P(X_{t_2 - t_1} = 1) = e^{-\lambda t_1}\lambda(t_2 - t_1)e^{-\lambda (t_2 - t_1)} $$ He now says to deduce that $J_2 - J_1$ is exponentially distributed with parameter $\lambda$ and that $J_2 - J_1$ is independent of $J_1$. I think I'm missing something obvious, but I'm not quite sure how to connect $J_2 - J_1$ to the probability I've shown above so that I can show it's independent of $J_1$. Any suggestions?
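As a sanity check on both formulas, here is a quick simulation built straight from the transition-probability definition: independent Poisson increments on a fine grid (NumPy assumed; the grid step `dt`, horizon `T`, and path count are discretization choices of mine, not part of the problem). No exponential clocks are used, so the check is not circular.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, dt, T = 1.0, 0.01, 5.0            # rate, grid step, time horizon (my choices)
n_paths, n_steps = 10_000, int(T / dt)

# Build X_t from the definition itself: stationary independent increments,
# each increment distributed Poisson(lam * dt).
X = rng.poisson(lam * dt, size=(n_paths, n_steps)).cumsum(axis=1)

# First and second jump times on the grid: the right endpoint of the step
# where the count first reaches 1 (resp. 2); +inf if no such jump before T.
J1 = np.where(X[:, -1] >= 1, (np.argmax(X >= 1, axis=1) + 1) * dt, np.inf)
J2 = np.where(X[:, -1] >= 2, (np.argmax(X >= 2, axis=1) + 1) * dt, np.inf)

# P(J1 > 1) should be close to exp(-lam * 1), and the strip probability
# P(0.5 < J1 <= 1 < J2) close to lam * (1 - 0.5) * exp(-lam * (1 - 0.5)) * exp(-lam * 0.5).
est_tail = (J1 > 1.0).mean()
est_strip = ((J1 > 0.5) & (J1 <= 1.0) & (J2 > 1.0)).mean()
```

With $\lambda=1$ the targets are $e^{-1}\approx 0.368$ and $0.5\,e^{-1}\approx 0.184$, and the estimates land within Monte Carlo error of both.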
If you've proved that $$P(t_1<J_1\le t_2<J_2)=e^{-\lambda t_1}\lambda(t_2-t_1)e^{-\lambda(t_2-t_1)}=\lambda(t_2-t_1)e^{-\lambda t_2},\tag1$$ then you know the probability that the pair $(J_1, J_2)$ lives in a semi-infinite strip $(t_1, t_2] \times (t_2,\infty)$ in the plane. The name of the game is to derive from (1) a series of useful probabilities for additional regions.
First, by setting $t_1=0$ in (1) you also know $$P(J_1\le t_2<J_2)=\lambda t_2e^{-\lambda t_2}.\tag2$$ Subtract (1) from (2) to find for $t_2> t_1$ $$P(J_1\le t_1, J_2>t_2)=\lambda e^{-\lambda t_2}[t_2-(t_2-t_1)]=\lambda t_1e^{-\lambda t_2}.\tag3$$ Develop (3) further by subtraction: If $t_0<t_1<t_2$, then $$P(t_0<J_1\le t_1, J_2>t_2)=\lambda t_1e^{-\lambda t_2}-\lambda t_0e^{-\lambda t_2}=\lambda(t_1-t_0)e^{-\lambda t_2}.\tag4$$ At this point we've obtained the probability that $(J_1, J_2)$ lives in a strip $(t_0,t_1]\times (t_2,\infty)$. (Note that (4) can be derived from first principles, similarly to (1).)
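This subtraction bookkeeping is easy to mis-sign, so here is a plain-Python spot-check of the closed forms (3) and (4) (the rate and test points are arbitrary choices of mine):

```python
import math

lam = 1.3  # arbitrary rate for the spot-check

def p1(t1, t2):
    # (1): P(t1 < J1 <= t2 < J2)
    return lam * (t2 - t1) * math.exp(-lam * t2)

def p3(t1, t2):
    # (3) = (2) - (1), where (2) is (1) evaluated at t1 = 0
    return p1(0.0, t2) - p1(t1, t2)

def p4(t0, t1, t2):
    # (4) = difference of two instances of (3)
    return p3(t1, t2) - p3(t0, t2)

# The subtractions reproduce the claimed closed forms at several points.
for (t0, t1, t2) in [(0.2, 0.5, 1.0), (0.1, 0.9, 2.5), (1.0, 1.5, 3.0)]:
    assert math.isclose(p3(t1, t2), lam * t1 * math.exp(-lam * t2))
    assert math.isclose(p4(t0, t1, t2), lam * (t1 - t0) * math.exp(-lam * t2))
```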
The next step is to write the probability $P(J_1>t, J_2-J_1>s)$ as approximately the sum of many small probabilities like those in (4), by carving up the region in the plane where $J_1>t$ and $J_2-J_1>s$ into thin, tall slices. In the limit (and this argument can be made rigorous) you should find $$P(J_1>t, J_2-J_1>s) = e^{-\lambda(t+s)}.\tag5$$ Deduce from the form of (5) that $J_1$ and $J_2-J_1$ are independent.
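The carving argument can be run numerically: sum the strip probabilities (4), taking $t_0=u$, $t_1=u+\mathrm{d}u$, $t_2=u+s$ for thin slices with left endpoints $u>t$, and the sum lands on $e^{-\lambda(t+s)}$. (NumPy assumed; the slice width and truncation point are my choices.)

```python
import numpy as np

lam, t, s = 1.0, 0.7, 1.3
du = 1e-4                     # slice width; smaller -> better approximation
u = np.arange(t, 60.0, du)    # left endpoints; the tail beyond 60 is negligible

# Each thin slice (u, u+du] x (u+s, inf) has probability ~ lam*du*exp(-lam*(u+s))
# by formula (4) with t0 = u, t1 = u + du, t2 = u + s.
strips = lam * du * np.exp(-lam * (u + s))
approx = strips.sum()

exact = np.exp(-lam * (t + s))  # the claimed limit (5)
```

The Riemann sum agrees with $e^{-\lambda(t+s)}$ to within $O(\mathrm{d}u)$, and since $e^{-\lambda(t+s)} = e^{-\lambda t}\,e^{-\lambda s}$, the product form behind the independence claim is visible directly.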
Another approach is to derive (3), then compute the joint CDF for $(J_1,J_2)$ using $$P(J_1\le t_1, J_2\le t_2)=P(J_1\le t_1) - P(J_1\le t_1,J_2 > t_2),$$ then differentiate to obtain the joint PDF for $(J_1,J_2)$, and finally use the change of variables formula to find the joint PDF for $(J_1, J_2-J_1)$.
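For concreteness, a sketch of that computation (taking $0<t_1<t_2$ throughout, and writing $u=j_1$, $v=j_2-j_1$):

```latex
F(t_1,t_2) = P(J_1\le t_1,\, J_2\le t_2)
           = \bigl(1 - e^{-\lambda t_1}\bigr) - \lambda t_1 e^{-\lambda t_2},
\qquad 0 < t_1 < t_2,

f_{J_1,J_2}(t_1,t_2)
  = \frac{\partial^2 F}{\partial t_1\,\partial t_2}
  = \lambda^2 e^{-\lambda t_2},
\qquad 0 < t_1 < t_2.

% The map (u,v)\mapsto(t_1,t_2)=(u,\,u+v) has Jacobian 1, so
f_{J_1,\,J_2-J_1}(u,v)
  = \lambda^2 e^{-\lambda(u+v)}
  = \bigl(\lambda e^{-\lambda u}\bigr)\bigl(\lambda e^{-\lambda v}\bigr),
\qquad u,v > 0.
```

Since the joint density factorizes into a function of $u$ times a function of $v$, each an $\operatorname{Exp}(\lambda)$ density, $J_1$ and $J_2-J_1$ are independent with the claimed distributions.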