Let $\sigma>0$ and let $\mathcal N_{\sigma^2}$ denote the normal distribution kernel. If $X_{i-1}$ is a real-valued random variable and $(X_{i-1},X_i)\sim\mathcal L(X_{i-1})\otimes\mathcal N_{\sigma^2}$, i.e. $\operatorname P\left[X_i\in\;\cdot\;\mid X_{i-1}\right]=\mathcal N_{\sigma^2}(X_{i-1},\;\cdot\;)$, then $(X_{i-1},X_i-X_{i-1})\sim\mathcal L(X_{i-1})\otimes\mathcal N_{\sigma^2}(0,\;\cdot\;)$. In particular, $\xi_i:=X_i-X_{i-1}\sim\mathcal N_{\sigma^2}(0,\;\cdot\;)$, and $X_{i-1}$ and $\xi_i$ are independent.
How can we conclude that $\xi_1+\xi_2\sim\mathcal N_{2\sigma^2}(0,\;\cdot\;)$?
Remark: What I actually want to do is the following: starting from an initial value $x_0$, I consecutively sample $x_i\sim\mathcal N_{\sigma^2}(x_{i-1},\;\cdot\;)$. If the conclusion in the question were correct, I could obtain $x_n$ from $x_0$ directly by sampling $x_n\sim\mathcal N_{n\sigma^2}(x_0,\;\cdot\;)$ instead of performing all the intermediate sampling steps.
Can we show the desired result in general or am I missing an independence assumption which is implicit in the described sampling scheme?
In short: You are talking about a Gaussian random walk, and the core proof is here. If you only need to sample $X_n$, then you can skip the intermediate steps.
On the sum of independent normally distributed random variables: As you already noted, stochastic independence is critical. Without independence there is a counter-example: if $X \sim \mathcal N(0,\sigma^2)$ and $Y := -X$, then $Y \sim \mathcal N(0,\sigma^2)$ as well by symmetry, but $X+Y=0$ almost surely, which is certainly not $\mathcal N(0,2\sigma^2)$.
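The counter-example can be checked numerically. A minimal sketch (the choice $\sigma=1$ and the sample size are mine, for illustration only):

```python
import random
import statistics

# Counter-example: X ~ N(0, 1) and Y := -X is also N(0, 1) by symmetry,
# but X and Y are maximally dependent, so X + Y = 0 almost surely
# instead of N(0, 2).
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(10_000)]
ys = [-x for x in xs]              # marginally N(0, 1), but not independent of X
sums = [x + y for x, y in zip(xs, ys)]

print(statistics.pstdev(ys))       # close to 1: Y really is N(0, 1) marginally
print(max(abs(s) for s in sums))   # 0: the sum is degenerate, not N(0, 2)
```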
However, if $X$ and $Y$ are independent, then the sum fulfills $X + Y \sim \mathcal N(0,2 \sigma^2)$; see here.
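One standard way to see this (a sketch via characteristic functions; the linked proof may argue differently) is to use independence to factor the characteristic function:
$$\varphi_{X+Y}(t)=\operatorname E\left[e^{\mathrm it(X+Y)}\right]\overset{\text{indep.}}{=}\varphi_X(t)\,\varphi_Y(t)=e^{-\sigma^2t^2/2}\cdot e^{-\sigma^2t^2/2}=e^{-2\sigma^2t^2/2},$$
and the right-hand side is exactly the characteristic function of $\mathcal N(0,2\sigma^2)$.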
Therefore, if $\xi_1$ and $\xi_2$ are independent, then the claim from your question is true.
About your remark: The process you describe is a stochastic process, in fact a discrete-time martingale (for $x_0=0$), and it is helpful to view it as a martingale transformation. However, you don't need to know what that is to understand the rest.
Let $\xi_1,\xi_2,\dots$ be stochastically independent samples from the normal distribution $\mathcal N(0,\sigma^2)$.
For a fixed initial point $x_0 \in \mathbb R$, your iteration is equivalent to $$x_n = x_{n-1} + \xi_{n}, \quad \text{for all } n = 1,2,\dots$$
We could also write $$ x_n = x_0 + \sum_{i=1}^n \xi_i. $$
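The iteration and its telescoped form can be sketched as follows (the concrete values $x_0=0$, $\sigma=1$, $n=5$ are illustrative choices, not from the question; `random.gauss(mu, sigma)` takes the standard deviation, not the variance):

```python
import random

random.seed(42)
x0, sigma, n = 0.0, 1.0, 5

x = x0
increments = []
for _ in range(n):
    xi = random.gauss(0.0, sigma)  # xi_i ~ N(0, sigma^2), independent draws
    increments.append(xi)
    x = x + xi                     # x_i = x_{i-1} + xi_i

# The telescoped form gives the same value: x_n = x_0 + sum of increments.
print(x, x0 + sum(increments))
```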
If you want to translate this into a theoretical object, we would consider $n$ independent random variables $\Xi_i$ which are all identically distributed with $\Xi_i \sim \mathcal N(0,\sigma^2)$.
Now we can transform the sequence of random increments $(\Xi_i)_{i=1,\dots,n}$ into a random variable which describes the $n$th iterate $$ X_n = x_0 + \sum_{i=1}^n \Xi_i. $$ (It is often useful to construct a stochastic process by transforming a simpler one; here we transform $(\Xi_i)_{i\in \mathbb N}$ into $(X_i)_{i\in \mathbb N}$, which is a martingale if w.l.o.g. $x_0=0$, the increments $(\Xi_i)_{i\in \mathbb N}$ forming a martingale difference sequence.)
You already pointed out the essential independence, i.e. $X_{n-1}$ and $\Xi_n$ are independent! This allows us to conclude by induction that $$X_1 = x_0 + \Xi_1 \sim \mathcal N(x_0,\sigma^2),\\ X_2 = X_1 + \Xi_2 \sim \mathcal N(x_0,2\sigma^2),\\ X_3 = X_2 + \Xi_3 \sim \mathcal N(x_0,3\sigma^2),\\ \dots$$
For each step we use the result about the sum of independent normally distributed random variables, and we obtain your claim $X_n \sim \mathcal N(x_0,n \sigma^2)$.
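This is exactly the shortcut from your remark, and it can be checked empirically: iterating $n$ Gaussian steps and sampling once from $\mathcal N(x_0,n\sigma^2)$ should agree in distribution. A sketch (all parameter values are illustrative assumptions):

```python
import random
import statistics

random.seed(1)
x0, sigma, n, trials = 2.0, 0.5, 10, 20_000

def iterated(x0, sigma, n):
    """Apply n intermediate sampling steps x_i ~ N(x_{i-1}, sigma^2)."""
    x = x0
    for _ in range(n):
        x += random.gauss(0.0, sigma)
    return x

iter_samples = [iterated(x0, sigma, n) for _ in range(trials)]
# Direct shortcut: one draw from N(x0, n * sigma^2), i.e. std dev sigma * sqrt(n).
direct_samples = [random.gauss(x0, sigma * n ** 0.5) for _ in range(trials)]

# Both schemes should show mean ~ x0 = 2.0 and variance ~ n * sigma^2 = 2.5.
print(statistics.mean(iter_samples), statistics.variance(iter_samples))
print(statistics.mean(direct_samples), statistics.variance(direct_samples))
```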
Note: My answer uses different notation from yours, since I am not used to working directly with the kernel of the normal distribution.