In statistics we describe a sample as $X_1,\dots,X_n$ i.i.d. random variables, where $X_i\colon (\Omega,\mathcal{A},P)\rightarrow (\mathbb{R},\mathcal{B}(\mathbb{R}))$. It is clear to me that identically distributed does not mean that the random variables are equal. However, I don't see why they should not be equal when drawn as a random sample.
Suppose I conduct the same fair coin flip experiment twice, meaning $X_1\sim\mathrm{Ber}(p)$ and $X_2\sim\mathrm{Ber}(p)$. In terms of probability, $X_1$ and $X_2$ should be identical mappings, correct? We do the same thing (tossing a coin), just twice, so they cannot differ. I don't understand in what sense they differ from each other and why they generate different $\sigma$-algebras.
My particular problem is the following fallacy: if I assume them to be equal, then $\sigma(X_1,X_2)=\sigma(X_1)$, which leads to nonsensical results.
No.
If you model two flips of the same coin, then we usually go for the outcome space $\Omega=\{T,H\}^2$.
We equip it with the $\sigma$-algebra $\wp(\Omega)$ and a suitable probability measure $P$, which is determined by its values on the singletons $\{\omega\}$.
Since we are dealing with the same coin, we require: $$P(\{(H,H),(H,T)\})=P(\{(H,H),(T,H)\})$$ In words: the probability of heads on the first flip is the same as the probability of heads on the second flip.
Then we have the random variables $X_1$ and $X_2$ prescribed by $\omega=(\omega_1,\omega_2)\mapsto\omega_1$ and $\omega=(\omega_1,\omega_2)\mapsto\omega_2$ respectively.
These functions do not have $\mathbb R$ as codomain (as is usually required for a random variable), but that is just a technical issue that can be solved by, e.g., identifying $H$ with $1$ and $T$ with $0$.
Then $X_1,X_2$ are i.i.d., but we do not have $X_1=X_2$: for instance, $X_1((H,T))=1$ while $X_2((H,T))=0$.
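The construction above can be checked concretely. Here is a small sketch (my own illustration, not part of the original answer) that builds $\Omega=\{0,1\}^2$ with independent flips of probability $p$, defines $X_1,X_2$ as coordinate projections, and verifies that they are identically distributed yet not equal as functions:

```python
from itertools import product

# Outcome space for two flips of the same coin: Omega = {T, H}^2,
# identifying H with 1 and T with 0 as in the answer.
omega_space = list(product([0, 1], repeat=2))

p = 0.5  # probability of heads (assumed fair here)

def P(omega):
    """Probability of the singleton {omega}: independent flips, same p."""
    return (p if omega[0] == 1 else 1 - p) * (p if omega[1] == 1 else 1 - p)

# X1 and X2 are the coordinate projections.
def X1(omega): return omega[0]
def X2(omega): return omega[1]

# Identically distributed: P(X1 = 1) equals P(X2 = 1).
p_X1_heads = sum(P(w) for w in omega_space if X1(w) == 1)
p_X2_heads = sum(P(w) for w in omega_space if X2(w) == 1)
assert p_X1_heads == p_X2_heads == p

# But not equal as functions: they disagree on the outcome (1, 0).
assert X1((1, 0)) == 1 and X2((1, 0)) == 0
```

The point the code makes explicit: equality in distribution is a statement about the induced measures on $\{0,1\}$, while equality of random variables is pointwise equality of functions on $\Omega$, and the projections clearly differ on $(H,T)$ and $(T,H)$.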
Final remark: if we are dealing with two coins that are each flipped once, and the probability of heads is the same for both coins, then we use this model as well.