I'm currently trying to understand the proof that probability distributions are uniquely defined by their characteristic functions.
The relevant proposition begins as follows:
> **Proposition 10.12.** Let $P$ be a probability distribution on $\mathbb{R}^{d}$. Let $P^{\sigma} = P * \mathcal{N}\left(0, \sigma^{2} I\right) = \mathcal{L}(X + \sigma Z)$, where $X, Z$ are independent with $\mathcal{L}(X) = P$ and $\mathcal{L}(Z) = \mathcal{N}\left(0, \sigma^{2} I\right)$, $\sigma > 0$.
For two independent random variables $X$ and $Z$, the law of their sum is the convolution of their laws:
$$ P(X+Z\in B) = P_X*P_Z(B)$$
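To convince myself of this convolution identity, I ran a quick numerical sanity check (my own example, not from the book, using $X \sim \mathrm{Exp}(1)$ and $Z \sim \mathcal{N}(0,1)$): the Monte Carlo estimate of $P(X+Z \le b)$ should match the integral $\int F_X(b-z)\, dP_Z(z)$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent X ~ Exp(1) and Z ~ N(0, 1); by the convolution formula,
# P(X + Z <= b) should equal the integral of F_X(b - z) against the
# N(0, 1) density.
n = 200_000
x = rng.exponential(1.0, size=n)   # samples of X
z = rng.normal(0.0, 1.0, size=n)   # samples of Z, independent of X

b = 1.0
empirical = np.mean(x + z <= b)    # Monte Carlo estimate of P(X + Z <= b)

# Numerical convolution: Riemann sum of F_X(b - z) * phi(z) over a grid.
zs = np.linspace(-8.0, 8.0, 4001)
F_X = np.where(zs < b, 1.0 - np.exp(-(b - zs)), 0.0)  # Exp(1) cdf at b - z
phi = np.exp(-zs**2 / 2.0) / np.sqrt(2.0 * np.pi)     # N(0, 1) density
convolved = float(np.sum(F_X * phi) * (zs[1] - zs[0]))

print(empirical, convolved)  # the two estimates should agree closely
```

The two numbers agree to within Monte Carlo error, which is consistent with $\mathcal{L}(X+Z) = P_X * P_Z$.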
I don't understand why, given that $\mathcal{L}(Z)=\mathcal{N}\left(0, \sigma^{2} I\right)$ and $\mathcal{L}(X)=P$, the proposition writes $P^\sigma = \mathcal{L}(X+\sigma Z)$ rather than $\mathcal{L}(X+Z)$.
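To make my confusion concrete, here is the scaling computation as I understand it (my own reasoning, so I may be missing something). If $\mathcal{L}(Z)=\mathcal{N}\left(0, \sigma^{2} I\right)$, then scaling by $\sigma$ multiplies the covariance by $\sigma^2$:

$$\mathcal{L}(\sigma Z) = \mathcal{N}\left(0, \sigma^{4} I\right),$$

so the convolution identity would give

$$\mathcal{L}(X + \sigma Z) = P * \mathcal{N}\left(0, \sigma^{4} I\right),$$

which differs from $P * \mathcal{N}\left(0, \sigma^{2} I\right)$ unless $\sigma = 1$, whereas $\mathcal{L}(X + Z) = P * \mathcal{N}\left(0, \sigma^{2} I\right)$ directly. Is there a typo in the proposition (e.g. should it be $\mathcal{L}(Z)=\mathcal{N}(0, I)$), or am I misreading something?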