A distribution (generalized function) is an element of the dual space of $$S=\{f\in C^{\infty}(\mathbb{R})\colon \|f\|_{\alpha,\beta}<\infty \text{ for all } \alpha ,\beta\},$$ where $\|f\|_{\alpha,\beta}=\sup_{x\in \mathbb{R}}|x^{\alpha} f^{(\beta)}(x)|$ and $\alpha,\beta$ range over the nonnegative integers. We know that every probability measure $\mu$ gives an element of $S^*$; more precisely, $\mu$ induces the linear functional $L_{\mu}\colon S\rightarrow \mathbb{R}$ defined by $L_{\mu}(f)=\int_{-\infty}^{\infty}f\,d\mu$.
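As a concrete sanity check of how $L_\mu$ acts, here is a small numeric sketch (my own illustration, not part of the question): take $\mu$ to be the standard normal measure and the Schwartz function $f(x)=e^{-x^2}$, for which $L_\mu(f)=\int e^{-x^2}\,d\mu=1/\sqrt{3}$ in closed form.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative example: the functional L_mu induced by the standard
# normal measure mu, applied to the Schwartz function f(x) = exp(-x^2).
f = lambda x: np.exp(-x**2)
density = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) density

# L_mu(f) = integral of f against the measure mu
value, _ = quad(lambda x: f(x) * density(x), -np.inf, np.inf)

# Closed form: (1/sqrt(2*pi)) * integral exp(-3x^2/2) dx = 1/sqrt(3)
print(value, 1 / np.sqrt(3))
```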
In this sense, a probability measure is just a linear functional $L\colon S\cup\{1\}\rightarrow \mathbb{R}$ with $L(1)=1$.
By the Riesz Representation Theorem, any such linear functional (subject to a few technical considerations, such as positivity and continuity) has a unique associated measure. So if $L$ is any distribution on $S\cup \{1\}$ with $L(1)=1$, it is a probability distribution.
Is this correct? Is a "probability distribution" (in the sense of the Radon–Nikodym derivative of a probability measure with respect to Lebesgue measure) really just a normalized Schwartz distribution, or at least uniquely associated with one?
There is an issue here: $\mathcal{S}\cup\{1\}$ is not a vector space, so we cannot talk about linear functionals on $\mathcal{S}\cup \{1\}$. For instance, if $f\in\mathcal{S}\setminus\{0\}$, then $f+1\notin\mathcal{S}\cup \{1\}$.
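To see numerically why $f+1$ leaves the Schwartz class, note that the seminorm $\|f+1\|_{1,0}=\sup_x |x\,(f(x)+1)|$ is infinite: the "+1" destroys the decay. A quick sketch (taking $f(x)=e^{-x^2}$ as a sample Schwartz function):

```python
import numpy as np

# f is Schwartz, but g = f + 1 is not: |x * g(x)| ~ |x| as |x| -> infinity,
# so the seminorm ||g||_{1,0} = sup_x |x * g(x)| is infinite.
f = lambda x: np.exp(-x**2)

for x in [10.0, 100.0, 1000.0]:
    print(x, abs(x * (f(x) + 1.0)))  # grows without bound
```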
And even if we repair the question by taking the vector space generated by $\mathcal{S}$ and $1$ (with a suitable topology), we could still define something like $$L:=\delta_1+\delta_2-\delta_3,$$ where $\delta_a$ denotes the Dirac delta at $a$. This is a linear functional on that space with $L(1)=1$, but I am sure you will agree it is not a probability measure.