The Normal and Cauchy distributions belong to the class of distributions known as the stable distributions. For the Normal and the Cauchy, if you design a hierarchical model in which the location parameter is itself distributed like the original distribution, you end up with a member of the same family. For example, if
\begin{align} X \mid \mu &\sim N(\mu,\sigma)\\ \mu &\sim N(\nu, \sigma) \end{align}
then
\begin{equation} X \sim N(\nu,\sqrt{2}\,\sigma) \end{equation}
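As a quick sanity check of this (a minimal numpy sketch; the values $\nu = 3$, $\sigma = 2$, and the sample size are arbitrary choices), the compound sample should have mean $\nu$ and standard deviation $\sqrt{\sigma^2 + \sigma^2} = \sqrt{2}\,\sigma$, since the two Normal variances add:

```python
import numpy as np

# Compound two Normals: mu ~ N(nu, sigma), then X | mu ~ N(mu, sigma).
# nu, sigma, and n are arbitrary illustration values.
rng = np.random.default_rng(0)
nu, sigma, n = 3.0, 2.0, 1_000_000

mu = rng.normal(nu, sigma, size=n)   # mu ~ N(nu, sigma)
x = rng.normal(mu, sigma)            # X | mu ~ N(mu, sigma), vectorized over mu

print(x.mean())   # ~ nu = 3.0
print(x.std())    # ~ sqrt(2)*sigma ~ 2.828
```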
What I would like to determine is to which set of stable distributions this property applies, or whether the Normal and the Cauchy are the only members of the stable family for which it holds. The density of a stable distribution is obtained by inverting its characteristic function:
\begin{equation} f(x) = \frac{1}{2\pi}\int_{-\infty}^\infty \varphi(t) e^{-ixt} dt \end{equation}
where
\begin{equation} \varphi(t) = e^{it\mu - |\gamma t|^\alpha \left(1-i\beta \text{sgn}(t) \Phi(t)\right)} \end{equation}
and
\begin{equation} \Phi(t) = \begin{cases} \tan \frac{\pi \alpha}{2} & \alpha \neq 1 \\ -\frac{2}{\pi} \ln |\gamma t| & \alpha = 1 \end{cases} \end{equation}
For simplicity, I attempted to begin solving this problem by setting $\beta = 0$ and $\gamma = 1$, which leaves only the parameters $\alpha$ and $\mu$ for this exercise. Once I had an approach figured out, I planned to reintroduce $\beta$ and $\gamma$.
We can define the density of the random variate $X$ as $p(x; \alpha, \beta, \gamma, \mu)$. Applying the hierarchical approach, we obtain:
\begin{equation} p(x;\alpha,\mu) =\frac{1}{2\pi}\int_{-\infty}^\infty e^{is\mu - |s|^\alpha} e^{-ixs}\, ds \end{equation}
We also want $\mu \sim \text{Stable}(\alpha, \omega)$ with unit scale. Therefore, the density of $\mu$ can be written as:
\begin{equation} g(\mu;\alpha,\omega) = \frac{1}{2\pi}\int_{-\infty}^\infty e^{it\omega - |t|^\alpha} e^{-i\mu t}\, dt \end{equation}
We therefore want to determine if the following equation reduces to a form of the Stable distribution:
\begin{equation} h(x; \alpha, \omega) = \int_{-\infty}^\infty p(x;\alpha,\mu)\, g(\mu;\alpha,\omega)\, d\mu \end{equation}
We can now expand this and group the $\mu$ terms for further investigation:
\begin{align} h(x; \alpha, \omega) &= \frac{1}{4\pi^2}\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty \varphi(s) \varphi(t) e^{-ixs-i\mu t}\, ds\, dt\, d\mu\\ &= \frac{1}{4\pi^2}\int_{-\infty}^\infty \int_{-\infty}^\infty e^{it\omega - |s|^\alpha - |t|^\alpha - ixs} \int_{-\infty}^\infty e^{i\mu (s-t)}\, d\mu\, ds\, dt \\ &\overset{?}{=} \frac{1}{4\pi^2}\int_{-\infty}^\infty \int_{-\infty}^\infty e^{it\omega - |s|^\alpha - |t|^\alpha - ixs}\, 2\pi\, \delta(s-t)\, ds\, dt \end{align}
At this point, I am not entirely sure how to proceed. I do know that $\gamma$ changes and is no longer 1.
I have run multiple simulations for this setup and the results suggest that the answer is indeed another stable distribution. However, empirical results are not proof.
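For reference, here is a sketch of the kind of simulation I mean (numpy-only, using the Chambers–Mallows–Stuck formula for the symmetric $\beta = 0$ case; the choices $\alpha = 1.5$, $\omega = 0$, and the sample size are arbitrary). If the conjecture holds, the hierarchical draw should match a direct draw from a stable with scale $2^{1/\alpha}$, since unit scales combine as $(1^\alpha + 1^\alpha)^{1/\alpha}$:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Standard symmetric alpha-stable sample (cf e^{-|t|^alpha}) via the
    Chambers-Mallows-Stuck formula, beta = 0 case."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(42)
alpha, omega, n = 1.5, 0.0, 400_000

# Hierarchical draw: mu ~ Stable(alpha, omega), then X | mu ~ Stable(alpha, mu).
mu = omega + symmetric_stable(alpha, n, rng)
x_hier = mu + symmetric_stable(alpha, n, rng)

# Direct draw from the conjectured marginal: Stable(alpha, omega, scale 2^(1/alpha)).
x_direct = omega + 2 ** (1 / alpha) * symmetric_stable(alpha, n, rng)

# Compare robust summaries (heavy tails make sample moments useless here).
q = [0.25, 0.5, 0.75]
print(np.quantile(x_hier, q))
print(np.quantile(x_direct, q))
```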
I would appreciate any insights the community could offer. Thanks!
Maybe I'm missing something here, but this seems trivial.
Let $A(\mu)$ be some distribution with location parameter $\mu$ and let $B$ be some other distribution.
The statement $X \sim A(\mu)$ is equivalent to the statement $X \sim A(0) + \mu$, because that's the definition of a location parameter.
So if $\mu \sim B$, we have $X \sim A(0) + B$. See what's happening here? You're just adding two distributions.
So if $A$ and $B$ are stable distributions of the same kind (same $\alpha$ and $\beta$; they may differ only in scale and location), then it follows directly from the definition of stable distributions that $X$ has the same distribution, up to scale and location.
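To make this concrete in the $\alpha = 1$ case (a numpy sketch; the values of $\nu$, $\gamma$, and the sample size are arbitrary): adding a $\text{Cauchy}(\nu, \gamma)$ location to $A(0) = \text{Cauchy}(0, \gamma)$ gives $\text{Cauchy}(\nu, 2\gamma)$, because for $\alpha = 1$ the scale parameters simply add:

```python
import numpy as np

# X ~ A(0) + B with A(0) = Cauchy(0, gamma) and B = Cauchy(nu, gamma).
# Cauchy samples via inverse-CDF: loc + scale * tan(pi * (u - 1/2)).
rng = np.random.default_rng(7)
nu, gamma, n = 1.0, 0.5, 1_000_000

a0 = gamma * np.tan(np.pi * (rng.random(n) - 0.5))        # A(0)
b = nu + gamma * np.tan(np.pi * (rng.random(n) - 0.5))    # B
x = a0 + b                                                # X ~ A(0) + B

# Cauchy(loc, scale) has quartiles loc -/+ scale, so Cauchy(1, 1) here:
print(np.quantile(x, [0.25, 0.5, 0.75]))   # ~ [0.0, 1.0, 2.0]
```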