While working on part of my thesis I came across the following problem:
Let $\Theta \sim \nu(\theta)$ be a discrete random variable with PMF $\nu(\theta)$, which serves as the prior over the mean. In our setting $X \sim N(\mu,1)$, where $\mu$ depends on the hypothesis $H_i$: $$H_0: X\mid\theta_0 \sim N(\theta_0,1) \ \ \ \ \ \ \ , \ \ \ \ \ \ H_1: X\sim f_X(x) = \int_{\mathbb{R}}\phi(x-\theta)\nu(\theta)\,d\theta\ \ , \quad \Theta \sim \nu(\theta)$$
In my problem we are interested in a discrete normal prior with finite support, namely:
$$\theta \in \chi = \{-3, -2.8, -2.6,\dots,2.8, 3 \}$$ $$\mathbb{P}_\Theta(\theta) = \frac{\phi(\theta)}{\sum_{\theta' \in \chi}{\phi(\theta')}} \mathbf{1}(\theta \in \chi)$$
Since the prior $\nu$ is discrete, it can be written as a sum of Dirac deltas, $\nu(\theta) = \sum_{\theta' \in \chi}\mathbb{P}(\Theta=\theta')\,\delta(\theta-\theta')$, so the marginal distribution of $X$ under $H_1$ is: $$(1) \ \ f_X(x) \overset{H_1}{=} \int_{\mathbb{R}}\phi(x-\theta)\nu(\theta)\,d\theta = \int_{\mathbb{R}}\phi(x-\theta)\sum_{\theta'\in\chi}\mathbb{P}(\Theta=\theta')\,\delta(\theta-\theta')\,d\theta = \sum_{\theta\in \chi}{\phi(x-\theta)\,\mathbb{P}(\Theta = \theta)}$$
This is nothing but a weighted mixture of Gaussians.
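As a sanity check, the prior and the mixture density can be built directly in code. This is a minimal sketch (grid spacing and variable names are my own choices, following the definitions above); it verifies that the mixture integrates to one:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# Discrete "normal" prior on the grid chi = {-3, -2.8, ..., 2.8, 3}
chi = np.round(np.arange(-3.0, 3.0 + 1e-9, 0.2), 1)
weights = norm.pdf(chi)
weights /= weights.sum()  # P(Theta = theta), normalized as in the definition

def f_X(x):
    """Marginal density of X under H1: a weighted mixture of Gaussians, eq. (1)."""
    return np.sum(weights * norm.pdf(x - chi))

# The mixture density should integrate to 1
total, _ = quad(f_X, -np.inf, np.inf)
```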
Appendix (optional reading)
I wanted to understand this distribution a little, so I computed its first two moments to get the mean and the variance:
$$(2) \ \ \ \ \mathbb{E}[X] = \int_{-\infty}^{\infty}{xf_X(x)dx}\overset{(1)}{=}\int_{-\infty}^{\infty}{x\sum_{\theta\in\chi}{\phi(x-\theta)P(\Theta = \theta)dx}}$$
$$= \sum_{\theta\in \chi}\int_{-\infty}^{\infty}{x{\phi(x-\theta)P(\Theta = \theta)dx}}=\sum_{\theta\in \chi}P(\Theta = \theta)\mathbb{E}[X\mid\theta] =\sum_{\theta\in \chi}\theta P(\Theta = \theta)$$
Meaning that the expectation of the mixture is the weighted sum of the component means. In our specific case it is $0$, by the symmetry of $\chi$ around zero.
$$(3) \ \ \ \ \mathbb{E}[X^2] = \int_{-\infty}^{\infty}{x^2f_X(x)dx} \overset{(1)}{=}\int_{-\infty}^{\infty}{x^2\sum_{\theta\in \chi}{\phi(x-\theta)P(\Theta = \theta)dx}}$$
$$=\sum_{\theta\in \chi}\int_{-\infty}^{\infty}{x^2{\phi(x-\theta)P(\Theta = \theta)dx}}=\sum_{\theta\in \chi}P(\Theta = \theta)\mathbb{E}[X^2|\theta]$$
$$=\sum_{\theta\in \chi}P(\Theta = \theta)(1^2+\theta^2)$$
$$(4)\ \ \ \ Var(X) \equiv \mathbb{E}[X^2] - \mathbb{E}^2[X]\overset{(2)+(3)}{=} \sum_{\theta\in \chi}P(\Theta = \theta)(1^2+\theta^2) - \Big(\sum_{\theta\in \chi}\theta P(\Theta = \theta)\Big)^2$$
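The closed forms in (2)–(4) can be checked against direct numerical integration of the mixture density. A sketch under the same prior as above (note the variance comes out slightly below $2$, since the truncated, discretized prior has variance a bit below $1$):

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

chi = np.round(np.arange(-3.0, 3.0 + 1e-9, 0.2), 1)
w = norm.pdf(chi)
w /= w.sum()

# Closed-form moments from equations (2)-(4)
mean_closed = np.sum(w * chi)                       # = 0 by symmetry
var_closed = np.sum(w * (1 + chi**2)) - mean_closed**2

# Numerical check: integrate the mixture density directly
f = lambda x: np.sum(w * norm.pdf(x - chi))
m1, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
m2, _ = quad(lambda x: x**2 * f(x), -np.inf, np.inf)
var_num = m2 - m1**2
```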
Now my question is: if I were to take some independent random variables $X_i \overset{\text{ind}}{\sim} N(\theta_i,1)$ and stack them into a vector, it would be distributed: $$\vec{X} \sim N_p(\vec{\theta},I_p)$$ and therefore a linear combination satisfies: $$a^T\vec{X} \sim N(a^T\vec{\theta}, a^Ta)$$ But that is expected behaviour, since I operated on the random variables themselves.
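The linear-combination fact can be illustrated by simulation. The particular $\vec\theta$ and $a$ below are made-up values for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([0.5, -1.0, 2.0])    # hypothetical fixed means
a = np.array([1.0, 2.0, -0.5])        # hypothetical coefficient vector

# X ~ N_p(theta, I_p): independent coordinates X_i ~ N(theta_i, 1)
X = theta + rng.standard_normal((500_000, 3))
y = X @ a                              # linear combination a^T X

# Theory predicts y ~ N(a^T theta, a^T a) = N(-2.5, 5.25)
```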
By plotting the marginal distribution and studying its moments, I just discovered that it behaves approximately like $N(0,2)$, which would be exactly the marginal distribution if my prior were the continuous $N(0,1)$. That is understandable, but it still confuses me a little that the linear combination I'm looking at is a weighted sum of densities, not of random variables.
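One way to check the $N(0,2)$ behaviour by simulation: under the hierarchical model, drawing $\Theta$ from the prior and then $X\mid\Theta \sim N(\Theta,1)$ samples exactly from the mixture in (1). A sketch (sample size and seed are arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
chi = np.round(np.arange(-3.0, 3.0 + 1e-9, 0.2), 1)
w = norm.pdf(chi)
w /= w.sum()

# Hierarchical sampling: theta ~ prior, then X | theta ~ N(theta, 1)
theta = rng.choice(chi, size=200_000, p=w)
x = theta + rng.standard_normal(theta.size)

# Sample mean should be near 0 and sample variance near 2
# (slightly below 2, since the truncated prior's variance is below 1)
```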
I tried to read about this, but I did not come across an answer to this specific situation. To me it seems as if the weighted sum of the densities acts like the density of a weighted sum of the RVs:
$$a^T\vec{f}(x) = f_{a^TX}(x)$$
I wonder if that is really the case, since the normal distribution has many surprisingly convenient properties. If anyone can help me get a little clarification, that would be much appreciated.