A Gambler Model: Analytic pdf of Gaussian mixture


Gambling Process

The basic data generating process of a gambling session is as follows: the agent bets $b_i$ in period $i$ and earns the outcome $y_i = g(x_i, b_i) := b_ix_i+\mu$, with $x_i \sim \mathcal{N}(\mu, 1)$, $i=1,2$, i.i.d. Since $g$ is monotone in $x_i$ for fixed $b>0$, we have $g^{-1}(y) = \frac{y-\mu}{b}$.

In the first period, the gambler always bets $b_1=1$. If $y_1>0$, he stops playing, and his second period outcome is $y_2 = 0$. However, if $y_1\le 0$, he bets $b_2 = (-\alpha y_1)$ in the second period, receiving outcome $y_2 = b_2x_2 + \mu$. We are interested in the final outcome $$R = y_1+y_2$$
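As a sanity check on the process definition, it can be simulated directly. The following is a minimal stdlib-Python sketch (the values $\mu = 0.5$ and $\alpha = 0.8$ are illustrative, not from the question):

```python
import random

def simulate_R(mu, alpha, rng):
    """One draw of the final outcome R = y1 + y2."""
    x1 = rng.gauss(mu, 1.0)
    y1 = 1.0 * x1 + mu            # first-period bet b1 = 1
    if y1 > 0:
        return y1                 # gambler stops: y2 = 0
    b2 = -alpha * y1              # b2 >= 0 because y1 <= 0
    x2 = rng.gauss(mu, 1.0)
    return y1 + b2 * x2 + mu      # y2 = b2*x2 + mu

rng = random.Random(0)
mu, alpha = 0.5, 0.8              # illustrative parameter values
draws = [simulate_R(mu, alpha, rng) for _ in range(200_000)]
mean_R = sum(draws) / len(draws)
print(f"simulated E[R] = {mean_R:.3f}")
```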

Can we characterize this distribution analytically?

One way I thought would be to define

$$R = B\cdot Y_1^+ + (1-B)\left[Y_1^{-}(1-\alpha x_2)+\mu\right]$$ where $B$ is a Bernoulli with $p = \mathbb{P}(Y_1>0)$, $Y_1^+$ is $Y_1$ truncated to $(0, \infty)$, and $Y_1^{-}$ is $Y_1$ truncated to $(-\infty, 0]$. (On $\{y_1\le 0\}$ the process gives $R = y_1 + (-\alpha y_1)x_2 + \mu = y_1(1-\alpha x_2)+\mu$, which is where the second term comes from.)
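This mixture representation can be checked by simulation. Below is a stdlib-Python sketch (illustrative values $\mu=0.5$, $\alpha=0.8$; note that on $\{y_1\le 0\}$ the process gives $R = Y_1(1-\alpha X_2)+\mu$ with $Y_1 = X_1+\mu$, so the truncation here is applied to $Y_1$ and the $\mu$ offsets are kept explicit):

```python
import math
import random

mu, alpha = 0.5, 0.8                      # illustrative parameter values
rng = random.Random(1)

def Phi(z):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

p = Phi(2.0 * mu)                         # P(Y1 > 0) = P(X1 > -mu), X1 ~ N(mu, 1)

def trunc_Y1(positive):
    """Rejection-sample Y1 = X1 + mu restricted to (0, inf) or (-inf, 0]."""
    while True:
        y = rng.gauss(mu, 1.0) + mu
        if (y > 0) == positive:
            return y

def draw_R_mixture():
    if rng.random() < p:                  # B = 1: gambler stops after period 1
        return trunc_Y1(True)
    x2 = rng.gauss(mu, 1.0)               # B = 0: second bet b2 = -alpha*y1
    return trunc_Y1(False) * (1.0 - alpha * x2) + mu

sample = [draw_R_mixture() for _ in range(200_000)]
mean_mix = sum(sample) / len(sample)
print(f"mixture-representation E[R] = {mean_mix:.3f}")
```

The mean from this sampler should agree with a direct simulation of the two-period process.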

Is there hope of deriving this distribution analytically? In particular, can the integral arising from the second term in the expression above be solved?

What I've done

What I have developed so far does not quite use the mixture approach above directly; instead it uses the transformation of random variables (although I think the two are analogous).

Let $R = y_1+y_2$, let $f_i$ be the pdf of $y_i$, and let $\phi(\cdot)$ be the standard normal pdf.

\begin{equation*} f_1(y_1) = \phi(g^{-1}(y_1))\frac{d}{dy_1}g^{-1}(y_1) = \phi(y_1-\mu) \end{equation*} \begin{align*} f_2(y_2) &= \phi(g^{-1}(y_2))\frac{d}{dy_2}g^{-1}(y_2) \\ &= \phi\left(\frac{y_2-\mu}{b_2}\right)\frac{1}{b_2} \end{align*} (Here $f_2$ is the density of $y_2$ conditional on $y_1$, which enters through $b_2 = -\alpha y_1 > 0$.)

We have two cases, $R\ge 0$ and $R<0$. For $k\ge 0$ (the constant $P(R<0)$ drops out upon differentiation, so it suffices to track $P(0\le R\le k)$): \begin{align*} P(0\le R\le k) =& \ P(y_1 \in [0,k])+P[(y_1 \le 0) \cap (y_1+y_2 \in [0,k])] \\ =& \int_0^kf_1(y_1)\,dy_1 + \int_{-\infty}^0\int_{-y_1}^{k-y_1}f_2(y_2)f_1(y_1)\,dy_2\,dy_1 \\ & \left[\frac{d}{dk}\right] \implies f_R(k)^+ = f_1(k) + \underbrace{\int_{-\infty}^0 f_2(k-y_1)f_1(y_1)\,dy_1}_{ C(\theta)} \end{align*} Analogously, for $k<0$: \begin{align*} P(R\le k) =& \ P[(y_1 \le 0) \cap (y_1+y_2 \le k)] \\ =& \int_{-\infty}^0\int_{-\infty}^{k-y_1}f_2(y_2)f_1(y_1)\,dy_2\,dy_1 \\ & \left[\frac{d}{dk}\right] \implies f_R(k)^- = \underbrace{\int_{-\infty}^0 f_2(k-y_1)f_1(y_1)\,dy_1}_{ C(\theta)} \end{align*} This $C(\theta)$ (which also depends on $k$) is similar to a convolution, and it is the difficult integral to solve.

Substituting our previous definitions, we get: \begin{align*} f_R(k) = \begin{cases} \phi(k-\mu) + \int_{-\infty}^0\phi\left[\frac{k-y_1-\mu}{-\alpha y_1}\right]\cdot \left(\frac{-1}{\alpha y_1}\right)\cdot \phi(y_1-\mu)dy_1, \quad \text{if} \quad k>0 \\ \int_{-\infty}^0\phi\left[\frac{(k-y_1)-\mu}{-\alpha y_1}\right]\cdot \left(\frac{-1}{\alpha y_1}\right)\cdot \phi(y_1-\mu)dy_1, \quad \text{otherwise} \end{cases} \end{align*} We then scale $R$ by $\frac{1}{\sigma_R}$ and use this normalized random variable instead. That is, our final variable will be $\tilde R = \frac{R}{\sigma_R}$ with pdf: \begin{align*} & f_{\tilde R}(k) = f_R(\sigma_Rk)\sigma_R = \\ & \begin{cases} \sigma_R\left[\phi(\sigma_Rk-\mu) + \int_{-\infty}^0\phi\left[\frac{\sigma_Rk-y_1-\mu}{-\alpha y_1}\right]\cdot \left(\frac{-1}{\alpha y_1}\right)\cdot \phi(y_1-\mu)dy_1\right], \quad \text{if} \quad k>0 \\ \sigma_R\left[\int_{-\infty}^0\phi\left[\frac{(\sigma_Rk-y_1)-\mu}{-\alpha y_1}\right]\cdot \left(\frac{-1}{\alpha y_1}\right)\cdot \phi(y_1-\mu)dy_1\right], \quad \text{otherwise} \end{cases} \end{align*}
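Whether or not $C(\theta)$ has a closed form, the piecewise density can be evaluated numerically. The sketch below (stdlib Python, illustrative values $\mu=0.5$, $\alpha=0.8$) approximates $C$ with a midpoint rule and checks that $f_R$ integrates to roughly one. One caveat: near $k=\mu$ the density appears to have an integrable logarithmic spike, since $b_2\to 0$ as $y_1\to 0^-$ forces $y_2$ toward $\mu$; the total mass is still finite.

```python
import math

mu, alpha = 0.5, 0.8   # illustrative parameter values

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def C(k, n=1500, lo=-10.0, eps=1e-6):
    """Midpoint-rule approximation of the convolution-like term C(theta)."""
    h = (-eps - lo) / n
    total = 0.0
    for i in range(n):
        y1 = lo + (i + 0.5) * h
        b2 = -alpha * y1                  # second-period bet, > 0 since y1 < 0
        total += phi((k - y1 - mu) / b2) / b2 * phi(y1 - mu)
    return total * h

def f_R(k):
    return (phi(k - mu) if k > 0 else 0.0) + C(k)

# f_R should integrate to ~1.  The k-grid is offset so that no node lands
# exactly on k = mu, where the integrand blows up (logarithmically).
n_k = 600
ks = [-12.01 + 24.0 * (i + 0.5) / n_k for i in range(n_k)]
mass = sum(f_R(k) for k in ks) * (24.0 / n_k)
print(f"total mass = {mass:.4f}")
```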

Is there hope of solving that last term $$\left[\int_{-\infty}^0\phi\left[\frac{(\sigma_Rk-y_1)-\mu}{-\alpha y_1}\right]\cdot \left(\frac{-1}{\alpha y_1}\right)\cdot \phi(y_1-\mu)dy_1\right]$$

analytically, either by hand or with the help of some computer algebra system? Incidentally, I was previously using a Student's t distribution instead of a normal, and I don't think there is any hope in that case.

Comments

I want to know whether using the Gaussian distribution (or perhaps some other tractable symmetric distribution) would allow us to find a closed-form expression for the pdf of this distribution. I am not really interested in the pdf per se but in some of the moments (mean, skewness, kurtosis, etc.), so a closed-form expression for these moments would also be fine, even if there is none for the pdf.



Best Answer

I'll just post the calculation of the mean. If I understood correctly, we have $$\begin{aligned}Y_1&=X_1+\mu \\ Y_2&=\begin{cases} 0 & \textrm{if }\,Y_1>0\\ -\alpha Y_1X_2+\mu & \textrm{if }\,Y_1\leq0 \end{cases}\end{aligned}$$ $$X_1,X_2\sim \mathcal{N}(\mu,1),\ \textrm{independent,}$$ thus $$\begin{aligned}Y_1+Y_2&=\begin{cases} X_1+\mu & \textrm{if }\,Y_1>0\\ X_1-\alpha Y_1X_2+2\mu & \textrm{if }\,Y_1\leq0 \end{cases}\\ &=\begin{cases} X_1+\mu & \textrm{if }\,X_1>-\mu\\ X_1-\alpha(X_1+\mu)X_2+2\mu & \textrm{if }\,X_1\leq-\mu \end{cases}\\ &=\begin{cases} X_1+\mu & \textrm{if }\,X_1>-\mu\\ X_1-\alpha X_1X_2-\mu\alpha X_2+2\mu & \textrm{if }\,X_1\leq-\mu \end{cases}\\ &=(X_1+\mu)\mathbb{I}_{\{X_1>-\mu\}}+(X_1-\alpha X_1X_2-\mu\alpha X_2+2\mu)\mathbb{I}_{\{X_1\leq -\mu\}} \end{aligned}$$ So, using the independence of $X_1$ and $X_2$, $$\begin{aligned}E[Y_1+Y_2]={}&E[X_1\mathbb{I}_{\{X_1>-\mu\}}]+\mu P(X_1>-\mu)\\ &+E[X_1\mathbb{I}_{\{X_1\leq -\mu\}}]-\alpha E[X_2]\,E[X_1\mathbb{I}_{\{X_1\leq -\mu\}}]\\ &-\mu\alpha E[X_2]\,P(X_1\leq -\mu)+2\mu P(X_1\leq -\mu)\end{aligned}$$ All these quantities are known in closed form.
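To make this concrete, here is a stdlib-Python sketch (illustrative values $\mu=0.5$, $\alpha=0.8$) that assembles the mean from the standard truncated-normal identities $P(X>c)=1-\Phi(c-\mu)$ and $E[X\mathbb{I}_{\{X>c\}}]=\mu\,(1-\Phi(c-\mu))+\phi(c-\mu)$ for $X\sim\mathcal{N}(\mu,1)$, and cross-checks it by Monte Carlo:

```python
import math
import random

mu, alpha = 0.5, 0.8                      # illustrative parameter values

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

# Truncated-normal pieces for X1 ~ N(mu, 1) with cutoff c = -mu:
#   P(X > c) = 1 - Phi(c - mu),  E[X 1{X > c}] = mu*(1 - Phi(c - mu)) + phi(c - mu)
p_hi = 1.0 - Phi(-2.0 * mu)               # P(X1 > -mu)
p_lo = Phi(-2.0 * mu)                     # P(X1 <= -mu)
m_hi = mu * p_hi + phi(-2.0 * mu)         # E[X1 1{X1 > -mu}]
m_lo = mu - m_hi                          # E[X1 1{X1 <= -mu}]

mean_closed = (m_hi + mu * p_hi
               + m_lo - alpha * mu * m_lo          # E[X2] = mu
               - mu * alpha * mu * p_lo + 2.0 * mu * p_lo)

# Monte Carlo cross-check of E[Y1 + Y2]
rng = random.Random(2)
N = 200_000
acc = 0.0
for _ in range(N):
    y1 = rng.gauss(mu, 1.0) + mu
    acc += y1 if y1 > 0 else y1 * (1.0 - alpha * rng.gauss(mu, 1.0)) + mu
mean_mc = acc / N
print(f"closed form: {mean_closed:.4f}, Monte Carlo: {mean_mc:.4f}")
```

The two numbers should agree up to Monte Carlo error of a few thousandths.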