Exponential PDF Mystery


The envelope function of the family of exponential PDFs of the form

$$f_\lambda(x)=\lambda e^{-\lambda x}$$

is

$$g(x)=\frac{1}{ex}$$

for $x > 0, \lambda > 0$. The point of tangency between $f_\lambda(x)$ and $g(x)$ is

$$\left(\frac{1}{\lambda}, \frac{\lambda}{e}\right)$$

Now suppose that $X$ is an exponentially distributed random variable with a PDF equal to $f_\lambda(x)$. Then

$$E[X] = \frac{1}{\lambda}$$

My question is this: why is the $x$ coordinate of the point of tangency between $f_\lambda(x)$ and $g(x)$ equal to $E[X]$?

Visualization: https://www.desmos.com/calculator/uhllhcd5um
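For a quick numerical sanity check of the tangency claim (a sketch of my own; `f` and `g` simply transcribe the formulas above), we can verify that the curves and their slopes agree at $x = 1/\lambda$, which is also $E[X]$:

```python
import math

# Verify that g(x) = 1/(e x) is tangent to f_lambda(x) = lambda*exp(-lambda*x)
# at x = 1/lambda: values and slopes agree there, and that x equals E[X].

def f(lam, x):
    return lam * math.exp(-lam * x)

def g(x):
    return 1.0 / (math.e * x)

for lam in (0.5, 1.0, 2.0, 3.7):
    x0 = 1.0 / lam                      # candidate point of tangency = E[X]
    assert math.isclose(f(lam, x0), g(x0))          # the curves touch
    h = 1e-6                            # central finite differences for slopes
    df = (f(lam, x0 + h) - f(lam, x0 - h)) / (2 * h)
    dg = (g(x0 + h) - g(x0 - h)) / (2 * h)
    assert math.isclose(df, dg, rel_tol=1e-4)       # the slopes agree
```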

Note: The same pattern holds for a continuous Poisson PDF, though it does not hold for some other PDFs that I checked (such as the half-normal, Gumbel, and Lomax distributions).

Visualization: https://www.desmos.com/calculator/gk2uwaikz2

EDIT:

The Gamma Distribution obeys this rule as well:

Visualization: https://www.desmos.com/calculator/0ix5es6pqu

EDIT #2:

The Chi-squared distribution obeys this rule as well, as a consequence of being a special case of the Gamma distribution. Likewise, since the exponential distribution is also a special case of the Gamma distribution, it is not surprising that it obeys the rule. However, the continuous Poisson distribution is not a special case of the Gamma distribution, yet it obeys the rule; this remains to be explained (as does the reason why the Gamma distribution obeys the rule in the first place).

Visualization: https://www.desmos.com/calculator/oe9sg3mf1q

ANSWER:

The point of tangency is defined by first-order adjacency of the nearby distributions, $$f_{\lambda+d\lambda}(x) \approx f_\lambda(x),$$ i.e. $\partial f_\lambda/\partial\lambda = 0$, which we can solve directly: $$\frac{\partial}{\partial\lambda}\big(\lambda ~e^{-\lambda x}\big) = (1 - \lambda x)~e^{-\lambda x} = 0~~~\Rightarrow~~~1 - \lambda x = 0~~~\Rightarrow~~~x = 1/\lambda.$$ Note that if we use a slightly different parametrization, say $\lambda = \lambda(y)$ for some other parameter $y$, we would still arrive at $x = 1/\lambda(y)$: by the chain rule, $\frac{\partial f}{\partial y} = \frac{\partial f}{\partial\lambda}\,\frac{d\lambda}{dy}$, and dividing out the $\frac{d\lambda}{dy}$ factor recovers the same condition. So for a given one-parameter distribution this is a robust finding with respect to that parameter: if it holds for one parametrization, it holds for the others.
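A quick numerical illustration of both points (my own sketch, not part of the derivation): the $\lambda$-derivative vanishes at $x = 1/\lambda$, and it still vanishes there after an arbitrary reparametrization, here $\lambda = e^y$:

```python
import math

# The envelope condition d f_lambda(x) / d lambda = 0 holds at x = 1/lambda,
# and a reparametrization lambda = exp(y) gives the same stationary point.

def f(lam, x):
    return lam * math.exp(-lam * x)

lam, h = 1.7, 1e-6
x0 = 1.0 / lam

# partial derivative in lambda vanishes at x = 1/lambda
dfdlam = (f(lam + h, x0) - f(lam - h, x0)) / (2 * h)
assert abs(dfdlam) < 1e-6

# reparametrize lambda(y) = exp(y); by the chain rule the derivative in y
# also vanishes at the same x, since dlambda/dy != 0
y = math.log(lam)
dfdy = (f(math.exp(y + h), x0) - f(math.exp(y - h), x0)) / (2 * h)
assert abs(dfdy) < 1e-6
```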

With that said, it does seem roughly coincidental: the parameters used to describe a family of things seem very different from the things themselves. Can we use that to build a counterexample?

Shifting normals

One possibility: consider $s\mapsto \operatorname{Normal}(s, s^2),$ the subfamily of normal distributions whose standard deviations equal their means. Their PDFs are, of course, $$f_s(x) = \frac{1}{\sqrt{2\pi s^2}}\exp\left(-\frac{(x - s)^2}{2s^2}\right)$$ with mean $s$ and standard deviation $s$. After a bit of work, $${\partial f_s\over\partial s} = \frac{1}{\sqrt{2\pi s^2}}\exp\left(-\frac{(x - s)^2}{2s^2}\right)\left({x^2 - s x - s^2\over s^3}\right),$$ which equals zero (for $x > 0$) when $x = \phi~s,$ where $\phi$ is the golden ratio. Of course the golden ratio is not $1$, and thus we do not have $x = E[X]$ as desired.
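A numeric spot-check of this (my sketch, with an arbitrary test value $s = 1.3$): the $s$-derivative of the PDF vanishes at $x = \phi s$ but not at the mean $x = s$:

```python
import math

# For the family Normal(s, s^2), the s-derivative of the PDF vanishes at
# x = phi * s (phi the golden ratio), not at the mean x = s.

def f(s, x):
    return math.exp(-(x - s) ** 2 / (2 * s ** 2)) / math.sqrt(2 * math.pi * s ** 2)

phi = (1 + math.sqrt(5)) / 2
s, h = 1.3, 1e-6

x_env = phi * s
dfds = (f(s + h, x_env) - f(s - h, x_env)) / (2 * h)
assert abs(dfds) < 1e-6          # stationary in s at x = phi*s

dfds_mean = (f(s + h, s) - f(s - h, s)) / (2 * h)
assert abs(dfds_mean) > 1e-3     # but not at the mean x = s
```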

Just to prove visually that this is indeed the proper envelope, here is a mock-up I built out of your existing demos: https://www.desmos.com/calculator/stkuixtk8o , with $y=f_{x/\phi}(x)$ as the envelope.

Now there is an interesting thing here: that expression has two roots which sum to the mean value; the other root is $x = (1-\phi)\, s.$ Can we get rid of that, too?

Half-normals

The half-normal distribution is defined using the standard normal PDF $n(x)$ as $$h_s(x) = \begin{cases} \frac{2~n(x/s)}{s} & \text{if } x > 0,\\ 0 & \text{otherwise.} \end{cases} $$ Here $s$ is no longer the standard deviation but just a parameter.

Working it out, I believe $${\partial h_s \over \partial s} = \frac{2~n(x/s)}{s}\left(\frac{x^2 -s^2}{s^3}\right),$$ and thus on the envelope, $x = s$. But that is not the mean of this distribution, which Wikipedia lists as $s~\sqrt{2/\pi}.$
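This, too, can be spot-checked numerically (my sketch, with an arbitrary test value $s = 0.9$): the $s$-derivative of $h_s$ vanishes at $x = s$ but not at the mean $s\sqrt{2/\pi}$:

```python
import math

# For the half-normal family h_s, the s-derivative vanishes at x = s,
# which differs from the mean s * sqrt(2/pi).

def h(s, x):
    n = math.exp(-((x / s) ** 2) / 2) / math.sqrt(2 * math.pi)  # standard normal PDF
    return 2 * n / s if x > 0 else 0.0

s, eps = 0.9, 1e-6
dhds = (h(s + eps, s) - h(s - eps, s)) / (2 * eps)
assert abs(dhds) < 1e-6                       # envelope at x = s

mean = s * math.sqrt(2 / math.pi)             # mean of the half-normal
dhds_mean = (h(s + eps, mean) - h(s - eps, mean)) / (2 * eps)
assert abs(dhds_mean) > 1e-3                  # not stationary at the mean
```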