Sum of 'inverse' Normal (1/X) random variables. Equivalent resistance calculation


Consider the case of $N$ resistors, each of resistance $R$, connected in parallel. The equivalent resistance of such a circuit is calculated as follows:

$$ \frac{1}{R_{eq}} = \underbrace{\frac{1}{R} + \frac{1}{R} + \dots + \frac{1}{R}}_{\text{N times}} \quad\quad (1)$$

Since all the resistances here are equal, this trivially simplifies to

$$ R_{eq} = \frac{R}{N} \quad \quad (2) $$
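For concreteness, equation (2) can be checked with a short snippet (Python here, purely illustrative):

```python
# Equivalent resistance of resistors in parallel: 1/R_eq = sum_i 1/R_i.
def parallel(resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

# N identical resistors of resistance R give R_eq = R / N, as in (2).
R, N = 300.0, 4
print(parallel([R] * N))  # → 75.0 (= R / N, up to float rounding)
```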

Edit following the comments: the reasoning above holds only when the resistance is a deterministic (algebraic) variable. I would now like to perform the same calculation but with $R$ replaced by $\bar{R}$, where $\bar{R}$ is a random variable following the law $\mathcal{N}(\mu,\sigma^2)$; equation (2) obviously no longer holds in general for random variables.

So, consider now that each resistance is an independent, normally distributed random variable $$ \bar{R} \sim \mathcal{N}(\mu,\sigma^2) $$

Given this, I would like to calculate the (asymptotic) distribution of $\bar{R}_{eq}$. Not being an expert in statistics, I cannot say anything about the limiting behavior of the sum in (1). I know that the reciprocal of a normal random variable has a bimodal distribution, but I cannot take it from there. Apparently I cannot even apply the central limit theorem to the sum in (1), since the reciprocal distribution has neither a first nor a second moment.

From a statistical analysis, it seems that $$ \bar{R}_{eq} \overset{N}{\rightarrow} \mathcal{N}\left(\frac{\mu}{N},\frac{\sigma^2}{N^2}\right) $$

which would indicate that equation (2) holds even in distribution. Can you see any way of deriving this last relationship analytically?


There are 2 best solutions below


The Question

Let $(R_1, \dots, R_n)$ denote an IID sample of size $n$, where $R_i \sim N(\mu, \sigma^2)$, and let:

$$Z = \frac{1}{R_1} + \frac{1}{R_2} + \dots + \frac{1}{R_n}$$

Find the asymptotic distribution of $R_{eq} = \large\frac{1}{Z}$.

OP asks

From a statistical analysis, it seems that $$ R_{eq} \overset{n}{\rightarrow} \mathcal{N}\left(\frac{\mu}{n},\frac{\sigma^2}{n^2}\right) $$

... Can you see any way of deriving this last relationship analytically ?

Answer:
No, because the proposed relationship does not hold.

Theoretically, even if one could apply the Central Limit Theorem, it would be the pdf of $Z$ that would be asymptotically Normal ... not the pdf of $1/Z$.

To illustrate that it does not work, here is a one-line Monte Carlo simulation of $Z$ (in Mathematica), as a function of $n$, with, say, $\mu = 300$ and $\sigma = 5$:

Zdata[n_] := Plus@@Table[RandomReal[NormalDistribution[300,5], 10^5]^-1, {n}];
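For readers without Mathematica, a roughly equivalent sketch in Python (numpy in place of Mathematica's built-ins; the function name is kept for parallelism, and the parameters mirror the example in the text):

```python
import numpy as np

# Rough Python counterpart of the Mathematica one-liner above:
# Zdata(n) draws `reps` Monte Carlo replicates of Z = sum_{i=1}^n 1/R_i
# with R_i ~ N(300, 5^2).
rng = np.random.default_rng(1)

def Zdata(n, reps=10**5, mu=300.0, sigma=5.0):
    return (1.0 / rng.normal(mu, sigma, size=(reps, n))).sum(axis=1)

Z = Zdata(100)
R_eq_sample = 1.0 / Z  # empirical sample of R_eq = 1/Z
```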

The following plot compares:

  • the empirical pdf of $R_{eq} = \large\frac{1}{Z}$ (squiggly blue curve)
  • the OP's proposed fit model (dashed red curve)

Plainly, the fit does not work.

A better fit


As noted above, the asymptotic Normal model is not the correct model; however, if $\mu$ is large relative to $\sigma$, then a Normal fit of the form $\mathcal{N}\left(\frac{\mu}{n} - blah,\frac{\sigma^2}{n^3}\right)$ appears to perform reasonably well.

For the same example as above, with $n = 100$ (and blah = 0), the fit is:

For $n = 800$ (and blah again 0), the fit is worse:

Plainly, as $n$ increases, a mean adjustment by some function $blah(\mu, \sigma, n)$ is also required.
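The suggested $\sigma^2/n^3$ variance scaling can itself be probed with a short Monte Carlo sketch (Python with numpy, illustrative parameters; the unknown mean adjustment $blah$ is ignored since it does not affect the variance):

```python
import numpy as np

# Hedged check of the suggested sigma^2 / n^3 variance scaling:
# if Var(R_eq) ~ sigma^2 / n^3, doubling n should shrink the variance
# about eightfold, not fourfold (as an n^-2 law would predict).
rng = np.random.default_rng(2)
mu, sigma, reps = 300.0, 5.0, 50_000

def req_var(n):
    R = rng.normal(mu, sigma, size=(reps, n))
    return (1.0 / (1.0 / R).sum(axis=1)).var()

ratio = req_var(100) / req_var(200)
print(ratio)  # near 8, consistent with n^-3 scaling
```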

5
On

Let us suppose that $\mu$ is large compared to $\sigma$, so that negative values occur with vanishing probability and the Taylor expansion described below is justified.

One can approximate the function $x \mapsto \frac{1}{x}$ around $\mu$ by its first-order Taylor expansion:

$$ \frac{1}{\bar{R}} \simeq \frac{1}{\mu} - \frac{1}{\mu^2}\left(\bar{R}-\mu\right) \sim \mathcal{N}\left(\frac{1}{\mu},\frac{\sigma^2}{\mu^4} \right)$$
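This first-order (delta-method) approximation is easy to check numerically; a hedged Python sketch with illustrative parameters:

```python
import numpy as np

# Numerical check of the first-order approximation: for R ~ N(mu, sigma^2)
# with mu >> sigma, 1/R is approximately N(1/mu, sigma^2 / mu^4).
# Parameters are illustrative, chosen so that R stays far from zero.
rng = np.random.default_rng(3)
mu, sigma = 300.0, 5.0
inv_R = 1.0 / rng.normal(mu, sigma, size=1_000_000)

print(inv_R.mean(), 1.0 / mu)      # both near 3.33e-3
print(inv_R.std(), sigma / mu**2)  # both near 5.56e-5
```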

Therefore, summing $N$ independent copies, $\frac{1}{\bar{R}_{eq}}$ approximately follows

$$ \frac{1}{\bar{R}_{eq}} := \sum_{i=1}^{N} \frac{1}{\bar{R}_i} \underbrace{\sim}_{\text{sum of independent normals}} \mathcal{N}\left(\frac{N}{\mu},\frac{N\sigma^2}{\mu^4}\right) $$

One can then invert this relation with the same first-order expansion, assuming $\frac{\sqrt{N}\sigma}{\mu^2} \ll \frac{N}{\mu}$ (i.e. the standard deviation of $\frac{1}{\bar{R}_{eq}}$ is small compared to its mean, so that the linearization remains valid). Expanding $x \mapsto \frac{1}{x}$ around the mean $\frac{N}{\mu}$ gives

$$ \bar{R}_{eq} \simeq \frac{\mu}{N} - \frac{\mu^2}{N^2}\left(\frac{1}{\bar{R}_{eq}} - \frac{N}{\mu}\right) \sim \mathcal{N}\left(\frac{\mu}{N},\frac{\mu^4}{N^4}\cdot\frac{N\sigma^2}{\mu^4}\right) = \mathcal{N}\left(\frac{\mu}{N},\frac{\sigma^2}{N^3}\right) \quad \quad (\dagger) $$

Simulation:

I have drawn $N$ random variables $\bar{R}_{i}$ from $\mathcal{N}(\mu,\sigma^2)$ with $\mu = 300, \sigma = 30$ and computed $$\bar{R}_{eq} = \left( \sum \limits_{i=1}^N \frac{1}{\bar{R}_i}\right)^{-1}$$

This was repeated 50'000 times to obtain the distribution of $\bar{R}_{eq}$. I binned the results into 100 bins and fitted the distribution with a Gaussian. The whole procedure was carried out for $N$ ranging from 1 to 200 to study the behavior of the distribution of $\bar{R}_{eq}$ as a function of $N$.

For each $N$, the $\mu$ and $\sigma$ of the Gaussian fit are plotted, normalized, on a log-log scale. The results agree with $(\dagger)$.
