Proof: Kish's effective sample size is no greater than the number of samples


Given a dataset $X=\{x_1,x_2,\dots,x_N\}$ and two densities $p(x)$ and $q(x)$, with $p(x)$ absolutely continuous with respect to $q(x)$, the effective sample size is defined as the number of samples drawn from $p(x)$ that would yield an estimator with performance equal to that of the importance sampling estimator built from $N$ samples from $q(x)$ with weights $w(x)=p(x)/q(x)$. Kish's effective sample size is defined as

$$ N_{ESS}={\Vert w(X)\Vert_1^2\over \Vert w(X)\Vert_2^2}={\left(\sum_xw(x)\right)^2\over\sum_xw(x)^2} $$

How can I prove that $N_{ESS}\le N$?
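A quick numerical sanity check of the inequality (a sketch using NumPy; the function name `kish_ess` is just a label chosen here, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def kish_ess(w):
    """Kish's effective sample size: ||w||_1^2 / ||w||_2^2."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / np.sum(w ** 2)

# Unequal importance weights: the ESS falls strictly below N.
w = rng.exponential(size=1000)
print(kish_ess(w) <= 1000)  # True

# Equal weights: the ESS attains the upper bound N exactly.
w_equal = np.full(1000, 3.7)
print(kish_ess(w_equal))  # 1000.0
```

The second case illustrates the equality condition: the bound $N_{ESS}\le N$ is tight precisely when all weights coincide.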

Update

I've come up with a solution that uses the nonnegativity of variance. Regard the weights $w(x_i)$ as realizations of a random variable $Y$, and let $\mathbb E$ denote the empirical expectation over the sample. Then $\Vert w(X)\Vert_1^2=N^2\mathbb E[Y]^2$ and $\Vert w(X)\Vert_2^2=N\mathbb E[Y^2]$. Since $\mathbb E[Y^2]-\mathbb E[Y]^2=\operatorname{Var}(Y)\ge0$, we have ${\mathbb E[Y]^2\over\mathbb E[Y^2]}\le 1$ and hence $N_{ESS}\le N$. I'm still open to a more direct analytical solution.

Accepted answer

Hint:

For a fixed sum $S=\sum_x w(x)$, you can show that $\sum_xw(x)^2$ is minimised when the $w(x)$ are all equal, i.e. $w(x)=\frac1NS$ for every $x$: the average of the weights.

That minimal value of $\sum_xw(x)^2$ will then give an upper bound on $N_{ESS}$.
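Carrying the hint through (a short completion in the same notation):

```latex
% With w(x) = S/N for all x, the constrained minimum is
\[
\sum_x w(x)^2 \;\ge\; N\left(\frac{S}{N}\right)^2 \;=\; \frac{S^2}{N},
\]
% so, substituting into the definition of N_ESS,
\[
N_{ESS} \;=\; \frac{S^2}{\sum_x w(x)^2}
\;\le\; \frac{S^2}{S^2/N} \;=\; N,
\]
% with equality if and only if all the weights are equal.
```

The first inequality is exactly Cauchy–Schwarz applied to $(w(x_1),\dots,w(x_N))$ and the all-ones vector, so this argument and the variance argument in the update are two faces of the same bound.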