I am trying to figure out the maximum number of experimental measurements worth making in order to determine the true value of a constant $k$.
I have a function $f(x,y,k)$ where $x,y$ are independent variables and $k$ is a constant I want to find.
I have this text, and in it an author defines a variable, $p$:
$$p=\begin{cases}\dfrac{f(X,Y,k)}{X} & \text{if } X\leq Y\\[1ex]\dfrac{f(X,Y,k)}{Y} & \text{if } X\geq Y\end{cases}$$
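To make sure I am reading the definition correctly: since $p$ divides by $X$ when $X\leq Y$ and by $Y$ otherwise, the denominator is always the smaller of the two. A minimal Python sketch (the `f` below is a hypothetical stand-in, since the actual $f(x,y,k)$ comes from the author's model):

```python
def f(x, y, k):
    # Hypothetical stand-in for the author's f(x, y, k); replace with the real function.
    return k * x * y / (x + y)

def p(x, y, k):
    # The author's piecewise definition: divide f by X if X <= Y, by Y otherwise,
    # i.e. by whichever of X and Y is smaller.
    return f(x, y, k) / min(x, y)
```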
The author states that the amount of information we can gain, statistically, from one measurement of $f(X,Y,k)$ is some function $I(p)$ such that $0\leq I(p)\leq 1$.
The author states that the total information obtained in $n$ measurements is $I=\sum_{i=1}^{n}I(p_i)$.
Then the author states: "It would appear at first sight that it is possible to increase indefinitely the value of $I$ by simply increasing the number of measurements. This is not really the case, because observations differing by less than twice the standard deviation $\sigma_p$ of an individual measurement of $p$ convey the same information."
He defines $$I_{max}=\sum_{i=1}^{1/(2\sigma_p)}I(p_i)$$
The problem is that the author never defines $\sigma_p$. I know the variance should be $\operatorname{Var}(p)=E(p^2)-E(p)^2$.
Am I right that, assuming $X\leq Y$ and writing $p_K$ for the probability density of $K$,
$$E(p)=E\left[\frac{f(X,Y,K)}{X}\right]=\int_{0}^{\infty}\frac{f(X,Y,\kappa)}{X}\,p_K(\kappa)\,d\kappa\;?$$
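One way I can at least check this numerically is Monte Carlo: hold $X$ and $Y$ fixed, draw many values of $K$ from its distribution, and average. The sketch below assumes a hypothetical $f$ and a normal $K$, both purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, y, k):
    # Hypothetical stand-in for the author's f(x, y, k).
    return k * x * y / (x + y)

def monte_carlo_moments(x, y, mu_k, sigma_k, n=100_000):
    """Estimate E(p) and Var(p) over the distribution of K, holding X and Y fixed.
    Here K ~ N(mu_k, sigma_k^2) is assumed; p = f / min(X, Y)."""
    ks = rng.normal(mu_k, sigma_k, size=n)
    ps = f(x, y, ks) / min(x, y)
    return ps.mean(), ps.var()

mean_p, var_p = monte_carlo_moments(x=1.0, y=2.0, mu_k=1.0, sigma_k=0.1)
```

For this linear-in-$k$ toy $f$, the exact answers are $E(p)=\tfrac{2}{3}$ and $\operatorname{Var}(p)=\tfrac{4}{9}\sigma_k^2$, so the sampled values should land close to those.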
If I know how $K$ is distributed, say $K$ is normally distributed, is there a shortcut for calculating this?