How is $P_{\theta}(x-\epsilon<X<x+\epsilon)$ approximately $2\epsilon f(x\mid\theta)=2\epsilon L(\theta\mid x)$?


In a statistical inference textbook, it is written that if $X$ is a continuous, real-valued random variable and the pdf of $X$ is continuous in $x$, then, for small $\epsilon$, $P_{\theta}(x-\epsilon<X<x+\epsilon)$ is approximately $2\epsilon f(x\mid\theta)=2\epsilon L(\theta\mid x)$ (this follows from the definition of a derivative), where $L(\theta\mid x)$ is the likelihood function.

But I don't understand why $P_{\theta}(x-\epsilon<X<x+\epsilon)$ is approximately $2\epsilon f(x\mid\theta)=2\epsilon L(\theta\mid x)$.

Assume the random variable is absolutely continuous with Lebesgue density $f(x\mid\theta)$. Then for all $\epsilon>0$,

$$\Pr(x-\epsilon < X < x+\epsilon) = \int_{x-\epsilon}^{x+\epsilon} f(y\mid\theta)\,dy.$$

Applying the mean value theorem for integrals now yields the existence of a $c \in (x-\epsilon,x+\epsilon)$ such that

$$\Pr(x-\epsilon < X < x+\epsilon) = 2\epsilon f(c\mid\theta),$$

which, assuming some local regularity (e.g. continuity) of $f(\cdot\mid\theta)$ at $x$, is approximately equal to $2\epsilon f(x\mid\theta)$ for small enough $\epsilon>0$, since $c \to x$ as $\epsilon \to 0$.
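As a sanity check, the approximation can be verified numerically for a concrete density. The sketch below (my own illustration, not from the textbook) takes $X \sim N(0,1)$, computes the exact probability $\Pr(x-\epsilon < X < x+\epsilon)$ from the CDF via the error function, and compares it with $2\epsilon f(x)$ for shrinking $\epsilon$:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density f(x) of a N(mu, sigma^2) random variable."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def interval_prob(x, eps, mu=0.0, sigma=1.0):
    """Exact P(x - eps < X < x + eps) = F(x + eps) - F(x - eps)."""
    return normal_cdf(x + eps, mu, sigma) - normal_cdf(x - eps, mu, sigma)

x = 1.0
for eps in (0.5, 0.1, 0.01):
    exact = interval_prob(x, eps)
    approx = 2.0 * eps * normal_pdf(x)   # the 2*eps*f(x) approximation
    rel_err = abs(exact - approx) / exact
    print(f"eps={eps:5.2f}  exact={exact:.6f}  approx={approx:.6f}  rel_err={rel_err:.2e}")
```

The relative error shrinks as $\epsilon$ decreases, which is exactly the statement that the exact probability and $2\epsilon f(x\mid\theta)$ agree to first order.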