In Newman and Barkema's *Monte Carlo Methods in Statistical Physics*, on pages 23-24, the following claim is made:
"Assume we have a function f(x) and the integral $I(x)=\int_0^xf(x')dx'$. Then pick a uniform random number $h\in(0,x)$ and another one $v\in(0,1)$. The probability that the point $(h,v)$ is below the graph $(x,f(x))$ is then given by $P(x)=I(x)/x$."
Trying to understand this, I made up some examples, and this one confuses me: $f(x)=x$. Then we have $P(x)=I(x)/x=\frac{1}{2}x^2/x=\frac{1}{2}x$. In other words, $x>2$ gives $P>1$. Do they assume some sort of normalization here?
Also, is there an easy explanation of why their claim is true?
I suspect the intent is for $f(x)$ to take values in $[0,1]$, e.g. a CDF or some other bounded function. (Note that a probability *density* won't do in general, since a density can exceed $1$.) With that constraint added, $I(x)$ is just the area under the curve $f$ from $0$ to $x$, and that area is contained in the box $(0,x)\times(0,1)$. If you pick a uniform random point from that box, the probability that it falls under the curve is the area under the curve, which is $I(x)$, divided by the total area of the box, which is $1\times x=x$.
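You can check this argument numerically. Here is a minimal sketch (my own illustration, not code from the book) using $f(x)=\sin(x)$, which stays in $[0,1]$ on $(0,\pi)$; there $I(x)=1-\cos(x)$, so the hit fraction should approach $(1-\cos x)/x$:

```python
import math
import random

def hit_or_miss_probability(f, x, n=200_000):
    """Estimate P(x): the fraction of uniform points in the box
    (0, x) x (0, 1) that land below the graph of f."""
    hits = 0
    for _ in range(n):
        h = random.uniform(0.0, x)    # horizontal coordinate in (0, x)
        v = random.uniform(0.0, 1.0)  # vertical coordinate in (0, 1)
        if v < f(h):
            hits += 1
    return hits / n

random.seed(0)  # reproducibility for this sketch
x = 2.0
estimate = hit_or_miss_probability(math.sin, x)
exact = (1 - math.cos(x)) / x  # I(x)/x with I(x) = 1 - cos(x)
print(estimate, exact)  # the two values should agree to a couple of decimals
```

The estimate converges to $I(x)/x$ at the usual $O(1/\sqrt{n})$ Monte Carlo rate, which is exactly the hit-or-miss integration idea the quoted passage is describing.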
If $f$ goes outside of $[0,1]$, then the statement isn't guaranteed to make sense. I don't have the book, but if I were you I'd check the rest of the chapter for comments that they'll be limiting the conversation to functions whose values are between $0$ and $1$.