I am getting slightly lost with a proof, there is a 'trivial' jump that I am missing.
Some background: say I have $r$ observations; the sample size observed by the insurance company is $n$, so $n > r$. I define the random variable $W = X - M$, where $W > 0$; $W$ is the claim amount observed by the reinsurance company (with $X$ the random variable denoting the whole claim and $M$ the retention limit). The issue I have is more to do with an understanding (or lack thereof) of probability theory.
When denoting the CDF and PDF, it is stated as:
$$G(z) = P(W\leq z) = P(X\leq M+z \mid X>M) = \frac{P(M<X\leq M+z)}{P(X>M)}.$$
Hence this can be written in terms of the CDF of $X$ (denoted by $F$):
$$G(z) = \frac{F(M+z)-F(M)}{1-F(M)} \quad\Rightarrow\quad g(z) = \frac{f(M+z)}{1-F(M)}.$$
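As a sanity check on the CDF formula (not part of the book's derivation), here is a small simulation assuming $X$ is exponentially distributed; the rate $0.5$, retention $M = 2$, and evaluation point $z = 1.5$ are all illustrative choices. It compares $G(z) = \frac{F(M+z)-F(M)}{1-F(M)}$ with the empirical CDF of $W = X - M$ among simulated claims exceeding $M$:

```python
import math
import random

# Illustrative parameters (not from the original problem)
rate = 0.5   # assumed exponential claim-rate parameter
M = 2.0      # assumed retention limit
z = 1.5      # point at which to evaluate G

def F(x):
    """CDF of X ~ Exponential(rate)."""
    return 1 - math.exp(-rate * x)

# Closed form: G(z) = (F(M+z) - F(M)) / (1 - F(M))
G_formula = (F(M + z) - F(M)) / (1 - F(M))

# Monte Carlo: simulate claims X, keep those exceeding M, set W = X - M
random.seed(0)
samples = [random.expovariate(rate) for _ in range(200_000)]
w = [x - M for x in samples if x > M]
G_empirical = sum(1 for v in w if v <= z) / len(w)

print(round(G_formula, 3), round(G_empirical, 3))
```

For the exponential distribution the memoryless property means $G(z)$ coincides with $F(z)$ itself, which gives an extra consistency check on the output.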
I don't understand the seamless jump to the pdf of $W$, $g(z)$. It isn't clear to me how $\frac{d}{dz} \{F(M+z)-F(M)\} = f(M+z)$.
This is the only part I am struggling with.
By the chain rule for differentiation and the CDF–PDF relationship (the PDF is the derivative of the CDF), we have
\begin{align*} \frac{d}{dz} F(M + z) &= F'(M+z) \cdot \frac{d}{dz}(M+z) \\ &= f(M+z) \cdot 1 = f(M+z), \end{align*} and since $F(M)$ does not depend on $z$,
$$\frac{d}{dz}F(M) = 0.$$
The result now follows (using linearity of derivatives).
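The derivative step can also be seen numerically. Below is a sketch assuming, purely for illustration, that $X$ is exponential with rate $0.5$ and $M = 2$: a central finite difference of $F(M+z)$ in $z$ matches $f(M+z)$ at several points, and the constant $F(M)$ term cancels out of the difference entirely.

```python
import math

# Illustrative parameters (not from the original problem)
rate = 0.5
M = 2.0

def F(x):
    """CDF of X ~ Exponential(rate)."""
    return 1 - math.exp(-rate * x)

def f(x):
    """PDF of X ~ Exponential(rate)."""
    return rate * math.exp(-rate * x)

# Central difference approximation of d/dz [F(M+z) - F(M)];
# the F(M) term is the same in both evaluations, so it drops out.
h = 1e-6
for z in (0.5, 1.0, 3.0):
    numer = (F(M + z + h) - F(M + z - h)) / (2 * h)
    print(round(numer, 6), round(f(M + z), 6))
```

Each printed pair agrees to the accuracy of the finite difference, matching the claim that $\frac{d}{dz}\{F(M+z)-F(M)\} = f(M+z)$.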