How to take the derivative with respect to a function without a clear substitution?


In Statistical Inference, the Karlin-Rubin Theorem requires that a given statistical model has a Monotone Likelihood Ratio with respect to a sufficient statistic $T(X)$.

In order for the model to satisfy this condition we need the ratio $\frac{L(\theta _2 | x)}{L(\theta _1 |x)}$ to be a monotone function of $T(X)$ for every pair $\theta_1 < \theta_2$.

In simple cases the ratio simplifies nicely. For example, if $\frac{L(\theta _2 | x)}{L(\theta _1 |x)}=4x^2+4x+\theta_1\theta_2$ with sufficient statistic $T(X)=X^2+X$, then clearly we can substitute for $T(X)$ to get $\frac{L(\theta _2 | x)}{L(\theta _1 |x)}=4T(X)+\theta_1\theta_2$, whose derivative with respect to $T(X)$ equals $4$. So the ratio is monotone with respect to $T(X)$, as required.
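As a small numeric sanity check (not part of the original argument; the values of $\theta_1,\theta_2$ below are arbitrary placeholders), the polynomial form and the substituted form agree at every $x$, confirming the ratio really is a function of $T(x)$:

```python
import math

def T(x):
    """Sufficient statistic T(x) = x^2 + x from the example."""
    return x**2 + x

def ratio_poly(x, th1, th2):
    """Likelihood ratio written directly in x: 4x^2 + 4x + th1*th2."""
    return 4 * x**2 + 4 * x + th1 * th2

def ratio_in_T(t, th1, th2):
    """The same ratio rewritten as a function of t = T(x): 4t + th1*th2."""
    return 4 * t + th1 * th2

# The two forms agree for any x, so substituting T(x) is legitimate here.
for x in [-2.0, -0.5, 0.3, 1.0, 7.5]:
    assert math.isclose(ratio_poly(x, 0.5, 2.0), ratio_in_T(T(x), 0.5, 2.0))
```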

However, for some functions, for example $\frac{L(\theta _2 | x)}{L(\theta _1 |x)}=\ln(x)$, it is not clear how to differentiate with respect to $T(X)=X^2+X$, as there does not seem to be an obvious substitution.

Is there a way in which I can test the monotonicity of this function with respect to this choice of $T(X)$?
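One crude numeric way to probe the question (a sketch, not a proof) is to sample points, order them by $T(x)$, and check that the candidate ratio never decreases. Note that $\ln$ requires $x>0$, and on that domain $T(x)=x^2+x$ happens to be strictly increasing, so ordering by $T(x)$ is well defined there:

```python
import math

def T(x):
    return x**2 + x

def is_monotone_in_T(f, xs):
    """Check numerically whether f(x) is nondecreasing as T(x) increases.

    Only meaningful on a domain where T is injective (here x > 0)."""
    pairs = sorted((T(x), f(x)) for x in xs)
    return all(b[1] >= a[1] for a, b in zip(pairs, pairs[1:]))

xs = [0.01 * k for k in range(1, 1001)]  # grid on (0, 10]
assert is_monotone_in_T(math.log, xs)    # ln(x) increases with T(x) for x > 0
```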

Of course, this is just an example to illustrate the problem I am having, so if possible some general comments on how to tackle problems like this would be appreciated (i.e. how to differentiate with respect to any function when there is no clear substitution).

I would be grateful for any guidance.

Best answer:

First of all, differentiating "with respect to a function" is, a priori, undefined. We really need to stop thinking about "variables" and instead think about arguments.

Let's say $f:\Bbb R\to\Bbb R$ is a single-variable function. Its derivative $\mathrm Df:\Bbb R\to\Bbb R$ is defined as the function which takes the values $$(\mathrm Df)(x)=f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$ Though I have used the symbol $x$ as a placeholder, the derivative operator $\mathrm D$ does not care about what symbol you use. The key here is that the derivative $\mathrm D$ is with respect to the function's argument, not the variable $x$. So why do we often talk about variables? When we are dealing with lots of functions mixed together, it is much easier to talk about variables than arguments. But keep in mind that these are just names. If we give the first argument of $f$ the name $x$, then we can start making sense of expressions like $$\frac{\mathrm df}{\mathrm dg},$$ where $g$ is some function.
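As a small numeric illustration of this point (purely for intuition), the difference quotient approximates $(\mathrm Df)(x)$ no matter what name the argument carries; below, a central difference is used rather than the one-sided quotient in the definition, for accuracy:

```python
import math

def numeric_derivative(f, x, h=1e-6):
    """Central-difference approximation to (Df)(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# D does not care about the symbol used for the argument, only its position:
assert math.isclose(numeric_derivative(math.sin, 0.0), 1.0, abs_tol=1e-8)
assert math.isclose(numeric_derivative(lambda t: t**2, 3.0), 6.0, abs_tol=1e-6)
```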

To tackle this, let's suppose there was a function $p$ such that $$f(x)=p(g(x))$$ Or, without using variables, $$f=p\circ g$$ Though the concept is not yet defined, we want whatever definition we use to satisfy $$\frac{\mathrm df}{\mathrm d g}(x)=p'(g(x))$$ Again, without variables, $$\frac{\mathrm df}{\mathrm dg}=p'\circ g \tag{1}$$ But, using the chain rule, we know that $$\frac{\mathrm d\big(x\mapsto p(g(x))\big)}{\mathrm d x}(x)=\frac{\mathrm df}{\mathrm dx}(x)=f'(x)=p'(g(x))g'(x)$$ Once again, without variables, $$\mathrm D(p\circ g)=\mathrm Df=f'=\big((\mathrm Dp)\circ g\big)\cdot \mathrm Dg=(p'\circ g)\cdot g'\tag{2}$$ Comparing $(1)$ and $(2)$, it seems we should use the definition $$\frac{\mathrm df}{\mathrm dg}=\frac{1}{g'}\mathrm D(p\circ g)=\frac{f'}{g'}$$
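The identity $\frac{\mathrm df}{\mathrm dg}=\frac{f'}{g'}=p'\circ g$ can be sanity-checked numerically. The concrete choices below ($p=\sin$, $g(x)=x^2$, so $f=p\circ g$) are hypothetical examples of mine, not from the original:

```python
import math

def g(x):
    return x**2            # inner function g

def f(x):
    return math.sin(g(x))  # f = p ∘ g with p = sin

def d(fun, x, h=1e-6):
    """Central-difference approximation to the derivative."""
    return (fun(x + h) - fun(x - h)) / (2 * h)

# df/dg should equal p' ∘ g, i.e. cos(g(x)), wherever g'(x) != 0.
for x in [0.3, 1.1, 2.4]:
    df_dg = d(f, x) / d(g, x)
    assert math.isclose(df_dg, math.cos(g(x)), abs_tol=1e-6)
```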

A wrong, but very easy way to remember this is to "divide numerator and denominator by $\mathrm dx$", $$\frac{\mathrm df}{\mathrm dg}=\frac{\frac{1}{\mathrm dx}\mathrm df}{\frac{1}{\mathrm dx}\mathrm dg}=\frac{\frac{\mathrm df}{\mathrm dx}}{\frac{\mathrm dg}{\mathrm dx}}$$

So in your example, letting $T(x)=x^2+x$, then $$\frac{\mathrm d \ln }{\mathrm d T}(x)=\left(\frac{\ln '}{T'}\right)(x)=\frac{\ln'(x)}{T'(x)}=\frac{1/x}{2x+1}=\frac{1}{2x^2+x}.$$ Since $\ln$ is only defined for $x>0$, where $T'(x)=2x+1>0$, this derivative is positive on the whole domain, so $\ln(x)$ is indeed strictly increasing with respect to $T(X)$ there.
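The final formula can itself be checked numerically (a sketch assuming only the definitions above, with a central-difference approximation standing in for the exact derivatives):

```python
import math

def T(x):
    return x**2 + x

def d(fun, x, h=1e-6):
    """Central-difference approximation to the derivative."""
    return (fun(x + h) - fun(x - h)) / (2 * h)

# (d ln / d T)(x) = ln'(x) / T'(x) should match 1 / (2x^2 + x) for x > 0.
for x in [0.5, 1.0, 3.0]:
    lhs = d(math.log, x) / d(T, x)
    assert math.isclose(lhs, 1 / (2 * x**2 + x), abs_tol=1e-6)
```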