I have a stochastic variable modelled as $x \sim \mathcal{N}(\mu,\sigma^2)$, and I am considering thresholds in $x$ based on the probability of reaching these values, e.g. find $x$ such that $p = P(X \geq x) = 0.8 \,/\, 0.9 \, / \, 0.95$ etc.
Naturally this requires looking at a CDF (i.e. $\Phi(x)$), but since $x$ is a function of $p$, it is the inverse CDF that I am considering: $\Phi^{-1}(p)$ with $p \in (0,1)$. This is referred to as the probit function when $\mu = 0, \sigma = 1$, and it completely characterises the value of $x$ for a given $p$.
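For concreteness, here is a minimal sketch (assuming SciPy is available) of evaluating the inverse CDF numerically: `norm.ppf` is SciPy's $\Phi^{-1}$, and `norm.isf` solves the tail version $P(X \geq x) = p$ directly, i.e. `isf(p) == ppf(1 - p)`:

```python
# Sketch: evaluating the normal inverse CDF (quantile function) with SciPy.
from scipy.stats import norm

mu, sigma = 0.0, 1.0
for p in (0.8, 0.9, 0.95):
    # x such that P(X >= x) = p; equivalent to norm.ppf(1 - p, ...)
    x = norm.isf(p, loc=mu, scale=sigma)
    print(f"p = {p}: x = {x:.4f}")
```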
Along with this, consider some payoff function e.g. $f(x,p) = x \cdot p - 3p$, and since $x(p) = \Phi^{-1}(p)$, the expected payoff depends wholly on $p$. I want to understand the gradient of this function. This sets up an optimisation problem in $p$ to maximise the payoff function.
Now, whilst $\Phi(x)$ has a well-known equation: $$\Phi(x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^{x} e^{-\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^2} \,\operatorname{d}t$$
And so presumably:
$$\frac{\partial{\Phi(x)}}{\partial{x}} = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$$
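This can be checked numerically: the central finite difference of `norm.cdf` should match `norm.pdf`, since the derivative of the CDF is just the density (a sketch assuming SciPy):

```python
# Sketch: numerically confirming that d/dx Phi(x) equals the normal pdf.
from scipy.stats import norm

x, h = 0.7, 1e-6
fd = (norm.cdf(x + h) - norm.cdf(x - h)) / (2 * h)  # central difference
print(fd, norm.pdf(x))  # the two should agree closely
```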
I cannot find a similar 'known' result for:
$$\Phi^{-1}(p) \quad \text{and} \quad \frac{\partial{\Phi^{-1}(p)}}{\partial{p}}$$
If there is not one, $\frac{\operatorname{d}f(x,p)}{\operatorname{d}p}$ cannot be written in closed form, but given that:
$$ p = \Phi(x)$$ $$ x = \Phi^{-1}(p)$$
can we solve the optimisation problem instead in $x$? So find the stationary point at $\frac{\operatorname{d}f(x,p)}{\operatorname{d}x} = 0$, and then simply recover the optimal probability with $\hat{p} = \Phi(\hat{x})$?
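The mechanics of this change of variables can be sketched numerically (assuming SciPy, and using the illustrative payoff $f(x,p) = x \cdot p - 3p$ from above, which with $p = \Phi(x)$ becomes $f(x) = (x-3)\Phi(x)$, so $f'(x) = \Phi(x) + (x-3)\varphi(x)$):

```python
# Sketch: solve the stationary-point condition in x, then map back to p.
from scipy.optimize import brentq
from scipy.stats import norm

def df_dx(x):
    # product rule on f(x) = (x - 3) * Phi(x)
    return norm.cdf(x) + (x - 3) * norm.pdf(x)

x_hat = brentq(df_dx, -5, 5)   # stationary point in x
p_hat = norm.cdf(x_hat)        # recover the probability via p = Phi(x)
print(x_hat, p_hat)
```

One caveat with this particular toy payoff: $f(x) = (x-3)\Phi(x)$ grows without bound as $x \to \infty$, so its stationary point is a minimum rather than a maximum; the point being illustrated is only the recovery step $\hat{p} = \Phi(\hat{x})$.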