When Do I Have An Invertible Matrix of Partial Derivatives?


I have some function $f:\mathbb{R}^n \to \mathbb{R}$ with $f(a,b,c,\dots)=1$. I am using the implicit function theorem to define one argument, $a$, as a function of another, $b$. I know that in order to use the IFT, the Jacobian of partial derivatives needs to be invertible. What does this look like? When is this the case? Is it when all partial derivatives are non-zero?

Do I consider the matrix of all partial derivatives, even when I only want to define $a$ as a function of $b$?

There are 2 answers below.

Best answer:

Following up on our discussion: if you have $f: \mathbb{R}^n \to \mathbb{R}$, write $a = (a_1, a_2, \dots, a_{n - 1}) \in \mathbb{R}^{n - 1}$ and $b \in \mathbb{R}$. Then you need to consider \begin{align} J(f(a,b)) &= \left[\begin{array}{c c c | c} \frac{\partial f}{\partial x_1} & \frac{\partial f}{\partial x_2} & \dots & \frac{\partial f}{\partial x_n} \end{array}\right] \end{align} By the IFT, you need $\frac{\partial f}{\partial x_n}$ to be "invertible". For a $1 \times 1$ matrix (i.e., just a scalar), this simply means that $\frac{\partial f}{\partial x_n} \ne 0$. What exactly does $\frac{\partial f}{\partial x_n}$ represent? $x_n$ is the variable that you're writing as a function of the remaining variables, and the IFT says you can do this so long as $\frac{\partial f}{\partial x_n} \ne 0$.
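As a concrete sanity check (not part of the answer itself), here is a small SymPy sketch using the hypothetical function $f(x,y) = x^2 + y^2$ with level set $f = 1$, the unit circle. The IFT condition reduces to checking that $\frac{\partial f}{\partial y} \ne 0$ at the point of interest:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2          # hypothetical example; the level set f = 1 is the unit circle

# The IFT condition: the partial derivative with respect to the variable
# being solved for must be nonzero at the point in question.
df_dy = sp.diff(f, y)    # 2*y

point = {x: 0, y: 1}     # a point on the level set f = 1
assert f.subs(point) == 1
assert df_dy.subs(point) != 0   # condition holds: y is locally a function of x

# Near (0, 1) the IFT guarantees y = g(x); here we can even solve explicitly,
# picking the branch that passes through (0, 1):
g = [s for s in sp.solve(sp.Eq(f, 1), y) if s.subs(x, 0) == 1][0]
assert sp.simplify(f.subs(y, g) - 1) == 0   # g really satisfies f(x, g(x)) = 1
```

Note that at the points $(\pm 1, 0)$ the condition fails ($\frac{\partial f}{\partial y} = 0$ there), which matches the geometric picture: the circle has a vertical tangent and $y$ cannot be written as a function of $x$ near those points.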

Answer:

A linear map given by a square matrix $M$ is invertible if and only if $\det M \neq 0$.