Implicit function theorem for overdetermined system of nonlinear equations


Consider a sufficiently regular ($C^1$?) function $$F:\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}^{n+k}$$ with $k>0$, and assume an implicit function $y(x)$ is locally well defined by the condition $$F(x,y(x))=0$$ I am interested in implicit differentiation techniques which apply in this context, extending the implicit function theorem.

EDIT My situation of interest is the following.
There is a $C^1$ function $$G:\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}^{n}$$ where I think of $x\in\mathbb{R}^{m}$ as parameters and $y\in\mathbb{R}^{n}$ as variables.
Of course, for each $(\bar{x},\bar{y})$ such that $$G(\bar{x},\bar{y})=0$$ and such that $D_y G(\bar{x},\bar{y})$ is invertible, the IFT ensures there exists a local function $$f:U\ni \bar{x}\to V\ni \bar{y}$$ such that $$G(x,f(x))=0\quad\forall x\in U$$ Moreover, it gives the formula $$D_x f(\bar{x})=-[D_y G(\bar{x},f(\bar{x}))]^{-1}[D_x G(\bar{x},f(\bar{x}))]$$
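As a concrete sanity check of this formula (my own illustrative example, not part of the original question), take $m=n=2$ with $G(x,y)=(y_1-x_1x_2,\; y_2-\sin y_1 - x_1)$, whose zero set happens to admit the explicit solution $f(x)=(x_1x_2,\;\sin(x_1x_2)+x_1)$, and compare the IFT formula against a finite-difference Jacobian of $f$:

```python
import numpy as np

# Hypothetical example system G : R^2 x R^2 -> R^2 with G(x, f(x)) = 0.
def G(x, y):
    return np.array([y[0] - x[0] * x[1],
                     y[1] - np.sin(y[0]) - x[0]])

# Explicit solution for this particular G, used only to test the formula.
def f(x):
    y0 = x[0] * x[1]
    return np.array([y0, np.sin(y0) + x[0]])

def jac(h, z, eps=1e-6):
    # Forward-difference Jacobian of h at z.
    hz = h(z)
    J = np.zeros((hz.size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (h(z + dz) - hz) / eps
    return J

x_bar = np.array([0.3, 0.7])
y_bar = f(x_bar)

DyG = jac(lambda y: G(x_bar, y), y_bar)  # n x n block, invertible here
DxG = jac(lambda x: G(x, y_bar), x_bar)  # n x m block

# IFT formula: D_x f = -(D_y G)^{-1} D_x G
Dxf_ift = -np.linalg.solve(DyG, DxG)
Dxf_fd = jac(f, x_bar)
print(np.allclose(Dxf_ift, Dxf_fd, atol=1e-4))  # prints True
```

The two Jacobians agree to finite-difference accuracy, as the theorem predicts.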

Now, I am interested in the value of $D_x f(\bar{x})$ at points which not only satisfy $G(\bar{x},\bar{y})=0$, but also an additional feasibility condition
$$s(\bar{x},\bar{y})=0\quad\text{where}\quad s:\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}$$

My doubt: is it enough to evaluate $D_x f(\bar{x})=-[D_y G(\bar{x},f(\bar{x}))]^{-1}[D_x G(\bar{x},f(\bar{x}))]$ at points where $s(\bar{x},\bar{y})=0$, or does the additional constraint change the shape of the implicit function $f$, so that we need a different approach?

To address this, I thought of considering the function $$F\equiv\binom{G}{s}:\mathbb{R}^{m}\times\mathbb{R}^{n}\to\mathbb{R}^{n+1}$$ which already "selects" the zeros I am interested in.
Observe that it still makes sense to consider $(\bar{x},\bar{y})\in \mathbb{R}^{m}\times\mathbb{R}^{n}$ such that $$F(\bar{x},\bar{y})=0$$ and, assuming there exists $f:U\ni \bar{x}\to V\ni \bar{y}$ such that $F(x,f(x))=0\quad\forall x\in U$, to ask for implicit differentiation methods to compute the Jacobian $D_x f$. Of course, in this case the IFT cannot be applied off the shelf, neither to ensure that $f$ exists nor to compute such a Jacobian. Hence my question.

1 Answer

The implicit function theorem states the following: given a function $$F: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^n$$ $$ (x,y) \mapsto F(x,y)$$ and a point $(\bar{x},\bar{y})$ with $F(\bar{x},\bar{y})=0$, if the $n \times n$ Jacobian matrix given by the entries of $\dfrac{\partial F_i}{\partial y_j}$ (with $i,j=1,\dots,n$) is invertible at $(\bar{x},\bar{y})$, then there exists a local function $$y: \mathbb{R}^m \to \mathbb{R}^n$$ such that $F(x,y(x))=0$.

In your case, move one of the $m$ parameter variables into the dependent block and write $F$ as:

$$F: \mathbb{R}^{m-1} \times \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$$ $$ (x,y) \mapsto \big(G(x,y),s(x,y)\big)$$

where now $x$ collects the remaining $m-1$ parameters and $y$ the $n+1$ dependent variables.

Hence you will be able to find a function

$$y: \mathbb{R}^{m-1} \to \mathbb{R}^{n+1}$$

such that $G(x,y(x))=0$ and $s(x,y(x))=0$, provided the $(n+1) \times (n+1)$ Jacobian given by the entries of $\dfrac{\partial F_i}{\partial y_j}$ (with $i,j=1,\dots,n+1$) is invertible.

Also, note that you can reorder your variables however you like. For instance, you can pick whatever $m-1$ variables you like (out of the original $m+n$ in total) and write the remaining $n+1$ variables in terms of these $m-1$; of course, you will then have to check invertibility of a different Jacobian matrix, but the exact same reasoning applies. So, to be clear: first choose which variables you want to write in terms of which, and then apply the above process.
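The regrouping above can be sketched numerically (a minimal illustration with hypothetical $G$ and $s$ of my own choosing, with $m=2$, $n=1$): take $G(x,y)=y-x_1^2-x_2^2$ and $s(x,y)=x_1+x_2+y-1$, keep $u=x_1$ as the free parameter, treat $v=(x_2,y)$ as the dependent block, solve the stacked system by Newton's method, and compute $dv/du$ from the $(n+1)\times(n+1)$ IFT formula:

```python
import numpy as np

# Stacked system F(u, v) = (G, s) = 0, with u = x1 free, v = (x2, y) dependent.
def F(u, v):
    x2, y = v
    return np.array([y - u**2 - x2**2,      # G(x, y) = 0
                     u + x2 + y - 1.0])     # s(x, y) = 0

def DvF(u, v):
    # (n+1) x (n+1) Jacobian of F in the dependent variables v = (x2, y).
    x2, _ = v
    return np.array([[-2.0 * x2, 1.0],
                     [1.0, 1.0]])

def DuF(u, v):
    # Derivative of F in the remaining parameter u.
    return np.array([-2.0 * u, 1.0])

def solve_v(u, v0, tol=1e-12):
    # Newton's method for F(u, v) = 0 in v.
    v = v0.copy()
    for _ in range(50):
        r = F(u, v)
        if np.max(np.abs(r)) < tol:
            break
        v -= np.linalg.solve(DvF(u, v), r)
    return v

u_bar = 0.2
v_bar = solve_v(u_bar, np.array([0.3, 0.5]))

# IFT on the stacked system: dv/du = -(D_v F)^{-1} D_u F
dv_ift = -np.linalg.solve(DvF(u_bar, v_bar), DuF(u_bar, v_bar))

# Finite-difference check against the Newton solve.
eps = 1e-6
dv_fd = (solve_v(u_bar + eps, v_bar) - v_bar) / eps
print(np.allclose(dv_ift, dv_fd, atol=1e-4))  # prints True
```

Note that $D_v F$ here is exactly the $(n+1)\times(n+1)$ Jacobian whose invertibility the answer requires; when it is singular, a different choice of the $m-1$ free variables may restore invertibility, as the last paragraph suggests.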