Does this guarantee invertibility in higher-dimensional functions?


Let $f(\mathbf{x}):\mathbb{R}^N\mapsto\mathbb{R}^N$ be structured as:

$$ f(\mathbf{x})= \left[\begin{matrix} f_1(x_1,...,x_N) \\ f_2(x_1,...,x_N) \\ \vdots \\ f_N(x_1,...,x_N) \\ \end{matrix}\right] $$

where each $f_n(x_1,...,x_N):\mathbb{R}^N\mapsto\mathbb{R}$ is strictly increasing in $x_n$, that is to say, $\partial_{x_n}f_n > 0$ for all $n=1,\dots,N$.

Does this monotonicity property suffice to ensure that $f(\mathbf{x})$ is invertible (bijective) on $\mathbb{R}^N$? If not, why not?


Best answer:

Take for example $f_1=f_2=x_1+x_2$. This satisfies $\partial_{x_1}f_1=\partial_{x_2}f_2=1>0$, but $f$ is not injective anywhere: any two points with the same coordinate sum are mapped to the same image.
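A quick sanity check of this counterexample (a minimal sketch; the function name `f` is just for illustration):

```python
# Counterexample f_1 = f_2 = x_1 + x_2 from the answer above.
def f(x1, x2):
    """Map (x1, x2) to (x1 + x2, x1 + x2)."""
    return (x1 + x2, x1 + x2)

# Two distinct inputs with the same image, so f cannot be injective,
# even though each component is strictly increasing in its own variable.
print(f(0.0, 1.0))  # (1.0, 1.0)
print(f(1.0, 0.0))  # (1.0, 1.0)
```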

What you need instead is that the determinant of the Jacobian matrix (the matrix of partial derivatives) is non-zero; see the inverse function theorem. Note that this only guarantees *local* invertibility; global bijectivity on $\mathbb{R}^N$ requires stronger conditions.
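Concretely, for the counterexample $f_1=f_2=x_1+x_2$ the Jacobian is singular everywhere:

$$ J_f(\mathbf{x}) = \left[\begin{matrix} \partial_{x_1}f_1 & \partial_{x_2}f_1 \\ \partial_{x_1}f_2 & \partial_{x_2}f_2 \end{matrix}\right] = \left[\begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix}\right], \qquad \det J_f(\mathbf{x}) = 0, $$

so the positivity of the diagonal entries $\partial_{x_n}f_n$ alone says nothing about invertibility.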

Another answer:

$f(x,y) = (2x+3y,\, 2x+3y)$ is another simple counterexample: both components are strictly increasing in their own variable, yet the image is a line, so $f$ is neither injective nor surjective.