If we know the objective function $f:\mathbb{R}\to\mathbb{R}$ is concave-up, decreasing, and has a root $x^*$ on an interval $I$ (or, equivalently, $f$ is concave-down, increasing, and has a root $x^{*}$ on $I$), then the Newton-Raphson method $x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$ will converge from any $x_0\in I$, because at each iteration the linear approximation of $f$ at $x_n$ undershoots $x^*$.
*Figure: the first 5 iterations of Newton-Raphson on $f(x)=3e^{-x}-0.3$ starting at $x_0=0.1$.*
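To make the undershooting concrete, here is a quick numerical check of the pictured example (the exact root of $f(x)=3e^{-x}-0.3$ is $x^*=\ln 10\approx 2.302585$):

```python
import math

def f(x):
    return 3 * math.exp(-x) - 0.3

def fprime(x):
    return -3 * math.exp(-x)

x_star = math.log(10)  # exact root: 3e^{-x} = 0.3  <=>  x = ln 10

# Run 5 Newton-Raphson iterations starting at x0 = 0.1
xs = [0.1]
for _ in range(5):
    x = xs[-1]
    xs.append(x - f(x) / fprime(x))

for n, x in enumerate(xs):
    print(f"x_{n} = {x:.6f}  (undershoots x*: {x <= x_star})")
```

Every iterate stays below $x^*$ and the sequence increases monotonically, exactly as the convexity argument predicts.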
Can something similar be said for the generalized Newton-Raphson method, where we are trying to solve a vector-valued function of $k$ variables? That is,
$\vec{f}:\mathbb{R}^{k}\to\mathbb{R}^k, \vec{f}(\vec{x})=\begin{bmatrix}f_1(\vec{x})\\f_2(\vec{x})\\f_3(\vec{x})\\\vdots\\f_k(\vec{x})\end{bmatrix}$
and the Newton iteration is given by
$\vec{x}_{n+1}=\vec{x}_n-\left[\textbf{J}_f(\vec{x}_{n})\right]^{-1}\vec{f}(\vec{x}_{n})$
where $\textbf{J}_f(\vec{x})$ is the Jacobian matrix of $\vec{f}$ evaluated at $\vec{x}$, given by $\textbf{J}_f(\vec{x})= \begin{bmatrix} \frac{\partial f_1}{\partial x_1}(\vec{x}) & \frac{\partial f_1}{\partial x_2}(\vec{x}) & \frac{\partial f_1}{\partial x_3}(\vec{x}) & \cdots & \frac{\partial f_1}{\partial x_k}(\vec{x})\\ \frac{\partial f_2}{\partial x_1}(\vec{x}) & \frac{\partial f_2}{\partial x_2}(\vec{x}) & \frac{\partial f_2}{\partial x_3}(\vec{x}) & \cdots & \frac{\partial f_2}{\partial x_k}(\vec{x})\\ \frac{\partial f_3}{\partial x_1}(\vec{x}) & \frac{\partial f_3}{\partial x_2}(\vec{x}) & \frac{\partial f_3}{\partial x_3}(\vec{x}) & \cdots & \frac{\partial f_3}{\partial x_k}(\vec{x})\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_k}{\partial x_1}(\vec{x}) & \frac{\partial f_k}{\partial x_2}(\vec{x}) & \frac{\partial f_k}{\partial x_3}(\vec{x}) & \cdots & \frac{\partial f_k}{\partial x_k}(\vec{x})\\ \end{bmatrix}$
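For reference, the iteration above is straightforward to implement with `numpy`, solving the linear system $\textbf{J}_f(\vec{x}_n)\,\Delta = \vec{f}(\vec{x}_n)$ at each step rather than forming the inverse explicitly. The 2-variable system below ($x_1^2+x_2^2=4$, $x_1=x_2$, with solution $(\sqrt{2},\sqrt{2})$) is just a made-up example to exercise the code:

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    """Multivariate Newton-Raphson: solve f(x) = 0 starting from x0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Solve J(x) * delta = f(x) instead of inverting J explicitly.
        delta = np.linalg.solve(jac(x), fx)
        x = x - delta
    return x

# Hypothetical test system: x1^2 + x2^2 = 4, x1 = x2  ->  x* = (sqrt 2, sqrt 2)
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

root = newton_system(f, jac, [1.0, 1.0])
print(root)  # approximately [1.41421356, 1.41421356]
```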
If $\textbf{J}_f(\vec{x})$ is the $k$-dimensional analog of $f'(x)$, what, if anything, is the $k$-dimensional analog of $f''(x)$? How can we test the convexity of vector-valued $\vec{f}(\vec{x})$? Is there a generalized undershooting theorem?
For a bit of context, I'm working with a particular infinite family of systems of equations. For every instance I've looked at, the Newton-Raphson method converges quickly from the starting point $\vec{x}_{0}=\textbf{0}$. I suspect there's some analog of convexity in higher dimensions, because experimentally $\vec{x}_n$ always undershoots $\vec{x}^{*}$ in each of its components. Not every starting point converges, though, and it's entirely possible that I am wrong in thinking it will always work starting at $\textbf{0}$. Either way, I'm very stuck on this problem.
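Since I can't reproduce my actual family of systems here, the snippet below shows the kind of experiment I mean on a toy coupled system (a made-up example, not the real one): starting from $\vec{x}_0=\textbf{0}$, every iterate appears to undershoot the root in each component.

```python
import numpy as np

# Toy coupled system (made up for illustration):
#   f1(x) = exp(-x1) + 0.1*x2 - 0.5
#   f2(x) = exp(-x2) + 0.1*x1 - 0.5
def f(x):
    return np.array([np.exp(-x[0]) + 0.1 * x[1] - 0.5,
                     np.exp(-x[1]) + 0.1 * x[0] - 0.5])

def jac(x):
    return np.array([[-np.exp(-x[0]), 0.1],
                     [0.1, -np.exp(-x[1])]])

x = np.zeros(2)          # start at the origin
iterates = [x]
for _ in range(8):
    x = x - np.linalg.solve(jac(x), f(x))
    iterates.append(x)

x_star = iterates[-1]    # converged well past print precision by now
for n, xn in enumerate(iterates):
    print(f"x_{n} = {xn}, componentwise undershoot: {np.all(xn <= x_star + 1e-9)}")
```

On this example the componentwise undershoot holds at every step, which is what I observe with my real systems as well.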
Any and all book recommendations/resources are greatly appreciated. Thanks!