For a differentiable one-dimensional $f: \mathbb{R} \rightarrow \mathbb{R}$ or $f: \mathbb{C} \rightarrow \mathbb{C}$, after finding a root $x_0$ it's possible to split off a differentiable linear factor via $f(x) = f^*(x)(x - x_0)$, allowing one to search for subsequent roots on $f^*$, since $f(x) = 0 \iff x = x_0 \vee f^*(x) = 0$, whilst $f^*(x_0) \neq 0$ unless $f'(x_0) = 0$.
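A minimal numerical sketch of this one-dimensional deflation, using Newton's method (the names `newton`, `f_star`, and the sample cubic are illustrative choices, not part of the question):

```python
def newton(f, df, x, tol=1e-12, max_iter=100):
    """Plain Newton iteration; returns an approximate root of f."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = x^3 - x has the roots -1, 0, 1.
f  = lambda x: x**3 - x
df = lambda x: 3 * x**2 - 1

x0 = newton(f, df, 0.4)  # from this start, Newton finds the root 0

def f_star(x):
    # Deflated function f*(x) = f(x) / (x - x0); the singularity at
    # x = x0 is removable, with limit f'(x0) (L'Hopital).
    return f(x) / (x - x0) if x != x0 else df(x0)

def df_star(x):
    # Quotient-rule derivative of f*, valid for x != x0.
    return (df(x) * (x - x0) - f(x)) / (x - x0) ** 2

# The same starting point now converges to a *different* root of f,
# because the root x0 has been divided out.
x1 = newton(f_star, df_star, 0.4)  # finds the root 1
```

Here $f^*(x) = x^2 - 1$ up to rounding, so restarting the search from the same initial guess no longer rediscovers $x_0$.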
Is a similar mechanism possible in the multidimensional case, e.g. for differentiable $f: \mathbb{R}^n \rightarrow \mathbb{R}^n$, or at least for some class of multidimensional functions? That is, can a differentiable function $f^*$ be derived from $f$ and a known root $x_0$ such that $f(x) = 0 \iff x = x_0 \vee f^*(x) = 0$ holds, with $f^*(x_0) \neq 0$ unless the Jacobian matrix of $f$ at $x_0$ is also zero?
For a continuously differentiable function $f:\mathbb R^n\to\mathbb R^n$ and $x_0\in\mathbb R^n$, the fundamental theorem of calculus applied to $\varphi(t)= f(x_0+t(x-x_0))$ gives $$f(x)-f(x_0)=\varphi(1)-\varphi(0)=\int_0^1 \varphi'(t)\,dt= g(x)\,(x-x_0)$$ for the matrix-valued function $g(x)=\int_0^1 f'(x_0+t(x-x_0))\,dt$, by the chain rule. In particular, if $x_0$ is a root of $f$, this reduces to $f(x)=g(x)\,(x-x_0)$, and $g(x_0)=f'(x_0)$ recovers the Jacobian at $x_0$.
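This factorization can be checked numerically. A hedged sketch, assuming NumPy; the sample map `f`, its Jacobian `jac`, the root `x0`, and the trapezoid quadrature in `g` are all illustrative assumptions:

```python
import numpy as np

def f(x):
    # Sample map R^2 -> R^2 with a root at x0 = (1, 1).
    return np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])

def jac(x):
    # Jacobian f'(x) of the sample map.
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0,        -1.0      ]])

def g(x, x0, m=201):
    # g(x) = \int_0^1 f'(x0 + t (x - x0)) dt, approximated by the
    # composite trapezoid rule on m uniform nodes in [0, 1].
    ts = np.linspace(0.0, 1.0, m)
    J = np.array([jac(x0 + t * (x - x0)) for t in ts])  # shape (m, 2, 2)
    w = np.full(m, 1.0 / (m - 1))
    w[0] *= 0.5
    w[-1] *= 0.5  # trapezoid weights summing to 1
    return np.tensordot(w, J, axes=1)

x0 = np.array([1.0, 1.0])   # known root: f(x0) = 0
x  = np.array([2.0, -0.5])  # arbitrary test point
G  = g(x, x0)
# Factorization: f(x) - f(x0) should equal g(x) (x - x0),
# and g(x0) should equal the Jacobian f'(x0).
```

For this quadratic map the Jacobian is affine along the segment from $x_0$ to $x$, so the trapezoid rule integrates it exactly and the identity holds to machine precision.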