This was asked before here.
However, the answer there seems convoluted to me. Assume we have a Jacobian (i.e. we know beyond doubt that it is the total derivative of some vector-valued function). Assume we want to recover the original function up to a constant (i.e. we don't really care about the particular function, but about the family of functions that gives this Jacobian).
Well, I know that the very first entry in this matrix is $\frac{\partial f_1}{\partial x_1}$. Then $f_1 = \int \mathbb{J}_{0,0}\, dx_1 + C$, no? That is to say, the first coordinate function is just the indefinite integral of the first entry of the Jacobian with respect to the first variable. And then we can recover all the other coordinate functions by the same procedure, right? We really only need the first column of the Jacobian.
Am I doing something wrong? The answer in the linked question seems more complicated than this.
$ \def\bbR#1{{\mathbb R}^{#1}} \def\n{\nabla}\def\o{{\tt1}}\def\p{\partial} \def\E{{\cal E}}\def\F{{\cal F}}\def\G{{\cal G}} \def\L{\left}\def\R{\right}\def\LR#1{\L(#1\R)} \def\vec#1{\operatorname{vec}\LR{#1}} \def\diag#1{\operatorname{diag}\LR{#1}} \def\Diag#1{\operatorname{Diag}\LR{#1}} \def\trace#1{\operatorname{Tr}\LR{#1}} \def\qiq{\quad\implies\quad} \def\grad#1#2{\frac{\p #1}{\p #2}} \def\m#1{\big[\begin{array}{rrrr}#1\end{array}\big]} $Let $\{e_k\in\bbR n\}$ denote the standard basis vectors, then the identity matrix can be written as $$I = \sum_{k=1}^n e_ke_k^T$$ Given the gradient/Jacobian matrix, using $e_k$ you can extract its $k^{th}$ column $$G = \grad{f}{x}\qiq g_k=Ge_k\qiq G = \m{g_1&g_2&\ldots&g_n}$$ Write the total differential as a sum of these column vectors and integrate $$\eqalign{ df &= G\cdot dx \\ &= G\cdot I\cdot dx \\ &= \sum_{k=1}^n G\cdot e_ke_k^T\cdot dx \\ &= \sum_{k=1}^n g_k\; dx_k \\ \\ f &= \int_{x_0}^x df\\ &= \sum_{k=1}^n \int_{x_0}^x g_k\; dx_k \\ &= \Big[h_1(x) + h_2(x) + \ldots\Big]_{x_0}^x \\ &= H(x)\Big|_{x_0}^x \\ &= H(x) - H(x_0) \\ }$$ where $h_k$ denotes an antiderivative of the column $g_k$ with respect to $x_k$, and $H = h_1 + h_2 + \ldots + h_n$. I have assumed that you are able to integrate the individual column vector functions of the gradient $\big(g_k\big)$ with respect to the corresponding scalar component of the independent variable $\big(x_k\big)$, either analytically or numerically.
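The column-by-column integration above can be sketched numerically. This is only an illustration, not part of the original answer: the example $f(x,y) = (x^2 y,\ \sin x + y)$ and the segment-wise midpoint quadrature are my own choices. Each coordinate $x_k$ is swept from $x_0[k]$ to $x[k]$ along an axis-aligned segment while integrating the $k^{th}$ Jacobian column, which recovers $f(x) - f(x_0)$:

```python
import numpy as np

# Hypothetical example: f(x, y) = (x**2 * y, sin(x) + y), whose Jacobian is
#   G = [[2*x*y, x**2],
#        [cos(x),  1  ]]
def G(p):
    x, y = p
    return np.array([[2.0 * x * y, x ** 2],
                     [np.cos(x),   1.0   ]])

def reconstruct(G, x0, x, n=4000):
    """Recover f(x) - f(x0) by integrating the k-th Jacobian column g_k
    along axis-aligned segments: coordinate k sweeps from x0[k] to x[k]
    while earlier coordinates sit at their final values and later ones
    at their initial values (midpoint rule on each segment)."""
    x0, x = np.asarray(x0, float), np.asarray(x, float)
    f = np.zeros(len(x))
    p = x0.copy()
    for k in range(len(x)):
        dt = (x[k] - x0[k]) / n
        for i in range(n):
            q = p.copy()
            q[k] = x0[k] + (i + 0.5) * dt   # midpoint of sub-interval i
            f += G(q)[:, k] * dt            # accumulate g_k dx_k
        p[k] = x[k]                         # move on to the next segment
    return f

delta = reconstruct(G, [0.0, 0.0], [1.0, 2.0])
# exact answer: f(1,2) - f(0,0) = (2, sin(1) + 2)
```

Note that the second segment integrates $g_2$ with $x$ already held at its final value; this ordering is what lets the sum of the segment integrals telescope into $H(x) - H(x_0)$ for an exact Jacobian.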
So you were on the right track, but you need all of the columns of the Jacobian: integrating only the first column with respect to $x_1$ determines $f$ only up to an arbitrary function of the remaining variables $x_2,\ldots,x_n$, not up to a constant.
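The gap in the first-column approach can be seen with a tiny check (the example functions here are mine, chosen for illustration): two functions can share the same $\partial f_1/\partial x_1$ everywhere yet differ by a non-constant function of the other variable, so the "constant" of integration $C$ is really $C(x_2,\ldots,x_n)$.

```python
# f_a and f_b have the SAME partial derivative with respect to x,
# namely d/dx (x*y) = y, yet they differ by y**2 -- a function of y,
# not a constant. So the first Jacobian column alone cannot pin f down.
def f_a(x, y):
    return x * y

def f_b(x, y):
    return x * y + y ** 2

def dfdx(f, x, y, h=1e-6):
    # central finite difference for the partial derivative in x
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

# the first partials agree at every point we sample...
samples = [(0.5, 1.0), (2.0, -3.0), (-1.0, 0.25)]
agree = all(abs(dfdx(f_a, x, y) - dfdx(f_b, x, y)) < 1e-6 for x, y in samples)

# ...but the difference f_b - f_a varies with y, so it is not a constant C
diff_at_y1 = f_b(1.0, 1.0) - f_a(1.0, 1.0)   # = 1
diff_at_y2 = f_b(1.0, 2.0) - f_a(1.0, 2.0)   # = 4
```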