Let $U$ be a domain in $\mathbb{R}^n$ and $f_1, f_2 \colon U \to \mathbb{R}$ differentiable functions on $U$. Let $h_1$, $h_2$ be two "arbitrary" functions from $\mathbb{R}$ to $\mathbb{R}$ (assume they belong to a function space suitable for the fundamental lemma of the calculus of variations). Assume that, for some kind of integral, the following holds for every such pair $h_1$, $h_2$ ($f_1$, $f_2$ fixed, $h_1$, $h_2$ arbitrary): \begin{equation} \tag{1} \int_U h_1(f_1) \, h_2(f_2) \, dU \quad = \quad \int_U h_1(f_1) \, dU \quad \int_U h_2(f_2) \, dU \end{equation} i.e. $f_1$ and $f_2$ behave like two independent variables; e.g. for $n = 2$ it would apply to $f_1(x, y) = x$, $f_2(x, y) = y$. For such independent variables one has everywhere on $U$ \begin{equation} \tag{2} (\nabla f_1)^T \nabla f_2 \quad = \quad 0 \end{equation}
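For what it's worth, here is a minimal numerical sketch of the example above, assuming Lebesgue measure on the unit square $U = [0, 1]^2$ (which has total measure $1$) and the hypothetical choices $h_1 = \sin$, $h_2 = \exp$:

```python
import numpy as np

# Example from the question: f1(x, y) = x, f2(x, y) = y on U = [0, 1]^2,
# with Lebesgue measure (total measure of U equal to 1).
# Hypothetical test functions: h1 = sin, h2 = exp.
N = 1000
t = (np.arange(N) + 0.5) / N          # midpoint-rule nodes in [0, 1]
X, Y = np.meshgrid(t, t)

lhs = np.mean(np.sin(X) * np.exp(Y))             # integral of h1(f1) h2(f2) over U
rhs = np.mean(np.sin(t)) * np.mean(np.exp(t))    # product of the two integrals
print(abs(lhs - rhs))                 # agrees up to rounding error

# Condition (2): grad f1 = (1, 0), grad f2 = (0, 1), so (grad f1)^T grad f2 = 0.
grad_f1 = np.array([1.0, 0.0])
grad_f2 = np.array([0.0, 1.0])
print(grad_f1 @ grad_f2)
```

With the midpoint rule the two sides of (1) agree up to rounding error, and the gradients $(1, 0)$ and $(0, 1)$ are orthogonal, so both (1) and (2) hold for this example.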
My question: Is (1) generally equivalent to (2) holding almost everywhere on $U$? Or, perhaps better phrased: in what generality is (1) equivalent to (2) holding (almost) everywhere on $U$? The direction (2) $\to$ (1) seems straightforward to prove, but I am struggling with (1) $\to$ (2), even for $n = 2$. On the other hand, I have a feeling that this may be a basic fact about independent functions in the sense of (1). Or are there subtle counterexamples? I know that if one fixes $h_1$ or $h_2$ in (1), or imposes a constraint such as $h_1 = h_2$, then the equivalence would no longer hold.
Also, please feel free to give hints, pointers to literature, or suggestions for further reading. I did not specify what kind of integral is used; if it helps, assume integration with respect to the Hausdorff measure. If the anticipated equivalence requires a specific integral or measure, please educate me!
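One elementary observation about the measure, unless I am overlooking something: taking the constant functions $h_1 \equiv h_2 \equiv 1$ in (1) gives \begin{equation*} \int_U dU \;=\; \int_U 1 \, dU \int_U 1 \, dU \;=\; \left( \int_U dU \right)^2, \end{equation*} so the total measure of $U$ must be $0$ or $1$; that is, (1) as written seems to presuppose a normalized (probability) measure on $U$.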