In *Ordinary Differential Equations*, Arnold justifies the use of $dx, dy, \dots$ in differential equations such as \begin{equation} \frac{dy}{dx} = \frac{g(x)}{f(y)} \end{equation} by saying that the differential 1-forms $g(x)\,dx$ and $f(y)\,dy$ are equal on an integral curve of the equation. However, the integral curve (call it $\gamma$) lies in the 2-dimensional plane, so a parameterization of $\gamma$ looks like $\varphi : [a, b] \to \gamma$, $\varphi(t) = (\varphi_1(t), \varphi_2(t))$, and the only differential 1-forms for which an integral along $\gamma$ is defined belong to $C\big(\Omega, (\mathbb{R}^2)^{*}\big)$, where $\gamma \subseteq \Omega \subseteq \mathbb{R}^2$. So the integrals of the differential forms $g(x)\,dx$ and $f(y)\,dy$ along $\gamma$ seem to be undefined, since $f$ and $g$ each take only one argument. This is where I'm stuck.
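For concreteness, here is the standard line-integral definition being invoked (my statement of it, not a quote from Arnold): for a 1-form $\omega = P\,dx + Q\,dy \in C\big(\Omega, (\mathbb{R}^2)^{*}\big)$ and the parameterization $\varphi$ above,

$$
\int_{\gamma} \omega
  = \int_a^b \Big( P\big(\varphi(t)\big)\,\varphi_1'(t)
                 + Q\big(\varphi(t)\big)\,\varphi_2'(t) \Big)\,dt,
$$

which indeed requires $P$ and $Q$ to be functions of both coordinates, so the expressions $g(x)\,dx$ and $f(y)\,dy$ do not literally fit this template without some identification.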
I thought of a possible workaround, but it involves extending the one-variable coefficient to a function on the plane before integrating along the curve. Define $\overline{g}(x, y) = g(x)$ and simply integrate as follows: $$ \int_{\gamma} g(x)\,dx := \int_{\gamma} \overline{g}(x,y)\,dx + 0\,dy. $$ Is it possible that this procedure is what Arnold means?
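As a quick numerical sanity check of this interpretation (with *hypothetical* choices $g(x) = x$, $f(y) = y$, so the equation reads $dy/dx = x/y$, and the integral curve through $(1,2)$, namely $y(x) = \sqrt{x^2 + 3}$), the two extended 1-forms do integrate to the same value along the curve:

```python
import numpy as np

# Hypothetical separable equation dy/dx = g(x)/f(y) with g(x) = x, f(y) = y.
# The integral curve through (1, 2) is y(x) = sqrt(x^2 + 3), since y^2 = x^2 + 3.
g = lambda x: x
f = lambda y: y

x = np.linspace(1.0, 2.0, 10_001)   # parameterize gamma by x on [1, 2]
y = np.sqrt(x**2 + 3.0)             # the integral curve
dy_dx = x / np.sqrt(x**2 + 3.0)     # y'(x), computed analytically

def trapezoid(vals, t):
    """Composite trapezoid rule for the integral of vals dt."""
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t)))

# int_gamma g(x) dx: extend g as gbar(x, y) = g(x), then pull back along gamma.
I_gdx = trapezoid(g(x), x)
# int_gamma f(y) dy: extend f as fbar(x, y) = f(y); pulling back gives f(y(x)) y'(x) dx.
I_fdy = trapezoid(f(y) * dy_dx, x)

print(I_gdx, I_fdy)  # both close to 3/2: the two forms agree on the integral curve
```

This is only a consistency check of the extension-then-pullback reading on one example, of course, not an argument that it is what Arnold intends.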