In solving for scalar function $f$ given gradient vector $F$, where do functions such as $g(y)$ come into play?

So, I understand that one can solve for scalar function $f$ given a conservative gradient vector.

As an example, find $f(x,y)$ such that $∇f = F$, given:

$F(x,y) = \langle xy^2,\, x^2y\rangle$

$\int\frac{\partial f}{\partial x}\,dx = \int xy^2\,dx$

$f = \frac{x^2y^2}{2} + g(y)$

$\frac{\partial f}{\partial y} = \frac{\partial}{\partial y}[\frac{x^2y^2}{2} + g(y)]$

$= x^2y + g'(y) = x^2y$

$g'(y) = 0$

$g(y) = 0 + C$

$f(x,y) = \frac{x^2y^2}{2} + C$

Now, my main question is for what purpose do we need $g(y)$ after integrating $xy^2$ with respect to $x$? I understand the general process of finding the scalar function, but can anyone maybe explain it a bit more in-depth?
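As a quick sanity check (not part of the original question), the recovered potential can be verified numerically: central finite differences of $f(x,y) = \frac{x^2y^2}{2}$ should reproduce both components of $F$.

```python
# Numeric sanity check: verify that f(x, y) = x**2 * y**2 / 2
# really satisfies grad f = F, using central finite differences.

def f(x, y):
    return x**2 * y**2 / 2

def F(x, y):
    return (x * y**2, x**2 * y)  # the given vector field

def grad_f(x, y, h=1e-6):
    # central-difference approximations of the partial derivatives
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

for x, y in [(1.0, 2.0), (-0.5, 3.0), (2.0, -1.5)]:
    gx, gy = grad_f(x, y)
    Fx, Fy = F(x, y)
    assert abs(gx - Fx) < 1e-4 and abs(gy - Fy) < 1e-4
```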

There are 2 best solutions below


This is how I understand it:

We have a function $f(x,y)$ such that $\nabla f(x,y) = (\frac {\partial f}{\partial x}, \frac {\partial f}{\partial y}) = (xy^2, x^2y)$.

This gives us the two equations:

$\frac {\partial f}{\partial x} = xy^2$

$\frac {\partial f}{\partial y} = x^2y$

Let's consider the first equation. $\frac {\partial f}{\partial x}$ is the partial derivative of $f$ with respect to $x$: it measures how $f$ changes as $x$ varies while $y$ is held fixed.

Normally, when we integrate a single-variable function, we add a constant $C$, because any constant term would have vanished under differentiation. In the multivariate case, we add a function $g(y)$ instead of a constant $C$: any term depending only on $y$ vanishes when we differentiate with respect to $x$, so the "constant" produced by the $x$-integration may itself depend on $y$.

So now we know that $f(x,y) = \frac {x^2y^2}{2} + g(y)$. But we can improve this answer by finding out what $g(y)$ is. As you did, we take the derivative with respect to $y$ on both sides:

$\frac {\partial f}{\partial y} = \frac {\partial}{\partial y} \left[\frac {x^2y^2}{2} + g(y)\right]$.

Using our second equation, we get:

$x^2y = x^2y + g'(y)$. So $g'(y)=0$.

Now we integrate $g'(y)$. Since $g$ is a function of a single variable, this time the best we can do is add a constant $C$, giving $g(y) = 0 + C$. This leaves you with the answer that you got.
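To see why the "constant" really must be allowed to depend on $y$, here is a small numerical sketch with a modified field of my own choosing, $F(x,y) = \langle xy^2,\; x^2y + \cos y\rangle$: the same $x$-integration gives $\frac{x^2y^2}{2} + g(y)$, but matching the second component now forces $g'(y) = \cos y$, so $g(y) = \sin y + C$ is genuinely not constant.

```python
# Sketch (my own example, not from the answer): for the modified field
# F(x, y) = (x*y**2, x**2*y + cos(y)), the potential needs g(y) = sin(y).
import math

def f(x, y):
    # candidate potential: x-integration term plus g(y) = sin(y)
    return x**2 * y**2 / 2 + math.sin(y)

def dfdy(x, y, h=1e-6):
    # central finite difference for the partial derivative in y
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# df/dy should match the second component x**2*y + cos(y)
for x, y in [(1.0, 0.5), (2.0, -1.0)]:
    assert abs(dfdy(x, y) - (x**2 * y + math.cos(y))) < 1e-4
```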


The unknown function $g$ is the “constant of integration” for the first integral.

The antiderivative of a single-variable function is determined up to an arbitrary constant $C$, since $\frac d{dx}\left(f(x)+C\right) = \frac d{dx}f(x)$. The situation is similar when you’re taking partial derivatives: if $g$ is a function that depends only on $y$, then $\frac\partial{\partial x}g=0$, therefore $\frac\partial{\partial x}\left(f+g\right) = \frac{\partial f}{\partial x}$.

So, when you integrate the first component of $F$ with respect to $x$, instead of that antiderivative being determined up to an arbitrary constant that you then need to compute, it’s determined up to some function $g$ that doesn’t depend on $x$. In this case, there’s only one other independent variable, so we write $g(y)$ for this unknown function, but if there were three independent variables, say, $x$, $y$ and $z$, we’d write $g(y,z)$ instead—the unknown function $g$ can depend on any or all of the other variables.

To find the unknown function $g(y)$, you then differentiate with respect to $y$ and compare the result to the second component of $F$, which is $\partial f/\partial y$. When more variables are involved, you keep going back and forth between integrating and computing partial derivatives, with the unknown function introduced at each integration depending on fewer variables with each iteration.
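For concreteness, here is a three-variable sketch (an example of my own, not from the original post). Take $F(x,y,z) = \langle yz,\; xz + 2y,\; xy\rangle$. Integrating the first component with respect to $x$ gives $f = xyz + g(y,z)$. Differentiating with respect to $y$ and comparing with the second component gives $xz + \frac{\partial g}{\partial y} = xz + 2y$, so $\frac{\partial g}{\partial y} = 2y$ and $g(y,z) = y^2 + h(z)$, where the new unknown function $h$ depends on one variable fewer. Finally, differentiating $f = xyz + y^2 + h(z)$ with respect to $z$ and comparing with the third component gives $xy + h'(z) = xy$, so $h'(z) = 0$ and $f(x,y,z) = xyz + y^2 + C$.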