Does the Existence of Partials Imply the Existence of the Gradient Vector?


Let $f$ be a scalar function of three variables. Then the gradient vector is defined by:

$$\nabla f = \frac{\partial f}{\partial x}\,i + \frac{\partial f}{\partial y}\,j + \frac{\partial f}{\partial z}\,k.$$

I read here that the existence of the partial derivatives at some point $(x_0, y_0, z_0)$ does not imply the existence of the gradient vector at $(x_0, y_0, z_0)$. How is this possible, since the gradient vector is built from nothing more than the partial derivatives?


BEST ANSWER

As @Chilango commented, the vector $f_x(x_0, y_0)i+f_y(x_0,y_0)j$ exists even if $f$ is not differentiable at $(x_0, y_0)$.

The link you gave conflates differentiability with the existence of the gradient vector, but both my calculus textbook and my real analysis textbook give a different definition of differentiability: a function $f$ is differentiable at $(x_0,y_0)$ if the increment $\Delta z = f(x_0+\Delta x, y_0+\Delta y) - f(x_0,y_0)$ can be written as $\Delta z = f_x(x_0,y_0)\Delta x + f_y(x_0,y_0)\Delta y + \varepsilon_1 \Delta x + \varepsilon_2 \Delta y$, where both $\varepsilon_1 \to 0$ and $\varepsilon_2 \to 0$ as $(\Delta x, \Delta y) \to (0,0)$.
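For concreteness, here is a standard example (added here, not part of the original answer) of a function whose partial derivatives exist at a point where it is not differentiable:
$$f(x,y) = \begin{cases} \dfrac{xy}{x^2+y^2}, & (x,y) \neq (0,0), \\[4pt] 0, & (x,y) = (0,0). \end{cases}$$
Since $f$ vanishes on both axes, $f_x(0,0) = f_y(0,0) = 0$, but along the line $y = x$ we have $f(x,x) = \tfrac12$ for $x \neq 0$, so $f$ is not even continuous at the origin and the condition above cannot hold there.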

The books also give a sufficient condition for differentiability: both partial derivatives exist near the point and are continuous at it.

Neither book specifies that we only call the vector $f_xi+ f_yj$ the gradient when $f$ is differentiable, but I don't know if this is standard.

Some applications of the gradient do fail if $f_x$ or $f_y$ is not continuous, because they depend on the differentiability of the function. Two examples are the interpretation of the gradient as the direction of maximum increase and the fact that the gradient is normal to the level curves; both of these rely on the existence of the directional derivative. This means that you can't do much with the vector if your function is not differentiable.
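Continuing the example above (again an added illustration, not from the original answer): at the origin the vector $f_x i + f_y j$ is the zero vector, which would suggest there is no direction of increase, yet $f$ equals $\tfrac12$ at points arbitrarily close to the origin along $y = x$, and the directional derivative along $(1,1)/\sqrt{2}$ does not even exist, since $f(t/\sqrt2, t/\sqrt2)/t = 1/(2t)$ has no limit as $t \to 0$. So the vector exists, but it carries none of the usual geometric information.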

ANSWER

The correct definition of the gradient vector is the following.

Let $f: X \subset \mathbb{R}^n \to \mathbb{R}$ be a function that is differentiable at a point $\mathbf{x}_0 \in \operatorname{int} X$. Then the gradient of $f$ at the point $\mathbf{x}_0$ is the vector $$\nabla f(\mathbf{x}_0) = \sum_{i=1}^n \mathbf{e}_i \frac{\partial f}{\partial \mathbf{e}_i}(\mathbf{x}_0) = \sum_{i=1}^n \mathbf{e}_i \frac{\partial f}{\partial x_i}(\mathbf{x}_0) = \begin{bmatrix} \frac{\partial f}{\partial x_1}(\mathbf{x}_0) \\ \frac{\partial f}{\partial x_2}(\mathbf{x}_0) \\ \vdots \\ \frac{\partial f}{\partial x_n}(\mathbf{x}_0) \end{bmatrix},$$ where $\{\mathbf{e}_i\}_{i=1}^n$ is some orthonormal basis in $\mathbb{R}^n$.
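An equivalent, coordinate-free way to phrase this (an added remark, not from the original answer): when $f$ is differentiable at $\mathbf{x}_0$ with derivative (differential) $Df(\mathbf{x}_0)$, the gradient is the unique vector that represents that linear map through the inner product,
$$Df(\mathbf{x}_0)\,\mathbf{v} = \langle \nabla f(\mathbf{x}_0), \mathbf{v} \rangle \quad \text{for all } \mathbf{v} \in \mathbb{R}^n,$$
which makes no reference to any basis; the formula above merely computes its components in the chosen orthonormal basis.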

We do need differentiability of $f$ at the point $\mathbf{x}_0$, because it is easy to show that only in this case is the expression above invariant under a change of orthonormal basis (and hence a well-defined vector in $\mathbb{R}^n$). That is, differentiability guarantees that if we pass from one orthonormal basis $\{\mathbf{e}_i\}_{i=1}^n$ of $\mathbb{R}^n$ to another orthonormal basis $\{\mathbf{h}_i\}_{i=1}^n$, the two expressions for the gradient vector agree exactly: $\sum_{i=1}^n \mathbf{e}_i \frac{\partial f}{\partial \mathbf{e}_i}(\mathbf{x}_0) = \sum_{i=1}^n \mathbf{h}_i \frac{\partial f}{\partial \mathbf{h}_i}(\mathbf{x}_0)$, i.e. they describe the same object in different bases.
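A sketch of the computation behind this claim (added for completeness): if $f$ is differentiable at $\mathbf{x}_0$, every directional derivative is given by the linear map $Df(\mathbf{x}_0)$, so writing $\mathbf{h}_j = \sum_i \langle \mathbf{h}_j, \mathbf{e}_i\rangle\, \mathbf{e}_i$,
$$\frac{\partial f}{\partial \mathbf{h}_j}(\mathbf{x}_0) = Df(\mathbf{x}_0)\,\mathbf{h}_j = \sum_{i=1}^n \langle \mathbf{h}_j, \mathbf{e}_i \rangle \,\frac{\partial f}{\partial x_i}(\mathbf{x}_0),$$
and therefore
$$\sum_{j=1}^n \mathbf{h}_j \frac{\partial f}{\partial \mathbf{h}_j}(\mathbf{x}_0) = \sum_{i=1}^n \Big( \sum_{j=1}^n \langle \mathbf{h}_j, \mathbf{e}_i \rangle\, \mathbf{h}_j \Big) \frac{\partial f}{\partial x_i}(\mathbf{x}_0) = \sum_{i=1}^n \mathbf{e}_i \frac{\partial f}{\partial x_i}(\mathbf{x}_0),$$
using that $\{\mathbf{h}_j\}$ is orthonormal, so $\sum_j \langle \mathbf{h}_j, \mathbf{e}_i \rangle \mathbf{h}_j = \mathbf{e}_i$. Without differentiability the first equality can fail, and the argument breaks down.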

If all partial derivatives exist at the point $\mathbf{x}_0$ but $f$ is not differentiable there, then there is no such invariance property, and the array $\left(\frac{\partial f}{\partial x_1}(\mathbf{x}_0), \ldots, \frac{\partial f}{\partial x_n}(\mathbf{x}_0)\right)^\mathrm{T}$ is, strictly speaking, not a well-defined vector object at all.
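To make the failure of invariance concrete, here is a standard example (added here rather than taken from the original answer). Consider
$$f(x,y) = \begin{cases} \dfrac{xy^2}{x^2+y^4}, & (x,y)\neq(0,0),\\[4pt] 0, & (x,y)=(0,0).\end{cases}$$
For a unit vector $(a,b)$ with $a \neq 0$ one computes $\frac{\partial f}{\partial (a,b)}(0,0) = \lim_{t\to 0}\frac{f(ta,tb)}{t} = \frac{b^2}{a}$ (and $0$ if $a = 0$). In the standard basis both partial derivatives at the origin are $0$, so the naive "gradient" there is $(0,0)$; in the rotated orthonormal basis $\mathbf{h}_1 = \tfrac{1}{\sqrt2}(1,1)$, $\mathbf{h}_2 = \tfrac{1}{\sqrt2}(-1,1)$ the directional derivatives are $\tfrac{1}{\sqrt2}$ and $-\tfrac{1}{\sqrt2}$, giving $\sum_j \mathbf{h}_j \frac{\partial f}{\partial \mathbf{h}_j}(0,0) = (1,0) \neq (0,0)$. The two bases produce different "gradients", so no single vector is defined.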