Condition for differentiable functional - Why is it defined in this exact way?


I'm struggling to understand why the condition

$$J\left[y+h\right]-J\left[y\right] = \Psi \left[h\right]+\varepsilon \left(h\right){\lVert}h{\rVert} $$

where $\Psi$ is a linear functional and $\varepsilon(h)$ is a functional such that $\varepsilon(h) \rightarrow 0$ as ${\lVert}h{\rVert} \rightarrow 0$,

is the condition we check to decide whether a given functional $J\left[\cdot\right]$ is differentiable.
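As a concrete illustration (a standard example, not taken from the question): take $J[y] = \int_0^1 y(x)^2\,dx$ on $C[0,1]$ with the sup norm. Then

$$J\left[y+h\right]-J\left[y\right] = 2\int_0^1 y\,h\,dx + \int_0^1 h^2\,dx,$$

where $\Psi[h] = 2\int_0^1 y\,h\,dx$ is linear in $h$, and the remainder satisfies $\left|\int_0^1 h^2\,dx\right| \le {\lVert}h{\rVert}^2 = {\lVert}h{\rVert}\cdot{\lVert}h{\rVert}$, so it has the form $\varepsilon(h){\lVert}h{\rVert}$ with $|\varepsilon(h)| \le {\lVert}h{\rVert} \rightarrow 0$. Hence this $J$ is differentiable, with differential $\Psi$.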

Should we divide both sides by $\varepsilon(h)$, take the limit ${\lVert}h{\rVert} \rightarrow 0$, and see what happens?

Also, I know that if $\varepsilon(h) \rightarrow 0$ as ${\lVert}h{\rVert} \rightarrow 0$, then $\varepsilon(h)$ is a continuous functional - but why? Shouldn't continuity be checked by taking the difference $|\varepsilon(f_n) - \varepsilon(f)|$ and checking whether it tends to $0$ when ${\lVert}f_n - f{\rVert} \rightarrow 0$?

1 Answer:

The differential of $J$ at $y$ is exactly this linear mapping $\Psi$; it is typically denoted $DJ_y$ or $dJ_y$, or, in the calculus of variations, $\delta J_y$, or simply $\delta J$. Calculating $\Psi$ means figuring out what $\Psi(h)$ is for all possible $h$. One possible way of doing this is to note that $\Psi(h) = DJ_y(h)$ (the Fréchet derivative of $J$ at $y$, applied to $h$), and by the chain rule this equals

$$\left.\frac{d}{ds}\right|_{s=0} J(y+sh) = \lim_{s\to 0}\frac{J(y+sh)-J(y)}{s},$$

i.e.

$$\Psi(h) = DJ_y(h) = \left.\frac{d}{ds}\right|_{s=0} J(y+sh) = \lim_{s\to 0}\frac{J(y+sh)-J(y)}{s}.$$
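The limiting quotient above can also be checked numerically. A minimal sketch, using a hypothetical functional $J[y]=\int_0^1 y^2\,dx$ (my own choice, not from the post), discretized on a grid:

```python
import numpy as np

# Hypothetical example functional (not from the original post):
# J[y] = \int_0^1 y(x)^2 dx, discretized on a uniform grid.
x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def integrate(v):
    # Composite trapezoid rule on the grid x.
    return dx * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1])

def J(y):
    # The functional J[y] = \int_0^1 y^2 dx.
    return integrate(y ** 2)

def Psi(y, h):
    # Candidate differential: Psi[h] = 2 \int_0^1 y h dx.
    return integrate(2.0 * y * h)

y = np.sin(np.pi * x)   # base function y
h = x * (1.0 - x)       # perturbation direction h

# The quotient (J(y+sh) - J(y))/s should approach Psi(h) as s -> 0;
# for this J the remainder is exactly s * \int h^2 dx, so the gap
# shrinks linearly in s.
for s in (1e-1, 1e-2, 1e-3):
    quotient = (J(y + s * h) - J(y)) / s
    print(f"s={s:g}  quotient={quotient:.6f}  Psi(h)={Psi(y, h):.6f}")
```

Shrinking $s$ makes the difference quotient agree with $\Psi(h)$ to more and more digits, which is exactly the statement $\Psi(h) = \lim_{s\to 0}\frac{J(y+sh)-J(y)}{s}$.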