This question is probably pretty stupid, but I can't figure it out so...
I'm trying to read Gelfand and Fomin's "Calculus of Variations". On page 12, for a functional $J$ on a normed vector space $S$, they first define: $$\Delta J [y;h] = J[y+h] - J[y], \qquad y,h\in S$$ and then say that if we can write: $$\Delta J[y;h] = \delta J[y;h]+ \epsilon \|h\|$$ where $\delta J$ is a functional linear in $h$ and $\lim_{\|h\|\to 0} \epsilon = 0$, then we call $\delta J[y;h]$ the variation of $J$ at $y$. Quick sanity check for myself: in general, $\epsilon$ is a functional of $h$, right? So why don't we write: $$\Delta J[y;h] = \delta J[y;h] + \epsilon[y;h] \|h\|?$$ Anyway, a few pages later, in the proof of Theorem 2, they assert that since $\lim_{\|h\|\to 0} \epsilon = 0$, there must exist some $\delta > 0$ such that for all $h \in S$ with $\|h\| < \delta$, the sign of $\Delta J[y;h]$ is the same as the sign of $\delta J[y;h]$ (at least when $\delta J[y;h] \neq 0$). Why is this the case?
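To convince myself that $\epsilon$ really is a function of $h$, here is a toy finite-dimensional check (my own example, not from the book): take $S = \mathbb{R}^2$ with $J[y] = y \cdot y$. Then $\Delta J[y;h] = 2\,y\cdot h + \|h\|^2$, so the linear part is $\delta J[y;h] = 2\,y\cdot h$ and $\epsilon(h) = \|h\|$, which depends on $h$ and tends to $0$ as $\|h\| \to 0$:

```python
import math

# Toy example (not from Gelfand & Fomin): S = R^2, J[y] = y.y.
# Delta J[y; h] = J(y + h) - J(y) = 2 y.h + ||h||^2,
# so delta J[y; h] = 2 y.h (linear in h) and epsilon(h) = ||h||.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def J(y):
    return dot(y, y)

def delta_J(y, h):            # the variation: the part linear in h
    return 2.0 * dot(y, h)

y = (1.0, -0.5)
for t in (1e-1, 1e-3, 1e-5):  # shrink h toward 0 along a fixed direction
    h = (t, t)
    inc = J([a + b for a, b in zip(y, h)]) - J(y)   # Delta J[y; h]
    lin = delta_J(y, h)                             # delta J[y; h]
    eps = (inc - lin) / math.hypot(*h)              # recovered epsilon(h)
    # eps equals ||h|| here, and the signs of inc and lin agree
    print(t, (inc > 0) == (lin > 0), eps)
```

For small enough $h$ the signs of $\Delta J$ and $\delta J$ agree, which is exactly the assertion I was asking about; the answer below proves it in general.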
After looking at peek-a-boo's link, I have come up with a complete proof of the theorem which I am happy with. I will post it here for future reference.
Theorem: A necessary condition for a differentiable functional $J : S \to \mathbb{R}$ to have an extremum at $y_0 \in S$ is that its variation $\delta J_{y_0}(h) = 0$ for all $h \in S$.
Proof: For definiteness, suppose $J$ has a maximum at $y_0$; the proof for a minimum is similar. Then by definition, $\exists \delta > 0\ \forall h \in S: \|h\| < \delta \implies \Delta J_{y_0} (h) \leq 0$.

Now suppose $\delta J \neq 0$, i.e. $\exists h_0 \in S: \delta J (h_0) \neq 0$ (note $h_0 \neq 0$, since $\delta J$ is linear). By the definition of differentiability, we have: $$\Delta J(h) = \delta J(h) + \epsilon(h)\|h\|, \qquad \epsilon(h) \to 0 \text{ as } \|h\|\to 0.$$ Which is to say $\forall \mu > 0\ \exists \gamma>0\ \forall h \in S : \|h\|<\gamma \implies |\epsilon(h)|<\mu$, or equivalently, $\forall \mu > 0\ \exists \gamma > 0\ \forall h \in S: \|h\|<\gamma \implies |\Delta J(h) -\delta J (h)| \leq \mu \|h\|$.

Take $\mu = \frac{|\delta J(h_0)|}{2\|h_0\|}$. Then there is some $\gamma > 0$ s.t. $\forall h \in S: \|h\|<\gamma \implies |\Delta J(h) - \delta J(h)| \leq \frac{|\delta J(h_0)|\, \|h\|}{2\|h_0\|}$.

Now take $\alpha \neq 0$ s.t. $0<\delta J (\alpha h_0) = \alpha\, \delta J(h_0)$ (i.e. $\alpha$ has the sign of $\delta J(h_0)$) and $\|\alpha h_0\| = |\alpha|\,\|h_0\|$ is smaller than both $\gamma$ and $\delta$. Then we have: $$|\Delta J(\alpha h_0) - \delta J(\alpha h_0)| \leq \frac{|\delta J(h_0)|\,\|\alpha h_0\|}{2\|h_0\|} = \frac{|\alpha|\,|\delta J(h_0)|}{2} = \frac{\alpha\,\delta J(h_0)}{2}$$ Since $\delta J(\alpha h_0) = \alpha\, \delta J(h_0)$, this gives: $$\frac{\alpha\, \delta J (h_0)}{2} \leq \Delta J(\alpha h_0) \leq \frac{3 \alpha\, \delta J(h_0)}{2}$$ But we chose $\alpha\, \delta J(h_0)$ to be positive, so $\Delta J(\alpha h_0)$ is positive. On the other hand, $\|\alpha h_0\| < \delta$, hence by the choice of $\delta$, $\Delta J(\alpha h_0) \leq 0$. Contradiction. Thus $\delta J (h) = 0$ for all $h \in S$. $\blacksquare$
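The contradiction step can also be seen numerically. A toy sketch of it (my own example, not from the book): on $S = \mathbb{R}^2$ with $J[y] = y \cdot y$, pick a point $y$ and a direction $h_0$ with $\delta J[y;h_0] \neq 0$; then for small $\alpha$ of either sign, $\Delta J[y;\alpha h_0]$ shares the sign of $\alpha\,\delta J[y;h_0]$, so the increment takes both signs in every neighborhood of $y$ and $y$ cannot be an extremum:

```python
# Toy check of the sign argument (example assumed: J[y] = y.y on R^2).
# At a point y with delta J[y; h0] != 0, Delta J[y; alpha*h0] has the
# sign of alpha * delta J[y; h0] for small alpha, so y is no extremum.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def J(y):
    return dot(y, y)

y = (1.0, -0.5)
h0 = (1.0, 0.0)               # delta J[y; h0] = 2 y.h0 = 2 != 0

for alpha in (0.01, -0.01):   # small steps on both sides of y
    h = tuple(alpha * c for c in h0)
    inc = J([a + b for a, b in zip(y, h)]) - J(y)   # Delta J[y; alpha h0]
    lin = 2.0 * dot(y, h)                           # delta J[y; alpha h0]
    # the increment changes sign with alpha => neither a max nor a min
    print(alpha, inc, (inc > 0) == (lin > 0))
```

This mirrors the proof: whenever $\delta J(h_0) \neq 0$, flipping the sign of $\alpha$ flips the sign of $\Delta J(\alpha h_0)$ arbitrarily close to $y$, contradicting the extremum condition.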