Taylor series with functions as parameters (as opposed to variables)


I'm doing my own research on the Euler-Lagrange equation and came across a proof in van Brunt's textbook "The Calculus of Variations". However, there is something I don't quite understand.

Here is an excerpt from the second chapter (with a little bit of paraphrasing):

Let $J : C^2[x_0, x_1] \to \mathbb{R}$ be a functional of the form $\displaystyle J(y) = \int_{x_0}^{x_1} \! f(x, y, y') \, \mathrm{d}x$, where $f$ is assumed to have at least second order partial derivatives with respect to $x, y, y'$. Assume $y$ has fixed endpoints, i.e. $y(x_0) = y_0$ and $y(x_1) = y_1$.

Now assume that $J$ has a local maximum at $y$. Then there is an $\epsilon > 0$ such that $J(\hat{y}) - J(y) \le 0$ for all $\hat{y} \in \{g \in C^2[x_0, x_1]: g(x_0) = y_0 \text{ and } g(x_1) = y_1\}$ with $\|\hat{y} - y\| < \epsilon$.

For any $\hat{y}$ there is an $\eta$ such that $\hat{y} = y + \epsilon \eta$, and for $\epsilon$ small Taylor's theorem implies that

$\begin{align} f(x, \hat{y}, \hat{y}') &= f(x, y + \epsilon \eta, y' + \epsilon \eta')\\ &= f(x, y, y') + \epsilon \left( \eta \frac{\partial f}{\partial y} + \eta' \frac{\partial f}{\partial y'} \right) + O(\epsilon^2) \end{align}$.

I'd like to ask: why is this Taylor series valid? I've seen Taylor series for functions of several variables, but never for a function $f$ whose arguments $\hat{y}(x)$ and $\hat{y}'(x)$ are themselves functions.

The textbook states

Here, we regard $f$ as a function of three independent variables $x, y,$ and $y'$ and the partial derivatives in the above expression are all evaluated at the point $(x, y, y')$.

But is it okay to just regard functions as independent variables like this?
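Whatever the justification, the expansion itself can at least be checked numerically. Below is a quick sanity check with an illustrative choice of $f$, $y$, and $\eta$ (my own, not from the textbook): the remainder after the first-order term should scale like $\epsilon^2$, i.e. shrink by a factor of about 4 when $\epsilon$ is halved.

```python
import math

# Illustrative choices (not from the textbook): f(x, y, y') = x*y^2 + y*y',
# y(x) = sin(x), and a perturbation eta(x) = x*(1 - x), at a fixed point x.
def f(x, y, yp):
    return x * y**2 + y * yp

def df_dy(x, y, yp):      # partial f / partial y
    return 2 * x * y + yp

def df_dyp(x, y, yp):     # partial f / partial y'
    return y

x = 0.7
y, yp = math.sin(x), math.cos(x)       # y(x) and y'(x)
eta, etap = x * (1 - x), 1 - 2 * x     # eta(x) and eta'(x)

def remainder(eps):
    """Gap between f(x, y+eps*eta, y'+eps*eta') and its first-order expansion."""
    exact = f(x, y + eps * eta, yp + eps * etap)
    first_order = f(x, y, yp) + eps * (eta * df_dy(x, y, yp)
                                       + etap * df_dyp(x, y, yp))
    return abs(exact - first_order)

# Halving eps should shrink the remainder by roughly a factor of 4 (it is O(eps^2)).
print(remainder(1e-2), remainder(5e-3))
```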

Best answer

For one variable, Taylor's theorem around $x=a$ can be written as
$$f(a+\epsilon)-f(a)=\epsilon f'(a)+\frac{\epsilon^2}{2}f''(a)+O(\epsilon^3).$$

For the functional case, write
$$J(y)=\int_a^b f(x,y,y')\,dx,$$
and for weak variations $\hat y=y+\epsilon t$ it follows that
$$J(\hat y)=J(y+\epsilon t)=\int_a^b f(x,y+\epsilon t,y'+\epsilon t')\,dx.$$

If $y$ is to be stationary, the following difference must vanish, since we do not want the integral to change under small variations around $y$:
$$J(y+\epsilon t)-J(y)=\int_a^b\Big(f(x,y+\epsilon t,y'+\epsilon t')-f(x,y,y')\Big)\,dx.$$

The integral vanishes independently of $dx$ if the integrand vanishes on all of $[a,b]$:
$$f(x,y+\epsilon t,y'+\epsilon t')-f(x,y,y')=0.$$

Comparing with the one-variable Taylor formula (with $\epsilon$ replaced by $\epsilon t$ in the $y$-slot and by $\epsilon t'$ in the $y'$-slot), it follows that
$$f(x,y+\epsilon t,y'+\epsilon t')-f(x,y,y')=\epsilon t\,\frac{\partial f(x,y,y')}{\partial y}+\epsilon t'\,\frac{\partial f(x,y,y')}{\partial y'}+O(\epsilon^2)=0.$$
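As a numerical illustration of this first-order behaviour (again with my own example: $f(x,y,y') = xy^2 + yy'$, $y = \sin x$, $t = x(1-x)$ on $[0,1]$, so $t$ vanishes at both endpoints), $J(y+\epsilon t)-J(y)$ should agree with $\epsilon\int_a^b \big(t\,\partial f/\partial y + t'\,\partial f/\partial y'\big)\,dx$ up to $O(\epsilon^2)$:

```python
import math

# Illustrative example: J(y) = integral_0^1 f(x, y, y') dx with
# f(x, y, y') = x*y^2 + y*y', y(x) = sin(x), t(x) = x*(1-x) (so t(0)=t(1)=0).
def f(x, y, yp):
    return x * y**2 + y * yp

def J(eps, n=20000):
    """Trapezoidal approximation of J(y + eps*t)."""
    total, h = 0.0, 1.0 / n
    for i in range(n + 1):
        x = i * h
        y = math.sin(x) + eps * x * (1 - x)
        yp = math.cos(x) + eps * (1 - 2 * x)
        w = 0.5 if i in (0, n) else 1.0
        total += w * f(x, y, yp) * h
    return total

def first_variation(n=20000):
    """Trapezoidal approximation of integral_0^1 (t*df/dy + t'*df/dy') dx along y."""
    total, h = 0.0, 1.0 / n
    for i in range(n + 1):
        x = i * h
        y, yp = math.sin(x), math.cos(x)
        t, tp = x * (1 - x), 1 - 2 * x
        integrand = t * (2 * x * y + yp) + tp * y   # t*f_y + t'*f_{y'}
        w = 0.5 if i in (0, n) else 1.0
        total += w * integrand * h
    return total

eps = 1e-4
# The two quantities should agree up to O(eps^2).
print(J(eps) - J(0.0), eps * first_variation())
```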

Edit: $f$ treats the functions as independent variables. Compare with constrained optimization: in the objective function you treat every variable as independent, and the constraint equations then impose relations between those variables. It is the same here: the variables are treated as independent inside $f$, and the relation is imposed afterwards by $\hat y=y+\epsilon t$ and $\hat y'=y'+\epsilon t'$.
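One way to make the "independent variables" viewpoint precise (my own sketch, not from the answer): fix $x$, so $y(x)$, $y'(x)$, $t(x)$, $t'(x)$ are just numbers, and view the integrand as an ordinary function of the single real parameter $\epsilon$. Then the expansion is nothing but one-dimensional Taylor plus the chain rule:

```latex
% Fix x and regard the integrand as an ordinary function of \epsilon alone:
\varphi(\epsilon) = f\bigl(x,\ y(x) + \epsilon\, t(x),\ y'(x) + \epsilon\, t'(x)\bigr).
% One-dimensional Taylor's theorem about \epsilon = 0 gives
\varphi(\epsilon) = \varphi(0) + \epsilon\, \varphi'(0) + O(\epsilon^2),
% and the multivariable chain rule (f smooth in its three slots) gives
\varphi'(0) = t(x)\, \frac{\partial f}{\partial y}(x, y, y')
            + t'(x)\, \frac{\partial f}{\partial y'}(x, y, y').
```

Substituting $\varphi'(0)$ back into the Taylor formula recovers exactly the expansion quoted from van Brunt, with all partial derivatives evaluated at $(x, y, y')$.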