Proving that a multilinear operator is differentiable


A multilinear map is an operator $P:X^1 \times \dots \times X^n \to Y$ such that the following holds for all $\lambda, \mu \in \mathbb R$:

$$P(\lambda x_1^1 + \mu x_2^1, x^2,\dots,x^n) = \lambda P(x_1^1,x^2,\dots,x^n) + \mu P(x_2^1,x^2,\dots,x^n)$$
$$\vdots$$
$$P(x^1, x^2,\dots,\lambda x_1^n + \mu x_2^n) = \lambda P(x^1,x^2,\dots,x_1^n) + \mu P(x^1,x^2,\dots,x_2^n)$$

My definition of differentiability:

Let $X$ and $Y$ be normed vector spaces over the same field ($\mathbb R$ or $\mathbb C$) and let $U$ be an open set in $X$. A function $f:U \to Y$ is said to be differentiable at a point $x \in U$ if there exists a continuous linear map $A_x:X \to Y$ such that $$f(x+h)-f(x)=A_xh+R(h),$$ where $$\lim_{h \to 0}\frac{\|R(h)\|}{\|h\|}=0, \text{ i.e. } R(h)=o(h).$$

Now to my example and question: $$P(x^1+h^1,\dots,x^n+h^n)-P(x^1,\dots,x^n)=P(h^1,x^2,\dots,x^n)+\dots+P(x^1,\dots,x^{n-1},h^n)+ \sum P(y^1,\dots,y^n),$$ where in the last sum each $y^i$ is either $x^i$ or $h^i$, and $y^i=h^i$ holds for at least two indices $i=1,\dots,n$.
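This expansion over all choices of $y^i \in \{x^i, h^i\}$ (the all-$x$ term cancelling against $-P(x^1,\dots,x^n)$) can be checked numerically. A minimal sketch for a concrete trilinear map $P(u,v,w) = uvw$ on $\mathbb R \times \mathbb R \times \mathbb R$ (the map and the sample points are my own choices, purely for illustration):

```python
from itertools import product

# A concrete trilinear map P : R x R x R -> R (chosen only for illustration).
def P(u, v, w):
    return u * v * w

x = (1.5, -2.0, 0.5)
h = (0.01, 0.02, -0.03)

# Left-hand side: P(x^1+h^1, x^2+h^2, x^3+h^3) - P(x^1, x^2, x^3).
lhs = P(*(xi + hi for xi, hi in zip(x, h))) - P(*x)

# Right-hand side: expand over all choices y^i in {x^i, h^i},
# dropping the all-x term (it cancels against -P(x)).
rhs = 0.0
for choice in product((0, 1), repeat=3):   # 0 -> y^i = x^i, 1 -> y^i = h^i
    if choice == (0, 0, 0):
        continue
    rhs += P(*(h[i] if c else x[i] for i, c in enumerate(choice)))

print(lhs, rhs)   # both sides agree up to floating-point error
```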

Now it says that $P(h^1,x^2,\dots,x^n)+\dots+P(x^1,\dots,x^{n-1},h^n)$ is linear in $h = (h^1,\dots,h^n)$, which is clear, and then it says:

$$\Big\| \sum P(y^1,\dots,y^n) \Big\|\leq \sum \|P\| \|y^1\|\cdots\|y^n\| \leq M \|P\|\|h\|^2\|x\|^{n-2}, \quad M=2^n-n-1, \implies \sum P(y^1,\dots,y^n)=o(h)$$
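The constant $M = 2^n - n - 1$ here is just the number of terms in the sum: the choices of $(y^1,\dots,y^n)$ with $y^i = h^i$ for at least two indices $i$. A quick sanity check of that count (the value $n = 4$ is my arbitrary choice):

```python
from itertools import product

n = 4
# Count the expansion terms in which y^i = h^i for at least two indices i.
count = sum(1 for choice in product((0, 1), repeat=n) if sum(choice) >= 2)
print(count, 2**n - n - 1)   # both equal 11 for n = 4
```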

My question and the confusion I now have is the following:

It seems I can make many different candidates that are linear and have $R(h)=o(h)$. For example, take $A_x=0$ and $$R(h)=\sum P(y^1,\dots,y^n),$$ where in this sum each $y^i$ is either $x^i$ or $h^i$, and $y^i=h^i$ holds for at least one index $i=1,\dots,n$. Then $$\Big\| \sum P(y^1,\dots,y^n) \Big\|\leq \sum \|P\| \|y^1\|\cdots\|y^n\| \leq M \|P\|\|h\|\|x\|^{n-1}, \quad M=2^n-1, \implies \sum P(y^1,\dots,y^n)=o(h).$$

This would mean the derivative is not unique, which it should be. Or am I misunderstanding the idea? Any help?



On BEST ANSWER

If your $R(h)$ term contains an individual $h^i$ to the first power, then it is not $o(h)$. Let's use a relatively simple example that fits your setting. Consider the bilinear map $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ given by $$L(x, y) = axy,$$ where $a \in \mathbb{R}$.

As you stated, for $L$ to be differentiable we need $$L(x_1+h_1, x_2+h_2)-L(x_1,x_2)=A_{(x_1, x_2)}(h_1, h_2) +R((h_1, h_2)),$$ where $$\lim_{h \to 0}\frac{\|R(h)\|}{\|h\|}=0, \text{ i.e. } R(h)=o(h).$$

We can calculate the derivative of $L$ explicitly: \begin{eqnarray} L(x_1+h_1, x_2+h_2)-L(x_1,x_2) &=& L(x_1+h_1, x_2+h_2)-L(x_1,x_2+h_2) + L(x_1,x_2+h_2) - L(x_1, x_2) \\ &=& L(h_1, x_2+h_2) + L(x_1,h_2) \\ &=& L(h_1, x_2) + L(x_1,h_2) + L(h_1, h_2)\\ \end{eqnarray} Here $A_{(x_1, x_2)}(h_1, h_2) = L(h_1, x_2) + L(x_1,h_2)$ and $R(h) = L(h_1, h_2)$.

If we were instead to move either term of the derivative (e.g. $L(h_1, x_2)$) into $R(h)$, it would no longer be $o(h)$: along $h = (t, 0)$ we get $\|L(t, x_2)\|/\|h\| = |a||x_2|$, which does not tend to $0$ when $ax_2 \neq 0$.
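This distinction can also be seen numerically: the remainder $R(h) = L(h_1, h_2)$ satisfies $\|R(h)\|/\|h\| \to 0$, while the first-order term $L(h_1, x_2)$ divided by $\|h\|$ stays bounded away from $0$. A quick sketch (the constants $a$, $x_1$, $x_2$ are my arbitrary choices):

```python
import math

a = 2.0                       # coefficient in L(x, y) = a*x*y
def L(x, y):
    return a * x * y

x1, x2 = 1.0, 3.0             # base point (arbitrary, with x2 != 0)

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    h1, h2 = t, t             # shrink h -> 0 along the diagonal
    norm_h = math.hypot(h1, h2)
    good = abs(L(h1, h2)) / norm_h   # R(h) = L(h1, h2): ratio tends to 0
    bad = abs(L(h1, x2)) / norm_h    # first-order term: ratio stays ~ a*x2/sqrt(2)
    print(f"|h|={norm_h:.1e}  |L(h1,h2)|/|h|={good:.3e}  |L(h1,x2)|/|h|={bad:.3e}")
```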