When is a balance assumption consistent?


From *Asymptotic Analysis and Perturbation Theory* by Paulsen:

Find the behavior of the function defined implicitly by $$x^2+xy-y^3=0$$ as $x\to\infty$.

[...]

The final case to try is to assume that $xy$ is the smallest term. Then $x^2 \sim y^3$, which tells us that $y \sim x^{2/3}$. To check to see if this is consistent, we need to check that $xy \ll x^2$. Indeed, $xy \sim x^{5/3}$, which is smaller than $x^2$ as $x \to \infty$.

Note that in this book $f\ll g \iff f\in o(g)$.

I don't understand why it says that the assumption is consistent if $xy \ll x^2$.

Say, $$f(x)+g(x)+h(x)=0$$ and we assume the balance $g \sim -h$. Then we immediately have $f=-(g+h)\in o(g)$ and $f\in o(h)$.

So, if I'm not mistaken, as soon as we have $x^2 \sim y^3$, it follows that $xy \in o(x^2)$.

Moreover what does it mean to assume "that $xy$ is the smallest term" if not $xy \in o(x^2)$?

The whole reasoning seems circular.

What am I missing here?


Best answer:

Basically, if $x \gg 1$ and the whole expression is zero, then you need either three terms of the same order, or two terms of the same order that are much larger than the third term. So you have the cases:

  • Three terms of the same order. This is impossible: if $y$ is on the order of $x^p$, the three terms are of orders $x^2$, $x^{1+p}$, and $x^{3p}$, and $2=1+p=3p$ has no solution ($p=1$ from the first equation, $p=2/3$ from the second).
  • $x^2$ is on the order of $xy$. In this case $y$ is on the order of $x$, so $y^3$ is on the order of $x^3$ and cannot be balanced.
  • $xy$ is on the order of $y^3$. In this case either $y=0$ or else $y$ is on the order of $x^{1/2}$. Either way, $x^2$ is too large to be balanced.
  • $x^2$ is on the order of $y^3$. In this case $y$ is on the order of $x^{2/3}$, so $x^2$ and $y^3$ balance each other while $xy$ is on the order of $x^{5/3}$. This is lower order, so it can be canceled by the two larger terms.
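A quick numerical check (my addition, not from the book) illustrates the surviving balance: for large $x$ the equation's real root $y$ should satisfy $y/x^{2/3} \to 1$ while $xy/x^2 = y/x \to 0$.

```python
import numpy as np

def implicit_y(x):
    """Real root y of x^2 + x*y - y^3 = 0.

    For x > 4/27 the cubic's discriminant is negative, so there is
    exactly one real root (the other two are a complex-conjugate pair).
    """
    # Coefficients of -y^3 + 0*y^2 + x*y + x^2, highest degree first.
    roots = np.roots([-1.0, 0.0, x, x ** 2])
    # Keep the root with (numerically) zero imaginary part.
    return roots[np.argmin(np.abs(roots.imag))].real

for x in [1e2, 1e4, 1e6]:
    y = implicit_y(x)
    print(f"x={x:.0e}  y/x^(2/3)={y / x ** (2/3):.4f}  xy/x^2={x * y / x ** 2:.4f}")
```

As $x$ grows, the first ratio drifts toward $1$ and the second toward $0$, which is exactly the consistency check in the quoted passage.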

That said, these perturbative arguments do have an air of circularity about them; it's all about making an assumption and following it through, stopping when you encounter a contradiction and continuing until then. The assumption only really gets justified after you've already cooked up the theorem, when you go to derive an error estimate.
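To make that last point concrete (my own sketch, not part of the original answer): once the balance $y \sim x^{2/3}$ is accepted, write $y = x^{2/3}(1+\delta)$ with $\delta \to 0$ and substitute:

$$x^2 + x^{5/3}(1+\delta) - x^2(1+\delta)^3 = 0.$$

Expanding $(1+\delta)^3 = 1 + 3\delta + O(\delta^2)$, the two $x^2$ terms cancel and to leading order $x^{5/3} \sim 3\delta\, x^2$, so $\delta \sim \tfrac13 x^{-1/3}$ and

$$y \sim x^{2/3} + \tfrac13 x^{1/3}.$$

The neglected term $xy \sim x^{5/3}$ is precisely what drives this lower-order correction; computing it is the after-the-fact error estimate that justifies the assumption.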