I'm reading a book that reproduces Euler's arguments, and I have a few questions about some of the things he does. Below are parts of the argument:
Let $a > 1$. Consider an "infinitely small quantity" $\omega$.
$a^\omega \approx 1$.
Let $a^\omega = 1 + \psi$, for $\psi$ an "infinitely small number".
Then, wishing to relate $\psi$ and $\omega$, he sets $\psi = k\omega$ for a real number $k$.
So we have $a^\omega = 1 + k\omega$.
At this point apparently Euler computed some examples:
for $a = 10$ and $\omega = 0.000001$, $k = 2.3026$,
and for $a = 5$ and $\omega = 0.000001$, $k = 1.60944$.
He then concluded that $k$ is a finite number that depends on the value of the base $a$.
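Euler's table is easy to reproduce numerically. A quick sketch (the helper name `k_of` is mine, not Euler's, and $\omega$ here is merely small, not "infinitely small"):

```python
import math

# For a small omega, Euler's k is (a^omega - 1) / omega.
def k_of(a, omega):
    return (a ** omega - 1) / omega

omega = 0.000001
print(k_of(10, omega))            # close to 2.3026, i.e. ln(10)
print(k_of(5, omega))             # close to 1.60944, i.e. ln(5)
print(math.log(10), math.log(5))  # the values k approaches as omega -> 0
```

(In floating point, `math.expm1(omega * math.log(a)) / omega` avoids the cancellation in `a**omega - 1` for very small `omega`, but the naive form suffices here.)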
Now for a finite number $x$ he sought the expansion of $a^x$. To do this he set $j = \frac{x}{\omega}$, expressed $x$ as $x = \omega j$, and continued.
After he succeeded in finding an expansion for $a^x$ he sought the expansion for the natural logarithm (the inverse function of $a^x$ where the base $a$ is the one for which $k = 1$, in our (and Euler's) notation $a = e$).
1) How should one think of infinitely small and infinitely large numbers?
2) It's not clear to me that the value of $k$ in the derivation of a power series for $a^x$ doesn't also depend on $\omega$. That is, for $a = 10$, if we take $\omega$ to be a different (small) value, it's not clear to me that $k$ wouldn't change. Unless the idea is that we let $\omega$ go to $0$ and in the limit $a^\omega = 1 + k\omega$?
3) It's not clear to me that a finite positive number $x$ can be expressed as $x = \omega j$ for some $j$, since $\omega$ is a mysterious "infinitely small" quantity.
4) It's not clear to me a priori that there should exist a unique base value $a$ for which $k = 1$, which Euler seems to assume. Although I suppose the expansion of $a^x$ is in terms of $k$, and setting $x = 1$ and $k = 1$ we can compute the base $a$ for which $k = 1$ and see that it is the value we take our constant $e$ to be. Is this how Euler could've known there exists such a base?
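For what it's worth, questions 2 and 4 can be probed numerically (this is my own sanity check, not anything from the book): $k = (a^\omega - 1)/\omega$ does change with $\omega$, but it settles down as $\omega \to 0$; and solving $a^\omega = 1 + \omega$ (the case $k = 1$) for $a$ gives $a = (1+\omega)^{1/\omega}$, which approaches $e$:

```python
import math

# Question 2: k depends (slightly) on omega, but converges as omega -> 0.
a = 10
for omega in (1e-2, 1e-4, 1e-6, 1e-8):
    print(omega, (a ** omega - 1) / omega)   # drifts toward ln(10)

# Question 4: if k = 1, then a^omega = 1 + omega,
# so a = (1 + omega)^(1/omega), which tends to e as omega -> 0.
for omega in (1e-2, 1e-4, 1e-6):
    print((1 + omega) ** (1 / omega))
print(math.e)
```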
In his derivation of the expansion of $\ln(1+x)$, he writes:
Thus for "infinitely small $\omega$", $e^\omega = 1 + \omega$.
Thus $\ln(1 + \omega) = \omega$.
So $j\omega = \ln\left((1 + \omega)^j\right)$. But $\omega$, although infinitely small, is positive, so the larger the number chosen for $j$, the more $(1+\omega)^j$ will exceed $1$.
So for any positive $x$, we can find $j$ so that $x = (1 + \omega)^j - 1$.
From this he concludes that $1 + x = (1 + \omega)^j$, and so $\ln(1 + x) = j\omega$. And since $\ln(1 + x)$ is finite and $\omega$ is infinitely small, $j$ must be infinitely large.
5) In deriving an expansion for $\ln(1+x)$, Euler argues that $\omega$, although infinitely small, is positive, so the larger the number chosen for $j$, the more $(1+\omega)^j$ will exceed $1$; hence for any positive $x$, we can find $j$ so that $x = (1 + \omega)^j - 1$.
This makes the infinitely small notion even more confusing: $(1 + \omega)^j$ can be made arbitrarily large by raising $1 + \omega$ to higher powers, so $\omega$ contributes a nonzero amount, and so how can it be infinitely small? It turns out that $j$ must be infinitely large, but we were told $(1 + \omega)^j$ is larger when a larger number is chosen for $j$. How can a larger number be chosen than an "infinitely large" number?
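The tension in question 5 can also be rendered numerically (again my own sketch, with $\omega$ small rather than infinitesimal): fix $\omega$ and take $j = \ln(1+x)/\ln(1+\omega)$, so that $(1+\omega)^j = 1 + x$ exactly. Then $j$ grows without bound as $\omega$ shrinks, while the product $j\omega$ stays pinned near $\ln(1+x)$:

```python
import math

x = 2.0  # target: ln(1 + x) = ln(3) ~ 1.0986
for omega in (1e-3, 1e-6, 1e-9):
    # This j makes (1 + omega)**j equal 1 + x exactly.
    j = math.log(1 + x) / math.log(1 + omega)
    print(omega, j, j * omega)  # j blows up; j*omega stays near ln(3)
```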
Edit in response to a comment.
Essentially, I agree with @JairTaylor. We all think naively before rigorizing. I meant my answer as a tribute to Euler, who thought his way through to correct conclusions before rigor in analysis was invented.