I am making my way through an analysis textbook and we have defined $e$ as $\lim_{n \to \infty} (1 + \frac{1}{n})^n$. We also defined $e^x$ as $\lim_{n \to \infty} (1 + \frac{x}{n})^n$.
I am quite uncertain as to where this exponentiation definition comes from. For example, in the book we have shown how $\sqrt{2}$ can be represented as an infinite decimal. The method we used was to take increasing decimal expansions whose squares stayed just under $2$, adding one decimal place at a time. So for example:
$$1^2 < 2 < 2^2$$ $$1.4^2 < 2 < 1.5^2$$ $$1.41^2 < 2 < 1.42^2$$
and so on. We then define $\sqrt{2}$ to be the supremum of the set of improving approximations, $\sup \{1, 1.4, 1.41, 1.414, \ldots\}$, and we showed that $2$ is indeed the least upper bound of the set of squares $\{1, 1.4^2, 1.41^2, 1.414^2, \ldots\}$.
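The digit-by-digit squeeze above can be sketched in a few lines (my own illustration, not from the book; `sqrt2_approx` is a name I made up). At each step we append the largest decimal digit whose square still stays below $2$:

```python
from fractions import Fraction

def sqrt2_approx(num_digits):
    """Truncated decimal expansion of sqrt(2) with num_digits decimal places."""
    approx = Fraction(1)          # start at 1, since 1^2 < 2 < 2^2
    step = Fraction(1)
    for _ in range(num_digits):
        step /= 10                # refine by one more decimal place
        digit = 0
        # pick the largest digit d with (approx + d*step)^2 still below 2
        while (approx + (digit + 1) * step) ** 2 < 2:
            digit += 1
        approx += digit * step
    return approx

print(float(sqrt2_approx(6)))     # → 1.414213
```

Using exact `Fraction` arithmetic avoids floating-point rounding while the squeeze is being computed; the squares of the successive approximations form exactly the increasing sequence $1, 1.4^2, 1.41^2, \ldots$ whose supremum is $2$.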
So it seems to me that we have already established a way to raise irrational numbers to integer powers. So now my question is, why not just do the same for $e^x$? I don't understand why we introduce the limit definition to define exponentiation. Once we are given the definition of $e$, which is just a constant, why can't we use the same procedure as described above to define powers of $e$? (By the same procedure I mean taking increasingly accurate finite decimal approximations of $e$ and multiplying each one by itself $x$ times.)
For example, my confusion is along these lines: say we have defined exponentiation of natural numbers by natural numbers in the usual intuitive sense, so $3^5 = 3\cdot3\cdot3\cdot3\cdot3$. Then, all of a sudden, we introduce a limit to define $7^8$ instead of using the established exponentiation.
I know that, for integer values of $x$ at least, $\lim_{n \to \infty} (1 + \frac{x}{n})^n$ converges to the same value we would get using the other method, so the definition is clearly consistent. But where exactly does it come from? Do we use it simply because it 'works', i.e. because it converges to the $e^x$ we would have obtained the other way?
Note: I know a caveat to the other method presented here is that in the described form, it can only be used for integer powers. In my book it does say this can be generalised to real powers, but using $e^x$, which doesn't give me much insight.
The definition in your book isn't great. It would make more sense to define the function $e^x = \exp(x)$ as, for example, the unique solution of the differential equation $y' = y$ with $y(0) = 1$. Directly from that definition, it's not hard to prove that $\exp(x + x') = \exp(x) \exp(x')$ (by noting that for fixed $x'$, the function $x \mapsto \exp(x + x')/\exp(x')$ also satisfies the same differential equation, hence equals $\exp(x)$), that $\exp$ is defined on the entire real line, that $e^x = \sum_{n \ge 0} x^n/n!$, and so on. With that in mind, the number $e$ is just $\exp(1)$.
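Incidentally, the differential-equation definition does connect back to the book's limit (my own illustrative sketch, not part of the answer): running Euler's method on $y' = y$ with $n$ steps of size $x/n$ multiplies $y$ by $(1 + x/n)$ at each step, so after $n$ steps it produces exactly $(1 + x/n)^n$, up to floating-point rounding. The limit definition is the discretisation of the differential equation.

```python
import math

# Euler's method for y' = y with y(0) = 1, using n steps of size h = x/n.
# Each step sends y to y + h*y = y * (1 + x/n), so after n steps the
# result equals (1 + x/n)^n -- the book's limit definition.
def euler_exp(x, n):
    y, h = 1.0, x / n
    for _ in range(n):
        y = y + h * y             # Euler step: y_{k+1} = y_k + h * y_k
    return y

print(euler_exp(1.0, 1_000_000))  # approaches e = 2.71828...
```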
More specifically, you asked why we can't define $e^x$ by extending the function $y^x$ for $x, y \in \mathbb{Q}^+$ to real $x, y$ with $y > 0$. You could, but you wouldn't gain much from it. You'd have to prove that the extension exists and is well-defined, that it has nice properties like $(yy')^x = y^x (y')^x$ and $y^{x + x'} = y^x y^{x'}$, and so on. It's not insurmountable, but it's not as easy as setting $y^x = e^{x \log y}$ and getting most of those properties for free.
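To see that last point concretely (my own sketch; `power` is a hypothetical helper, not a standard name): once $y^x$ is defined as $e^{x \log y}$, the law $y^{x + x'} = y^x y^{x'}$ is inherited directly from $\exp(a + b) = \exp(a)\exp(b)$, with no separate argument about rational approximations.

```python
import math

# With y**x defined as exp(x * log y) (valid for y > 0), the exponent law
# y^(x + x') = y^x * y^(x') reduces to exp(a + b) = exp(a) * exp(b).
def power(y, x):
    return math.exp(x * math.log(y))

y, x1, x2 = 2.0, 0.5, 1.5
print(power(y, x1 + x2))               # 2^2 = 4, up to rounding
print(power(y, x1) * power(y, x2))     # same value
```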
As for why $e$ is defined in the text as $\lim_{n\to\infty} (1 + 1/n)^n$, I don't think there's any justification for it aside from being an easy definition to make. It's not motivated, it's not useful, and it's not even interesting.