I'm trying to take a different approach to showing that the exponential function and $e$ must exist in calculus. Say we want a hypothetical function $f$ with the property that it is its own derivative, and say our knowledge of calculus is rudimentary, so we only know how to apply the power rule (for now). Using this, we can pretty easily formulate an (infinite) power series that satisfies our constraint:
$$ f(x) = \sum_{k = 0}^\infty \frac{x^k}{k!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$
where our only intuition is that each term, when differentiated, should fill the slot of the term before it, which is why we use the factorial-style coefficients. We also don't want the derivative to have a lower degree than the original function, and the only way we can achieve that with the current rule is by having infinitely many terms.
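As a quick numerical sanity check (a sketch, with a 40-term partial sum standing in for the infinite series; the function name `f` and the cutoff are my own choices), the term-shifting property really does make the series its own derivative:

```python
from math import factorial

def f(x, terms=40):
    """Truncated version of the power series sum_{k=0}^infty x^k / k!."""
    return sum(x**k / factorial(k) for k in range(terms))

# Differentiating term-by-term shifts each term into the slot before it,
# so f'(x) should agree with f(x). Compare against a central difference:
h = 1e-6
x = 1.7
deriv = (f(x + h) - f(x - h)) / (2 * h)
print(abs(deriv - f(x)) < 1e-6)  # True: numerically f' = f
```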
Now comes the real problem: how do we prove that there exists a number, say $a$, which satisfies the property that $a^x = f(x)$ for any real $x$ that we can feed into our function?
The idea I'm trying to communicate is to reverse the relationship of the exponential function from the way it's usually taught in Calculus 1, and to motivate a different way of "creating" the constant $e$. I've seen many treatments that define the exponential function as $e^x$, take its Taylor series, and use that series to define the exponential in other spaces (like matrices and complex numbers), but never for the reals.
You can derive some properties of $f(x)$ and then use those properties to show it is an exponential function. Since I'm assuming this is at an introductory calculus level, you don't need to rigorously justify convergence, I think.
Unfortunately, showing that $f(x+y) = f(x)f(y)$ with the power series expansions directly requires some painful mucking around with binomial identities. I think it is easier to instead do it like this.
Consider the function $g(x) = \dfrac{f(x+c)}{f(x)}$, for some particular choice of $c \in \mathbb{R}$. Then $g'(x) = \dfrac{f'(x+c)f(x) - f(x+c)f'(x)}{f(x)^2} = \dfrac{f(x+c)f(x) - f(x+c)f(x)}{f(x)^2} = 0$ for all $x$. Since the derivative of this function is $0$ for all $x$, it must be constant, and since we know $g(0) = f(c)$, that constant must be $f(c)$. So doing some rearranging, $g(x) = f(c) = \dfrac{f(x+c)}{f(x)} \implies f(x+c) = f(c)\cdot f(x)$.
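To see the functional equation $f(x+c) = f(c)\cdot f(x)$ in action, here is a small numerical check (a sketch using a truncated 40-term partial sum; the names `f`, `x`, `c` are my own):

```python
from math import factorial

def f(x, terms=40):
    """Truncated version of the power series sum_{k=0}^infty x^k / k!."""
    return sum(x**k / factorial(k) for k in range(terms))

# The constant-derivative argument gives f(x + c) = f(c) * f(x):
x, c = 0.8, 1.3
print(abs(f(x + c) - f(c) * f(x)) < 1e-9)  # True
```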
Now let $f(1)$ be some particular number, which we will call $e$. (So we're defining $e = \sum\limits_{k=0}^{\infty} \dfrac{1}{k!}$). Then $f(n)$, for $n \in \mathbb{Z}$, is equal to $f(1)\cdot f(n-1) = e\cdot f(n-1)$, and by induction, we have that $f(n) = e^n$. We also have that $f(-n)f(n) = f(-n+n) = f(0) = 1$ (from the power series directly). So $f(-n) = 1/f(n) = e^{-n}$.
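Both integer-power claims can be checked numerically (again a sketch with a truncated partial sum; `e` here is computed as $f(1)$ per the definition above):

```python
from math import factorial

def f(x, terms=40):
    """Truncated version of the power series sum_{k=0}^infty x^k / k!."""
    return sum(x**k / factorial(k) for k in range(terms))

e = f(1)  # our definition of e: sum of 1/k!
print(abs(f(5) - e**5) < 1e-9)       # True: f(n) = e^n by induction
print(abs(f(-3) - 1 / f(3)) < 1e-9)  # True: f(-n) = 1/f(n) since f(0) = 1
```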
Now let $\frac{x}{y}$ be some fraction with $x, y \in \mathbb{Z}$ and $y > 0$. Then $f(x/y)^y = f(\underbrace{x/y + \cdots + x/y}_{y \text{ times}}) = f(x) = e^x$, so raising both sides to the $1/y$ power we get $f(x/y) = (e^x)^{1/y} = e^{x/y}$. We define $e^x$ for irrational $x$ using the power series anyway, but you can use an argument about every real number being the limit of a sequence of rationals to show that $f(x) = e^x$ in that case too, if you want.
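The rational-power step can be sketched numerically the same way (truncated partial sum; `x = 3`, `y = 7` are arbitrary choices of mine):

```python
from math import factorial

def f(x, terms=40):
    """Truncated version of the power series sum_{k=0}^infty x^k / k!."""
    return sum(x**k / factorial(k) for k in range(terms))

# Adding y copies of x/y gives x, so f(x/y)^y should equal f(x) = e^x:
x, y = 3, 7
print(abs(f(x / y)**y - f(x)) < 1e-9)  # True, hence f(x/y) = e^(x/y)
```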