When trying to derive, from first principles, the fact that exponential functions $a^x$ (where $a>1$ is real) are differentiable, we easily see that $$ \lim_{h\to0} \frac{a^{x+h}-a^x}h = a^x \lim_{h\to0} \frac{a^h-1}h, $$ provided the latter limit exists. It's even pretty easy to see that $$ \lim_{h\to0} \frac{a^h-1}h = ( \log_b a ) \lim_{h\to0} \frac{b^h-1}h $$ for any other real $b>1$, provided the latter limit exists. (And then one can define $e$ to be the number such that $\lim_{h\to0} \frac{e^h-1}h = 1$ and continue.)
So my question, which doesn't seem to have an answer on this site (though I'd be happy to be proved wrong) nor in the textbooks I've consulted: how can one justify the existence of any limit of the form $\lim_{h\to0} \frac{b^h-1}h$ $(b>1)$, without using the as-yet-underived fact that $b^x$ is differentiable? (Edited to add: I also want to avoid infinite series.)
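(As a numerical sanity check, with no proof content: the change-of-base reduction above can be watched converging. In the Python snippet below, the bases $a=3$ and $b=2$ are arbitrary choices.)

```python
import math

# Check numerically that (a**h - 1)/h ≈ log_b(a) * (b**h - 1)/h
# as h -> 0, for arbitrary bases a, b > 1.
a, b = 3.0, 2.0
for h in (1e-2, 1e-4, 1e-6):
    lhs = (a**h - 1.0) / h
    rhs = math.log(a, b) * (b**h - 1.0) / h
    print(h, lhs, rhs)   # the two columns agree more and more closely
```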
This is just to address some comments by Greg Martin. I place it here because it is too long for the comment section.
It is easy to check that convexity of a function $\varphi$ is equivalent to the inequalities $$ \begin{align} \frac{\varphi(u)-\varphi(x)}{u-x}\leq\frac{\varphi(y)-\varphi(x)}{y-x}\leq \frac{\varphi(y)-\varphi(u)}{y-u},\qquad x<u<y.\tag{1}\label{convex-equiv} \end{align} $$ For fixed $a<x<b$, inequalities $\eqref{convex-equiv}$ show that the map $u\mapsto \tfrac{\varphi(u)-\varphi(x)}{u-x}$ is nondecreasing in $u$: it decreases as $u\searrow x$ and increases as $u\nearrow x$. Consequently,
the maps $$ \begin{align} \alpha(x):=\sup_{a<u<x}\frac{\varphi(u)-\varphi(x)}{u-x}; \quad \beta(x):=\inf_{x<v<b}\frac{\varphi(v)-\varphi(x)}{v-x}\tag{2}\label{convex-derivative} \end{align} $$ satisfy $$\begin{align} \alpha(x)\leq\beta(x)\leq\alpha(y),\quad a<x<y<b.\tag{3}\label{leftrightderivative} \end{align} $$
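Purely as a numerical illustration of $\eqref{convex-derivative}$ and $\eqref{leftrightderivative}$ (not part of the argument), here is a small Python sketch for the convex function $\varphi(x)=|x|$; the helper names `alpha` and `beta` are mine:

```python
# Numerical sketch of the one-sided slope maps (2) for the convex
# function phi(x) = |x| on (a, b) = (-1, 1).  Grid-based sup/inf,
# illustration only.

def phi(x):
    return abs(x)

def alpha(x, a=-1.0, n=10**4):
    # sup of slopes (phi(u)-phi(x))/(u-x) over a < u < x
    us = [a + (x - a) * k / n for k in range(1, n)]
    return max((phi(u) - phi(x)) / (u - x) for u in us)

def beta(x, b=1.0, n=10**4):
    # inf of slopes (phi(v)-phi(x))/(v-x) over x < v < b
    vs = [x + (b - x) * k / n for k in range(1, n)]
    return min((phi(v) - phi(x)) / (v - x) for v in vs)

# At the corner of |x|, alpha(0) = -1 < beta(0) = +1,
# and alpha(x) <= beta(x) <= alpha(y) for x < y, as in (3).
print(alpha(0.0), beta(0.0))   # -1.0 1.0
print(alpha(0.5), beta(0.5))   # 1.0 1.0
```

The corner at $0$ is exactly where $\alpha<\beta$, i.e., where the convex function fails to be differentiable.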
Lemma: The functions $\alpha$ and $\beta$ are nondecreasing; moreover, $\alpha$ is left continuous and $\beta$ is right continuous. Furthermore, $\alpha(x+)=\beta(x)$ and $\beta(x-)=\alpha(x)$.
Proof: That $\alpha$ and $\beta$ are nondecreasing is immediate from $\eqref{leftrightderivative}$. Let $x\in(a,b)$ be fixed and pick any $v\in(x,b)$. From $\eqref{leftrightderivative}$ and the definition of $\beta$, $$\beta(x)\leq\alpha(x+\tfrac1n)\leq\beta(x+\tfrac1n)\leq\frac{\varphi(v)-\varphi(x+\tfrac1n)}{v-x-\tfrac1n}$$ for all $n$ large enough that $x+\tfrac1n<v$. Letting $n\rightarrow\infty$ and using the continuity of $\varphi$ (a consequence of the local Lipschitz bounds contained in $\eqref{convex-equiv}$), we obtain $\beta(x)\leq\alpha(x+)\leq\beta(x+)\leq\tfrac{\varphi(v)-\varphi(x)}{v-x}$; taking the infimum over $v\in(x,b)$ gives $\beta(x)\leq\alpha(x+)\leq\beta(x+)\leq\beta(x)$, so $\beta$ is right continuous and $\alpha(x+)=\beta(x)$. The corresponding statements for $\alpha$ and left limits follow by using $x-\tfrac1n$ and chords from the left instead.
Since the functions $\alpha$ and $\beta$ are nondecreasing and related as in the Lemma, we conclude that $\alpha=\beta$ on $(a,b)$ except on a countable set of common discontinuities, where both have jumps of the same size.
Theorem: If $\varphi:(a,b)\rightarrow\mathbb{R}$ is convex, then $\varphi$ is continuous; moreover, $\varphi$ is differentiable everywhere except on a countable set, and
$$
\varphi(y)-\varphi(x)=\int^y_x\beta(t)\,dt=\int^y_x\alpha(t)\,dt
$$
for all $a<x<y<b$.
Proof: Suppose $a<x<y<b$ and let $x=x_0<\ldots<x_n=y$ be any partition. By the definitions $\eqref{convex-derivative}$, $$ \beta(x_{m-1})(x_m-x_{m-1})\leq\varphi(x_m)-\varphi(x_{m-1}) \leq \alpha(x_m)(x_m-x_{m-1}). $$ Summing over $m$ gives $$ \sum^n_{m=1}\beta(x_{m-1})(x_m-x_{m-1})\leq\varphi(y)-\varphi(x) \leq \sum^n_{m=1}\alpha(x_m)(x_m-x_{m-1}). $$ Since $\alpha$ and $\beta$ are monotone, they are Riemann integrable, and they agree off a countable set; refining the partition therefore yields $\varphi(y)-\varphi(x)=\int^y_x\beta(t)\,dt=\int^y_x\alpha(t)\,dt$. Hence $\varphi$ is continuous on any closed subinterval, and differentiable everywhere except in the countable set $N$ of discontinuities of $\beta$.
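A quick numerical sanity check of this representation (Python, illustration only), using $\varphi(x)=|x|$, whose right derivative $\beta$ jumps at $0$:

```python
# Check phi(y) - phi(x) = ∫_x^y beta(t) dt for the convex function
# phi(x) = |x|, whose right derivative is beta(t) = -1 for t < 0
# and +1 for t >= 0.  Midpoint Riemann sum, illustration only.

def phi(x):
    return abs(x)

def beta(t):
    return -1.0 if t < 0 else 1.0

def riemann(f, x, y, n=10**5):
    # midpoint Riemann sum of f over [x, y]
    h = (y - x) / n
    return sum(f(x + (k + 0.5) * h) for k in range(n)) * h

x, y = -0.3, 0.8
print(phi(y) - phi(x), riemann(beta, x, y))   # 0.5 and approximately 0.5
```

The identity holds even across the corner at $0$, where $\varphi$ is not differentiable.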
Comment 1: There is no need to appeal to integral calculus to show continuity of $\varphi$; I am sure the OP knows many ways to achieve this.
Comment 2: Using the fact that the left and right derivatives $\alpha$ and $\beta$ are monotone, along with the left-right continuity relations between them, one can conclude that $\varphi$ is differentiable at every point except on a countable set where $\alpha$ and $\beta$ have jump discontinuities. All this, I believe, makes the arguments suitable for a course in differential calculus prior to the introduction of Riemann integration.
Suppose $\varphi(x)=a^x$ is differentiable at $x_0$ (such an $x_0$ exists by the discussion above, since $\varphi$ is convex: by the AM–GM inequality $a^{(x+y)/2}=\sqrt{a^x a^y}\leq\tfrac12(a^x+a^y)$, and midpoint convexity together with continuity implies convexity). From the existence of $\lim_{h\rightarrow0}\frac{\varphi(x_0+h)-\varphi(x_0)}{h}=\varphi(x_0)\lim_{h\rightarrow0}\frac{\varphi(h)-1}{h}$, the existence of $\lim_{h\rightarrow0}\frac{\varphi(h)-1}{h}$ follows, and from this, differentiability at every point follows.
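Again only as numerical evidence (the convexity argument above is what does the real work): for $a=2$, the chord slopes $q(h)=(a^h-1)/h$ through $0$ are nondecreasing in $h$, so they decrease as $h\searrow0$ and increase as $h\nearrow0$, and being monotone and bounded they converge. A quick Python check, with throwaway names:

```python
# Chord slope of the convex function x |-> a**x through 0:
# q(h) = (a**h - 1)/h is nondecreasing in h, so it decreases as
# h -> 0+ and increases as h -> 0-; monotone + bounded => limits exist.
a = 2.0

def q(h):
    return (a**h - 1.0) / h

hs = [10.0**-k for k in range(1, 8)]
right = [q(h) for h in hs]     # h -> 0+: values decrease
left = [q(-h) for h in hs]     # h -> 0-: values increase
print(right[-1], left[-1])     # both approach the same limit (ln 2 = 0.6931...)
```

Note the check uses only powers of $a$, no logarithms, mirroring the elementary argument.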
Alternative method:
I dusted off a couple of my old Soviet textbooks (Kudriavtsev, L. D., Curso de Análisis Matemático, Vol. 1, and Nikolsky, S. M., A Course of Mathematical Analysis, Vol. I), and this is more or less how the derivative of exponential functions is presented without defining the log function as an integral:
Then, for $a>1$, the function $\log_a:(0,\infty)\rightarrow\mathbb{R}$, being the inverse of the strictly increasing continuous function $\varphi_a(x)=a^x$, is itself continuous and strictly increasing.
By continuity of $\log_a$ and the limit $\lim_{x\rightarrow0}(1+x)^{1/x}=e$, $$\lim_{x\rightarrow0}\frac{\log_a(x+1)}{x}=\lim_{x\rightarrow0}\log_a\Big(\big(1+x\big)^{1/x}\Big)=\log_ae.$$
The punch line: To compute $\lim_{h\rightarrow0}\frac{e^h-1}{h}$, let $t=e^h-1$ so that $h=\ln(t+1)$, $t>-1$. Then $h\rightarrow0$ is equivalent to $t\rightarrow0$. From this, $$\lim_{h\rightarrow0}\frac{e^h-1}{h}=\lim_{t\rightarrow0}\frac{t}{\ln(1+t)}=1$$
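To see the punch line numerically (merely a sanity check in Python; `math.log1p` computes $\ln(1+t)$ accurately for small $t$):

```python
import math

# After the substitution t = e**h - 1, the quotient (e**h - 1)/h
# becomes t / ln(1+t); both expressions should approach 1 as t -> 0.
for k in range(1, 8):
    t = 10.0**-k
    print(t / math.log1p(t), (math.e**t - 1.0) / t)
```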