The derivative at a point $x$ is defined as: $\lim\limits_{h\to0} \frac{f(x+h) - f(x)}h$
But if $h\to0$, wouldn't that mean: $\frac{f(x+0) - f(x)}0 = \frac0{0}$ which is undefined?
The definition $\mathbf{strictly}$ means $h\to 0$ and NOT $h=0$. That's where you are mistaken. So you can't put $h=0$ directly into the formula; rather, you have to calculate the limit of that expression for a particular $f(x)$ as $h\to 0$.
You seem to conflate the two notions of "take the limit as $h$ approaches $0$" and "plug in $0$ for $h$". In some situations, the latter is an easy way of doing the former. But in other situations, including the definition of derivative, the latter is (as you noticed) nonsense, and so you have to actually do the former. (The first step toward doing the former is, of course, to make sure you thoroughly understand the concept of limit.)
Think of how fast some terms of $f(x+h)-f(x)$ vanish compared to $h$ (no matter how small $h>0$ is), while others are independent of it.
In particular, $f(x+h)-f(x)$ consists of the derivative times $h$ (the derivative itself being independent of $h$), plus a sublinear term $u(h)$ that tends to 0 faster than $h$, i.e. $\lim_{h \to 0} \frac{u(h)}{h} = 0$.
To illustrate this, take $f(x) = x^3$. Then $$f(x+h)-f(x) = x^3 + 3xh^2 + 3x^2h + h^3 - x^3 = 3x^2 h + h^2(3x + h)$$
The derivative term is $3x^2$, while $u(h) = h^2(3x+h)$ satisfies $\lim_{h \to 0} \frac{u(h)}{h} = 0$. Thus, $$\lim_{h \to 0}\frac{f(x+h)-f(x)}{h} = \lim_{h \to 0} 3x^2\frac{h}{h} + \lim_{h \to 0} \frac{u(h)}{h} = 3x^2 + 0$$
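You can check this numerically: the difference quotient of $f(x)=x^3$ really does settle down to $3x^2$ as $h$ shrinks. A minimal sketch (the function names here are just illustrative):

```python
def diff_quotient(f, x, h):
    """Secant slope (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

f = lambda t: t ** 3
x = 2.0
# The quotient equals 3*x**2 + u(h)/h = 12 + (6*h + h**2),
# so the error shrinks linearly with h.
for h in [0.1, 0.01, 0.001]:
    print(h, diff_quotient(f, x, h))
```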
When I taught Calculus, I got this question a lot. One simple exercise that helped many students understand the concept of a limit is the following: Pick your favorite function, e.g. $\sin(x)$, then evaluate the difference quotient with a calculator, choosing some small $h$ and some value for $x$. For example, with $x=1$ and $h=0.1$, we get: $${\sin(1.1) - \sin(1) \over{0.1} }= 0.497364$$ Then choose a smaller $h$, but keep $x$ the same. For example, for $h=0.01$: $${\sin(1.01) - \sin(1) \over{0.01} }= 0.536086$$ And continue with smaller and smaller values for $h$. \begin{array}{|c|c|} \hline h & (\sin(x+h) - \sin(x))/h \\\hline 0.1 & 0.497364 \\\hline 0.01 & 0.536086 \\\hline 0.001 & 0.539881 \\\hline 0.0001 & 0.540260 \\\hline 0.00001 & 0.540298 \\\hline 0.000001 & 0.540302 \\\hline \end{array} You can clearly see that the value is converging to the derivative, which is: $$\cos(1) = 0.5403023059$$
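The same exercise takes a few lines in any programming language; here is a sketch in Python that regenerates the table above:

```python
import math

# Difference quotient of sin at x = 1 for shrinking h;
# the values converge to the derivative cos(1).
x = 1.0
for h in [0.1, 0.01, 0.001, 0.0001, 0.00001, 0.000001]:
    q = (math.sin(x + h) - math.sin(x)) / h
    print(f"{h:g}\t{q:.6f}")
print("cos(1) =", math.cos(1))
```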
Actually, $\lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$ is the value that $\frac{f(x+h) - f(x)}{h}$ approaches as you make $h$ smaller and smaller.
Here is how my teacher explained the idea to me:
Think of drawing a tangent. It's a line touching a curve at only one point.
But how could that be possible? Two points determine a line.
The tangent is the limiting case of a secant where the two points close in on each other.
Like he said, and you correctly pointed out, the expression is meaningless if $h=0$.
Here is another paradox that Zeno used:
Suppose an Arrow is shot which travels from A to B.
Consider any instant.
Since no time elapses during an instant, the arrow does not move during that instant.
But the entire time of flight consists of instants alone.
Hence, the arrow must not have moved.
The key here is to introduce the idea of speed, which is, once again, a limit: the distance travelled per duration as the duration approaches an instant (tends to zero).
It might be helpful to consider the alternate notation in this case as written in: https://en.wikipedia.org/wiki/Derivative
Consider $\frac{dy}{dx} = \lim_{\Delta x \rightarrow 0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x \rightarrow 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$ where $y = f(x)$. Under this notation you'll note that the denominator is not a variable per se, but a difference. In your notation $h = \Delta x$. The $h$ notation can be misleading, leading you to conclude that $h$ is a variable that can be substituted. In fact (as alluded to by others) $\Delta x = 0$ can't be "reached", hence the need for a limit.
Instead of $h$ here I will use $\Delta x$ as I think it makes more sense to consider a change in $x$.
For $$\lim\limits_{\Delta x\to 0} \frac{f(x+\Delta x) - f(x)}{\Delta x}\tag{1}$$ at no point do we ever substitute $\Delta x=0$. We can only say that in the limit $\Delta x$ tends to zero; this is why there is the limit notation directly in front of the fraction.
Consider the graph below, which is a graphical representation of Equation $(1)$:
You can see from the graph that the secant line, the line that intersects the purple curve at $(x,f(x))$ and at $(x+\Delta x,f(x+\Delta x))$, is an approximation to the derivative (tangent) of the purple curve at $(x,f(x))$.
Now imagine $\Delta x$ getting smaller and smaller until eventually it becomes vanishingly small; which we call 'infinitesimal' (loosely speaking this is as small as you can possibly get but still greater than zero).
At this point you can see that the secant line approaches the tangent line at $(x,f(x))$ as $\Delta x$ tends to zero. Thus, the slope of the secant line through $(x,f(x))$ and $(x+\Delta x,f(x+\Delta x))$ approaches the slope of the tangent at $(x,f(x))$, and this is the exact meaning of Equation $(1)$.
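The secant-to-tangent picture can also be seen numerically. A small sketch, using $f(x)=x^2$ at $x=1$ as an assumed example (its tangent slope there is $2$):

```python
def secant_slope(f, x, dx):
    """Slope of the secant through (x, f(x)) and (x+dx, f(x+dx))."""
    return (f(x + dx) - f(x)) / dx

f = lambda t: t ** 2
# For f(x) = x**2 the secant slope works out to exactly 2 + dx,
# so shrinking dx drives it toward the tangent slope 2.
for dx in [1.0, 0.5, 0.1, 0.01]:
    print(dx, secant_slope(f, 1.0, dx))
```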