I'm taking Calculus 1 in college, and one aspect of the limit definition of the derivative really confuses me; I wasn't able to find an answer on the site that made complete sense to me.
Concerning the $$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$$
definition of the derivative, why does factoring out the h give a different result? If h is approaching $0$, shouldn't plugging in $0$ for $h$ at any time give an undefined answer?
This probably seems pretty elementary, but I'm just confused as to why the same equation, in essence, can yield a different outcome.
I think you're confused about what a “limit” really means.
When I say $\lim_{x \to 1} \frac{x^2-1}{x-1}=2$, I mean to say $f(x)=\frac{x^2-1}{x-1}$ gets (arbitrarily) closer to $2$ when $x$ gets closer to $1$ (but is not exactly $1$).
As an example of what I mean, notice $f(.9)=1.9$, $f(.99)=1.99$, and $f(.999)=1.999$. Similarly, if we approach $x=1$ from numbers larger than $1$, the values get close to $2$ as well: $f(1.1)=2.1$, $f(1.01)=2.01$, $f(1.001)=2.001$.
In fact, for $x \neq 1$ we have $f(x)=x+1$, in which case I think it's pretty clear that when $x$ gets closer to $1$, but is not $1$, $x+1$ gets closer to $2$.
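To see where that $x+1$ comes from, factor the numerator and cancel the common factor, which is legitimate precisely because the limit only looks at $x \neq 1$:
$$\frac{x^2-1}{x-1}=\frac{(x-1)(x+1)}{x-1}=x+1 \qquad (x \neq 1).$$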
In your specific question, $\lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$ (if it exists) is the value that $\frac{f(x+h)-f(x)}{h}$ gets (arbitrarily) closer to as $h$ gets closer to zero (but is not zero).
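The same idea explains why factoring out the $h$ is allowed. As an illustration (your question doesn't fix a particular $f$, so take $f(x)=x^2$ purely as an example):
$$\frac{f(x+h)-f(x)}{h}=\frac{(x+h)^2-x^2}{h}=\frac{2xh+h^2}{h}=2x+h \qquad (h \neq 0).$$
Plugging $h=0$ directly into the original quotient gives $\frac{0}{0}$, which is undefined, but the limit never asks what happens at $h=0$; it asks what $2x+h$ approaches as $h$ gets close to $0$ without being $0$, and that is $2x$.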