In a linear regression of the form $Y=bX$ we often have log-transformed $Y$ and $X$, i.e., $\ln Y = b\ln X$. This is interpreted as a 1% change in $X$ resulting in (approximately) a $b$% change in $Y$.
The derivation of this interpretation comes from taking differentials of both sides: $d(\ln Y) = d(b\ln X)$. Since the derivative of $\ln u$ is $1/u$, this gives $\frac{1}{Y}\,dY = b\,\frac{1}{X}\,dX$, i.e., $\frac{dY}{Y} = b\,\frac{dX}{X}$, or $\%\Delta Y = b\cdot\%\Delta X$, so $b = \frac{\%\Delta Y}{\%\Delta X}$; that is, a 1% change in $X$ results in a $b$% change in $Y$.
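A quick numerical sanity check of this interpretation (with made-up values of $b$ and $x$, chosen just for illustration): if $y = x^b$, bumping $x$ by 1% should change $y$ by about $b$%.

```python
# Check the log-log elasticity interpretation numerically.
# b and x are hypothetical values chosen only for illustration.
b = 0.5
x = 100.0
y = x ** b            # y = x^b, equivalently ln y = b * ln x

# Increase x by 1% and measure the resulting % change in y.
x_new = x * 1.01
y_new = x_new ** b
pct_change_y = (y_new - y) / y * 100

print(pct_change_y)   # close to b = 0.5, as the approximation predicts
```

The match is approximate because the $\frac{dY}{Y} = b\,\frac{dX}{X}$ relation is exact only for infinitesimal changes; for a 1% step the error is small.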
Is it true that you can take the derivative of both sides as above? For instance, does it mean the following:
For $y^2=x^2$ (note: this simplifies to $y=\pm x$, and differentiating that gives $\frac{dy}{dx}=\pm 1$). However, taking the derivative of both sides of the original equation leads to $2y\,dy = 2x\,dx$, or $y\,dy = x\,dx$, so $y\,\frac{dy}{dx} = x$ and $\frac{dy}{dx} = \frac{x}{y}$, which is different from $\frac{dy}{dx}=\pm 1$.
Two functions $f(x),g(x)$ have the same derivative iff they differ by an additive constant. This follows from the fundamental theorem of calculus. Thus if you know that two functions are equal, their derivatives are equal (since they differ by the constant $0$).
$y^2=x^2\Rightarrow 2y\,dy=2x\,dx\Rightarrow y\frac{dy}{dx}=x\Rightarrow \pm y=x$. Notice that the solution sets of the first and the last (and every equation I wrote along the way) are the same. That's what the theorem is saying. To use this to compute the derivative, you wind up at $\frac{dy}{dx}=\frac{x}{y}$ and then have to go back to the original equation, notice that if $y^2=x^2$ then $y=\pm x$, and plug that into the previous formula to get $\pm 1$.
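The computation above can be checked symbolically. This sketch uses SymPy and the standard implicit-function formula $\frac{dy}{dx} = -F_x/F_y$ for $F(x,y)=0$ (the formula is standard; the code itself is just an illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Implicitly differentiate F(x, y) = y**2 - x**2 = 0
# using dy/dx = -F_x / F_y.
F = y**2 - x**2
dydx = -sp.diff(F, x) / sp.diff(F, y)
print(dydx)  # x/y

# On the branch y = x this is 1; on the branch y = -x it is -1.
print(dydx.subs(y, x))    # 1
print(dydx.subs(y, -x))   # -1
```

So the implicit derivative $x/y$ and the branch derivatives $\pm 1$ agree, exactly as the answer describes.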
In a more abstract sense, $d(f+g)=df+dg$. So if $f=g$, then $f-g=0\Rightarrow d(f-g)=d(0)=0\Rightarrow df-dg=0\Rightarrow df=dg$.
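The linearity step $d(f-g)=df-dg$ can also be verified symbolically. The particular $f$ and $g$ below are arbitrary choices made only to illustrate the identity:

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical concrete functions, chosen only for illustration.
f = x * sp.sin(x)
g = sp.exp(x)

# Linearity of differentiation: d(f - g) = df - dg.
lhs = sp.diff(f - g, x)
rhs = sp.diff(f, x) - sp.diff(g, x)
print(sp.simplify(lhs - rhs))  # 0
```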