I am interested in a rigorous understanding of how to find the slope, at a given point, of a curve defined implicitly by an equation in two variables.
So far, I have only found answers as deep as "it's an application of the chain rule".
While this may work in practice, I am not convinced it is a rigorous justification by itself; I suspect a deeper justification is needed.
Taking the standard definition of the derivative, $\frac{d}{dx}f(x) = \lim_{h \to 0}\frac{f(x+h) - f(x)}{h}$, I find myself unable to apply it directly to implicit functions, since the relation cannot always be expressed as an explicit function $f(x)$.
I don't doubt that the idea of differentiation is very much the same in the implicit and explicit cases, but mechanically, given the definition above, I think more work is needed.
My current efforts revolve around viewing the implicit relation $R(x, y) = 0$ as a level set of a non-implicit function of two variables, $z = f(x,y)$, and then trying to reason about the relation between $\frac{\partial}{\partial x}f(x,y)$ and $\frac{\partial}{\partial y}f(x,y)$ under the constraint $f(x,y) = z_0$.
Intuition and a rigorous $\epsilon,\delta$ explanation are both welcome.
For intuition, consider the function $z = f(x,y)$. Its total differential is
$$dz = \frac{\partial z}{\partial x}\,dx + \frac{\partial z}{\partial y}\,dy.$$
Along the level set, $z$ is constant, so $dz = 0$, which upon rearrangement gives
$$\frac{dy}{dx} = -\frac{z_x}{z_y}.$$
This is generally how multivariable calculus is used to justify implicit differentiation. Alternatively, if you learn the implicit function theorem, the result follows as a special case; however, the proof of that theorem is a little involved.
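As a numerical sanity check of the formula $\frac{dy}{dx} = -\frac{z_x}{z_y}$ (not a rigorous argument), here is a small Python sketch on a hypothetical example, the unit circle $F(x,y) = x^2 + y^2 - 1 = 0$, where the slope can also be computed explicitly from $y = \sqrt{1 - x^2}$:

```python
import math

# Hypothetical example constraint: F(x, y) = x^2 + y^2 - 1 = 0 (the unit circle).
def F(x, y):
    return x**2 + y**2 - 1

def partial(f, x, y, wrt, h=1e-6):
    # Central finite-difference approximation of a partial derivative of f.
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# A point on the upper half of the circle.
x0 = 0.6
y0 = math.sqrt(1 - x0**2)  # y0 = 0.8

# Slope from the implicit formula: dy/dx = -F_x / F_y.
implicit_slope = -partial(F, x0, y0, "x") / partial(F, x0, y0, "y")

# Slope from the explicit branch y = sqrt(1 - x^2): dy/dx = -x / sqrt(1 - x^2).
explicit_slope = -x0 / math.sqrt(1 - x0**2)

print(implicit_slope, explicit_slope)  # both approximately -0.75
```

The two values agree to within the finite-difference error, which is exactly what the total-differential argument above predicts.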