I find the implicit function theorem (limited to a scalar function) both fascinating and a bit puzzling. Simply put, the total derivative of some $f(x,y(x))$ with respect to $x$ is:
$$ \frac{d f(x,y(x)) }{ dx } = f_{x} \frac{\partial x}{\partial x} + f_{y} \frac{\partial y}{\partial x} = 0$$
It is straightforward to see that, from this total derivative, $\frac{\partial y}{\partial x}$ can be expressed through the partial derivatives of $f$:
$$\frac{\partial y}{\partial x} = - \frac{f_{x}(x, y(x))}{f_{y}(x,y(x))} \tag {1}$$
where for brevity $f_{x} = \frac{\partial f(x,y(x))}{\partial x}$ and $f_{y} = \frac{\partial f(x,y(x))}{\partial y}$.
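Formula (1) is easy to sanity-check symbolically. A minimal sketch with sympy, using the circle relation $x^2+y^2-1=0$ as my own illustrative choice (not from the question):

```python
import sympy as sp

x, y = sp.symbols('x y')
# Illustrative implicit relation (my own choice): f(x, y) = x^2 + y^2 - 1 = 0
f = x**2 + y**2 - 1

# Formula (1): dy/dx = -f_x / f_y
dydx_formula = -sp.diff(f, x) / sp.diff(f, y)

# sympy's built-in implicit differentiation, for comparison
dydx_builtin = sp.idiff(f, y, x)

assert sp.simplify(dydx_formula - dydx_builtin) == 0
print(dydx_formula)  # -x/y
```

Both routes agree, giving the familiar slope $-x/y$ of the unit circle.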
So far, so good. Now let's try to continue by taking the partial derivative with respect to $y$. This arises naturally when one needs to evaluate the implicit function $f(x,y(x))$ for variable $y$ at variable $x$, together with its first derivative $\frac{\partial y}{\partial x}$; naturally, one would also expect the second derivatives $\frac{\partial ^2 y}{\partial x^2}$ and $\frac{\partial ^2 y}{\partial x \partial y}$ to exist, e.g. for calculating a Hessian matrix.
Case 1: apply the quotient rule to (1)
One could blindly do the following in order to obtain $\frac{\partial}{\partial y}\frac{\partial y}{\partial x}$, i.e. $\frac{\partial ^2 y}{\partial x \partial y}$:
$$\frac{\partial ^2 y}{\partial x \partial y} = - \frac{\partial (\frac{f_{x}}{f_{y}})}{\partial y}$$ which holds if the higher derivatives of $f_x$ and $f_y$ exist.
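For illustration, the Case 1 computation can be carried out mechanically with sympy, treating $y$ as an independent symbol when applying the quotient rule (the relation $x^2+y^2-1=0$ below is my own example, not from the question):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2 - 1          # illustrative relation, not from the question

fx, fy = sp.diff(f, x), sp.diff(f, y)

# Case 1: differentiate -f_x/f_y with respect to y via the quotient rule
mixed = sp.diff(-fx / fy, y)
print(sp.simplify(mixed))    # x/y**2 -- not identically zero
```

So for this example Case 1 yields a generically nonzero expression, in contrast with the prediction of Case 2 below.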
Case 2: analyze where the expression becomes constant
Obviously $\frac{\partial y}{\partial y} = 1$. This term appears within $\frac{\partial ^2 y}{\partial y \partial x}$, and since $\frac{\partial ^2 y}{\partial x \partial y} = \frac{\partial ^2 y}{\partial y \partial x}$, the same is expected for any mixed second-order partial derivative.
This way, one should always have $\frac{\partial ^2 y}{\partial y \partial x} = 0$ for an implicit function, since differentiating $\frac{\partial y}{\partial y} = 1$ means differentiating a constant, which gives 0.
Question
Which of the two cases is wrong, and why?
More details
To provide more details, but still keep the question general, let's assume the following form of $f(x,y(x))$, to provide some visual clues:
$$f(x,y(x)) = e^{(x+y)} + x + y = 0$$
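For this particular $f$, formula (1) can be checked directly: $f_x = f_y = e^{x+y} + 1$, so $\frac{\partial y}{\partial x} = -1$ everywhere on the curve, and the second derivative along the curve vanishes. A small sympy sketch (my own, for illustration):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x + y) + x + y

# Formula (1): dy/dx = -f_x/f_y; here f_x = f_y = e^(x+y) + 1
dydx = sp.simplify(-sp.diff(f, x) / sp.diff(f, y))
print(dydx)                                # -1

# Second derivative along the curve via sympy's implicit differentiation
print(sp.simplify(sp.idiff(f, y, x, 2)))   # 0
```

This matches the geometry: $e^{x+y} + (x+y) = 0$ forces $x+y$ to be constant, so $y(x)$ is a line of slope $-1$.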
Background (to give the equations intuitive motivation related to experimental observations)
Variables $x$ and $y$ are physical quantities (current, voltage) satisfying $f(x,y(x)) = 0$, so this function is used only to evaluate the quantities $x$ and $y$. For instance, if I need the value of $y$ for $x=c$, I solve $f(c,y) = 0$ for $y$. This also means that when one variable changes (e.g. $x$), then $y$ changes as well. Therefore, when I calculate derivatives, I naturally take $\frac{\partial x}{\partial y}$, because that is what I observe as a derivative at the level of the physical system (e.g. lowering the current raises the voltage). This means that in the Jacobian and Hessian I expect to see $\frac{\partial x}{\partial y}$, rather than $\frac{\partial f(x,y(x))}{\partial y}$.
Be careful about the order.
$\frac{\partial^2 y}{\partial x\partial y}$ means $\frac{\partial}{\partial x}\left(\frac{\partial y}{\partial y}\right)$. If you want to calculate it the other order (differentiate with respect to $x$ first), it is $\frac{\partial^2 y}{\partial y\partial x}$.
Recall that in the proof of symmetry of partials we need to be able to change $x$ and $y$ independently (as well as the twice-differentiability of $f\colon U\to E$). The symmetry of partial derivatives $\frac{\partial^2 f(x,y,\dots)}{\partial x\partial y}=\frac{\partial^2 f(x,y,\dots)}{\partial y\partial x}$ does not hold in general if $x,y$ are dependent. For example, on $\mathbb{R}^+$ with local coordinate $x$, consider also another local coordinate $y=x^2$. The two differential operators $\frac{\mathrm{d}}{\mathrm{d}x}$ and $\frac{\mathrm{d}}{\mathrm{d}y}$ do not commute: we have $\frac{\mathrm{d}}{\mathrm{d}y}=\frac1{2x}\frac{\mathrm{d}}{\mathrm{d}x}$, so
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}y}\frac{\mathrm{d}}{\mathrm{d}x} &=\frac1{2x}\cdot\frac{\mathrm{d}^2}{\mathrm{d}x^2} \\
\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y} &=\frac{\mathrm{d}}{\mathrm{d}x}\frac1{2x}\frac{\mathrm{d}}{\mathrm{d}x}\\
&=\frac1{2x}\frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}x}+\left(\frac{\mathrm{d}}{\mathrm{d}x}\frac1{2x}\right)\frac{\mathrm{d}}{\mathrm{d}x}\\
&=\frac1{2x}\frac{\mathrm{d}^2}{\mathrm{d}x^2}-\frac1{2x^2}\frac{\mathrm{d}}{\mathrm{d}x}
\end{align*}
In other words, for any nonconstant twice-differentiable function $f\colon\mathbb{R}^+\to\mathbb{R}$ we have
$$ \frac{\mathrm{d}}{\mathrm{d}x}\frac{\mathrm{d}}{\mathrm{d}y}f(x)-\frac{\mathrm{d}}{\mathrm{d}y}\frac{\mathrm{d}}{\mathrm{d}x}f(x)=-\frac1{2x^2}f'(x)\neq 0. $$
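This non-commutation is easy to verify symbolically. A minimal sympy sketch of the computation above, encoding $\frac{\mathrm{d}}{\mathrm{d}y}=\frac{1}{2x}\frac{\mathrm{d}}{\mathrm{d}x}$ as an operator on expressions:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.Function('f')(x)          # generic twice-differentiable f on R^+

d_dx = lambda g: g.diff(x)
d_dy = lambda g: g.diff(x) / (2*x)   # d/dy = (1/(2x)) d/dx, since y = x^2

commutator = d_dx(d_dy(f)) - d_dy(d_dx(f))
# The commutator equals -f'(x)/(2x^2), so the sum below simplifies to 0
print(sp.simplify(commutator + f.diff(x) / (2*x**2)))  # 0
```

The nonzero commutator confirms that the two operators cannot be interchanged, which is exactly why Case 2's argument fails for dependent variables.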