I have encountered many calculus textbooks that state the following theorem:
If $x=f(t)$ and $y=g(t)$ are differentiable at $t=t_{0}$, $f'(t_{0}) \neq 0$, and $y$ can be expressed as a function $y=F(x)$ of $x$ on some open interval containing $x_{0}=f(t_{0})$, then $F$ is differentiable at $x=x_{0}$ with $$F'(x_{0})=\frac{g'(t_{0})}{f'(t_{0})}.$$
along with the following proof:
According to the chain rule, $$\frac{dy}{dt}=\frac{dy}{dx}\frac{dx}{dt}$$ and therefore, $$\frac{dy}{dx}=\frac{dy/dt}{dx/dt}$$ provided that $dx/dt \neq 0$.
It is clear that this proof is erroneous: in order to apply the chain rule, the existence of $dy/dx$ at $x=x_{0}$ must first be established, which is precisely what the theorem claims to prove.
Question: Is the theorem wrong, so that we must also include the differentiability of $F$ among the hypotheses for it to hold? (If so, I would be delighted to see a counterexample.) Or is it possible to construct an alternative proof that soundly validates the theorem as stated?
Here are some of my thoughts.
If $x=f(t)$ is continuously differentiable in a neighborhood of $t=t_{0}$ and $f'(t_{0}) \neq 0$, then by the inverse function theorem there exists an open interval containing $t_{0}$ on which $f$ is invertible, and the inverse $f^{-1}$ is continuously differentiable. Therefore, since $$F(x)=g(t)=g(f^{-1}(x)),$$ applying the chain rule we have $$F'(x)=g'(f^{-1}(x))(f^{-1})'(x)=\frac{g'(t)}{f'(t)},$$ and we arrive at the desired formula.
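To convince myself this argument gives the right answer, I tried a toy smooth example of my own choosing (not from any textbook): $x=f(t)=e^{t}$, $y=g(t)=\sin t$, so that explicitly $F(x)=\sin(\ln x)$, and the formula predicts $F'(x_{0})=g'(t_{0})/f'(t_{0})$.

```python
import math

# Hypothetical smooth test case (my own choice): x = f(t) = e^t, y = g(t) = sin(t).
# Here f is invertible with t = ln(x), so F(x) = sin(ln(x)) explicitly,
# and the formula predicts F'(x0) = g'(t0) / f'(t0).
t0 = 1.0
x0 = math.exp(t0)

ratio = math.cos(t0) / math.exp(t0)  # g'(t0) / f'(t0)

# Central difference quotient of F(x) = sin(ln(x)) at x0.
h = 1e-6
Fp = (math.sin(math.log(x0 + h)) - math.sin(math.log(x0 - h))) / (2 * h)

print(ratio, Fp)  # the two values should agree closely
```

The two numbers agree to many decimal places, which is at least consistent with the formula in the continuously differentiable case.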
However, consider the following example: $$x=f(t)=\begin{cases} t^{2}\sin(\pi/t)+t & (t \neq 0)\\ 0 & (t=0)\end{cases} \qquad y=g(t)=\begin{cases} \left\{t^{2}\sin(\pi/t)+t\right\}^{2} & (t \neq 0)\\ 0 & (t=0)\end{cases}$$ Here $y=x^{2}$, and the formula holds precisely at $t=0$ even though $x=f(t)$ is not continuously differentiable there. This makes me strongly suspect that only differentiability, not continuous differentiability, is required for the theorem to hold.
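A quick numerical sanity check of this example (just a difference-quotient sketch, with the step size chosen by hand):

```python
import math

def f(t):
    """x = t^2 sin(pi/t) + t for t != 0, with f(0) = 0."""
    return t**2 * math.sin(math.pi / t) + t if t != 0 else 0.0

def g(t):
    """y = f(t)^2, so y = F(x) = x^2 and hence F'(0) = 0."""
    return f(t) ** 2

# One-sided difference quotients at t = 0.
h = 1e-7
fp0 = (f(h) - f(0)) / h  # should be close to f'(0) = 1
gp0 = (g(h) - g(0)) / h  # should be close to g'(0) = 0

# The ratio g'(0)/f'(0) = 0 matches F'(0) = 0, even though
# f'(t) = 2t sin(pi/t) - pi cos(pi/t) + 1 oscillates and is not continuous at 0.
print(fp0, gp0, gp0 / fp0)
```

The difference quotients behave exactly as the formula predicts at $t=0$.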
Meanwhile, Stewart (6th ed., p. 630) does include the differentiability of $F$ among the hypotheses of the theorem. However, in the examples and exercises he doesn't seem to bother with checking whether $y$ can be expressed as a differentiable function of $x$ before diving into computing $dx/dt$ and $dy/dt$. I think it is impractical to carry out such an investigation every time we differentiate a parametrically defined function, especially since the theorem is typically applied precisely in situations where the explicit form of $F$ is difficult to obtain.
So I suspect that the theorem itself is true and that what is lacking is simply a valid proof of the statement. My major isn't mathematics, so I'm no expert in analysis. Could anybody provide some help on this matter? Thanks.