So intuitively this makes complete sense, since $\forall x\in \mathbb{R},\ [f(x)]^2\geq 0$. However, when I use an $\epsilon$-$\delta$ argument, I come to a strange conclusion:
$[f(x)]^2$ is continuous, so for every $c\in \mathbb{R}$ and $\epsilon > 0$ there exists $\delta>0$ such that $$|x-c| < \delta \Rightarrow |[f(x)]^2 - [f(c)]^2| = |f(x) - f(c)|\,|f(x) + f(c)| < \epsilon$$
So, for any $\epsilon_1 > 0$ and $c\in \mathbb{R}$, let $\epsilon_2 := \min(\epsilon_1, \epsilon_1|f(x) + f(c)|)$. Then there exists $\delta_2 > 0$ such that $$|x-c| < \delta_2 \Rightarrow |f(x) - f(c)| < \frac{\epsilon_2}{|f(x) + f(c)|} \leq \epsilon_1$$ Thus for any $c\in\mathbb{R}$ and $\epsilon>0$, there exists $\delta>0$ such that $$|x-c| < \delta \Rightarrow |f(x) - f(c)| < \epsilon$$ which would mean that $f$ is continuous. But if $$f(x) = \begin{cases} 1 & x \geq 0 \\ -1 & x < 0 \end{cases}$$ then $[f(x)]^2$ is continuous, yet $f$ is not. Where did this proof go wrong?
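As a quick numerical sanity check of the counterexample (the function names here are my own, purely for illustration): the sign-like $f$ below jumps at $0$, while $[f(x)]^2$ is identically $1$ and hence continuous everywhere.

```python
def f(x):
    """The counterexample: f(x) = 1 for x >= 0, f(x) = -1 for x < 0."""
    return 1.0 if x >= 0 else -1.0

def f_squared(x):
    """[f(x)]^2, which is constant (= 1), hence continuous."""
    return f(x) ** 2

# [f(x)]^2 takes the value 1 on both sides of the jump point x = 0:
xs = [-1e-9, -1e-12, 0.0, 1e-12, 1e-9]
print([f_squared(x) for x in xs])  # [1.0, 1.0, 1.0, 1.0, 1.0]

# f itself is discontinuous at 0: values just left and right of 0 disagree.
print(f(-1e-12), f(1e-12))  # -1.0 1.0
```

Arbitrarily close points straddling $0$ give $|f(x) - f(c)| = 2$, so no $\delta$ can work for $\epsilon < 2$ at $c = 0$, even though $[f(x)]^2$ never varies at all.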