Consider the following problem:
I did it as follows. We want to optimize $$\begin{align*} I(x)=\int |\dot{x}|^2\, dt. \end{align*}\tag{1}$$ By Lagrange multipliers, this is the same as optimizing $$\begin{align*} \zeta(x) &= \int |\dot{x}|^2\, dt - \lambda \int (|x|-1)^2\, dt \\ &= \int |\dot{x}|^2 - \lambda(|x|-1)^2\, dt, \end{align*}\tag{2}$$ where, for $$p(x)= \int (|x|-1)^2\, dt,\tag{3}$$ the equation $p=0$ defines our constraint set. Applying the Euler-Lagrange equations to the integrand $f=|\dot{x}|^2-\lambda(|x|-1)^2$: $$\begin{align*} 0 &= \frac{d}{dt}\left(\frac{\partial f}{\partial \dot{x}}\right) - \frac{\partial f}{\partial x} \\ &= 2\left[\ddot{x} + \lambda x - \frac{\lambda x}{|x|}\right], \end{align*}\tag{4}$$ meaning $$\begin{align*} \ddot{x} = -\lambda x\left(1 - \frac{1}{|x|}\right). \end{align*}\tag{5}$$ This can't be right. The model answer considers $p= \int (x \cdot x - 1)\, dt$ instead, giving the end result $$\begin{align*} \ddot{x} + |\dot{x}|^2 x = 0, \end{align*}\tag{6}$$ which is as required.
It has been mentioned to me that this has to do with the order at which the constraint function approaches zero, but I am not sure what is meant.
Can someone clarify what is going on here? That is, why does the first choice of $p$ not work?

TL;DR: The problem with OP's constraint function $$ \chi ~:=~(\sqrt{x^2}-1)^2 $$ is that its gradient $\nabla\chi$ vanishes identically on the constrained hypersurface $\chi=0$. This forces the Lagrange multiplier $\lambda$ in the method of Lagrange multipliers to be mathematically ill-defined/infinite. E.g. it leads to an indeterminate product $\infty\cdot 0$ in OP's EL eq. (5). See also e.g. this related Phys.SE post.
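To make the degeneracy concrete, here is a small sympy sketch (assuming 2D for concreteness; variable names are illustrative) confirming that every component of $\nabla\chi$ vanishes on the unit circle $\chi=0$:

```python
# Sketch (assumed 2D): check that the gradient of chi = (|x| - 1)^2
# vanishes everywhere on the constraint surface |x| = 1.
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t', real=True)
r = sp.sqrt(x1**2 + x2**2)
chi = (r - 1)**2

# grad chi = 2 (|x| - 1) x_i / |x|, which carries the vanishing factor (|x| - 1)
grad = [sp.diff(chi, v) for v in (x1, x2)]

# Evaluate on the unit circle, parametrized as (cos t, sin t)
on_circle = [sp.simplify(g.subs({x1: sp.cos(t), x2: sp.sin(t)})) for g in grad]
print(on_circle)  # [0, 0]
```

Since the gradient vanishes wherever the constraint holds, the multiplier $\lambda$ would have to compensate by blowing up, which is exactly the $\infty\cdot 0$ degeneracy.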
Here is a better approach. Let us square the constraint$^1$ in the following way
$$ \forall t:~~x(t)^2~=~1\tag{A}$$
to avoid square roots. Repeated differentiations wrt. $t$ yield
$$ x\cdot \dot{x}~\stackrel{(A)}{=}~0,\tag{B}$$
and
$$ x\cdot \ddot{x}+\dot{x}^2~\stackrel{(B)}{=}~0.\tag{C}$$
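The identities (B) and (C) can be checked mechanically; here is a sympy sketch (a 2-component $x(t)$ with illustrative component names $x_0$, $x_1$) that differentiates the constraint (A) twice:

```python
# Sketch: reproduce (B) and (C) by differentiating the constraint (A)
# for a 2-component x(t); component names x0, x1 are illustrative.
import sympy as sp

t = sp.symbols('t', real=True)
x0, x1 = (sp.Function(name, real=True)(t) for name in ('x0', 'x1'))
x = sp.Matrix([x0, x1])

constraint = x.dot(x) - 1                    # (A): x^2 - 1 = 0
first = sp.expand(constraint.diff(t) / 2)    # (B): x . xdot = 0
second = sp.expand(first.diff(t))            # (C): x . xddot + xdot^2 = 0
print(first)
print(second)
```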
Using the method of Lagrange multipliers, the extended functional reads$^1$
$$ \widetilde{I}[x,\lambda]~:=~\int\! dt ~\widetilde{L}, \qquad \widetilde{L}~:=~\dot{x}(t)^2 +\lambda(t) (x(t)^2-1). \tag{D}$$
The Euler-Lagrange (EL) equation becomes
$$ \ddot{x}~=~\lambda x, \tag{E}$$
so that the Lagrange multiplier is
$$ \lambda~\stackrel{(A)}{=}~\lambda x^2~\stackrel{(E)}{=}~x\cdot \ddot{x}~\stackrel{(C)}{=}~-\dot{x}^2.\tag{F}$$
Altogether, we obtain OP's sought-for equation
$$ \ddot{x}~\stackrel{(E)+(F)}{=}~-\dot{x}^2 x. \tag{G}$$
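As a consistency check (a sympy sketch; the uniform circular trajectory is an assumed test solution, not part of the derivation), motion at angular frequency $\omega$ on the unit circle satisfies (G), with $\lambda=-\omega^2$ in agreement with (F):

```python
# Sketch: verify that x(t) = (cos wt, sin wt) solves eq. (G),
# xddot + |xdot|^2 x = 0, with lambda = x . xddot = -omega^2 as in (F).
import sympy as sp

t, w = sp.symbols('t omega', real=True)
x = sp.Matrix([sp.cos(w*t), sp.sin(w*t)])   # unit-circle trajectory, |x| = 1
xdot = x.diff(t)
xddot = x.diff(t, 2)

speed2 = sp.simplify(xdot.dot(xdot))        # |xdot|^2 = omega^2
residual = sp.simplify(xddot + speed2 * x)  # eq. (G): should vanish identically
lam = sp.simplify(x.dot(xddot))             # lambda from eq. (F): -omega^2
print(speed2, lam)
print(residual.T)
```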
$^1$ Since the constraint (A) applies for all $t$, note that the Lagrange multiplier $\lambda(t)$ is in principle a function of $t$. [Eq. (A) is really infinitely many constraints, one for each $t$; so we need infinitely many Lagrange multipliers $\lambda(t)$, one for each $t$.] This fact seems to clash with OP's definition (v3) of $p$ as an integral over $t$.