Here is a theorem regarding the convergence of gradient descent from Ovidiu Calin's *Deep Learning Architectures*. However, I think the converse direction is wrong.
The Cauchy criterion, as far as I recall, requires convergence that is uniform in $p$: for every $\varepsilon>0$ there is an $N$ such that $|x_{n+p}-x_n|<\varepsilon$ for all $n\ge N$ and all $p\ge 1$. The pointwise condition (fixed $p$, $n\to\infty$) is strictly weaker. For example, take the one-dimensional sequence \begin{align*} x_n = \sum_{k=1}^n \frac{1}{k}. \end{align*} Then for every fixed $p$, \begin{align*} x_{n+p}-x_n=\frac{1}{n+1}+\cdots+\frac{1}{n+p}\le \frac{p}{n+1}\rightarrow 0 \quad\text{as } n\rightarrow \infty, \end{align*} while $x_n$ diverges, since the harmonic series is unbounded.
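A quick numerical sketch of the counterexample (plain Python, with a hypothetical helper `harmonic_partial_sum`): for a fixed $p$ the gap $x_{n+p}-x_n$ shrinks toward $0$, while $x_n$ itself keeps growing like $\ln n$.

```python
def harmonic_partial_sum(n):
    # x_n = sum_{k=1}^n 1/k, the n-th partial sum of the harmonic series
    return sum(1.0 / k for k in range(1, n + 1))

p = 5  # any fixed p works
for n in (10, 100, 1000, 10000):
    x_n = harmonic_partial_sum(n)
    gap = harmonic_partial_sum(n + p) - x_n  # = 1/(n+1) + ... + 1/(n+p)
    print(f"n={n:6d}  x_n={x_n:.4f}  x_(n+p)-x_n={gap:.6f}")
```

The gap column tends to $0$ (it is bounded by $p/(n+1)$), yet the $x_n$ column increases without bound, so the fixed-$p$ condition cannot imply convergence.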

