By definition, a sequence $x_k \in \mathbb{R}$, $k \in \mathbb{N}$, converges with order $p \in [1,\infty)$ to $x := \lim_{k\to\infty} x_k$ if \begin{align} \exists C \in [0,\infty): \forall k \in \mathbb{N}: \|x_{k+1} - x\| \le C \|x_k - x\|^p. \end{align}
Assume $x_k$ is a sequence of approximations to $x$; then $\|x_k - x\|$ denotes the approximation error at the $k$-th iteration, and it goes to zero as $k \to \infty$. I can understand the inequality $\|x_{k+1} - x\| \le C \|x_k - x\|$, which implies that the error must get smaller and smaller along the iterations. But I don't understand the motivation for putting a power of $p$ on the previous step. Is there any practical meaning to taking the $p$-th power of the error, $\|x_k - x\|^p$?
Consider a sequence of positive numbers $x_n$ such that $x_{n+1} = x_n^p$, with $p > 1$ and $x_0 < 1$. This is the special case of your definition with $x = 0$ and $C = 1$.
Now take the logarithm. You should then be able to find an explicit formula for $x_n$. As $p$ varies, how does the rate at which $x_n$ converges to $0$ change? For example, how many steps are needed to guarantee $x_n < \varepsilon$ for a given $\varepsilon$?
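If you want to see the effect numerically before deriving the formula, here is a minimal sketch (the helper `steps_to_tolerance` and all the constants are my own, purely for illustration): it iterates $x_{n+1} = C\,x_n^p$ and counts the steps needed to push the error below a tolerance.

```python
def steps_to_tolerance(x0, p, eps, C=1.0, max_steps=10_000):
    """Iterate x <- C * x**p and count steps until x < eps."""
    x, n = x0, 0
    while x >= eps and n < max_steps:
        x = C * x ** p
        n += 1
    return n

eps, x0 = 1e-12, 0.5
print(steps_to_tolerance(x0, 1, eps, C=0.5))  # p = 1: about 40 steps
print(steps_to_tolerance(x0, 2, eps))         # p = 2: 6 steps
print(steps_to_tolerance(x0, 3, eps))         # p = 3: 4 steps
```

Notice how little the step count grows as $\varepsilon$ shrinks once $p > 1$; the explicit formula you get from taking logarithms explains why.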
By the way, the estimate $\|x_{k+1} - x\| \le C \|x_k - x\|$ does not by itself imply that the error goes to zero. That happens only if $C < 1$.
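A quick way to convince yourself (a throwaway sketch, with arbitrarily chosen constants): the linear bound only forces $\|x_n - x\| \le C^n \|x_0 - x\|$, which shrinks iff $C < 1$.

```python
# The bound err_{n+1} <= C * err_n only forces err_n <= C**n * err_0.
for C in (0.5, 1.0):
    err = 1.0
    for _ in range(50):
        err = C * err  # worst case allowed by the bound
    print(f"C = {C}: error after 50 steps = {err:.3e}")
# C = 0.5 gives ~8.9e-16; C = 1.0 gives 1.0, i.e. no progress at all.
```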