I am reading the paper *On Deferred Statistical Convergence of Sequences* by Kucukaslan and Yilmazturk, published in KYUNGPOOK Math. J. 56 (2016), 357–366, and I am stuck at Theorem 2.2.7. I get that $\{n^{(1)}, n^{(2)}, n^{(3)}, \dots, n^{(m)}, n^{(m+1)}, \dots\}$ is a decreasing sequence of natural numbers. At one point the proof defines a sequence $$b_{n,m}=\begin{cases} \dfrac{n^{(m)}-n^{(m+1)}}{n}, & m=0,1,2,\dots,h \\ 0, & \text{otherwise}, \end{cases}$$ where $n^{(0)}=n$, and then says that this matrix satisfies the Silverman-Toeplitz theorem, so $${1\over {n^{(m)}-n^{(m+1)}}}\cdot \left|\{n^{(m+1)}\lt k\le n^{(m)}: |x_k-l|\ge \epsilon \}\right|\rightarrow 0$$ as $n\rightarrow \infty.$
I cannot find the exact statement of that theorem, and I do not know which of its conditions are being verified to reach this conclusion, which is the conclusive step of the proof.
The book *Elements of Functional Analysis* by Maddox is referred to, but I do not have access to that either.
The Silverman-Toeplitz theorem is a result in summability theory which characterizes when a matrix summability method is regular. Being regular means that every convergent sequence is transformed into a sequence which converges to the same limit.
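To see what regularity means in practice, here is a small numerical sketch (my own example, not from the paper) using the classical Cesàro matrix of arithmetic means, which is a standard regular matrix:

```python
# Regularity illustrated with the Cesàro (arithmetic mean) matrix:
# if x_k -> L, then the transformed sequence of averages
# (x_0 + ... + x_n)/(n + 1) also converges to the same limit L.
x = [1 + (-1) ** k / (k + 1) for k in range(5000)]  # x_k -> 1

partial = 0.0
means = []
for k, xk in enumerate(x):
    partial += xk
    means.append(partial / (k + 1))  # k-th Cesàro mean of x

print(x[-1], means[-1])  # both are close to the common limit 1
```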
This result can be found on Wikipedia and in many books on this topic. Since you asked specifically about Maddox's *Elements of Functional Analysis*, I will state the standard formulation, which is not substantially different from the one in that book or elsewhere. (In fact, I think the formulation given on Wikipedia is a bit clearer: it is less heavy on symbols and also adds informal descriptions.) A matrix $A=(a_{n,m})_{n,m\ge 0}$ is regular if and only if

(i) $\sup\limits_n \sum\limits_{m=0}^\infty |a_{n,m}| < \infty$;

(ii) $\lim\limits_{n\to\infty} a_{n,m} = 0$ for each fixed $m$;

(iii) $\lim\limits_{n\to\infty} \sum\limits_{m=0}^\infty a_{n,m} = 1$.
Looking at Wikipedia, this can be said briefly as: the columns converge to zero, the row sums converge to one, and the absolute row sums are bounded.
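As a quick sanity check, these three conditions can be tested numerically for a concrete regular matrix, e.g. $a_{n,k}=\frac{1}{n+1}$ for $0\le k\le n$ and $0$ otherwise (the arithmetic-mean matrix; this snippet is only an illustration, not from the paper):

```python
def a(n, k):
    # arithmetic-mean matrix: a_{n,k} = 1/(n+1) for k <= n, else 0
    return 1.0 / (n + 1) if k <= n else 0.0

N = 2000
# columns converge to zero: for fixed k, a_{n,k} -> 0 as n grows
col_entry = a(N, 0)
# row sums converge to one
row_sum = sum(a(N, k) for k in range(N + 1))
# absolute row sums are bounded: the entries are nonnegative,
# so they coincide with the row sums, which tend to 1
abs_row_sum = sum(abs(a(N, k)) for k in range(N + 1))
print(col_entry, row_sum, abs_row_sum)
```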
In your case we want to check whether this is true for the matrix $$b_{n,m}= \begin{cases} \frac{n^{(m)}-n^{(m+1)}}{n}, & m=0,1,2,\dots,h \\ 0, & \text{otherwise}, \end{cases}$$ where $n^{(m)}$ and $h$ are defined in the paper.
We know that $n=n^{(0)}>n^{(1)}>n^{(2)}>\dots>n^{(h)}>n^{(h+1)}=0$.
This means that we always have $b_{n,m}\ge 0$, and the row sums telescope: $$\sum_{m=0}^\infty |b_{n,m}| = \sum_{m=0}^h b_{n,m} = \frac{\sum_{m=0}^h \left(n^{(m)}-n^{(m+1)}\right)}{n} = \frac{n^{(0)}-n^{(h+1)}}{n} = \frac{n}{n}=1.$$ So from this we get that (i) and (iii) are fulfilled.
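The telescoping can also be checked numerically. The sketch below uses a hypothetical choice $p(n)=\lfloor n/2\rfloor$ (my own choice for illustration, not from the paper) to build the sequence $n^{(0)}>n^{(1)}>\dots>n^{(h+1)}=0$ and the corresponding row of the matrix:

```python
def levels(n, p):
    # n^(0) = n, n^(m+1) = p(n^(m)); stop once we reach n^(h+1) = 0
    seq = [n]
    while seq[-1] > 0:
        seq.append(p(seq[-1]))
    return seq

def matrix_row(n, p):
    # b_{n,m} = (n^(m) - n^(m+1)) / n for m = 0, ..., h
    seq = levels(n, p)
    return [(seq[m] - seq[m + 1]) / n for m in range(len(seq) - 1)]

p = lambda k: k // 2  # hypothetical p with 0 <= p(k) < k
row = matrix_row(1000, p)
print(sum(row))  # the row sum telescopes to n/n = 1
```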
However, I have to admit that I do not see how we get that $$\lim\limits_{n\to\infty} b_{n,m} = 0$$ from the conditions given in the paper. Unless I missed something, we only know that $p(n)<n$ and that $n^{(k+1)}=p(n^{(k)})$ (with this process stopping as soon as we get $n^{(h+1)}=0$.)
For example, for $m=0$ condition (ii) asks that $$\lim\limits_{n\to\infty} \frac{n-p(n)}{n} = 0.$$ I do not see among the assumptions of Theorem 2.2.7 anything about $p(n)$ which implies that this limit is zero. (Of course, I might have missed something.) It seems that we would need $\lim\limits_{n\to\infty} p(n)/n = 1$ to get the desired conclusion.
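A quick numerical illustration of this point, with two hypothetical choices of $p$ (both picked by me just to show the contrast): with $p(n)=\lfloor n/2\rfloor$ the column entry $b_{n,0}=(n-p(n))/n$ stays near $1/2$, while with $p(n)=n-1$, where $p(n)/n\to 1$, it does tend to $0$:

```python
half = lambda n: n // 2   # p(n)/n -> 1/2
near = lambda n: n - 1    # p(n)/n -> 1

for n in (10, 100, 1000, 10000):
    b0_half = (n - half(n)) / n  # stays around 0.5, so (ii) fails
    b0_near = (n - near(n)) / n  # equals 1/n, which tends to 0
    print(n, b0_half, b0_near)
```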