I'm looking for some help understanding how this piece of code checks for convergence of a matrix iteration.
The code loops over the interior of the matrix (indices 1 through n-1) and applies the five-point formula 0.2*(A[i][j] + A[i][j-1] + A[i-1][j] + A[i][j+1] + A[i+1][j]), i.e. the cell plus its four neighbours. It accumulates the absolute change of every cell in a variable, then divides that sum by the matrix size and compares the result against a defined tolerance.

Source: http://nsfcac.rutgers.edu/people/irodero/classes/13-14/ece451-566/assignment1.pdf
void solve(double **A, int n)
{
    int convergence = FALSE;   /* TRUE, FALSE, N, Tolerance are defined elsewhere in the assignment */
    double diff, tmp;
    int i, j, iters = 0;
    int for_iters;

    for (for_iters = 1; for_iters < 21; for_iters++)   /* fixed cap of 20 sweeps */
    {
        diff = 0.0;
        for (i = 1; i < n; i++)
        {
            for (j = 1; j < n; j++)
            {
                tmp = A[i][j];
                /* five-point update: weighted average of the cell and its four neighbours */
                A[i][j] = 0.2*(A[i][j] + A[i][j-1] + A[i-1][j] + A[i][j+1] + A[i+1][j]);
                diff += fabs(A[i][j] - tmp);   /* accumulate the total absolute change */
            }
        }
        iters++;
        /* mean absolute change per cell below the tolerance => converged;
           note the flag is set but never used to break out of the loop */
        if (diff/((double)N*(double)N) < Tolerance){
            convergence=TRUE;
        }
        printf("iteration is %d\n", for_iters);
    } /*for*/
}
What is the theory behind this convergence check?
Any comments would be helpful!
Thanks
This looks like the discretization of $\Delta A=A$ on a grid with step size $h=1$. The resulting linear system is diagonally dominant, and the algorithm is an implementation of the Gauß-Seidel iteration, which converges at least geometrically with factor $0.8$. That is also why the diff-based test works: for a contraction with factor $q<1$, the distance to the fixed point is bounded by $\|A_k - A^*\| \le \frac{q}{1-q}\,\|A_k - A_{k-1}\|$, so a small per-sweep change guarantees the iterate is close to the solution — and $\|A_k - A_{k-1}\|$ is exactly what `diff` measures.