One of my peers just told me that:
"If the rank of a matrix is $r$, then it has at most $r$ non-zero eigenvalues."
When I asked him how he arrived at this, he said that he used the rank-nullity theorem. I do not like rigorous mathematics without understanding the intuition behind it.
Intuitively, what I know is this (without hardcore definitions):
If we have an $n\times n$ matrix of rank $r$, then the matrix can be reduced to a form [row reduced echelon form] in which we have $r$ pivot columns and $n-r$ free columns.
The diagram above is based on what I have learned from MIT 18.06, Gilbert Strang's lectures. So the $n\times n$ matrix of rank $r$ has a null space with $n-r$ vectors in its basis. This is what I guess the rank-nullity theorem says: the rank of the matrix ($r$) plus the number of vectors in the basis of the null space ($n-r$) equals $n$. [Corresponding to the equation $AX=O_{n\times 1}$]
Now what I can understand is that, if zero is an eigenvalue of a matrix $A$, then the equation
$$AX=\lambda X$$
turns into
$$AX=O_{n\times 1},$$
which is the same equation we solve when finding the null space of the matrix. But how do I arrive at the required fact intuitively?
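As a quick numerical sanity check of my peer's claim, here is a made-up $3\times 3$ example of rank $2$ (the third column is the sum of the first two), checked with NumPy:

```python
import numpy as np

# Made-up example: third column is the sum of the first two, so the rank is 2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])

r = np.linalg.matrix_rank(A)          # r = 2
eigvals = np.linalg.eigvals(A)        # eigenvalues of A

# count eigenvalues that are numerically non-zero
num_nonzero = int(np.sum(~np.isclose(eigvals, 0)))

print(r, num_nonzero)                 # 2 2: at most r non-zero eigenvalues
```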

Given your intuitive understanding in the comments, I think I have an answer for you.
Think about the process of multiplying a matrix $A$, consisting of columns $v_1, \ldots, v_n$, by a vector $x = (x_1, \ldots, x_n)^\top$. If you do a bit of calculation, you can see that $$Ax = x_1 v_1 + \ldots + x_n v_n.$$ That is to say, you obtain a linear combination of the columns of $A$, with the entries of $x$ as your coefficients.
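As a quick illustration of this column-combination identity (with a made-up $2\times 2$ matrix), one can check numerically:

```python
import numpy as np

# Made-up example: Ax equals the linear combination of A's columns,
# with the entries of x as the coefficients
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x = np.array([4.0, 5.0])

combo = x[0] * A[:, 0] + x[1] * A[:, 1]   # x_1 v_1 + x_2 v_2
print(A @ x, combo)                       # both are [13. 15.]
```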
So, if we have $x$, an eigenvector of $A$ corresponding to $0$, then this corresponds to a non-trivial linear combination of the columns of $A$ that produces the $0$ vector. The existence of such a combination is equivalent to the columns of $A$ not being linearly independent, and thus having deficient (column) rank.
If your matrix has rank $r < n$, then we can pick out $n - r$ of the columns and express each as a linear combination of the columns preceding it in the order $v_1, \ldots, v_n$; removing these leaves $r$ linearly independent columns. For example, $v_i$ would be one such column if there exist $a_1, \ldots, a_{i - 1}$ such that $$v_i = a_1 v_1 + \ldots + a_{i - 1} v_{i - 1}.$$ We can then rearrange this into a linear combination equal to $0$: $$a_1 v_1 + \ldots + a_{i - 1} v_{i - 1} + (-1)v_i = 0.$$ This produces an eigenvector $x = (a_1, \ldots, a_{i - 1}, -1, 0, 0, \ldots, 0)^\top$ for $A$, corresponding to $0$. We can repeat this process for each of the $n - r$ columns that we removed so that the remainder are linearly independent. This produces $n - r$ eigenvectors corresponding to $0$.
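To make the construction concrete, here is a hypothetical rank-$2$ example where $v_3 = v_1 + v_2$, so $a_1 = a_2 = 1$ and the recipe above gives the eigenvector $(1, 1, -1)^\top$:

```python
import numpy as np

# Hypothetical rank-2 matrix: v_3 = 1*v_1 + 1*v_2
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 3.0]])

# Rearranging v_3 = v_1 + v_2 gives v_1 + v_2 - v_3 = 0,
# i.e. x = (1, 1, -1) is an eigenvector for the eigenvalue 0
x = np.array([1.0, 1.0, -1.0])
print(A @ x)                  # [0. 0. 0.]
```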
Furthermore, these eigenvectors are linearly independent. Note that the eigenvectors formed by the above process look like rows in row-echelon form: each ends in a "leading" (or, I guess, "trailing") $-1$, followed by $0$s. Where the $-1$ resides depends on which column you take out (take out $v_i$, and the $-1$ lies in the $i$th position). So each such eigenvector has its trailing $-1$ in a different position, and it shouldn't be difficult to see that these vectors are linearly independent.
So, we have $n - r$ linearly independent eigenvectors corresponding to $0$, which means the multiplicity of the eigenvalue $0$ is at least $n - r$. Since an $n \times n$ matrix has at most $n$ eigenvalues counted with multiplicity, the total multiplicity of the non-zero eigenvalues is at most $r$.
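The whole argument can be checked numerically. Below is a sketch using a made-up $4\times 4$ matrix of rank $2$ (built as a $4\times 2$ times $2\times 4$ product, so its rank is at most $2$): the null space has dimension $n - r = 2$, each null-space basis vector is an eigenvector for $0$, and at most $r = 2$ eigenvalues are non-zero.

```python
import numpy as np

# Made-up example: a 4x4 matrix of rank 2, built as a 4x2 times 2x4 product
B = np.array([[1.,  0.],
              [0.,  1.],
              [1.,  1.],
              [1., -1.]])
C = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
A = B @ C

r = np.linalg.matrix_rank(A)                     # r = 2
eigvals = np.linalg.eigvals(A)
num_nonzero = int(np.sum(~np.isclose(eigvals, 0)))

# Null space via SVD: right-singular vectors for (near-)zero singular values
U, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.isclose(s, 0)]                # n - r = 2 basis vectors

for v in null_basis:
    assert np.allclose(A @ v, 0)                 # each is an eigenvector for 0

print(r, num_nonzero, len(null_basis))           # 2 2 2
```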