Let $A$ be a non-singular $n\times n$ matrix. We know that $A \cdot \operatorname{adj}A = \det A \cdot I$. Taking determinants of both sides gives $\det A \cdot \det\left(\operatorname{adj} A\right) = \left(\det A\right)^{n}$, so $\det\left(\operatorname{adj} A\right) = \left(\det A\right)^{n-1} \neq 0$. Hence $\operatorname{adj} A$ is non-singular.
But how to understand it in terms of linearly independent rows or columns?
It means that when each entry of a non-singular matrix is replaced by its cofactor, the linear independence of the rows and columns is preserved. I have no idea how to think about this. Any help is highly appreciated. Thanks!
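Not part of the question, but a quick numerical sanity check of the identities above. The `adjugate` helper is my own cofactor-based implementation (NumPy has no built-in adjugate):

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix:
    cofactor C[i, j] = (-1)**(i+j) * det(A with row i and column j deleted)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

# An arbitrary non-singular 3x3 example
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
n = A.shape[0]
d = np.linalg.det(A)

# A · adj(A) = det(A) · I
assert np.allclose(A @ adjugate(A), d * np.eye(n))
# det(adj(A)) = (det A)^(n-1)
assert np.allclose(np.linalg.det(adjugate(A)), d ** (n - 1))
```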
Not exactly what you asked, but I want to say that there is a more general theorem:
a. If $A$ is invertible, then $\operatorname{adj}(A)$ is invertible.

b. If $\operatorname{rank}(A)=n-1$ (where $A$ is an $n\times n$ matrix), then $\operatorname{rank}(\operatorname{adj}(A))=1$.

c. If $\operatorname{rank}(A)<n-1$, then $\operatorname{adj}(A)=0$.
If $\operatorname{rank}(A)<n-1$, then the determinants of all $(n-1)\times(n-1)$ submatrices are zero, so every cofactor vanishes and $\operatorname{adj}(A)=0$.
If $\operatorname{rank}(A)=n-1$, then $\det(A)=0$, so $\operatorname{adj}(A)\cdot A=\det(A)\cdot I=0$, and it follows that $\operatorname{col}(A)\subseteq\ker(\operatorname{adj}(A))$. Since $\dim(\operatorname{col}(A))=\operatorname{rank}(A)=n-1$, the rank-nullity theorem gives $\operatorname{rank}(\operatorname{adj}(A))\le 1$. On the other hand, $\operatorname{rank}(A)=n-1$ means some $(n-1)\times(n-1)$ minor is non-zero, so $\operatorname{adj}(A)\neq 0$, and hence $\operatorname{rank}(\operatorname{adj}(A))=1$.
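The three cases can be checked numerically. This is a sketch in NumPy using diagonal matrices of each rank; the `adjugate` helper is my own cofactor-based implementation, not a library function:

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix:
    cofactor C[i, j] = (-1)**(i+j) * det(A with row i and column j deleted)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, 0), j, 1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

rank = np.linalg.matrix_rank

# (a) A invertible  =>  adj(A) invertible (full rank)
A = np.diag([2.0, 3.0, 4.0])
assert rank(adjugate(A)) == 3

# (b) rank(A) = n-1  =>  rank(adj(A)) = 1
B = np.diag([1.0, 1.0, 0.0])
assert rank(B) == 2 and rank(adjugate(B)) == 1
# adj(B)·B = 0, i.e. col(B) ⊆ ker(adj(B)), as used in the proof
assert np.allclose(adjugate(B) @ B, 0)

# (c) rank(A) < n-1  =>  adj(A) = 0
Z = np.diag([1.0, 0.0, 0.0])
assert rank(Z) == 1 and np.allclose(adjugate(Z), 0)
```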