I have noticed that for eigenvalues and eigenvectors:
$$Av=\lambda v \\ Av=\lambda I v \\ Av-\lambda I v=0 \\ (A-\lambda I)v=0$$
In the book's definition, we have to find $\det (A-\lambda I)=0$. But why does $\det(A-\lambda I)=0$ imply $(A-\lambda I)v=0$? I am asking this because for a lot of linear transformations, $v$ is a vector with $n$ coordinates, and hence we generally can't take its determinant. If we could, we would write:
$$\det(A-\lambda I)\det(v)=0$$
And use cancellation to deduce it. And since we're talking about all vectors $v$, which could give any value of $\det(v)$, $\det(A-\lambda I)$ must be $0$ for the equation to make sense.
If $\det(A-\lambda I)\ne 0$, the matrix $A-\lambda I$ is invertible, and multiplying both sides by $(A-\lambda I)^{-1}$ shows that the only solution to the equation
$$(A-\lambda I)v=0\tag{1}$$
is $v=0$. Eigenvectors of $A$ are non-zero vectors $v$ such that $Av=\lambda v$ for some scalar $\lambda$, i.e., such that $(A-\lambda I)v=0$ for some scalar $\lambda$; this $\lambda$ is then an eigenvalue of $A$. If the only solution to $(1)$ is $v=0$, the scalar $\lambda$ can’t be an eigenvalue: it has no eigenvectors.
If, on the other hand, $\det(A-\lambda I)=0$, then $A-\lambda I$ is not invertible, and $(1)$ has non-zero solutions; these non-zero solutions are eigenvectors, and $\lambda$ is the corresponding eigenvalue.
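To make this concrete, here is a small numerical sketch with a hypothetical matrix $A=\begin{pmatrix}2&1\\1&2\end{pmatrix}$ (my choice, not from the question): $\det(A-\lambda I)=(2-\lambda)^2-1$ vanishes exactly at $\lambda=1$ and $\lambda=3$, and for those $\lambda$ (and only those) the system $(A-\lambda I)v=0$ has non-zero solutions.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def shifted(A, lam):
    """Return A - lam*I for a 2x2 matrix A."""
    return [[A[0][0] - lam, A[0][1]],
            [A[1][0],       A[1][1] - lam]]

A = [[2, 1], [1, 2]]

# det(A - lam*I) vanishes exactly at the eigenvalues lam = 1 and lam = 3.
print(det2(shifted(A, 1.0)))  # 0.0  -> lambda = 1 is an eigenvalue
print(det2(shifted(A, 3.0)))  # 0.0  -> lambda = 3 is an eigenvalue
print(det2(shifted(A, 2.0)))  # -1.0 -> A - 2I is invertible, so only v = 0 solves (1)

# For lambda = 3, the non-zero vector v = (1, 1) solves (A - 3I)v = 0,
# i.e. Av = 3v, so it is an eigenvector:
v = [1, 1]
Av = [A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1]]
print(Av)  # [3, 3], which is 3 * v
```

So $\lambda=2$ makes $A-\lambda I$ invertible and admits only the trivial solution, while $\lambda=1,3$ make the determinant vanish and yield genuine eigenvectors.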