I have been reading a book (Linear Algebra, Michel Queysanne) in which there is an explanation about the motivation of defining eigenvalues and eigenvectors.
I was wondering if anyone could explain this in a clearer way.
Basically, the author argues that we first need to find the matrix of an endomorphism $f$ associated to a basis $(a_1,a_2, \dots, a_n)$. From this point on, I get completely lost. The author mentions a matrix $A$ made of $4$ blocks, one of which is $$A' = M(g, a_i) \qquad(1\le i\le n),$$ but no explanation of the other three blocks is provided. He then says that $g$ is the linear map induced by $f$ on $F$ (which is an eigensubspace). Why do we have to introduce a function $g$ now, and what are induced linear maps?
Later, he talks about a family of subspaces stable under $f$ such that $E$ (the vector space) equals the direct sum of $F_1, \dots, F_m$.
Finally, he gathers the bases of $F_1, \dots, F_m$ into a basis of $E$, for which $A = M(f,a_i)$ is diagonal. Why is it diagonal? He makes the point that the matrices $A_j$, of order strictly lower than that of $M(f)$, have to be simplified. He then connects this fact to homothetic transformations, i.e. $x\mapsto cx$ for some scalar $c$.
Could someone shed some light on this topic, please?
Unfortunately, I can't really understand your actual questions about your book; they are pretty heavily stripped of context. For example, I have no idea what the definition of $A'$ means. So I will just give you a general discussion. I will use some algebraic terminology like "endomorphism" instead of "square matrix" because you did so in your question; if you're more comfortable with more elementary terminology, I can change it.
As a rule, in math we want to take a general thing and do the best we can to reduce it to a simple thing. In the case of eigenvalues/eigenvectors, the general thing is "linear transformations" and the simple thing is "scalar multiplication". Thus we want to reduce linear transformations to scalar multiplication. This is a non-starter in the rectangular case (scalar multiplication is an endomorphism, mapping a space to itself), so we look at endomorphisms.
Even in this case, we can't just perform the reduction: linear transformations are more complicated than that. The trick is to find restrictions of our linear transformation to subsets of its domain on which it acts like scalar multiplication. That is, we look for subsets of the domain where $Ax=\lambda x$. These subsets are called eigenspaces; their nonzero elements are called eigenvectors; the corresponding scalars are called eigenvalues. We call them eigenspaces because they are vector subspaces, as is easily checked from the definition.
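To make the condition $Ax=\lambda x$ concrete, here is a minimal sketch in plain Python (the $2\times 2$ matrix and candidate vector are my own illustrative choices, not from your book): we multiply a candidate vector by $A$ and check that the result is a scalar multiple of the vector.

```python
def mat_vec(A, v):
    """Multiply a 2x2 matrix A by a vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# Hypothetical example matrix (symmetric, so it has a full set of eigenvectors).
A = [[2.0, 1.0],
     [1.0, 2.0]]

v = [1.0, 1.0]       # candidate eigenvector
Av = mat_vec(A, v)   # A acts on v

# A v = [3, 3] = 3 * v, so v is an eigenvector with eigenvalue 3:
# on the line spanned by v, the transformation A is just "multiply by 3".
lam = Av[0] / v[0]
assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
print(lam)  # 3.0
```

Any nonzero multiple of `v` passes the same check, which is why the eigenvectors for a fixed $\lambda$ (together with $0$) form a subspace.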
Our original endomorphism is now determined on the direct sum of all its eigenspaces by the requirement of linearity: for eigenvectors $x_i$ and scalars $c_i$, you have $A(\sum_{i=1}^n c_i x_i)=\sum_{i=1}^n c_i \lambda_i x_i$. Hopefully, the direct sum of all the eigenspaces is the whole domain. In this case we say the endomorphism is diagonalizable, because it can be written as a diagonal matrix if we choose an eigenvector basis (a union of bases of each eigenspace). If the endomorphism is not diagonalizable, then we are not done: there are still some parts of the domain where we don't understand how $A$ works. We sometimes choose to get out of this by passing to the Jordan form.
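The diagonalization step can be sketched the same way, again with a hypothetical $2\times 2$ example in plain Python: for $A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ the eigenpairs are $\lambda_1 = 3$ with $v_1 = (1,1)$ and $\lambda_2 = 1$ with $v_2 = (1,-1)$; these two eigenvectors form a basis, and changing to that basis ($D = P^{-1}AP$, columns of $P$ being the eigenvectors) produces a diagonal matrix whose entries are the eigenvalues.

```python
def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0],
     [1.0, 2.0]]

P = [[1.0,  1.0],    # columns are the eigenvectors v1 = (1, 1), v2 = (1, -1)
     [1.0, -1.0]]

P_inv = [[0.5,  0.5],  # inverse of P, computed by hand for this example
         [0.5, -0.5]]

# Express A in the eigenvector basis: D = P^{-1} A P.
D = mat_mul(P_inv, mat_mul(A, P))
print(D)  # [[3.0, 0.0], [0.0, 1.0]] -- diagonal, with the eigenvalues on the diagonal
```

In that basis, $A$ acts on each coordinate axis as a homothety $x \mapsto \lambda_j x$, which is exactly the "reduction to scalar multiplication" the answer describes.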