I found the phrase "We extend linearly and multiplicatively to all of $L(E)$" in the proof of Proposition 2.4. So I would like (if possible) some references that discuss extending a map between rings or algebras linearly and multiplicatively.
For linearity I found Proposition 2.34 in Rotman, An Introduction to Homological Algebra.
Thanks.

I'm not sure what you're looking for with regard to references on "extending a map linearly and multiplicatively". I suspect there is a conceptual error happening, so I'll try to clarify the issue myself, and I'll include some references for the topic. Normally I would also include references where this technique is used in the literature, but extending maps linearly is so ubiquitous that it doesn't seem worthwhile. It is an extremely fundamental technique, though if you ask in the comments I'll happily find some examples of it being used.
On to the discussion:
I think this is best done in terms of vector spaces over a field $k$, which, we recall, are just modules over $k$ viewed as a ring.
Let's start with something extremely concrete: what are the linear maps from $\mathbb{R}^2$ to $\mathbb{R}^3$? While this question might look difficult to a beginner, we have the tools of linear algebra at our disposal, and so we know exactly how to tackle it!
First, we fix a basis for $\mathbb{R}^2$. For concreteness, let's take the standard basis $e_1 = (1,0)$ and $e_2 = (0,1)$. Now every linear function $L$ is, among other things, a function. So it has to send $e_1$ and $e_2$ somewhere.
But we also know that $L$ has to be linear. So a vector $ae_1 + be_2$ has to get sent to $L(ae_1 + be_2) = a L(e_1) + b L(e_2)$. This is where we use that $\{e_1, e_2\}$ is a basis -- every vector $v \in \mathbb{R}^2$ can be written as $ae_1 + be_2$. So as soon as we know $L(e_1)$ and $L(e_2)$, we know the entire map $L$.
We now take this observation one step further: Say we pick two vectors $v_1$ and $v_2$ in $\mathbb{R}^3$. Then we can define a linear map $L$ by saying $L(e_1) = v_1$ and $L(e_2) = v_2$. Then we could say "also $L(ae_1 + be_2) = av_1 + bv_2$", but there's no reason to! Since we know we want $L$ to be linear, once we say what it does to $e_1$ and $e_2$, we know what it does to everything!
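To see how little data a linear map really is, here is a minimal Python sketch (the function names are my own, purely for illustration): a map $\mathbb{R}^2 \to \mathbb{R}^3$ is stored as nothing but the two vectors $L(e_1)$ and $L(e_2)$, and recovered everywhere else by linearity.

```python
# A linear map R^2 -> R^3 is determined by the two vectors L(e1), L(e2).
# We store only those two vectors, and recover L on any input via
# L(a*e1 + b*e2) = a*L(e1) + b*L(e2).

def make_linear_map(v1, v2):
    """Return the unique linear map L with L(e1) = v1 and L(e2) = v2."""
    def L(a, b):
        return tuple(a * x + b * y for x, y in zip(v1, v2))
    return L

L = make_linear_map((1, 0, 2), (0, 3, 1))  # pick any two vectors in R^3
print(L(1, 0))   # L(e1) = (1, 0, 2)
print(L(2, 5))   # 2*(1,0,2) + 5*(0,3,1) = (2, 15, 9)
```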
Of course, there's nothing special here about $\mathbb{R}^3$. If we pick two vectors $v_1$ and $v_2$ in any $\mathbb{R}$-vector space $V$, we have a unique linear map $L : \mathbb{R}^2 \to V$ which satisfies $L(e_1) = v_1$ and $L(e_2) = v_2$. We can summarize this discussion in a very interesting (and useful!) theorem:

Theorem: For any $\mathbb{R}$-vector space $V$ and any choice of vectors $v_1, v_2 \in V$, there is a unique linear map $L : \mathbb{R}^2 \to V$ with $L(e_1) = v_1$ and $L(e_2) = v_2$.
Ok, now it's time to bust out the categorical language.
If we have a function $f : \{e_1, e_2\} \to V$, we want to extend it to a linear function on all of $\mathbb{R}^2$, and the above theorem says there is a unique way to do this. We can summarize this with the picture below:

$$\begin{array}{ccc}
\{e_1, e_2\} & \overset{\iota}{\longrightarrow} & \mathbb{R}^2 \\
 & \underset{f}{\searrow} & \big\downarrow {\scriptstyle L} \\
 & & V
\end{array}$$
Here $\iota : \{e_1, e_2\} \to \mathbb{R}^2$ is just the function sending $e_i \in \{e_1,e_2\} \mapsto e_i \in \mathbb{R}^2$.
This picture summarizes the above theorem - it says that given any function $f : \{e_1, e_2\} \to V$, there exists a function $L$ from $\mathbb{R}^2 \to V$ which makes the triangle commute. That is, $f = L \circ \iota$. Said one more way, that $L(e_i) = f(e_i)$.
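The commuting triangle can even be checked numerically. A small sketch (the names are my own), where $f$ is an arbitrary function on the basis and $L$ is its linear extension:

```python
# iota includes the two-element set {e1, e2} into R^2; f is any function
# from that set into R^3; L is the linear extension of f.

basis = {"e1": (1, 0), "e2": (0, 1)}      # iota(e_i), as vectors in R^2
f = {"e1": (4, 4, 0), "e2": (1, 2, 3)}    # an arbitrary f : {e1, e2} -> R^3

def L(v):
    """Linear extension: L(a*e1 + b*e2) = a*f(e1) + b*f(e2)."""
    a, b = v
    return tuple(a * x + b * y for x, y in zip(f["e1"], f["e2"]))

# The triangle commutes: L(iota(e_i)) = f(e_i) for each basis element.
for name, vector in basis.items():
    assert L(vector) == f[name]
print("f = L o iota on the basis")
```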
Of course, this is clear from the preceding discussion, and also from linear algebra knowledge you already had. Why discuss it, then?
All of this matters because free modules are almost exactly like vector spaces! A free ($R$-)module $M$ is given by a basis $X$, and every element of $M$ can be written uniquely as a sum $\sum_i r_i x_i$ with $r_i \in R$ and $x_i \in X$, where only finitely many $r_i$ are allowed to be nonzero.
Of course, now we can do exactly what we did above! If I have any function $f : X \to N$ for some other $R$-module $N$, we can find a unique linear map $L : M \to N$. How? Define $L(\sum_i r_i x_i) = \sum_i r_i f(x_i)$, just like we did with vector spaces! This definition is forced once we know $L$ has to be linear.
We say that $L$ is defined by linearly extending $f$ to the whole of $M$.
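The recipe works verbatim in code. A sketch under my own naming, with $R = \mathbb{Z}$ and $N = \mathbb{Z}$ viewed as a module over itself, where an element of the free module is a finite formal sum stored as a coefficient dictionary:

```python
# Linearly extending f : X -> N to the free R-module on X (here R = Z and
# N = Z as a module over itself). An element sum_i r_i * x_i is stored as
# the dict {x_i: r_i} with only finitely many (nonzero) entries.

def extend_linearly(f):
    """Given f on the basis X, return the unique linear map L on the free module."""
    def L(element):
        return sum(r * f(x) for x, r in element.items())
    return L

f = lambda x: {"x1": 10, "x2": 7, "x3": -1}[x]  # arbitrary values on the basis
L = extend_linearly(f)

m = {"x1": 2, "x3": 5}   # the element 2*x1 + 5*x3
print(L(m))              # 2*10 + 5*(-1) = 15
```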
(As an aside, when $k$ is a field, it is a theorem that every $k$-module is free. This is why vector spaces and free modules behave so similarly: they really are the same thing.)
If $M$ is instead a free algebra, say with basis $\{x_1, x_2\}$, then every element looks like $a + b x_1 + c x_2 + d x_1^2 + e x_2^2 + g x_1 x_2 + \ldots$, where only finitely many coefficients are allowed to be nonzero. That is, in an algebra, we are allowed to multiply the basis elements as well as add them.
Now the key point is that maps between algebras must satisfy $L(x_1 x_2) = L(x_1) L(x_2)$. So there is still a unique way to extend a function from $X$ to an $R$-algebra $N$ into an algebra map $L : M \to N$. This map is defined by extending linearly and multiplicatively: "linearly" takes care of $L(a x_1 + b x_2) = a L(x_1) + b L(x_2)$, and "multiplicatively" takes care of $L(x_1 x_2) = L(x_1) L(x_2)$. By imposing these rules, all of $L$ is determined once we know the $L(x_i)$.
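Here is a sketch of the commutative case (the $L(E)$ in your question is noncommutative, but the mechanism is identical): a map out of $\mathbb{Z}[x_1, x_2]$ is determined by where $x_1$ and $x_2$ go, and extending linearly and multiplicatively amounts to evaluation. All names are my own, for illustration only.

```python
# "Extending linearly and multiplicatively" out of the polynomial algebra
# Z[x1, x2] (the free commutative algebra on {x1, x2} over Z). A polynomial
# is stored as a dict {(i, j): c} meaning c * x1^i * x2^j.

def extend(f_x1, f_x2):
    """Unique algebra map into Z with x1 |-> f_x1 and x2 |-> f_x2."""
    def L(poly):
        return sum(c * f_x1**i * f_x2**j for (i, j), c in poly.items())
    return L

L = extend(2, 3)                       # send x1 to 2 and x2 to 3
p = {(0, 0): 1, (1, 1): 4, (2, 0): 5}  # 1 + 4*x1*x2 + 5*x1^2
print(L(p))                            # 1 + 4*2*3 + 5*4 = 45

# Multiplicativity: L(x1 * x2) = L(x1) * L(x2).
assert L({(1, 1): 1}) == L({(1, 0): 1}) * L({(0, 1): 1})
```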
Now the promised references. There is a discussion of linear extensions in:
I hope this helps ^_^