In the book "Artificial Intelligence: A Modern Approach" by S. Russell and P. Norvig, there is a derivation (page 735) of the gradient of the loss with respect to the weights $w$, used for backpropagation.
I stumbled over the third line of the derivation, where $\Delta_k$ is inserted. On the left-hand side of the equation the factor $2$ is still present, but on the right-hand side it is gone.
Can someone explain to me what happened to that '2'?
\begin{eqnarray} \frac{\partial Loss_k}{\partial \omega_{j,k}} &=& -2(y_k - a_k)\frac{\partial a_k}{\partial \omega_{j,k}} = -2(y_k - a_k)\frac{\partial g(in_k)}{\partial \omega_{j,k}} \\ &=& -2(y_k - a_k)g'(in_k)\frac{\partial in_k}{\partial \omega_{j,k}} = -2(y_k - a_k)g'(in_k)\frac{\partial }{\partial \omega_{j,k}}\left( \sum _{j} \omega_{j,k}a_j\right) \\ &=& -\color{red}{2}(y_k - a_k) g'(in_k) a_j = -a_j\Delta_k \end{eqnarray}
where
$$ \Delta_k = Err_k \times g'(in_k) $$
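To convince myself that the factor $2$ really belongs in the gradient, I verified the expression numerically with finite differences (my own check, not from the book; taking $g$ to be the sigmoid and picking arbitrary values for $y_k$, $a_j$, and $\omega_{j,k}$ are my assumptions):

```python
import math

# Check that dLoss_k/dw_{j,k} = -2 (y_k - a_k) g'(in_k) a_j,
# i.e. the version WITH the factor 2, matches a finite-difference gradient.
# Here Loss_k = (y_k - a_k)^2, a_k = g(in_k), in_k = sum_j w_{j,k} a_j.

def g(x):          # sigmoid activation (my choice of g)
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):    # derivative of the sigmoid
    s = g(x)
    return s * (1.0 - s)

y_k = 1.0
a_j = [0.3, -0.7, 0.5]    # activations of the previous layer (arbitrary)
w   = [0.1,  0.4, -0.2]   # weights w_{j,k} (arbitrary)

def loss(weights):
    in_k = sum(wj * aj for wj, aj in zip(weights, a_j))
    return (y_k - g(in_k)) ** 2

# analytic gradient including the factor 2
in_k = sum(wj * aj for wj, aj in zip(w, a_j))
analytic = [-2.0 * (y_k - g(in_k)) * g_prime(in_k) * aj for aj in a_j]

# central finite differences
eps = 1e-6
numeric = []
for j in range(len(w)):
    wp = list(w); wp[j] += eps
    wm = list(w); wm[j] -= eps
    numeric.append((loss(wp) - loss(wm)) / (2 * eps))

print(analytic)
print(numeric)
```

The two gradients agree, so the $2$ is genuinely part of the derivative of the squared error; my question is why it nevertheless disappears when $\Delta_k$ is introduced.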