The confusing part is this step from a lecture in a Numerical Optimization course.
Consider a simple way to update $B^k$: let $\alpha \neq 0$ and $u \in \mathbb{R}^n$, $u \neq 0$.
$$B^{k+1} = B^k + \alpha uu^T$$
Choose $\alpha$ and $u$ such that $B^{k+1}$ satisfies the quasi-Newton condition:
$$(B^k + \alpha u u^T)\gamma^k=\delta^k$$ $$\alpha u^T \gamma^k u=\delta^k-B^k \gamma^k$$
But $\alpha u^T \gamma^k u$ definitely seems like the wrong way to rewrite $\alpha u u^T\gamma^k$. Is that right, or did I miss something?
$$\alpha u u^T\gamma^k = \alpha u (u^T\gamma^k) = \alpha (u^T\gamma^k) u = \alpha u^T\gamma^k u$$ because the quantity in parentheses, $u^T\gamma^k$, is a scalar, and therefore commutes with the vector $u$.
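You can also check the algebra numerically. Taking $u = \delta^k - B^k\gamma^k$ and $\alpha = 1/(u^T\gamma^k)$ (the SR1 choice that the derivation leads to), the updated matrix satisfies the quasi-Newton condition exactly. A minimal sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = np.eye(n)                   # current symmetric approximation B^k
gamma = rng.standard_normal(n)  # gamma^k
delta = rng.standard_normal(n)  # delta^k

u = delta - B @ gamma           # SR1 choice: u proportional to the residual
alpha = 1.0 / (u @ gamma)       # so that alpha * (u^T gamma) * u = delta - B gamma
B_new = B + alpha * np.outer(u, u)

# quasi-Newton (secant) condition holds: B^{k+1} gamma^k = delta^k
print(np.allclose(B_new @ gamma, delta))
```

The key line is `alpha * np.outer(u, u)`: a rank-one symmetric correction, which is exactly why the scalar $u^T\gamma^k$ can be moved past the vector $u$ in the derivation.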
However, the lecturer does seem to have some strange ideas about SR1: for instance, that the initial Hessian approximation must be positive definite, and that SR1 is a bad method because it does not necessarily obey hereditary positive definiteness. On the contrary, the initial Hessian approximation need not be positive definite, and the value of SR1 lies precisely in being an alternative to methods such as BFGS in situations where the true Hessian may not be positive definite; there, SR1 can provide a better approximation, for optimization purposes, than a forcibly positive definite one. In such cases, however, SR1 should generally be used in conjunction with either a trust-region method, or a line search method that searches along directions of negative curvature whenever the Hessian approximation is not positive definite.
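In practice, SR1 implementations also guard against a nearly-zero denominator $u^T\gamma^k$, since then no well-defined rank-one update exists. A minimal sketch of such a safeguarded update; the function name `sr1_update` and the threshold `r` are illustrative (following common practice, not the lecture):

```python
import numpy as np

def sr1_update(B, delta, gamma, r=1e-8):
    """SR1 update of B with the standard skipping safeguard.

    The update is skipped when the denominator u^T gamma is tiny
    relative to ||u|| * ||gamma||, which keeps the update bounded.
    The threshold r = 1e-8 is a common illustrative choice.
    """
    u = delta - B @ gamma
    denom = u @ gamma
    if abs(denom) < r * np.linalg.norm(u) * np.linalg.norm(gamma):
        return B  # skip: the update would be numerically unstable
    return B + np.outer(u, u) / denom
```

Note that the result may well be indefinite, which is the point: paired with a trust-region method, an indefinite approximation can capture negative curvature of the true Hessian instead of hiding it.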