I am watching a series on geometric calculus by Alan Macdonald, and in the first episodes he states that for any vector $u$ and multivector $M$: $uM = u \cdot M + u \wedge M$.
This doesn't really take into account what happens with the grade-0 (i.e. scalar) part of $M$; let's call it $s$.
For the identity to be true, either $u \cdot s$ or $u \wedge s$ has to be zero. From what I read previously, in some conventions $u \wedge s = us$, which implies $u \cdot s = 0$. That kind of makes intuitive sense, I guess?
The general dot product formula for two homogeneous multivectors $a_r, b_s$, of grades $r$ and $s$ respectively, is typically defined as a grade selection:
$$a_r \cdot b_s = \left\langle a_r b_s \right\rangle_{\left\lvert r - s \right\rvert}.$$
Similarly, the wedge product of the same two multivectors is defined as
$$a_r \wedge b_s = \left\langle a_r b_s \right\rangle_{r + s}.$$
Both of these can be extended to general multivectors by decomposing each multivector into its grade-$k$ components and summing over all pairs of grades.
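As a concrete sketch of that extension, here is one way to implement the grade-selection dot and wedge products in the small algebra $\mathrm{Cl}(2,0)$, with basis $1, e_1, e_2, e_{12}$. The coefficient-list representation and the names `gp`, `dot`, `wedge` are my own, not anything from the series:

```python
import itertools

# Basis blades of Cl(2,0), indexed 0..3: 1, e1, e2, e12.
GRADE = [0, 1, 1, 2]
# Sign of the geometric product of basis blades i and j;
# the index of the resulting blade is i XOR j.
SIGN = [[1, 1, 1, 1],
        [1, 1, 1, 1],
        [1, -1, 1, -1],
        [1, -1, 1, -1]]

def gp(a, b):
    """Geometric product of two multivectors given as coefficient lists."""
    out = [0.0] * 4
    for i, j in itertools.product(range(4), repeat=2):
        out[i ^ j] += SIGN[i][j] * a[i] * b[j]
    return out

def grade_part(a, k):
    """Grade projection <a>_k."""
    return [c if GRADE[i] == k else 0.0 for i, c in enumerate(a)]

def dot(a, b):
    """a . b = sum over grades r, s of <<a>_r <b>_s>_{|r - s|}."""
    out = [0.0] * 4
    for r, s in itertools.product(range(3), repeat=2):
        part = grade_part(gp(grade_part(a, r), grade_part(b, s)), abs(r - s))
        for i, c in enumerate(part):
            out[i] += c
    return out

def wedge(a, b):
    """a ^ b = sum over grades r, s of <<a>_r <b>_s>_{r + s}."""
    out = [0.0] * 4
    for r, s in itertools.product(range(3), repeat=2):
        part = grade_part(gp(grade_part(a, r), grade_part(b, s)), r + s)
        for i, c in enumerate(part):
            out[i] += c
    return out

# e1 . e12 = <e1 e12>_1 = e2, and e1 ^ e2 = <e1 e2>_2 = e12:
e1, e2, e12 = [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]
print(dot(e1, e12))    # [0.0, 0.0, 1.0, 0.0]
print(wedge(e1, e2))   # [0.0, 0.0, 0.0, 1.0]
```

With these definitions one can also check numerically that $uM = u \cdot M + u \wedge M$ holds for a vector $u$ when $M$ has no scalar part, which is exactly the situation the question is probing.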
By these definitions, if $u$ is a scalar and $v$ is a $k$-vector, then $uv$ is itself a $k$-vector, and since $|0 - k| = 0 + k = k$, both grade selections return the entire product:
$u \cdot v = uv = u \wedge v,$
which also holds if $v$ is a multivector (and $u$ still a scalar), by applying this grade by grade.
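As a concrete instance (my own example, not from the series): take the scalar $u = 3$ and the bivector $v = e_1 \wedge e_2 = e_1 e_2$. Then
$$u \cdot v = \left\langle 3\, e_1 e_2 \right\rangle_{\lvert 0 - 2 \rvert} = 3\, e_1 e_2, \qquad u \wedge v = \left\langle 3\, e_1 e_2 \right\rangle_{0 + 2} = 3\, e_1 e_2,$$
so both grade selections pick out the whole geometric product $uv = 3\, e_1 e_2$.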