I’m trying to understand the reason for the assertion on page 20 of Hestenes and Sobczyk’s “Clifford Algebra to Geometric Calculus” that
If $B$ is a simple $s$-vector, then $B\cdot A$ [where $A$ is a simple $n$-vector] is simple.
According to page 4, a multivector $A_r$ is called a simple $r$-vector iff it can be factored into a product of $r$ anticommuting vectors $a_1, a_2,…, a_r$, that is
$$ A_r = a_1a_2…a_r,$$
where $a_ja_k = -a_ka_j$ for $j, k = 1, 2, …, r$, and $j\neq k$.
However, I can’t see how that is true even in the simple case where $B$ is a $1$-vector $b=b_1+b_2+b_3$ and $A=a_1a_2a_3$ is a simple $3$-vector with each $b_i$ parallel to $a_i$. In this case,
$$\begin{aligned}b\cdot A &= (b_1\cdot a_1)a_2a_3 - a_1(b_2\cdot a_2)a_3 + a_1a_2(b_3\cdot a_3) \\ &= (b_1\cdot a_1)a_2a_3 + a_1\left[a_2(b_3\cdot a_3) - (b_2\cdot a_2)a_3\right] \end{aligned}$$
and I can’t factor this further into a simple $2$-vector.
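For concreteness, the expansion above can be sanity-checked numerically. Below is a minimal throwaway sketch (my own, not from the book) of the geometric product over an orthonormal basis, encoding basis blades as bitmasks, with the $a_i$ taken along orthogonal directions so that they anticommute:

```python
import random

# Throwaway multivector arithmetic over an orthonormal basis e1, e2, e3:
# a multivector is {bitmask: coefficient}, bit i meaning e_{i+1} is a factor.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gp(A, B):
    """Geometric product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction: keep the terms where a's generators all lie inside b's."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

random.seed(1)
al = [random.uniform(1, 2) for _ in range(3)]    # lengths of a1, a2, a3
bl = [random.uniform(1, 2) for _ in range(3)]    # lengths of b1, b2, b3
a1, a2, a3 = {1: al[0]}, {2: al[1]}, {4: al[2]}  # orthogonal, hence anticommuting
b = {1: bl[0], 2: bl[1], 4: bl[2]}               # b = b1 + b2 + b3, b_i parallel a_i
A = gp(gp(a1, a2), a3)                           # the simple 3-vector a1 a2 a3

# For a vector b, b . A = <bA>_2 is the left contraction of b into A.
lhs = lc(b, A)
# (b1.a1) a2a3 - (b2.a2) a1a3 + (b3.a3) a1a2, with b_i.a_i = |b_i||a_i| here.
rhs = {}
for s, X in [(bl[0] * al[0], gp(a2, a3)),
             (-bl[1] * al[1], gp(a1, a3)),
             (bl[2] * al[2], gp(a1, a2))]:
    for k, c in X.items():
        rhs[k] = rhs.get(k, 0.0) + s * c
assert eq(lhs, rhs)
```

(This only confirms the expansion itself; the result here is a $2$-vector in three dimensions.)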


$ \newcommand\form[1]{\langle#1\rangle} \newcommand\lcontr{\mathbin\rfloor} $This can be proved by induction on grade, but instead I am going to give a "vector free" approach. Assume we have an $n$-dimensional vector space $V$ equipped with a nondegenerate metric which generates a geometric algebra. (When the metric is degenerate we can still make the arguments to follow work, but we have to do some shenanigans with the dual space $V^*$.)
First, notation: your inner product $\cdot$ can be defined on $s$- and $t$-vectors $A_s, B_t$ by $$ A_s\cdot B_t = \form{A_sB_t}_{|s-t|}. $$ However, the left contraction $$ A_s\lcontr B_t = \form{A_sB_t}_{t-s} $$ is better behaved (where the grade projection is defined to be $0$ when $t-s$ is negative). For instance, for arbitrary multivectors $A, B, C$ we have $$ (A\wedge B)\lcontr C = A\lcontr(B\lcontr C),\quad (A\wedge B)*C = A*(B\lcontr C) $$ with $A*B = \form{AB}_0$ the scalar product. The second, adjoint identity can be taken as a definition of the contraction when the metric is nondegenerate. In light of the first identity we make $\wedge$ tighter-binding than $\lcontr$ and make $\lcontr$ right-associative so that we may write $$ A\wedge B\lcontr C = A\lcontr B\lcontr C. $$
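Both identities can be spot-checked numerically. Here is a minimal sketch, assuming a Euclidean metric and an orthonormal basis (a stronger assumption than nondegeneracy, and all helper names are my own), with basis blades encoded as bitmasks:

```python
import random

# Multivectors over an orthonormal Euclidean basis e1..en, stored as
# {bitmask: coefficient}; bit i set means e_{i+1} is a factor of that blade.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def wedge(A, B):
    """Outer product: keep only terms whose generators are disjoint."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction: keep only terms where a's generators lie inside b's."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def sprod(A, B):
    """Scalar product A * B = <AB>_0."""
    return sum(sign(a, a) * ca * B.get(a, 0.0) for a, ca in A.items())

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def rmv(n):
    """Random multivector with all 2^n components."""
    return {k: random.uniform(-1, 1) for k in range(1 << n)}

random.seed(2)
n = 4
A, B, C = rmv(n), rmv(n), rmv(n)
assert eq(lc(wedge(A, B), C), lc(A, lc(B, C)))                 # (A^B) lc C = A lc (B lc C)
assert abs(sprod(wedge(A, B), C) - sprod(A, lc(B, C))) < 1e-9  # (A^B)*C = A*(B lc C)
```

The filters in `wedge` and `lc` are exactly the grade selections $\form{A_sB_t}_{s+t}$ and $\form{A_sB_t}_{t-s}$ applied to products of basis blades.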
Crucially, contraction also satisfies the following dualities for any pseudoscalar $I$ $$ A\lcontr(BI) = (A\wedge B)I,\quad A\wedge(BI) = (A\lcontr B)I $$ This can be proved from the adjoint identity $$\begin{aligned} C*[A\lcontr(BI)] & = (C\wedge A)*(BI) = \form{(C\wedge A)BI}_0 = \form{(C\wedge A)B}_nI \\& = (C\wedge A\wedge B)I = \form{C(A\wedge B)I}_0 \\& = C*[(A\wedge B)I]. \end{aligned}$$ Since the scalar product is nondegenerate whenever the underlying metric is, this proves one of the dualities. The other is proved simply by replacing $I$ with $I^{-1}$ and $B$ with $BI$.
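The dualities can likewise be verified on random multivectors. Same sketch as before (Euclidean metric, orthonormal basis, helper names mine), now with the geometric product and the unit pseudoscalar of $\mathbb R^4$:

```python
import random

# Multivectors over an orthonormal Euclidean basis, {bitmask: coefficient}.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gp(A, B):
    """Geometric product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def wedge(A, B):
    """Outer product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def rmv(n):
    return {k: random.uniform(-1, 1) for k in range(1 << n)}

random.seed(3)
n = 4
I = {(1 << n) - 1: 1.0}       # unit pseudoscalar e1 e2 e3 e4
A, B = rmv(n), rmv(n)
assert eq(lc(A, gp(B, I)), gp(wedge(A, B), I))   # A lc (BI) = (A^B) I
assert eq(wedge(A, gp(B, I)), gp(lc(A, B), I))   # A ^ (BI) = (A lc B) I
```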
Now consider when $A, B$ are blades. $BI$ is a blade: we can find an orthogonal basis $e_1,\dotsc,e_n$ such that $$ B = be_ke_{k-1}\dotsb e_1,\quad I = e_1e_2\dotsb e_n $$ for some scalar $b$. Now by duality $$ A\lcontr B = [A\wedge(BI)]I^{-1}. $$ This is clearly a blade: the outer product of two blades is a blade, and we have just seen that multiplying a blade by the pseudoscalar (or by $I^{-1}$, a scalar multiple of $I$) again yields a blade.
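To see the conclusion in action, the following sketch (same assumptions as above: Euclidean metric, orthonormal basis, helper names mine) contracts a random vector into a random $3$-blade in $\mathbb R^4$, checks the duality formula, and confirms simplicity via the criterion that a $2$-vector $X$ is a blade iff $X\wedge X = 0$:

```python
import random

# Multivectors over an orthonormal Euclidean basis, {bitmask: coefficient}.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gp(A, B):
    """Geometric product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def wedge(A, B):
    """Outer product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def rvec(n):
    """Random vector."""
    return {1 << i: random.uniform(-1, 1) for i in range(n)}

random.seed(4)
n = 4
full = (1 << n) - 1
I = {full: 1.0}
Iinv = {full: sign(full, full)}                # I^{-1} = +/- I in Euclidean signature
A = rvec(n)                                    # a 1-blade
B = wedge(rvec(n), wedge(rvec(n), rvec(n)))    # a 3-blade
X = lc(A, B)                                   # a 2-vector; the claim: it is a blade
assert eq(X, gp(wedge(A, gp(B, I)), Iinv))     # A lc B = [A ^ (BI)] I^{-1}
assert eq(wedge(X, X), {})                     # X ^ X = 0, so X is simple
```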
Here is a more geometric perspective.
First, a basic identity. Let $T : V \to V$ be linear. This map extends uniquely to an outermorphism on the exterior algebra $$ T(A\wedge B) = T(A)\wedge T(B). $$ If $a, b$ are vectors, then the adjoint $\bar T$ of $T$ is defined by $$ \bar T(a)*b = a*T(b). $$ You can find that the adjoint of the outermorphism is the outermorphism of the adjoint, so this equation extends to multivectors. Now consider that $$\begin{aligned} C*T(\bar T(A)\lcontr B) & = \bar T(C)*(\bar T(A)\lcontr B) = (\bar T(C)\wedge\bar T(A))*B \\& = \bar T(C\wedge A)*B = (C\wedge A)*T(B) \\& = C*(A\lcontr T(B)) \end{aligned}$$ and thus $$ T(\bar T(A)\lcontr B) = A\lcontr T(B). $$ I justify this in terms of subspaces further below.
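The adjoint identity can be spot-checked with an explicit outermorphism. In the sketch below (Euclidean metric and orthonormal basis, so the adjoint's matrix is simply the transpose; helper names mine), a random matrix is extended to an outermorphism and the identity is tested on random multivectors:

```python
import random

# Multivectors over an orthonormal Euclidean basis, {bitmask: coefficient}.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def wedge(A, B):
    """Outer product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def rmv(n):
    return {k: random.uniform(-1, 1) for k in range(1 << n)}

def apply_vec(M, j, n):
    """Image of the basis vector e_{j+1} under the linear map with matrix M."""
    return {1 << i: M[i][j] for i in range(n)}

def om(M, A, n):
    """Outermorphism extension: T(a1 ^ ... ^ ak) = T(a1) ^ ... ^ T(ak)."""
    out = {}
    for a, ca in A.items():
        img = {0: ca}
        for j in range(n):
            if a & (1 << j):
                img = wedge(img, apply_vec(M, j, n))
        for k, v in img.items():
            out[k] = out.get(k, 0.0) + v
    return out

random.seed(5)
n = 4
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
Mt = [[M[j][i] for j in range(n)] for i in range(n)]   # adjoint = transpose here
A, B = rmv(n), rmv(n)
# T(Tbar(A) lc B) = A lc T(B)
assert eq(om(M, lc(om(Mt, A, n), B), n), lc(A, om(M, B, n)))
```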
Now consider the case that $B$ is a blade and $T = P_B$, the orthogonal projection onto the subspace of $V$ represented by $B$. It is easy to prove that $\bar P_B = P_B$; thus $$ P_B(P_B(A)\lcontr B) = A\lcontr B. \tag{$*$} $$ This proves two things:

- $A\lcontr B$ lies in the image of $P_B$, i.e. in the geometric algebra of the subspace represented by $B$;
- replacing $A$ with $P_B(A)$ in $(*)$ and using $P_B^2 = P_B$ gives $P_B(A)\lcontr B = P_B(P_B(A)\lcontr B) = A\lcontr B$, so only the projection of $A$ onto $B$ matters.
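Equation $(*)$ can also be verified directly. The sketch below (same assumptions as above; helper names mine) builds $P_B$ on vectors via the standard formula $P_B(x) = (x\lcontr B)\lcontr B^{-1}$, extends it as an outermorphism, and checks $(*)$ together with $P_B(A)\lcontr B = A\lcontr B$:

```python
import random

# Multivectors over an orthonormal Euclidean basis, {bitmask: coefficient}.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def gp(A, B):
    """Geometric product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def wedge(A, B):
    """Outer product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

def rmv(n):
    return {k: random.uniform(-1, 1) for k in range(1 << n)}

def rvec(n):
    return {1 << i: random.uniform(-1, 1) for i in range(n)}

def rev(A):
    """Reverse; a grade-r blade picks up (-1)^(r(r-1)/2)."""
    out = {}
    for a, ca in A.items():
        r = bin(a).count("1")
        out[a] = ca * (-1.0) ** (r * (r - 1) // 2)
    return out

def om(M, A, n):
    """Outermorphism extension of the linear map with matrix M."""
    out = {}
    for a, ca in A.items():
        img = {0: ca}
        for j in range(n):
            if a & (1 << j):
                img = wedge(img, {1 << i: M[i][j] for i in range(n)})
        for k, v in img.items():
            out[k] = out.get(k, 0.0) + v
    return out

random.seed(6)
n = 4
B = wedge(rvec(n), rvec(n))               # a 2-blade
norm2 = gp(B, rev(B)).get(0, 0.0)         # B * ~B = |B|^2 for a Euclidean blade
Binv = {k: v / norm2 for k, v in rev(B).items()}

# Matrix of the vector projection P_B(x) = (x lc B) lc B^{-1}.
P = [[0.0] * n for _ in range(n)]
for j in range(n):
    pj = lc(lc({1 << j: 1.0}, B), Binv)
    for i in range(n):
        P[i][j] = pj.get(1 << i, 0.0)

A = rmv(n)
assert eq(om(P, lc(om(P, A, n), B), n), lc(A, B))   # (*)
assert eq(lc(om(P, A, n), B), lc(A, B))             # P_B(A) lc B = A lc B
```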
Now suppose $A$ is also a blade. We already showed that $AI$ is a blade as well; geometrically, this corresponds to taking the orthogonal complement of $A$. It is easy to show that $$ A\lcontr I = AI. $$ But $B$ is a pseudoscalar for the subspace it represents, and $P_B(A)$ is a blade contained in this subspace. Thus $$ A\lcontr B = P_B(A)\lcontr B $$ is a blade. In fact, this proves the following geometric interpretation of the contraction: if $[X]$ is the subspace represented by a blade $X$ then $$ [A\lcontr B] = \begin{cases} V &\text{if }\exists v \in [A].\: v\perp[B],\\ [A]^\perp\cap[B] &\text{otherwise}. \end{cases} $$ So the contraction is essentially relative orthogonalization. Note that $$ P_B([A])^\perp\cap [B] = [A]^\perp\cap [B]. $$
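A concrete instance of this orthogonalization picture, with $B = e_1\wedge e_2$ the $xy$-plane of $\mathbb R^3$ (same bitmask sketch as before, helper names mine):

```python
# Multivectors over an orthonormal Euclidean basis, {bitmask: coefficient}.

def sign(a, b):
    """Sign from reordering the product of basis blades a, b into canonical order."""
    s, a = 0, a >> 1
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1.0 if s & 1 else 1.0

def wedge(A, B):
    """Outer product."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def lc(A, B):
    """Left contraction."""
    out = {}
    for a, ca in A.items():
        for b, cb in B.items():
            if a & ~b == 0:
                out[a ^ b] = out.get(a ^ b, 0.0) + sign(a, b) * ca * cb
    return out

def eq(A, B, tol=1e-9):
    return all(abs(A.get(k, 0.0) - B.get(k, 0.0)) < tol for k in set(A) | set(B))

B = wedge({1: 1.0}, {2: 1.0})    # e1 ^ e2, the xy-plane
a = {1: 1.0, 4: 1.0}             # a = e1 + e3; its projection onto [B] is e1
x = lc(a, B)
assert eq(x, {2: 1.0})           # a lc B = e2: inside [B], orthogonal to e1
assert eq(lc({4: 1.0}, B), {})   # e3 is orthogonal to [B], so e3 lc B = 0
```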
We can justify the adjoint equation $$ T(\bar T(A)\lcontr B) = A\lcontr T(B) $$ more geometrically. Consider a fixed vector $v$ and arbitrary $w \in \bar T(v)^\perp$: $$ \bar T(v)\cdot w = 0 = v\cdot T(w). $$ What this is saying is that $\bar T$ is the unique map (up to scaling of some sort) such that $$ T(\bar T(v)^\perp) \subseteq v^\perp \quad\text{or equivalently}\quad \bar T(T(v)^\perp) \subseteq v^\perp. $$ If $S$ is a subspace then this generalizes to $$ T(\bar T(S)^\perp) \subseteq S^\perp $$ with equality when $T$ (and hence $\bar T$) is bijective. You can see this as follows: $$ T(\bar T(S)^\perp) = T(\bigcap_{v \in \bar T(S)}v^\perp) = T(\bigcap_{v \in S}\bar T(v)^\perp) \subseteq \bigcap_{v \in S}T(\bar T(v)^\perp) \subseteq \bigcap_{v \in S}v^\perp = S^\perp, $$ with equality in the case of bijectivity following from dimension counting. A direct consequence is the restriction to relative orthogonal complements $S^\perp\cap R$: $$ T(\bar T(S)^\perp\cap R) \subseteq S^\perp\cap T(R). $$ This is precisely the analog of $$ T(\bar T(A)\lcontr B) = A\lcontr T(B) $$ with $A$ playing the role of $S$ and $B$ the role of $R$.
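The defining property $\bar T(T(v)^\perp)\subseteq v^\perp$ is easy to check with plain matrices, taking the transpose as the adjoint (a sketch on random data, not a proof):

```python
import random

random.seed(7)
n = 4
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
Mt = [[M[j][i] for j in range(n)] for i in range(n)]   # the adjoint's matrix

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

v = [random.uniform(-1, 1) for _ in range(n)]
Tv = matvec(M, v)
# Produce an arbitrary w orthogonal to T(v) by one Gram-Schmidt step.
u = [random.uniform(-1, 1) for _ in range(n)]
coef = dot(u, Tv) / dot(Tv, Tv)
w = [ui - coef * ti for ui, ti in zip(u, Tv)]
assert abs(dot(w, Tv)) < 1e-9               # w is in T(v)^perp
assert abs(dot(matvec(Mt, w), v)) < 1e-9    # and Tbar(w) lands in v^perp
```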