From the 3rd edition of the book "The Linear Algebra a Beginning Graduate Student Ought to Know" by Jonathan S. Golan, we find the following statement in chapter 4:
Note that if $\{v,w\}$ is a linearly-dependent set of vectors in an anticommutative algebra $(K,•)$ over a field of characteristic other than 2, then there exist scalars $a$ and $b$, not both equal to 0, such that $av + bw = 0_K$ . Relabeling if necessary, we can assume that $b\ne0$. Then $0_K =a(v•v)+b(v•w)=b(v•w)$ and so $v•w=0_K$. A simple induction argument shows that if $D$ is a linearly-dependent subset of $K$ then $v_1•···•v_k=0_K$ for any finite subset $\{v_1,...,v_k\}$ of $D$.
If we additionally assume associativity, the claimed result at the end of the paragraph is easy to demonstrate. Indeed, if:
$$ a_1v_1 + a_2v_2 + ... + a_nv_n = 0_K$$
where, WLOG, $a_n \ne 0$, then:
$$ (v_1•v_2•v_3•...•v_{n-1})•(a_1v_1 + a_2v_2 + ... + a_nv_n) = (v_1•v_2•v_3•...•v_{n-1})•0_K = 0_K$$
After distributing, we have:
$$ a_1(v_1•v_2•v_3•...•v_{n-1}•v_1) + a_2(v_1•v_2•v_3•...•v_{n-1}•v_2) + ... + a_{n-1}(v_1•v_2•v_3•...•v_{n-1}•v_{n-1}) + a_{n}(v_1•v_2•v_3•...•v_{n-1}•v_n) = 0_K $$
By associativity and anticommutativity, every summand except the last can, for each $i$ in $\{1,2,...,n-1\}$, be rearranged into the form:
$$\pm a_i(...v_{i-2}•v_{i-1}•(v_{i}•v_{i})•v_{i+1}•v_{i+2}•...) = \pm a_i(...v_{i-2}•v_{i-1}•(0_K)•v_{i+1}•v_{i+2}•...) =\pm a_i 0_K = 0_K$$
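For concreteness, here is the $n=3$, $i=1$ instance of this rearrangement spelled out (left-nested bracketing, using anticommutativity once and then associativity):
$$a_1\big((v_1•v_2)•v_1\big) = -a_1\big(v_1•(v_1•v_2)\big) = -a_1\big((v_1•v_1)•v_2\big) = -a_1(0_K•v_2) = 0_K$$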
Hence, we are left with the desired statement:
$$ a_{n}(v_1•v_2•v_3•...•v_{n-1}•v_n) = 0_K $$ $$ v_1•v_2•v_3•...•v_{n-1}•v_n = a_{n}^{-1}0_K = 0_K $$
Note that we showed the result directly, without induction. But the premise presented in the book clearly does not assume associativity, so we cannot simply rearrange terms as we did above, and our approach fails to establish the claim under the general interpretation of the book's statement.
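The associative case can also be checked computationally. Below is a minimal sketch (the helper names `wedge` and `vec` are my own) using the exterior algebra $\Lambda(\Bbb R^n)$, whose wedge product is associative and anticommutative on degree-1 elements; as the argument above predicts, the product over a linearly dependent set of vectors vanishes:

```python
def wedge(a, b):
    """Wedge product of two multivectors, each represented as a dict
    mapping sorted tuples of basis indices to coefficients."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            if set(ia) & set(ib):
                continue  # repeated basis vector: the term is zero
            merged = list(ia) + list(ib)
            # sign of the permutation that sorts the indices,
            # computed by counting swaps in a bubble sort
            sign, n = 1, len(merged)
            for i in range(n):
                for j in range(n - 1 - i):
                    if merged[j] > merged[j + 1]:
                        merged[j], merged[j + 1] = merged[j + 1], merged[j]
                        sign = -sign
            key = tuple(merged)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: c for k, c in out.items() if c != 0}

def vec(coeffs):
    """Degree-1 element sum_i coeffs[i] * e_i."""
    return {(i,): c for i, c in enumerate(coeffs) if c != 0}

v1, v2 = vec([1, 0, 0]), vec([0, 1, 0])
v3 = vec([1, 1, 0])  # v3 = v1 + v2, so {v1, v2, v3} is dependent
print(wedge(wedge(v1, v2), v3))  # {} , i.e. the zero multivector
```

The empty dict is the zero element here, so the product over the dependent set $\{v_1, v_2, v_1+v_2\}$ is indeed $0$, while an independent triple such as $e_1, e_2, e_3$ gives a nonzero product.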
In general, the closest I have gotten is to take the base case provided in the book, assume the inductive hypothesis, and show that for any integer $n$:
$$ v_1•(a_1v_1 + a_2v_2 + ... + a_nv_n) = v_1•0_K= 0_K$$ $$ a_1(v_1•v_1) + a_2(v_1•v_2) + ... + a_n(v_1•v_n) = a_2(v_1•v_2) + ... + a_n(v_1•v_n) = 0_K $$
where we have assumed, again WLOG, that $a_n$ is non-zero. We can then use the inductive hypothesis to assert that:
$$ (v_1•v_2)•(v_1•v_3)•...•(v_1•v_n) = 0_K $$
Note that this is very far from showing the required equation "$v_1•···•v_k=0_K$". So what could the simple induction argument be that the book is talking about?
P.S. Since associativity is not granted, it technically isn't even clear what the expression "$v_1•···•v_k$" represents. I have assumed it should be evaluated as:
"$v_1•···•v_k=(...((v_1•v_2)•v_3)•···•v_k)$"
On the other hand, since the expression is ambiguous, is this a hint that we should assume associativity? Or is there a specific interpretation of this expression that evaluates to 0 even without associativity?
$\newcommand{\bul}{\bullet}$As pointed out in the comments, the statement is trivially wrong in the way stated.
Even setting that aside, the statement is false if we do not assume associativity.
Consider $F = \Bbb R$ and $K = \Bbb R^3$ with $\bul$ being the usual cross-product $\times$. (This is indeed an anticommutative algebra, a Lie algebra even.)
Consider $D = \{e_1, e_2, 2e_1\}$. This set is clearly linearly dependent. However, note that $$(e_1 \bul e_2) \bul 2e_1 = e_3 \bul 2e_1 = 2e_2 \neq 0.$$
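For anyone who wants to verify this numerically, here is a quick sketch in plain Python (the helper `cross` is just the standard cross product on $\Bbb R^3$); both bracketings of the product come out nonzero:

```python
def cross(u, v):
    # standard cross product on R^3: anticommutative but not associative
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

e1, e2 = (1, 0, 0), (0, 1, 0)
two_e1 = (2, 0, 0)

# (e1 x e2) x 2e1 = e3 x 2e1 = 2e2
print(cross(cross(e1, e2), two_e1))  # (0, 2, 0)
# the other bracketing, e1 x (e2 x 2e1), is also nonzero
print(cross(e1, cross(e2, two_e1)))  # (0, 2, 0)
```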