There are lectures by Theodore Shifrin on differential forms, and sadly one video ends suddenly while he is explaining some notation. I will try to formulate it in my own words:
When $k=n$, we have $D(\vec{v_{1}},\dots,\vec{v_{n}})$ (where $D$ means determinant). What about $k=2$, $n=3$? Now we have to fill the remaining slot with an extra vector $\vec{a} \in \Bbb R^{3}$. Take $\vec{a}$ to be each of $\vec{e_{1}}, \vec{e_{2}}, \vec{e_{3}}$ in turn. What do we get?
$\begin{vmatrix} | & | & 1 \\ \vec{v_{1}} & \vec{v_{2}} & 0 \\ | & | & 0 \end{vmatrix} = \begin{vmatrix} v_{1,2} & v_{2,2} \\ v_{1,3} & v_{2,3} \end{vmatrix} $ where, for example, $v_{1,2}$ means the second coordinate of $\vec{v_{1}}$, and so on. Of course this is what you get from the cofactor expansion along the third column, and you can do the same with the remaining $e_i$. Then the lecture suddenly ends, and the next part begins where he introduces new notation. In my own words: take $d\vec{x}_I$, where $I$ is a multi-index with $1 \leq i_{1},i_{2},\dots,i_{k}\leq n$. Then $d\vec{x}_I(\vec{v_1},\dots,\vec{v_k})$ is the determinant of the $k\times k$ matrix obtained by taking rows $i_1,\dots,i_k$ of the $n\times k$ matrix whose columns are $\vec{v_1},\dots,\vec{v_k}$. Then he takes an example with $\vec{v_1}=(1,2,4)$ and $\vec{v_2}=(-1,0,5)$:
$d\vec{x}_{31}(\vec{v_1},\vec{v_2})= \begin{vmatrix} 4 & 5 \\ 1 & -1 \end{vmatrix} = -9.$ In words: take the third coordinates of $\vec{v_1}$ and $\vec{v_2}$ as the first row, and their first coordinates as the second row. But does this actually mean $d\vec{x}_{31}(\vec{v_1},\vec{v_2})= \begin{vmatrix} 1 & -1 & 0 \\ 2 & 0 & 1 \\ 4 & 5 & 0 \end{vmatrix} = \begin{vmatrix} 4 & 5 \\ 1 & -1 \end{vmatrix} = -9 $? So he took $\vec{e_{2}}$ as a third vector and expanded along the third column, whose single nonzero entry sits in the second row? I don't see how the end of this lecture https://www.youtube.com/watch?v=Nh5XFX0iKgE&list=UUp9W-et2Zbx7u5_VMiXGtPQ connects to the beginning of this lecture https://www.youtube.com/watch?v=ZFPWK2gHGrY&list=UUp9W-et2Zbx7u5_VMiXGtPQ
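If it helps, the example is easy to check numerically. Below is a quick sketch with NumPy (the variable names `sub` and `full` are mine, not from the lecture) comparing the $2\times 2$ subdeterminant on rows $3$ and $1$ with the full $3\times 3$ determinant that has $\vec{e_2}$ in the last column; both come out to $-9$:

```python
# Numerical check: the 2x2 subdeterminant dx_{31}(v1, v2) agrees with the
# full 3x3 determinant D(v1, v2, e2). Names here are my own, not Shifrin's.
import numpy as np

v1 = np.array([1.0, 2.0, 4.0])
v2 = np.array([-1.0, 0.0, 5.0])
e2 = np.array([0.0, 1.0, 0.0])

# dx_{31}(v1, v2): rows 3 and 1 (in that order) of the 3x2 matrix [v1 v2]
M = np.column_stack([v1, v2])          # columns are v1 and v2
sub = np.linalg.det(M[[2, 0], :])      # 0-based row indices for rows 3 and 1

# The same number as a full 3x3 determinant, with e2 filling the last slot
full = np.linalg.det(np.column_stack([v1, v2, e2]))

print(sub, full)                       # both print -9 (up to floating point)
```

The sign works out because the cofactor of the $1$ in position $(2,3)$ carries the factor $(-1)^{2+3}=-1$, which is exactly the row swap turning $\begin{vmatrix} 1 & -1 \\ 4 & 5 \end{vmatrix}$ into $\begin{vmatrix} 4 & 5 \\ 1 & -1 \end{vmatrix}$.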
You can express $d\mathbf x_I$ in terms of the $n\times n$ determinant by putting standard basis vectors in all the remaining slots, yes, but it is not necessary to do so. For example, with vectors in $\Bbb R^4$, $$d\mathbf x_{12}(v_1,v_2) = D(v_1,v_2,e_3,e_4),$$ as you can check by expansion in cofactors. But the key fact is that for the different increasing multi-indices $I$, i.e., $1\le i_1<i_2<\dots<i_k\le n$, we get $\binom nk$ linearly independent multilinear maps that, in fact, give a basis for $\Lambda^k(\Bbb R^n)^*$.
I don't remember exactly what happened in those few minutes at the end of the first lecture. I think one of the students had suggested that we could get multilinear functions in several variables by putting in fixed vectors in the remaining slots of the determinant (as in the example above), but I wanted them to see that that corresponded to doing $k\times k$ subdeterminants.