How do I see that for a $K$-vector space $V$ the map
$\bigwedge^d(V^*) \times \bigwedge^d(V) \rightarrow K, \quad (f_1 \wedge \dots \wedge f_d,\ x_1 \wedge \dots \wedge x_d) \mapsto \det\big((f_i(x_j))_{i,j}\big)$
is bilinear?
The given formula certainly gives a well-defined mapping
$$\varphi:V^*\times\dots\times V^*\ \times\ V\times\dots\times V \longrightarrow K$$
Fixing all but one argument makes the matrix inside the determinant vary linearly in a single row or column.
This shows that $\varphi$ is multilinear, so that it factors through $$V^*\otimes\dots\otimes V^*\ \otimes\ V\otimes\dots\otimes V \longrightarrow K$$
Finally, if $f_i=f_j$ [or $x_i=x_j$] with $i\ne j$, the matrix of the determinant will have two identical rows [columns], which shows that restricting to the first [second] $d$ variables gives an alternating multilinear map, and hence it factors through
$$(V^*\land\dots\land V^*)\ \otimes\ (V\land\dots\land V)\ .$$
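A quick numerical sanity check of this multilinearity (a minimal NumPy sketch; the names `pairing`, `F`, `X` and the dimensions are my own choices, with functionals encoded as rows of `F` and vectors as columns of `X` in some coordinates):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 5  # wedge degree d, dim V = n (hypothetical sizes)

def pairing(F, X):
    """<f_1 ^ ... ^ f_d, x_1 ^ ... ^ x_d> = det((f_i(x_j))_{i,j}).
    F: d x n matrix whose rows are the functionals f_i,
    X: n x d matrix whose columns are the vectors x_j."""
    return np.linalg.det(F @ X)

F = rng.standard_normal((d, n))
X = rng.standard_normal((n, d))
a, b = 2.0, -1.5

# Linearity in one vector slot: replace x_1 by a*x_1 + b*y.
y = rng.standard_normal(n)
Xa = X.copy(); Xa[:, 0] = a * X[:, 0] + b * y
Xy = X.copy(); Xy[:, 0] = y
assert np.isclose(pairing(F, Xa), a * pairing(F, X) + b * pairing(F, Xy))

# Linearity in one functional slot: replace f_1 by a*f_1 + b*g.
g = rng.standard_normal(n)
Fa = F.copy(); Fa[0] = a * F[0] + b * g
Fg = F.copy(); Fg[0] = g
assert np.isclose(pairing(Fa, X), a * pairing(F, X) + b * pairing(Fg, X))
```

Each check works because the determinant is linear in the single row or column that the varied argument occupies.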
I will try to write an answer that clarifies the definition of the map in the OP. By definition, it is a (multi)linear map. The main tool is "universality" when working with elements in the category of vector spaces and (multi)linear maps. (This would not fit as a comment, and it would be hard to type without markup control.)
(1) Let us fix some field $K$ (of characteristic $\ne 2$, or maybe even $0$, to exclude any problems with the definition of the wedge space).
We work in the category of vector spaces over $K$.
Functorially, if $f:V\to V'$ and $g:W\to W'$ are linear maps, then $f\otimes g$ is a linear map $V\otimes W\to V'\otimes W'$. The same holds for more tensor factors.
(2) Consider now "the $V$" from the OP. We have an evaluation map $V^*\otimes V\to K$.
Using it we can define, for a fixed pair $(i_0,j_0)$, the map $$ \left(\times_{i=1}^dV^*\right)\ \otimes\ \left(\times_{j=1}^dV\right) \to K\ , $$ $$ (f_1,f_2,\dots,f_d)\otimes(x_1,x_2,\dots,x_d) \mapsto f_{i_0}(x_{j_0})\ . $$
(3) Putting together all the above maps for all possible values of $(i_0, j_0)$, so that the image space is the space of $d\times d$ matrices, we obtain the map: $$ \left(\times_{i=1}^dV^*\right)\ \otimes\ \left(\times_{j=1}^dV\right) \to M_{d\times d}(K)\ , $$ $$ (f_1,f_2,\dots,f_d)\otimes(x_1,x_2,\dots,x_d) \mapsto \Big[\ f_{i_0}(x_{j_0})\ \Big]_{1\le i_0,j_0\le d}\ . $$ As it stands, this map is linear in each component, but it is not "balanced", i.e. we cannot move a scalar from one $f$-component to another, or from one $x$-component to another.
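In coordinates this matrix-valued map is just a matrix product; a hedged NumPy sketch (the names `matrix`, `fs`, `xs` and the dimensions are my own illustration) of its linearity in a single component:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 4
fs = rng.standard_normal((d, n))  # rows: the functionals f_1, ..., f_d
xs = rng.standard_normal((d, n))  # rows: the vectors x_1, ..., x_d

def matrix(fs, xs):
    """(f_1,...,f_d) (x) (x_1,...,x_d)  ->  [ f_i(x_j) ]  in M_{dxd}(K)."""
    return fs @ xs.T

# Replacing f_1 by a*f_1 + b*g changes only row 1, and changes it linearly.
a, b = 2.0, -1.0
g = rng.standard_normal(n)
fs_comb = fs.copy(); fs_comb[0] = a * fs[0] + b * g
fs_g = fs.copy();    fs_g[0] = g

M, Mg, Mc = matrix(fs, xs), matrix(fs_g, xs), matrix(fs_comb, xs)
assert np.allclose(Mc[0], a * M[0] + b * Mg[0])  # row 1: linear combination
assert np.allclose(Mc[1:], M[1:])                # other rows untouched
```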
(4) Consider now the composition $$ \Phi\ :\ \left(\times_{i=1}^dV^*\right)\ \otimes\ \left(\times_{j=1}^dV\right) \to M_{d\times d}(K) \overset\det\longrightarrow K\ . $$ This composition $\Phi$ is now balanced by the properties of the determinant. For instance, $(af_1,f_2,\dots,f_d)\otimes x$ and/or $f\otimes(ax_1,x_2,\dots,x_d)$ is mapped via (3) to the matrix obtained from the one for $f\otimes x=(f_1,f_2,\dots,f_d)\otimes (x_1,x_2,\dots,x_d)$ by multiplying the first matrix row/column by the scalar $a\in K$.
Applying $\det$, the scalar $a$ becomes a factor of the result.
The same computation works when $a$ sits in another component of $f$ and/or of $x$.
So the balancing property holds after applying $\det$.
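This balancing step can also be checked numerically (a sketch under the same hypothetical coordinate conventions as before, functionals as rows of `F`, vectors as columns of `X`): scaling $f_1$ scales a row, scaling $x_1$ scales a column, so the intermediate matrices differ, yet both determinants pick up the same factor $a$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 4
F = rng.standard_normal((d, n))   # rows: functionals f_i
X = rng.standard_normal((n, d))   # columns: vectors x_j
a = 2.5

Fa = F.copy(); Fa[0] *= a         # (a*f_1, f_2, ..., f_d)
Xa = X.copy(); Xa[:, 0] *= a      # (a*x_1, x_2, ..., x_d)

M_row, M_col = Fa @ X, F @ Xa     # row 1 scaled vs column 1 scaled
assert not np.allclose(M_row, M_col)             # different matrices...
d0 = np.linalg.det(F @ X)
assert np.isclose(np.linalg.det(M_row), a * d0)  # ...but det picks up a
assert np.isclose(np.linalg.det(M_col), a * d0)  # either way
```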
(5) The balancing implies we have an induced map $\bar\Phi$: $$ \bar\Phi\ :\ \left(\bigotimes_{i=1}^dV^*\right)\ \otimes\ \left(\bigotimes_{j=1}^dV\right) \to K\ . $$ On an element $f_1\otimes f_2\otimes\dots\otimes f_d$ tensored with $x_1\otimes x_2\otimes\dots\otimes x_d$ it is defined by lifting to (linear combinations of) elements $(f_1,f_2,\dots,f_d)\otimes(x_1,x_2,\dots,x_d)$ and applying $\Phi$ (followed by linear reassembly).
The result does not depend on the lifts: every relation that has to be checked for $\bar\Phi$ has an equivalent counterpart at the level of $\Phi$.
(6) It remains to observe that $\bar\Phi$ is alternating in its $f$-components, and also in its $x$-components. We only need to show this at the level of $\Phi$.
So we first have to compare the results of applying $\Phi$ to the elements
$(\color{blue}{f_1,f_2},\dots,f_d)\otimes(x_1,x_2,\dots,x_d)$ and respectively
$(\color{blue}{f_2,f_1},\dots,f_d)\otimes(x_1,x_2,\dots,x_d)$
(and on all other cases of a change implemented by a transposition of two indices).
Applying $\Phi$ to the above two elements, the intermediate matrix stage delivers two matrices with the first and second rows interchanged, so applying $\det$ produces a sign difference; the same happens in the other cases, for any transposition of indices in the $f$-components.
This shows the alternating relation for the $f$-components.
A similar argument, applied to the comparison of the $\Phi$-values on
$(f_1,f_2,\dots,f_d)\otimes(\color{blue}{x_1,x_2},\dots,x_d)$ and respectively
$(f_1,f_2,\dots,f_d)\otimes(\color{blue}{x_2,x_1},\dots,x_d)$
and, in the more general case, on the values obtained using a transposition $(j_1,j_2)$ instead of $(1,2)$ as above,
leads to the comparison of two determinants whose matrices differ by exchanging two columns; again we deduce the alternating relation, this time in the $x$-components.
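Both sign computations can be confirmed numerically (same hypothetical conventions as in the earlier sketches): swapping two $f$'s swaps two rows of the intermediate matrix, swapping two $x$'s swaps two columns, and either swap flips the sign of the determinant:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 4
F = rng.standard_normal((d, n))   # rows: functionals f_i
X = rng.standard_normal((n, d))   # columns: vectors x_j

pairing = lambda F, X: np.linalg.det(F @ X)

F_swap = F[[1, 0, 2]]             # transposition f_1 <-> f_2 (rows)
X_swap = X[:, [1, 0, 2]]          # transposition x_1 <-> x_2 (columns)
assert np.isclose(pairing(F_swap, X), -pairing(F, X))
assert np.isclose(pairing(F, X_swap), -pairing(F, X))
```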
(7) We can thus factorize through the wedge product, getting a final map $\hat\Phi$: $$ \hat\Phi\ :\ \left(\wedge_{i=1}^dV^*\right)\ \otimes\ \left(\wedge_{j=1}^dV\right) \to K\ . $$ (In characteristic zero the wedge product can be realized either as a subobject or as a quotient of the tensor product. The factorization makes sense when the quotient is taken, and the map is already alternating.)