I am currently going through the abstract tensor notation in Wald's "General Relativity". I understand the purpose of it, but I need help understanding some of the conventions and definitions. First, Wald writes: "Since a metric [tensor] g is a tensor of type $(0,2)$, it is denoted $g_{ab}$. If we apply the metric to a vector, $v^a$, we get the dual vector $g_{ab}v^b$. It is convenient to denote this vector as simply $v_a$, thus making notationally explicit the isomorphism between $V_p$ [the tangent space at $p$] and $V_p^*$ defined by $g_{ab}$".
Let me try to unpack what he just said. The metric tensor $g_{ab}$ is a covariant $2$-tensor on $V_p$, so strictly speaking it does not make sense to apply the metric to a single vector $v^a$ as Wald says, but I suppose he means that $g_{ab}(v^a, \cdot)$ is a dual vector if $v^a$ is held fixed (by the way, does $v^a$ have to go in the first slot because the metric tensor is denoted $g_{ab}$ and not $g_{ba}$?). Is this correct? Wald goes on to say that this dual vector is $g_{ab}v^b$. Now, in a previous paragraph he wrote that writing two tensors side by side denotes the tensor product, so $g_{ab}v^b$ is short for $g_{ab} \otimes v^b$, according to Wald. But surely he does not mean that the dual vector $g_{ab}(v^a, \cdot)$ is equal to $g_{ab} \otimes v^b$? That would be a tensor of type $(1,2)$. Could someone please clear up this confusion for me?
OK, let's formalise all that has been said in the comments:
Let $(v_1,\dots,v_n)$ and $(v^1,\dots,v^n)$ be bases of $V_p$ and $V^*_p$, respectively. Take a tensor $\tau \in \mathcal{T}^r_s(V_p)$, and pick indices $k \leq r$, $l \leq s$. Then we define the contraction $C^k_l\tau \in \mathcal{T}^{r-1}_{s-1}(V_p)$ as
$$C^k_l\tau(f^1,\dots,f^{r-1},w_1,\dots,w_{s-1}):=\sum_{a=1}^{n}\tau(f^1,\dots,\underbrace{v^a}_{\text{$k$-th position}},\dots,f^{r-1},w_1,\dots,\underbrace{v_a}_{\text{$l$-th position}},\dots,w_{s-1}).$$
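In concrete components the contraction is just a sum over a paired index; for a $(1,1)$-tensor it reduces to the ordinary matrix trace. A minimal numerical sketch with `numpy.einsum` (my own illustration, not part of the text):

```python
import numpy as np

# Sketch: for a (1,1)-tensor T with components T^i_j, the contraction
# C^1_1 T is the scalar sum_a T^a_a, i.e. the matrix trace.
n = 3
rng = np.random.default_rng(0)
T = rng.standard_normal((n, n))      # components T^i_j: axis 0 upper, axis 1 lower

contraction = np.einsum('aa->', T)   # pair the upper and lower index and sum
assert np.isclose(contraction, np.trace(T))
```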
Recall that an $(r,s)$-tensor can be expanded in the basis of $\mathcal{T}^r_s(V_p)$ induced by these bases. So, take indices $1 \leq i_1, \dots, i_r \leq n$ and $1 \leq j_1, \dots, j_s \leq n$, and denote the multi-indices by $I = i_1\dots i_r$ and $J = j_1 \dots j_s$, respectively. Then an $(r,s)$-tensor $\tau$ is the sum over all such $I$'s and $J$'s
$$\tau = \sum_J \sum_I \tau^I_J \, v_I \otimes v^J = \tau^{i_1\dots i_r}_{j_1\dots j_s} \, v_{i_1} \otimes \dots \otimes v_{i_r} \otimes v^{\, j_1} \otimes \dots \otimes v^{\, j_s},$$
where the components are given by $\tau^{i_1\dots i_r}_{j_1\dots j_s} = \tau (v^{\, i_1}, \dots, v^{\, i_r}, v_{j_1}, \dots, v_{j_s})$. The last equality is just the *Einstein summation convention*.
Returning to the contraction: since it is an $(r-1,s-1)$-tensor, its components are
$$\left( C^k_l \tau \right)^{i_1 \dots \hat{i_k} \dots i_r}_{j_1 \dots \hat{j_l} \dots j_s} := \tau^{i_1 \dots a \dots i_r}_{j_1 \dots a \dots j_s} \quad (\text{sum over } a),$$
where $\hat{i_k}$, $\hat{j_l}$ denote omission.
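To see this component formula in action, here is a small numerical check of mine (not from the text): for a $(1,2)$-tensor with components $\tau^i_{\;jk}$, contracting the upper index with the first lower index leaves the $(0,1)$-tensor $(C^1_1\tau)_k = \tau^a_{\;ak}$.

```python
import numpy as np

# Check of the component formula for the contraction of a (1,2)-tensor:
# (C^1_1 tau)_k = sum_a tau^a_{ak}.
n = 3
rng = np.random.default_rng(1)
tau = rng.standard_normal((n, n, n))   # axes: upper i, lower j, lower k

C_tau = np.einsum('aak->k', tau)       # contract upper slot with first lower slot
manual = sum(tau[a, a, :] for a in range(n))
assert np.allclose(C_tau, manual)
```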
Tackling your example: we have $g \in \mathcal{T}_2(V_p)$ with $j_1 = a$, $j_2 = b$, which written in component form is $g = g_{ab}$. The act of "applying" a vector to the metric is then equivalent to applying the contraction $C^c_b$ to the tensor product $gv \in \mathcal{T}^1_2(V_p)$, with components $g_{ab}v^c$; the result $C^c_b\, gv$ lies in $\mathcal{T}_1(V_p)=V_p^*$. Using the component notation introduced above, we get
$$v_a := \left(C^c_b\, gv \right)_a = g_{ab} v^b \quad \text{(sum over $b$)}.$$
You can also get $v_b$ by contracting over $a$ instead:
$$v_b := \left( C^c_a\, gv \right)_b = g_{ab}v^a;$$
since $g_{ab}$ is symmetric, the two results agree.
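As a concrete sanity check (my own sketch; the Minkowski metric is chosen purely for illustration, any nondegenerate symmetric $g_{ab}$ works), lowering an index is a single `einsum` contraction:

```python
import numpy as np

# Index lowering v_a = g_{ab} v^b, sketched with the Minkowski metric
# diag(-1, 1, 1, 1) as an example choice of g_{ab}.
g = np.diag([-1.0, 1.0, 1.0, 1.0])    # components g_{ab}
v = np.array([2.0, 1.0, 0.0, 3.0])    # components v^b

v_lower = np.einsum('ab,b->a', g, v)  # v_a = g_{ab} v^b (sum over b)
print(v_lower)                        # → [-2.  1.  0.  3.]

# Because g_{ab} is symmetric, contracting over a instead gives the same covector:
assert np.allclose(v_lower, np.einsum('ab,a->b', g, v))
```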
edit: There is a natural map $V \to V^{**}$ that sends $v\in V$ to $\hat{v} \in V^{**}$, where $\hat{v}$ is defined by $\hat{v}(f) := f(v)$ for any $f \in V^*$. For finite-dimensional vector spaces, this map is an isomorphism. Now, since $v \in V$, we can express it as a linear combination of elements of a basis of $V$, let's say $v = \sum_{k=1}^n \alpha_k e_k$. It turns out that if we evaluate $\hat{v}$ on the elements of the corresponding dual basis $\{e^1, \dots, e^n \}$ of $V^*$, we get the coefficients of $v$:
$$\hat{v}(e^k):= e^k(v)= \alpha_k.$$
Now, $\hat{v} \in \mathcal{T}^1(V)$, so we can form the tensor product $g \otimes \hat{v} \in \mathcal{T}^1_2(V)$, given by:
$$g \otimes \hat{v} (f^1, w_1, w_2) := g\hat{v} (f^1, w_1, w_2) = g(w_1,w_2) \cdot \hat{v}(f^1).$$
Now, if we denote $g=g_{ab}$, $\hat{v}= \hat{v}^c$, and form the contraction $C^c_b\, g\hat{v}$, we obtain:
$$C^c_b g \hat{v}(-):= \sum_{b=1}^n g\hat{v}(e^b,-,e_b) = \sum_{b=1}^n g(-, e_b)\cdot \hat{v}(e^b) = \sum_{b=1}^n g(-, e_b) \cdot \alpha_b$$
$$= \sum_{b=1}^n g(-,\alpha_b e_b) = g(-,v), $$ which is the evaluation of the vector $v$ in the second slot of $g$.
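This final identity can also be verified numerically. The following sketch of mine (with a random symmetric $g$, chosen just so it is metric-like) checks that the contraction, viewed as a covector, sends any $w$ to $g(w, v)$:

```python
import numpy as np

# Check that C^c_b (g ⊗ v̂), as a covector, acts by w ↦ g(w, v).
n = 4
rng = np.random.default_rng(2)
g = rng.standard_normal((n, n))
g = (g + g.T) / 2                       # symmetrise so g is metric-like
v = rng.standard_normal(n)
w = rng.standard_normal(n)

covector = np.einsum('ab,b->a', g, v)   # components g_{ab} v^b of the contraction
assert np.isclose(covector @ w, w @ g @ v)   # equals g(w, v)
```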