Here is the Theorem:
Let $V$ and $W$ be finite-dimensional vector spaces over $F$ with ordered bases $\beta = \{x_1, \ldots, x_n\}$ and $\gamma = \{y_1, \ldots, y_m\}$ respectively. For any linear transformation $T : V \to W$, the mapping $T^T : W^* \to V^*$ defined by $T^T(g) = gT$ for all $g \in W^*$ is a linear transformation with the property that $[T^T]_{\gamma^*}^{\beta^*} = ([T]_\beta^\gamma)^T$.
At some point in his proof he derives the formula $$T^T(g_j) = g_j T = \sum\limits_{s = 1}^{n}(g_j T)(x_s)f_s$$ where $ \beta^* = \{f_1, \ldots, f_n\}$ and $\gamma^* = \{g_1, \ldots, g_m\}$ are the dual bases, and then claims that the $(i, j)^{\text{th}}$ entry of $[T^T]_{\gamma^*}^{\beta^*}$ is
$$(g_jT)(x_i)$$
I don't understand what he does here to make this claim. Could somebody please clarify?
You have not explicitly said so, but I suspect that $(x_1,\dots,x_n),(y_1,\dots,y_m)$ are meant to denote bases for $V$ and $W$, and $(f_1,\dots,f_n),(g_1,\dots,g_m)$ are the corresponding dual bases for $V^*$ and $W^*$. Please correct me if I am wrong.
Recall that for a transformation $\alpha:V \to W$, the entries $a_{ij}$ of $[\alpha]^\gamma_\beta$ are defined so that $$ \alpha(x_j) = \sum_{i=1}^m a_{ij}y_i. $$

With that in mind, let $A$ denote the matrix $[T^\top]_{\gamma^*}^{\beta^*}$ of the transformation $T^\top:W^* \to V^*$. By the above definition, this means that $$ T^\top(g_j) = \sum_{k=1}^n a_{kj} f_k, $$ where I have switched the summation index for clarity. Since the dual basis satisfies $f_k(x_i) = \delta_{ki}$, it follows that $$ (g_j T)(x_i) = (T^\top(g_j))(x_i) = \left( \sum_{k=1}^n a_{kj} f_k\right)(x_i) = \sum_{k=1}^n f_k(x_i) a_{kj} = a_{ij}. $$ So, the $(i,j)$ entry of $[T^\top]_{\gamma^*}^{\beta^*}$ is indeed $(g_j T)(x_i)$.
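If it helps to see this numerically, here is a small sketch (my own example, not from the book) with $V = \mathbb{R}^3$, $W = \mathbb{R}^2$, and the standard bases, so that each dual functional $g_j$ simply picks out the $j$-th coordinate and $(g_jT)(x_i)$ is the $j$-th coordinate of $Tx_i$:

```python
import numpy as np

# A hypothetical matrix [T]_beta^gamma for T : R^3 -> R^2
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
m, n = A.shape

# Standard basis vectors x_1, ..., x_n of V = R^n (columns of the identity)
x = np.eye(n)

# Entry (i, j) of [T^T]_{gamma*}^{beta*} should be (g_j T)(x_i),
# i.e. the j-th coordinate of A @ x_i.
B = np.array([[(A @ x[:, i])[j] for j in range(m)] for i in range(n)])

# B should be exactly the transpose of A
assert np.array_equal(B, A.T)
```

The assertion passes because $(g_jT)(x_i)$ is precisely $A_{ji}$, which is what the derivation above shows in general.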