I was recently reading a set of lecture notes on vector calculus, a topic I am already familiar with. While doing so, I came across this representation of the gradient vector...
$$\frac{\partial \phi}{\partial x^{i}} {e}_{i} = \nabla \phi $$
I don't understand why the index on the derivative is raised while the index on the basis vector is lowered. I have a basic grasp of the metric tensor from studying special relativity, but I don't see how it would play a role here.
This is tensor notation, as you have rightly surmised.
The position vector $\boldsymbol{x}$ is a contravariant vector, or a rank $(1,0)$ tensor.
As such, we can use the Einstein summation notation to write $\boldsymbol{x} = x^i\boldsymbol{e}_i := \displaystyle\sum_i{x^i\boldsymbol{e}_i}\,$, where $x^i$ are the vector components and $\boldsymbol{e}_i$ the basis vectors of a vector space $V$, and $i$ runs over all indices (often $1$, $2$, and $3$).
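As a quick sanity check of the summation convention (a minimal sketch using NumPy's `einsum`; the components and basis vectors are made-up numbers, and the basis is deliberately non-orthonormal to show the sum is not always trivial):

```python
import numpy as np

# Components x^i of the vector (made-up values).
x_components = np.array([1.0, 2.0, 3.0])

# Basis vectors e_i, one per row, expressed in standard coordinates.
# Chosen non-orthonormal purely for illustration.
basis = np.array([[1.0, 0.0, 0.0],   # e_1
                  [1.0, 1.0, 0.0],   # e_2
                  [0.0, 0.0, 2.0]])  # e_3

# Einstein summation x = x^i e_i: the repeated index i is summed over.
x = np.einsum("i,ij->j", x_components, basis)
print(x)  # [3. 2. 6.]
```

The subscript string `"i,ij->j"` says exactly what the notation $x^i\boldsymbol{e}_i$ says: sum over the repeated index $i$, leaving the free component index.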
Similarly, covariant vectors, or covectors (rank $(0,1)$ tensors), are vectors in the dual space $V^*$ and are written as $\boldsymbol{\alpha} = \alpha_i\boldsymbol{\omega}^i$, where $\boldsymbol{\omega}^i$ are the corresponding basis vectors of $V^*$ such that $\boldsymbol{e}_i \boldsymbol{\omega}^j = \delta_i^j$.
This helps when performing transformations, remembering whether a component transforms covariantly or contravariantly with the basis. It also helps with algebraic manipulations, as upper and lower indices go together, and we may contract a vector and covector as $$\boldsymbol{\alpha}\boldsymbol{x} = x^i\boldsymbol{e}_ i \, \alpha_j\boldsymbol{\omega}^j = x^i\alpha_j \delta^j_i = x^i \alpha_i\,,$$ as we expected.
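The contraction above can also be checked numerically (again a sketch with made-up component values; the basis/dual-basis pairing has already produced the $\delta^j_i$, so only the single sum over $i$ remains):

```python
import numpy as np

x_up = np.array([1.0, 2.0, 3.0])        # contravariant components x^i
alpha_down = np.array([4.0, 5.0, 6.0])  # covariant components alpha_i

# Contraction alpha(x) = x^i alpha_i: one sum over the paired index.
scalar = np.einsum("i,i->", x_up, alpha_down)
print(scalar)  # 32.0
```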
Thus, in your example, using this notation, we write $$\nabla\, \phi(\boldsymbol{x}) = \partial^i \, \phi(\boldsymbol{x}) \boldsymbol{e}_i =\frac{\partial\, \phi(\boldsymbol{x})}{\partial\,x^i}\boldsymbol{e}_i\,.$$ Note that the derivative operator $\partial_i := \partial/\partial x^i$ carries a lower index: by the chain rule it transforms oppositely to the coordinates $x^i$, so an upper index in the denominator behaves as a lower index.
Edit: As Nicholas Todoroff pointed out, this equation assumes orthonormality, such that the metric tensor $g_{ij} = \delta_{ij}$. A more general form of the gradient vector for arbitrary metric tensor is $$\nabla\, \phi(\boldsymbol{x}) = \mathbf{g}^{-1} \partial_i \, \phi(\boldsymbol{x}) \boldsymbol{\omega}^i = g^{ij}\frac{\partial\, \phi(\boldsymbol{x})}{\partial\,x^j}\boldsymbol{e}_i\,,$$ where $\mathbf{g}^{-1}$ is the inverse metric tensor such that $\mathbf{g}^{-1} \mathbf{g} = \mathbf{I}$, i.e. $g^{ij}g_{jk} = \delta^i_k$.
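To see the general formula $g^{ij}\partial_j\phi$ in action, here is a sketch in 2-D polar coordinates $(r,\theta)$, where the coordinate-basis metric is $g = \mathrm{diag}(1, r^2)$; the test function $\phi(r,\theta) = r^2\sin\theta$ and the evaluation point are made up for illustration:

```python
import numpy as np

# Evaluation point (arbitrary choice).
r, theta = 2.0, np.pi / 6

# Metric in the polar coordinate basis: g = diag(1, r^2).
g = np.array([[1.0, 0.0],
              [0.0, r**2]])
g_inv = np.linalg.inv(g)  # inverse metric g^{ij}

# Covariant components d_i phi for phi = r^2 sin(theta),
# computed analytically.
d_phi = np.array([2 * r * np.sin(theta),   # d(phi)/dr
                  r**2 * np.cos(theta)])   # d(phi)/dtheta

# Contravariant gradient components: (grad phi)^i = g^{ij} d_j phi.
# Raising the theta index divides by r^2, cancelling the r^2 in d_phi.
grad_up = np.einsum("ij,j->i", g_inv, d_phi)
print(grad_up)  # [2.0, cos(pi/6) ≈ 0.866]
```

Here the raised $\theta$-component is $\frac{1}{r^2}\frac{\partial\phi}{\partial\theta}$, not $\frac{\partial\phi}{\partial\theta}$, which is exactly the difference the inverse metric makes once the basis is no longer orthonormal.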
In your example we do, of course, have orthonormality, $g_{ij} = \delta_{ij}$, and so $g^{ij}\frac{\partial\, \phi(\boldsymbol{x})}{\partial\,x^j}\boldsymbol{e}_i = \frac{\partial\, \phi(\boldsymbol{x})}{\partial\,x^i}\boldsymbol{e}_i$.