I wonder what the monomial basis is.
Given a polynomial $a(X) = a_0 + a_1 X + a_2 X^2 + \ldots + a_{n-1}X^{n-1}$ of degree at most $n-1$, the basis of the space it lives in is $\{1, X, X^2, \dots, X^{n-1}\}$. However, $X$ is a variable, so this basis seems really ambiguous to me. Using the FFT, we can get specific coefficients with respect to a specific basis (not a variable but concrete values). What is this concept (the monomial basis) for?
If you're studying vector spaces, then the space $V$ of polynomials spanned by $\{1, X, X^{2}, \ldots, X^{n-1}\}$ is an example of a vector space of dimension $n$, with each 'vector' in $V$ being a linear combination of the basis 'vectors' $\{1, X, X^{2}, \ldots, X^{n-1}\}$; that is, a polynomial of the form $$ a_{0} + a_{1}X + \ldots + a_{n-1}X^{n-1}. $$ This basis is monomial since each 'vector' is a polynomial with one term, and it can be viewed as the polynomial analogue of the standard basis $\{e_{1}, e_{2}, \ldots, e_{n}\}$ for $\mathbb{R}^{n}$. Of course, there are other bases for $V$ that are not monomial; for example, the space spanned by $\{1, X, X^{2}\}$ also has the basis $\{1, 1 + X, 1 + X + X^{2}\}$, which is clearly not monomial.
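To make the change of basis concrete, here is a small sketch in Python/NumPy (the polynomial $p(X) = 2 + 3X + X^2$ is my own illustrative choice). The columns of the matrix are the alternative basis vectors $\{1, 1+X, 1+X+X^2\}$ written in monomial coordinates, so solving a linear system converts the monomial coefficients of $p$ into its coordinates in the new basis:

```python
import numpy as np

# Coefficients of p(X) = 2 + 3X + X^2 in the monomial basis {1, X, X^2}
a = np.array([2.0, 3.0, 1.0])

# Columns: the basis {1, 1+X, 1+X+X^2} expressed in monomial coordinates
M = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Solve M @ c = a: c holds the coordinates of p in the new basis
c = np.linalg.solve(M, a)
print(c)  # [-1.  2.  1.], i.e. p = -1*(1) + 2*(1+X) + 1*(1+X+X^2)
```

You can check by hand that $-1 + 2(1+X) + (1+X+X^2) = 2 + 3X + X^2$, so the same 'vector' $p$ simply has different coordinates in the two bases.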
In general, what is the monomial basis for? It describes the space of polynomials, and it is the 'easiest' basis to work with when you first start learning about polynomial vector spaces, much like the standard basis $\{e_{1}, e_{2}, e_{3}\}$ is the 'easiest' basis to work with when you begin learning about $\mathbb{R}^{3}$, even though there are plenty of other bases for it.
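Since you mention the FFT: one way to see its role is as a change of basis from monomial coefficients to evaluations at roots of unity (a set of concrete values, as you say). A sketch, assuming NumPy's FFT sign convention (`np.fft.fft` of the coefficient vector gives $p(\omega^{-k})$ for $\omega = e^{2\pi i/N}$, which is still the set of $N$-th roots of unity):

```python
import numpy as np

# p(X) = 2 + 3X + X^2, coefficients padded to length 4 for a size-4 FFT
a = np.array([2.0, 3.0, 1.0, 0.0])

# FFT: monomial coefficients -> values of p at the 4th roots of unity
vals = np.fft.fft(a)

# Direct evaluation at the same points, matching NumPy's convention
roots = np.exp(-2j * np.pi * np.arange(4) / 4)   # 1, -i, -1, i
direct = np.array([sum(a[k] * r**k for k in range(4)) for r in roots])
print(np.allclose(vals, direct))  # True

# Inverse FFT: values at the roots of unity -> monomial coefficients
print(np.allclose(np.fft.ifft(vals), a))  # True
```

So the polynomial itself is basis-independent; the FFT just moves between two coordinate systems for it (coefficients versus values), which is exactly why the notion of a basis is worth having.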
As I mentioned in the comments, despite $X$ being a variable, there is really no ambiguity in the basis, since two functions $f(X)$ and $g(X)$ are linearly independent if and only if neither is a scalar multiple of the other across the whole of your domain; equivalently, if $X \in [a, b]$, then $f$ and $g$ are linearly independent if and only if $$ \alpha_{1}\cdot f(X) + \alpha_{2}\cdot g(X) = 0 $$ has only the solution $\alpha_{1} = \alpha_{2} = 0$ as $X$ ranges across the whole of $[a, b]$. For example, if $f(X) = \sin(X)$ and $g(X) = \cos(X)$ with $X \in [0, \pi/4]$, then sure, $\sin(\pi/4) = \cos(\pi/4)$, but at every other point of $[0, \pi/4]$ they differ and are not related by a single constant scalar, so $$ \alpha_{1}\sin(X) + \alpha_{2}\cos(X) = 0 \hspace{20pt} \forall X \in [0, \pi/4] $$ only has the solution $\alpha_{1} = \alpha_{2} = 0$, since such a solution must work for all values of $X$ in the domain. Hence $\sin(X)$ and $\cos(X)$ are linearly independent over $[0, \pi/4]$.
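You can even probe this independence numerically: sample both functions at many points of the domain and check the rank of the resulting matrix (a heuristic sketch of my own, not a proof; the sample points are arbitrary):

```python
import numpy as np

xs = np.linspace(0.0, np.pi / 4, 50)           # sample points in [0, pi/4]
A = np.column_stack([np.sin(xs), np.cos(xs)])  # one column per function

# If alpha1*sin + alpha2*cos were identically 0 on the domain, the two
# columns would be linearly dependent and the rank would drop below 2.
print(np.linalg.matrix_rank(A))  # 2, consistent with linear independence
```

A rank of 2 means no nontrivial $(\alpha_1, \alpha_2)$ kills both columns at once, mirroring the argument above.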