When dealing with block codes such as BCH or Reed-Solomon codes, there is a concept called a generator polynomial (these codes are special cases of polynomial codes).
As far as I can tell, the definition of a generator polynomial is a polynomial defined over a finite field that is the product of linear factors whose roots are consecutive powers of some element $\alpha$.
For example, a generator polynomial used in Reed-Solomon codes over GF$(q)$, the finite field of $q$ elements, is of the form
$$g(x) = g_0 + g_1x + \cdots + g_Nx^{N} = (x-\alpha)(x - \alpha^2)\cdots (x-\alpha^{N}),$$
where $N=2t$ for a $t$-error-correcting code.
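To make the construction concrete, here is a minimal Python sketch that multiplies out the consecutive-root factors for $t=2$. The field choice (GF$(2^4)$ built from the primitive polynomial $x^4+x+1$, with $\alpha = 2$) is an illustrative assumption, not something forced by the definition:

```python
# Sketch: build g(x) = (x - a)(x - a^2)...(x - a^{2t}) over GF(2^4).
# Field modulus x^4 + x + 1 and alpha = 2 are assumptions for illustration.

def gf_mul(a, b, mod=0b10011, m=4):
    """Multiply two GF(2^m) elements: carry-less product with reduction."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a >> m & 1:
            a ^= mod
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def poly_mul(p, q):
    """Multiply polynomials with GF(2^4) coefficients (low degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gf_mul(a, b)
    return r

def rs_generator(t, alpha=2):
    """g(x) = prod_{i=1}^{2t} (x - alpha^i); in GF(2^m), -c equals c."""
    g = [1]
    for i in range(1, 2 * t + 1):
        g = poly_mul(g, [gf_pow(alpha, i), 1])  # factor (x + alpha^i)
    return g

def poly_eval(p, x):
    acc = 0
    for c in reversed(p):  # Horner's rule, high degree first
        acc = gf_mul(acc, x) ^ c
    return acc

g = rs_generator(t=2)
# Every alpha^i for i = 1..2t is a root; alpha^5 is not.
print([poly_eval(g, gf_pow(2, i)) for i in range(1, 5)])  # -> [0, 0, 0, 0]
print(poly_eval(g, gf_pow(2, 5)) != 0)                    # -> True
```

The same scaffolding works for any $t$; only the range of consecutive exponents changes.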
My question is: why are the roots required to be consecutive powers? Every definition I've read so far seems to state that this is the case without providing an explanation.
Would the code still work (in all conditions) if the $N$ roots were arbitrarily chosen elements of the field? Like so:
$$g(x) = g_0 + g_1x + \cdots + g_Nx^{N} = (x-\alpha^7)(x - \alpha^4)\cdots (x-\alpha^{103})(x-\alpha^{28})$$
Any explanation or a link to a paper or some such explaining it all would be great!
The generator polynomial is not an idea specific to BCH or Reed-Solomon codes; it applies more generally to cyclic codes: those enjoying the property that every cyclic shift of a codeword is also a codeword in the code. If we associate the codeword polynomial $C(x)=\sum_{i=0}^{n-1} C_ix^i$ with the codeword $\mathbf C = (C_0, C_1, \ldots, C_{n-1})$ in a linear cyclic code of length $n$, then all the codeword polynomials are multiples of the generator polynomial $g(x)$ of the code, which is the (nonzero) monic polynomial of least degree among all the codeword polynomials. $g(x)$ is a divisor of $x^n-1$, and so all its roots are $n$-th roots of unity.
Example: A $[15,11]$ binary cyclic Hamming code has generator polynomial $g(x) = x^4 + x + 1$ which is a divisor of $x^{15}-1$. Its roots are $\alpha, \alpha^2, \alpha^4, \alpha^8$ which are not $4$ consecutive powers of $\alpha$. It is, nonetheless, a (single-error-correcting) BCH code.
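This example can be checked by direct computation; a small Python sketch (assuming GF$(2^4)$ is built from the same polynomial $x^4+x+1$, with $\alpha = 2$) verifies both the divisibility of $x^{15}-1$ and the set of roots:

```python
# Check: over GF(2), x^4 + x + 1 divides x^15 - 1, and in GF(2^4)
# (field modulus x^4 + x + 1, alpha = 2, assumed for illustration)
# its roots are alpha, alpha^2, alpha^4, alpha^8.

def gf2_polyrem(num, den):
    """Remainder of GF(2)[x] division, polynomials given as bit masks."""
    dn = den.bit_length() - 1
    while num.bit_length() - 1 >= dn:
        num ^= den << (num.bit_length() - 1 - dn)
    return num

print(gf2_polyrem((1 << 15) | 1, 0b10011))  # x^15 + 1 mod g(x) -> 0

def gf_mul(a, b, mod=0b10011, m=4):
    """Multiply two GF(2^m) elements: carry-less product with reduction."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a >> m & 1:
            a ^= mod
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# Evaluate m(x) = x^4 + x + 1 at each power of alpha, collect the roots.
roots = [i for i in range(15)
         if gf_pow(gf_pow(2, i), 4) ^ gf_pow(2, i) ^ 1 == 0]
print(roots)  # -> [1, 2, 4, 8]
```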
Note that there is no constraint on how many other powers of $\alpha$ are roots of $g(x)$; for any given $g(x)$, look at the longest run of consecutive powers of $\alpha$ among its roots: a run of length $\delta - 1$ gives a lower bound of $\delta$ on the minimum distance of the code (this is the BCH bound).
For our example, $2$ consecutive powers of $\alpha$ are roots of $g(x)$ and so the minimum distance is at least $3$; in fact it is exactly $3$. If $g(x)$ were $(x^4+x+1)(x^4+x^3+x^2+x+1)$, whose roots are $\alpha, \alpha^2, \alpha^3, \alpha^4, \alpha^6, \alpha^8, \alpha^9, \alpha^{12}$, the longest run is of length $4$ and the minimum distance is $5$; the shorter run $\alpha^8, \alpha^9$ doesn't help increase the minimum distance. This is a two-error-correcting BCH code.
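Both minimum distances can be confirmed by brute force over all nonzero messages, since every codeword of a cyclic code is $m(x)g(x)$; a short Python check (representing GF$(2)$ polynomials as bit masks):

```python
# Brute-force check of the two minimum distances discussed above:
# the [15,11] code with g = x^4 + x + 1, and the [15,7] code with
# g = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1).

def clmul(a, b):
    """Carry-less (GF(2)[x]) multiplication of bit-mask polynomials."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    return p

def min_distance(g, k):
    """Minimum Hamming weight over all nonzero codewords m(x)g(x)."""
    return min(bin(clmul(m, g)).count("1") for m in range(1, 1 << k))

g_ham = 0b10011                  # x^4 + x + 1, the [15,11] Hamming code
g_bch = clmul(g_ham, 0b11111)    # times x^4 + x^3 + x^2 + x + 1 -> [15,7]

print(min_distance(g_ham, 11))   # -> 3
print(min_distance(g_bch, 7))    # -> 5
```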
A $t$-error-correcting BCH code is a cyclic code with generator polynomial $$g(x) = \operatorname{lcm}\{M_1(x), M_2(x), \cdots, M_{2t}(x)\}$$ where $M_i(x)$ is the minimal polynomial of $\alpha^i$. More generally, $\operatorname{lcm}\{M_i(x), M_{i+1}(x), \cdots, M_{i+2t-1}(x)\}$ can be used instead. For binary codes, $M_j(x) = M_{2j}(x) = M_{4j}(x) = \cdots$, so the least common multiple is not the product of $2t$ distinct polynomials.
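The cyclotomic-coset structure behind that collapse can be sketched in a few lines of Python for the binary $t=2$, $n=15$ case (assuming, as an illustrative choice, GF$(2^4)$ built from $x^4+x+1$ with $\alpha = 2$):

```python
# Sketch: cyclotomic cosets mod 15 show why M_1 = M_2 = M_4 and M_3 = M_6,
# so lcm{M_1, ..., M_4} is the product of only two distinct minimal
# polynomials. Field modulus x^4 + x + 1, alpha = 2, assumed throughout.

def coset(j, n=15):
    """Cyclotomic coset of j mod n under doubling (binary codes)."""
    c, s = [], j % n
    while s not in c:
        c.append(s)
        s = (2 * s) % n
    return sorted(c)

print([coset(j) for j in range(1, 5)])  # cosets of 1, 2, 4 coincide

def gf_mul(a, b, mod=0b10011, m=4):
    """Multiply two GF(2^m) elements: carry-less product with reduction."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        b >>= 1
        a <<= 1
        if a >> m & 1:
            a ^= mod
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def minimal_poly(j, alpha=2):
    """M_j(x) = prod over the coset of j of (x + alpha^i); the
    coefficients land in GF(2) although the arithmetic runs in GF(2^4)."""
    g = [1]
    for i in coset(j):
        f = [gf_pow(alpha, i), 1]  # factor (x + alpha^i)
        r = [0] * (len(g) + 1)
        for d, a in enumerate(g):
            for e, b in enumerate(f):
                r[d + e] ^= gf_mul(a, b)
        g = r
    return g

print(minimal_poly(1))  # low-degree-first coefficients of x^4 + x + 1
print(minimal_poly(3))  # low-degree-first coefficients of x^4+x^3+x^2+x+1
```

Multiplying the two distinct minimal polynomials recovers exactly the degree-$8$ generator of the two-error-correcting code from the earlier example.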
The proof of the BCH bound has been given in another answer and so I won't include it here.