Consider the example where I have a matrix $\mathbf{D}$ in $-1/1$ coding with $5$ columns:
$$\mathbf{D} = \begin{bmatrix}-1&-1&-1&1&1\\1&-1&-1&-1&1\\-1&1&-1&-1&-1\\1&1&-1&1&-1\\ -1&-1&1&1&-1\\1&-1&1&-1&-1\\-1&1&1&-1&1\\1&1&1&1&1\end{bmatrix}$$
We see that the fourth and fifth columns are products of pairs of the first three columns, so that if we label the columns $a,b,c,d,e$ we can say that $d=a*b$ and $e=b*c$ (elementwise). We can separate the columns into two groups: $a,b,c$ are basic columns and $d,e$ are added columns. Furthermore, we can call the relations that define $d$ and $e$ in terms of $a,b,c$ the defining contrasts.
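These relations can be checked numerically; a quick sketch with NumPy (the matrix and column labels are taken from above):

```python
import numpy as np

# The 8x5 design matrix D from above, columns labelled a, b, c, d, e
D = np.array([
    [-1, -1, -1,  1,  1],
    [ 1, -1, -1, -1,  1],
    [-1,  1, -1, -1, -1],
    [ 1,  1, -1,  1, -1],
    [-1, -1,  1,  1, -1],
    [ 1, -1,  1, -1, -1],
    [-1,  1,  1, -1,  1],
    [ 1,  1,  1,  1,  1],
])
a, b, c, d, e = D.T

# Verify the defining contrasts d = a*b and e = b*c (elementwise products)
print(np.array_equal(d, a * b))  # True
print(np.array_equal(e, b * c))  # True
```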
My question is this: Is there a way to determine more generally (for a larger matrix or a matrix with different defining contrasts) which columns are combinations of the others and what the corresponding defining contrasts are?
There is a mutuality in linear dependence.
If a vector $\bf a$ is linearly dependent on $\bf b$ and $\bf c$, then $\bf b$ is linearly dependent on $\bf a$ and $\bf c$, and vice versa.
This is no stranger than the fact that we can re-write $${\bf v_0} = \sum_{i \neq 0} c_i {\bf v_i}$$ into $${\bf v_k} = \frac{1}{c_k}\left({\bf v_0} - \sum_{i \neq 0,\, i \neq k} c_i {\bf v_i}\right)$$
The linear weights just get rescaled: each $c_i$ becomes $-c_i/c_k$, and $\bf v_0$ enters with weight $1/c_k$.
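A tiny numeric check of this rearrangement, with made-up vectors and coefficients (the names and values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=3), rng.normal(size=3)
c1, c2 = 2.0, -3.0

# v0 is linearly dependent on v1 and v2 ...
v0 = c1 * v1 + c2 * v2

# ... so v1 is linearly dependent on v0 and v2, with rescaled weights
v1_recovered = (v0 - c2 * v2) / c1
print(np.allclose(v1, v1_recovered))  # True
```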
So what we can do now is build a linear equation system: pick one column as the data $\bf d$ (we modify $\bf D$ by removing this column), use the remaining columns as regressors, and solve with classical linear least squares:
$${\bf x}_o = \arg\min_{\bf x}\|{\bf D x} - {\bf d}\|_2^2$$
If there is a perfect fit (zero residual), then $\bf d$ is linearly dependent on at least some of the columns of $\bf D$. If not, some of the remaining columns of $\bf D$ may still be linearly dependent on each other, so we repeat the check with each column in turn playing the role of $\bf d$.
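Putting this together, a sketch of the column-by-column check in NumPy (the function name `dependent_columns` and the tolerance are my choices, not part of the question):

```python
import numpy as np

def dependent_columns(D, tol=1e-10):
    """Return indices of columns that are exact linear combinations
    of the remaining columns, found by least squares on each column."""
    dependent = []
    for j in range(D.shape[1]):
        d = D[:, j]                      # column treated as the data
        rest = np.delete(D, j, axis=1)   # regressors: all other columns
        x, *_ = np.linalg.lstsq(rest, d, rcond=None)
        residual = np.linalg.norm(rest @ x - d)
        if residual < tol:               # perfect fit => linear dependence
            dependent.append(j)
    return dependent

# The 5-column design from the question: its columns are mutually
# orthogonal, so none is a *linear* combination of the others.
D = np.array([[-1,-1,-1, 1, 1],
              [ 1,-1,-1,-1, 1],
              [-1, 1,-1,-1,-1],
              [ 1, 1,-1, 1,-1],
              [-1,-1, 1, 1,-1],
              [ 1,-1, 1,-1,-1],
              [-1, 1, 1,-1, 1],
              [ 1, 1, 1, 1, 1]])
print(dependent_columns(D))  # []

# Appending the column a + b creates linear dependence; by the mutuality
# argument above, a, b, and a + b are all flagged.
D2 = np.column_stack([D, D[:, 0] + D[:, 1]])
print(dependent_columns(D2))  # [0, 1, 5]
```

Note that the defining contrasts $d=a*b$, $e=b*c$ are elementwise products, not linear combinations, which is why the least-squares check reports no dependence on the original $\mathbf{D}$.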