Farkas' lemma can be stated as follows:
If, for every $\mu$, $\mu^T\cdot a_i \geq 0$ for all $i$ implies $\mu^T\cdot b \geq 0$, then $b=\sum \lambda_i a_i$ for some $\lambda_i \geq 0$.
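As a quick numerical sanity check (not part of the question), one can verify the easy direction on a toy example: when $b$ lies in the cone generated by the $a_i$, every $\mu$ satisfying the hypothesis also satisfies the conclusion. The vectors below are chosen arbitrarily for illustration:

```python
# Sanity check of Farkas' lemma (easy direction) on an arbitrary 2-D example.
import random

a = [(1.0, 0.0), (1.0, 1.0)]         # generators a_i (arbitrary choice)
lam = [2.0, 3.0]                      # nonnegative coefficients lambda_i
# b = sum_i lambda_i a_i, so b lies in the cone generated by the a_i
b = (sum(l * v[0] for l, v in zip(lam, a)),
     sum(l * v[1] for l, v in zip(lam, a)))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(0)
for _ in range(10000):
    mu = (random.uniform(-1, 1), random.uniform(-1, 1))
    if all(dot(mu, v) >= 0 for v in a):   # hypothesis: mu^T a_i >= 0 for all i
        assert dot(mu, b) >= 0            # conclusion: mu^T b >= 0
print("ok")
```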
I need a generalized version which goes as follows:
Let $\alpha_i \geq 0$. If for all $\mu$: $\mu^T\cdot (b-\sum \alpha_i a_i I_{\mu^T\cdot a_i < 0}) \geq 0$, then $b=\sum \lambda_i a_i$ with $0 \leq \lambda_i \leq \alpha_i$.
Here $I_{a<0}$ is 1 when $a<0$ and 0 otherwise.
The idea is that for each $\mu$ we look at the signs of the $\mu^T\cdot a_i$ and perturb the score function accordingly: when $\mu^T\cdot a_i < 0$ we enlarge the score by $-\alpha_i \mu^T\cdot a_i$, and overall we end up with a nonnegative score.
When $b=\sum \lambda_i a_i$ with $0 \leq \lambda_i \leq \alpha_i$, the assertion follows, i.e. $\mu^T\cdot (b-\sum \alpha_i a_i I_{\mu^T\cdot a_i < 0}) \geq 0$ for all $\mu$. Is it also necessary?
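The easy direction can also be sanity-checked numerically: splitting the sum by the sign of $\mu^T\cdot a_i$ gives $\sum_{\mu^T a_i \geq 0}\lambda_i\,\mu^T a_i + \sum_{\mu^T a_i < 0}(\lambda_i-\alpha_i)\,\mu^T a_i \geq 0$, so the perturbed score is nonnegative for every $\mu$. The vectors and bounds below are arbitrary:

```python
# Check: if b = sum lambda_i a_i with 0 <= lambda_i <= alpha_i, then
# mu^T (b - sum alpha_i a_i I_{mu^T a_i < 0}) >= 0 for sampled mu.
import random

a = [(1.0, 0.0), (0.0, 1.0), (1.0, -1.0)]   # arbitrary generators a_i
alpha = [2.0, 2.0, 1.5]                      # upper bounds alpha_i >= 0
lam = [1.0, 2.0, 0.5]                        # 0 <= lambda_i <= alpha_i
b = (sum(l * v[0] for l, v in zip(lam, a)),
     sum(l * v[1] for l, v in zip(lam, a)))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

random.seed(1)
for _ in range(10000):
    mu = (random.uniform(-1, 1), random.uniform(-1, 1))
    # perturbed score: subtract alpha_i a_i exactly when mu^T a_i < 0
    score = dot(mu, b) - sum(al * dot(mu, v)
                             for al, v in zip(alpha, a) if dot(mu, v) < 0)
    assert score >= -1e-9    # nonnegative up to floating-point rounding
print("ok")
```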
The non-degenerate case, i.e. when the $\lambda$'s are unique, is also not the interesting one.
Let $A$ be the matrix with columns $a_i$, let $\lambda$ be the vector with entries $\lambda_i$, and similarly for $\alpha$. The condition $$ \mu^T \cdot (b - \sum_j \alpha_j a_j I_{\mu^T \cdot a_j < 0} ) \geq 0$$ is equivalent to the following implication (the constraints say $z \geq 0$ and $z_j \geq -\mu^T\cdot a_j$; for fixed $\mu$ the hypothesis is tightest at $z_j=\max(0,-\mu^T\cdot a_j)$, which recovers the indicator term):
$$ \left( \begin{array}{c} \mu \\ z \\ \end{array} \right) ^T\cdot \left( \begin{array}{cc} A & 0 \\ I & I \\ \end{array} \right) \geq 0 $$ implies $$ \left( \begin{array}{c} \mu \\ z \\ \end{array} \right) ^T\cdot \left( \begin{array}{c} b \\ \alpha \\ \end{array} \right) \geq 0$$
Now use Farkas' lemma to find $\lambda = (\lambda_1,\lambda_2) \geq 0$ with $$ \left( \begin{array}{cc} A & 0 \\ I & I \\ \end{array} \right) \left( \begin{array}{c} \lambda_1 \\ \lambda_2 \\ \end{array} \right) = \left( \begin{array}{c} b \\ \alpha \\ \end{array} \right): $$ the first block row gives $A\lambda_1=b$, and the second block row gives $\lambda_1+\lambda_2=\alpha$, hence $\lambda_1 \leq \alpha$ as needed.
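The bookkeeping of the block system can be made concrete in a few lines. In the non-trivial direction the certificate $\lambda$ comes from Farkas' lemma; here it is simply exhibited on arbitrary data to check the two block rows:

```python
# Block system: [[A, 0], [I, I]] (lambda1; lambda2) = (b; alpha), lambda >= 0.
# Row block 1 gives A lambda1 = b; row block 2 gives lambda1 + lambda2 = alpha,
# and lambda2 >= 0 then forces lambda1 <= alpha.
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, -1.0]]               # columns are the a_i (arbitrary data)
alpha = [2.0, 2.0, 1.5]
lam1 = [1.0, 2.0, 0.5]               # a Farkas certificate, 0 <= lam1 <= alpha
lam2 = [al - l for al, l in zip(alpha, lam1)]   # slack: lam1 + lam2 = alpha
b = [sum(A[r][j] * lam1[j] for j in range(3)) for r in range(2)]

# Verify the two block rows and the sign constraints.
assert all(abs(sum(A[r][j] * lam1[j] for j in range(3)) - b[r]) < 1e-12
           for r in range(2))                         # A lambda1 = b
assert all(abs(l1 + l2 - al) < 1e-12
           for l1, l2, al in zip(lam1, lam2, alpha))  # lambda1 + lambda2 = alpha
assert all(l2 >= 0 for l2 in lam2)                    # hence lambda1 <= alpha
print("ok")
```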