What does "the activation of a basis" mean?


In the paper

Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, Andrew Y. Ng, Self-taught learning: transfer learning from unlabeled data, ICML '07 Proceedings of the 24th international conference on Machine learning, 2007, pp.759-766, ACM link

the authors define a term as follows (p.761):

$a_j^{(i)}$ is the activation of basis $b_j$ for input $x_u^{(i)}$.

I have not encountered the term activation of a basis before, and have been unable to find a definition of this online. Could anyone explain what this means?

The term appears in the following excerpt (p.761):

... given the unlabeled data $\{ x_u^{(1)}, \ldots , x_u^{(k)} \}$ with each $x_u^{(i)} \in \mathbb{R}^n$, we pose the following optimization problem: $$\operatorname{minimize}_{b,a} \quad {\textstyle \sum_i} \| x_u^{(i)} - {\textstyle \sum_j} a_j^{(i)} b_j \|^2_2 + \beta \| a^{(i)} \|_1$$ s.t. $\| b_j\|_2 \leq 1$, $\forall j \in 1,\ldots,s$.

About this the authors say (p.761):

The optimization objective balances two terms: (i) The first quadratic term encourages each input $x_u^{(i)}$ to be reconstructed well as a weighted linear combination of the bases $b_j$ (with corresponding weights given by the activations $a_j^{(i)}$); and (ii) it encourages the activations to have low $L_1$ norm.
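For concreteness, here is a minimal NumPy sketch (not from the paper's code; the function name, matrix shapes, and `beta` value are my own choices) that evaluates this objective for a given set of bases $b_j$ and activations $a^{(i)}$. Inputs are stored as columns of `X`, bases as columns of `B`, and the activation vector $a^{(i)}$ for input $x_u^{(i)}$ as column $i$ of `A`:

```python
import numpy as np

def sparse_coding_objective(X, B, A, beta):
    """Evaluate the sparse-coding objective from Raina et al. (2007).

    X : (n, k) array, unlabeled inputs x_u^(i) as columns
    B : (n, s) array, bases b_j as columns
    A : (s, k) array, activations a_j^(i); column i is a^(i)
    """
    # Reconstruction term: sum_i || x_u^(i) - sum_j a_j^(i) b_j ||_2^2
    residual = X - B @ A
    reconstruction = np.sum(residual ** 2)
    # Sparsity term: beta * sum_i || a^(i) ||_1
    sparsity = beta * np.sum(np.abs(A))
    return reconstruction + sparsity

# Toy example: 5-dim inputs, 3 bases, 4 unlabeled examples
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
B = rng.normal(size=(5, 3))
B /= np.linalg.norm(B, axis=0)   # enforce the constraint ||b_j||_2 <= 1
A = rng.normal(size=(3, 4))
print(sparse_coding_objective(X, B, A, beta=0.1))
```

In this picture, "the activation of basis $b_j$ for input $x_u^{(i)}$" is simply the coefficient `A[j, i]`: the weight that basis contributes to the linear reconstruction of that input.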