The definition of the affine hull of a set of vectors $\{a_1,\dots,a_n\}$ is
$$ \bigg\{x=\sum_{i=1}^n\lambda_ia_i\ \bigg|\ \sum_{i=1}^n\lambda_i=1\bigg\}. $$
On the other hand, a set of points $\{a_1,\dots,a_n\}$ is defined to be affinely independent if
$$ \sum_{i=1}^n\lambda_ia_i=0\text{ and } \sum_{i=1}^n\lambda_i=0\text{ imply }\lambda_i=0\text{ for all }i=1,\dots,n.$$
In the definition of affine hull, the coefficients $\lambda_i$ sum to 1, while for affine independence, they sum to 0. What is the intuitive reason for this discrepancy? Does the set
$$ \bigg\{x=\sum_{i=1}^n\lambda_ia_i\ \bigg|\ \sum_{i=1}^n\lambda_i=0\bigg\} $$
have a meaningful interpretation? It is neither the affine, conic, nor convex hull, nor the linear span. In two dimensions, for a set of two vectors, it looks geometrically like the affine hull shifted to pass through the origin. Is that all it is?
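As a sanity check of this guess, here is a small numerical experiment (pure Python; the points $a_1=(1,0)$, $a_2=(0,1)$ are just a sample choice of mine, and their affine hull is the line $x+y=1$):

```python
# With a1 = (1, 0) and a2 = (0, 1), the set
# {lam*a1 + mu*a2 : lam + mu = 0} should be the span of a1 - a2,
# i.e. the affine hull (the line x + y = 1) translated to the origin.
a1, a2 = (1.0, 0.0), (0.0, 1.0)

def comb(lam, mu):
    return (lam * a1[0] + mu * a2[0], lam * a1[1] + mu * a2[1])

direction = (a1[0] - a2[0], a1[1] - a2[1])  # a1 - a2 = (1, -1)

for lam in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    x, y = comb(lam, -lam)  # coefficients lam + (-lam) sum to 0
    # every such point is a scalar multiple of a1 - a2 ...
    assert (x, y) == (lam * direction[0], lam * direction[1])
    # ... and lies on the line x + y = 0, the affine hull shifted to 0
    assert x + y == 0.0
```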
Any references appreciated.
Think of affine combinations as an extension of linear combinations, and affine independence as an extension of linear independence.
Consider, for example, the vector space $\mathbb{R}^2$. The set of all affine combinations of two distinct vectors $\alpha$ and $\beta$ is simply the line through these two points. Now the line through $\alpha$ and $\beta$ can be written as: $$ \lambda \alpha + (1-\lambda)\beta, \quad \lambda \in \mathbb{R} \tag{1}\label{1}$$
Notice that this is just the linear combination of the vectors with the added constraint that the coefficients of the vectors sum to $1$.
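A quick numerical illustration of $\eqref{1}$ (pure Python; the sample points $\alpha=(1,2)$, $\beta=(4,0)$ are arbitrary choices of mine):

```python
# Points of the form lam*alpha + (1 - lam)*beta, as in equation (1).
# alpha and beta are arbitrary sample points in R^2.
alpha, beta = (1.0, 2.0), (4.0, 0.0)

def affine(lam):
    return (lam * alpha[0] + (1 - lam) * beta[0],
            lam * alpha[1] + (1 - lam) * beta[1])

def collinear(p, q, r):
    # 2x2 determinant of (q - p, r - p): zero iff p, q, r are collinear
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0.0

# for every lam, the affine combination lands on the line through alpha and beta
for lam in [-1.0, 0.0, 0.25, 1.0, 2.0]:
    assert collinear(alpha, beta, affine(lam))
```

Note that $\lambda = 1$ gives $\alpha$ and $\lambda = 0$ gives $\beta$, so the line really does pass through both points.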
We can extend this result to an arbitrary vector space $V$ over the field $\mathbb{R}$. If $a_1, \dots, a_n$ are $n$ vectors in $V$, then the set of all linear combinations of the vectors (the linear hull or linear span) is given by $$\{ \nu \in V: \nu =\sum_{j = 1}^n \lambda_j a_j, \ \lambda_j \in \mathbb{R}\} \tag{2}\label{2}$$
Going by what we did for two vectors above, the set of all affine combinations of the vectors (the affine hull) is given by the set of all linear combinations with the constraint that the sum of all coefficients is $1$: $$\{ \nu \in V: \nu =\sum_{j = 1}^n \lambda_j a_j, \sum_{j = 1}^n \lambda_j = 1, \ \lambda_j \in \mathbb{R}\} \tag{3}\label{3}$$
A bit of geometry can help with the intuition regarding affine independence. Firstly, an affine subspace of a vector space is a translate of a linear subspace by a vector. For example, if $U$ is a subspace of a vector space $V$, and $\alpha$ a vector in $V$, then the sum $ A = \{ \alpha \} + U $ is an affine subspace of $V$. In other words, add the vector $\alpha$ to each vector of $U$, and the resulting set $A$ is an affine subspace. Think of the set $A$ as the set $U$ translated by the vector $\alpha$.
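A concrete sketch of this translation picture (pure Python; taking $U$ to be the $x$-axis in $\mathbb{R}^2$ and $\alpha=(0,1)$ is my own choice of example):

```python
# U = the x-axis in R^2, alpha = (0, 1).
# A = {alpha} + U is then the horizontal line y = 1:
# each vector of U, translated by alpha.
alpha = (0.0, 1.0)

def translate(u):
    return (u[0] + alpha[0], u[1] + alpha[1])

for t in [-3.0, 0.0, 2.5]:
    u = (t, 0.0)          # a vector in the subspace U
    a = translate(u)      # the corresponding point of A
    assert a == (t, 1.0)  # A is the line y = 1, not a subspace (0 is not in it)
```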
Let's compare the concepts of linear and affine independence in this setting:
If $u_1, u_2, \dots, u_n$ are vectors in $U$, the corresponding vectors in $A$ are $v_1, \dots, v_n$, where $ v_j = u_j + \alpha$. Now to say that the vectors $u_1, u_2, \dots, u_n$ are linearly independent means that for scalars $\lambda_1, \dots, \lambda_n$, $$0 = \sum_{j=1}^n \lambda_j u_j \quad \implies \quad \lambda_1 = \lambda_2 = \dots =\lambda_n = 0. \tag{4}\label{4}$$
In $\eqref{4}$, replacing $u_j = v_j - \alpha$, we obtain:
\begin{align} 0 &= \sum_{j=1}^n \lambda_j(v_j - \alpha) \tag{5}\label{5} \\ &= \sum_{j=1}^n \lambda_j v_j - \bigg(\sum_{j=1}^n \lambda_j \bigg)\alpha \\ &= \sum_{j=1}^{n+1} \lambda_j v_j, \end{align}
where we have renamed $v_{n+1} = \alpha$ and set $\lambda_{n+1} = - \sum_{j=1}^n \lambda_j $. Now notice that the sum of the coefficients in the final equation is $\lambda_1 + \lambda_2 + \dots +\lambda_n - \sum_{j=1}^n \lambda_j= 0$. This leads to the definition of affine independence, with the additional condition that the sum of the coefficients is $0$:
The set of vectors $a_1, \dots, a_n$ is affinely independent if $$ \sum_{j = 1}^n \lambda_j a_j = 0 \ \text{ and } \ \sum_{j=1}^n \lambda_j = 0 \quad \implies \quad \lambda_1 = \lambda_2 = \dots = \lambda_n =0.$$
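This definition can be tested concretely: the points $a_1, \dots, a_n$ are affinely independent exactly when the differences $a_2 - a_1, \dots, a_n - a_1$ are linearly independent. A minimal sketch for three points in the plane (the sample points below are my own choices):

```python
# Three points in R^2 are affinely independent iff the difference
# vectors a2 - a1 and a3 - a1 are linearly independent, i.e. the
# 2x2 determinant with those vectors as columns is nonzero.
def affinely_independent(a1, a2, a3):
    u = (a2[0] - a1[0], a2[1] - a1[1])
    v = (a3[0] - a1[0], a3[1] - a1[1])
    return u[0] * v[1] - u[1] * v[0] != 0.0

# the vertices of a triangle are affinely independent ...
assert affinely_independent((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
# ... while three collinear points are not
assert not affinely_independent((0.0, 0.0), (1.0, 1.0), (2.0, 2.0))
```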