I was reading this paper on neighborly polytopes, where the authors state:
Consider a $d \times n$ matrix $A$, with $d < n$. The problem of solving for $x$ in $y = Ax$ is underdetermined, and has many possible solutions (if there are any). In several fields it is of interest to find the sparsest solution – the one with fewest nonzeros – but in general this involves combinatorial optimization.
Let $a_i$ denote the $i$-th column of $A$, $1 \leq i \leq n$. Associate to $A$ the quotient polytope $P$ formed by taking the convex hull of the $2n$ points $\pm a_i$ in $\mathbf{R}^d$. $P$ is centrosymmetric and is called (centrally) $k$-neighborly if every subset of $k + 1$ elements $(\pm a_{i_l})_{l=1}^{k+1}$ is the vertex set of a face of $P$.
I don't understand what they mean by $(\pm a_{i_l})_{l=1}^{k+1}$. Do they mean an arbitrary choice of sign for each of $k+1$ distinct columns $a_{i_1}, \dots, a_{i_{k+1}}$? Each $a_i$ represents a point. Any clarification?
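To make sure I at least have the construction of $P$ right, here is how I picture it (a small sketch with numpy/scipy; the random $A$ and the dimensions are just my own example, not from the paper):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
d, n = 3, 6
A = rng.standard_normal((d, n))  # example matrix; columns a_1, ..., a_n

# P = conv{ +a_1, ..., +a_n, -a_1, ..., -a_n }, a centrosymmetric polytope in R^d
points = np.vstack([A.T, -A.T])  # 2n points: rows 0..n-1 are +a_i, rows n..2n-1 are -a_i
P = ConvexHull(points)

print(P.vertices)  # indices (into `points`) of the points that are vertices of P
```

So, as I read the definition, $k$-neighborliness would mean: pick any $k+1$ of these $2n$ points, no two of which are antipodes of each other, and they always span a face of $P$.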
Also in this paper (p. 4) they mention that:
In general, if $A$ maps all $t$-dimensional facets of $P_1$ to facets of $P$, the polytope $P$ is referred to as (centrally) $t$-neighborly. From the above, we see that the $\ell^1$-minimization correctly recovers all $x_0$ with $\leq t+1$ nonzeros iff $P$ is $t$-neighborly, in which case it is equivalent to the $\ell^0$-minimization.
I don't see why mapping $t$-dimensional faces of $P_1$ to faces of $P$ should recover all $x_0$ with $\leq t+1$ nonzeros. The maximum face dimension involved is $t$, so where does the $t+1$ come from?
Any suggestions/clarifications?
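For context, here is the $\ell^1$-minimization I believe they refer to, $\min \|x\|_1$ subject to $Ax = y$, set up as a linear program via the standard split $x = u - v$ with $u, v \geq 0$ (a sketch with scipy; the example $A$, $x_0$, and dimensions are my own, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, n = 8, 20
A = rng.standard_normal((d, n))

# a sparse x0 with 2 nonzeros, and the observations y = A x0
x0 = np.zeros(n)
x0[[3, 11]] = [1.5, -2.0]
y = A @ x0

# min ||x||_1  s.t.  Ax = y, rewritten with x = u - v, u >= 0, v >= 0:
# minimize sum(u) + sum(v)  s.t.  A u - A v = y
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))

x_hat = res.x[:n] - res.x[n:]
# generically, for x0 sparse enough, x_hat coincides with x0
print(np.round(x_hat, 4))
```

If I understand the quoted claim, the statement is that this LP returns $x_0$ itself for every $x_0$ with at most $t+1$ nonzeros exactly when $P$ is $t$-neighborly.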