I'm trying to prove the following.
Suppose $P\subset R^n$ is a polyhedral cone ($P=\{x\in R^n: Ax\leq 0\}$ for some matrix $A$). Show that $P=\operatorname{cone}\{x_1,\dots,x_k\}$ for some $x_i\in R^n$, $i=1,\dots,k$.
My question is as follows.
A polyhedral cone is just the intersection of finitely many halfspaces passing through the origin. So, to represent $P$, wouldn't it be sufficient to choose the two most extreme halfspaces (say the $i$th and $j$th), pick one vector on the boundary of each (one orthogonal to $a_i$ and the other orthogonal to $a_j$, with the right directions), and then construct the polyhedral cone as the conical combinations of these two vectors? Why are $k$ such vectors required to construct the cone?
If you choose two vectors on the most extreme halfspaces, then you may generate a cone in the usual geometric sense (here), i.e. an $n$-dimensional convex body whose intersections with hyperplanes are $(n-1)$-dimensional balls.
A polyhedral cone, however, is defined as a convex subset of a vector space that is closed under linear combinations with nonnegative coefficients (here). As the name says, it is used in the representation theory of polyhedra. You may construct one from as many boundary vectors or halfspaces as needed.
Here is a simple example in $n$ dimensions which shows that indeed $n$ generators are needed: let $e_i$ be the unit vector in direction $i$, and let the cone be all points $x$ with $x = \sum_{i=1}^n c_i e_i$, $c_i \geq 0$. So your cone is the first orthant. Expressing the same cone in halfspaces, as in your question, the conditions are $Ax \leq 0$ with the matrix
$$ A = \begin{bmatrix} -1 & 0 & \dots & 0 \\ 0 & -1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & -1 \end{bmatrix} $$
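If you want to check this example numerically, here is a small sketch (not part of the proof) that tests whether a point lies in the conical hull of a set of generators by solving the feasibility problem $Gc = x$, $c \geq 0$ with a linear program; the helper name `in_cone` is just for illustration. It confirms that the first orthant in $n=3$ needs all $3$ unit vectors: drop one and part of the cone is lost.

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(x, generators):
    """Return True if x is a nonnegative combination of the generators,
    i.e. if the system  G c = x, c >= 0  is feasible (G has the
    generators as columns)."""
    G = np.column_stack(generators)
    res = linprog(c=np.zeros(G.shape[1]), A_eq=G, b_eq=x,
                  bounds=(0, None), method="highs")
    return res.status == 0  # status 0 means a feasible optimum was found

n = 3
e = np.eye(n)  # unit vectors e_1, ..., e_n as rows

# The vector (1, 1, 1) is in the cone generated by all n unit vectors:
print(in_cone(np.ones(n), [e[i] for i in range(n)]))    # True
# But e_n is not in the cone generated by only e_1, ..., e_{n-1}:
print(in_cone(e[n - 1], [e[i] for i in range(n - 1)]))  # False
```

The same feasibility test works for any finite generating set, which is why the generated (V-) description of a cone pairs naturally with linear programming.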