Looking at Wikipedia, I want to understand the identity
$p_A (t) = \sum_{k=0}^n t^{n-k} (-1)^k \operatorname{tr}(\Lambda^k A) $
where $$\operatorname{tr}(\Lambda^k A) = \frac{1}{k!} \begin{vmatrix} \operatorname{tr} A & k-1 & 0 & \cdots & 0 \\ \operatorname{tr} A^2 & \operatorname{tr} A & k-2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ \operatorname{tr} A^{k-1} & \operatorname{tr} A^{k-2} & & \cdots & 1 \\ \operatorname{tr} A^k & \operatorname{tr} A^{k-1} & & \cdots & \operatorname{tr} A \end{vmatrix}\,.$$
Wikipedia states this can be proven in the language of exterior algebra. How?
Assume first that $A$ is diagonalizable, and let $v_1,\dots,v_n$ be a basis of $V$ consisting of eigenvectors with eigenvalues $\lambda_1,\dots,\lambda_n$. (The general case follows, since both sides of the identity are polynomials in the entries of $A$ and the diagonalizable matrices are dense.) Then $\mathrm{tr}(A) = \sum_i \lambda_i$.
As a basis for $\Lambda^k V$, you can take the collection of all $v_{i_1} \wedge \cdots \wedge v_{i_k}$, where $i_1 < \cdots < i_k$. Since by definition, $$ (\Lambda^k A)(v_{i_1} \wedge \cdots \wedge v_{i_k}) = (Av_{i_1}) \wedge \cdots \wedge (Av_{i_k}) = \lambda_{i_1} \cdots \lambda_{i_k} \, v_{i_1} \wedge \cdots \wedge v_{i_k} $$ you see that this basis also consists of eigenvectors, with corresponding eigenvalues $\lambda_{i_1} \cdots \lambda_{i_k}$. So the trace of $\Lambda^k A$ is the sum of its eigenvalues, giving $$ \mathrm{tr}(\Lambda^k A) = \sum_{i_1 < \cdots < i_k} \lambda_{i_1} \cdots \lambda_{i_k} $$ If you are familiar with symmetric polynomials, this is the "elementary symmetric polynomial" $e_k(x_1,\dots,x_n)$, evaluated at $x_i = \lambda_i$ for all $i$.
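To make this concrete, here is a small numerical sketch (my own illustration, not part of the original argument): in coordinates, the matrix of $\Lambda^k A$ in the basis $e_{i_1} \wedge \cdots \wedge e_{i_k}$ has entries $\det A[I, J]$ for $k$-subsets $I, J$, so $\mathrm{tr}(\Lambda^k A)$ is the sum of the $k \times k$ principal minors of $A$, and this should match $e_k$ evaluated at the eigenvalues. The function names `trace_ext_power` and `e_k` are ad hoc.

```python
from itertools import combinations
import numpy as np

def trace_ext_power(A, k):
    """Trace of Λ^k A = sum of the k x k principal minors of A."""
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(I, I)])
               for I in combinations(range(n), k))

def e_k(vals, k):
    """Elementary symmetric polynomial e_k evaluated at vals."""
    return sum(np.prod(c) for c in combinations(vals, k))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)  # may be complex; e_k of them is real for real A
for k in range(1, 6):
    assert np.isclose(trace_ext_power(A, k), e_k(lam, k).real)
```

Note the edge cases: $k = 1$ recovers $\mathrm{tr}(A) = e_1(\lambda)$, and $k = n$ recovers $\det(A) = e_n(\lambda)$.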
On the other hand, $\mathrm{tr}(A^k) = p_k(\lambda_1,\dots,\lambda_n)$, where these are the "power sum symmetric polynomials" $p_k(x_1,\dots,x_n) = x_1^k + x_2^k + \cdots + x_n^k$.
The determinant formula you quoted from Wikipedia then follows from the well-known Newton's identities, which express each of these two families of symmetric polynomials in terms of the other. See the Wikipedia page on Newton's identities for more.
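To close the loop, here is a sketch (my own check, with ad hoc names) that verifies the quoted determinant formula directly: build the $k \times k$ matrix with $\operatorname{tr} A^{i-j+1}$ on and below the diagonal, $k-i$ in position $(i, i+1)$ of row $i$, and zeros elsewhere, then compare $\det/k!$ with $e_k$ of the eigenvalues.

```python
from itertools import combinations
from math import factorial
import numpy as np

def newton_det(A, k):
    """Evaluate the k x k determinant formula for tr(Λ^k A)."""
    # p[m-1] = tr(A^m), the power sums appearing in the matrix entries
    p = [np.trace(np.linalg.matrix_power(A, m)) for m in range(1, k + 1)]
    M = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1):
            M[i, j] = p[i - j]        # tr(A^{i-j+1}) on and below diagonal
        if i + 1 < k:
            M[i, i + 1] = k - (i + 1)  # superdiagonal: k-1, k-2, ..., 1
    return np.linalg.det(M) / factorial(k)

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)
for k in range(1, 6):
    ek = sum(np.prod(c) for c in combinations(lam, k)).real
    assert np.isclose(newton_det(A, k), ek)
```

For $k = 2$ the determinant is $(\operatorname{tr} A)^2 - \operatorname{tr} A^2$, and dividing by $2!$ gives $e_2 = \tfrac{1}{2}(p_1^2 - p_2)$, exactly the first nontrivial Newton identity.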