I'd like to prove the following statement and check whether it's true or not.
Let $M$ be a diagonalizable $n \times n$ matrix. If the rank of $M$ equals $r > 0$, then the pseudo-determinant $\operatorname{pdet} M$ equals the sum of all principal minors of order $r$.
The pseudo-determinant of a square matrix is the product of its nonzero eigenvalues. Eigenvalues are scaling factors, as far as I know, and the principal minors of order $r$ are also small-sized scaling factors (determinants of $r \times r$ submatrices) of the given $M$.
But does the pseudo-determinant really equal the sum of all principal minors? It looks to me as though the product of those should equal the pseudo-determinant.
Which one is correct?
In order not to leave this question unanswered, let me prove the claim along the lines I've suggested in the comments.
Let us agree on a few notations:
Let $n$ and $m$ be two nonnegative integers. Let $A=\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq m}$ be an $n\times m$-matrix (over some ring). Let $U=\left\{ u_{1}<u_{2}<\cdots<u_{p}\right\} $ be a subset of $\left\{ 1,2,\ldots,n\right\} $, and let $V=\left\{ v_{1}<v_{2}<\cdots<v_{q}\right\} $ be a subset of $\left\{ 1,2,\ldots,m\right\} $. Then, $A_{U,V}$ shall denote the submatrix $\left( a_{u_{i},v_{j}}\right) _{1\leq i\leq p,\ 1\leq j\leq q}$ of $A$. (This is the matrix obtained from $A$ by crossing out all rows except for the rows numbered $u_{1},u_{2},\ldots,u_{p}$ and crossing out all columns except for the columns numbered $v_{1},v_{2},\ldots,v_{q}$.) For example, \begin{equation} \begin{pmatrix} a_{1} & a_{2} & a_{3} & a_{4}\\ b_{1} & b_{2} & b_{3} & b_{4}\\ c_{1} & c_{2} & c_{3} & c_{4}\\ d_{1} & d_{2} & d_{3} & d_{4} \end{pmatrix} _{\left\{ 1,3,4\right\} ,\left\{ 2,4\right\} } = \begin{pmatrix} a_{2} & a_{4}\\ c_{2} & c_{4}\\ d_{2} & d_{4} \end{pmatrix} . \end{equation}
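As an aside (not part of the proof), the submatrix notation $A_{U,V}$ corresponds directly to NumPy's `np.ix_` indexing; here is the example above, with the 1-based index sets shifted to 0-based:

```python
import numpy as np

# The 4x4 example matrix from above, with entry (i, j) encoded as 10*i + j.
A = np.array([[11, 12, 13, 14],
              [21, 22, 23, 24],
              [31, 32, 33, 34],
              [41, 42, 43, 44]])

U = [0, 2, 3]  # the rows {1, 3, 4}, in 0-based indexing
V = [1, 3]     # the columns {2, 4}, in 0-based indexing

A_UV = A[np.ix_(U, V)]  # the submatrix A_{U,V}
# A_UV is [[12, 14], [32, 34], [42, 44]]
```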
If $n$ is a nonnegative integer, then $I_n$ will denote the $n\times n$ identity matrix (over whatever ring we are working in).
Fix a nonnegative integer $n$ and a field $\mathbb{F}$.
We shall use the following known fact:

Theorem 1. Let $\mathbb{K}$ be a commutative ring. Let $n$ be a nonnegative integer, let $A\in\mathbb{K}^{n\times n}$, and let $x\in\mathbb{K}$. Then, \begin{equation} \det\left( A+xI_n \right) =\sum_{k=0}^{n}\left( \sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) \right) x^{k}. \label{darij.eq.t1.2} \end{equation}

Theorem 1 appears, e.g., as Corollary 6.164 in my Notes on the combinatorial fundamentals of algebra, in the version of 10th January 2019 (where I use the more cumbersome notation $\operatorname*{sub}\nolimits_{w\left( P\right) }^{w\left( P\right) }A$ instead of $A_{P,P}$). $\blacksquare$
Corollary 2. Let $A\in\mathbb{F}^{n\times n}$ be an $n\times n$-matrix, and let $r\in\left\{ 0,1,\ldots,n\right\} $. Then, the sum of all principal $r\times r$-minors of $A$ equals the coefficient of $t^{n-r}$ in the polynomial $\det\left( tI_n +A\right) \in\mathbb{F}\left[ t\right] $.
Proof of Corollary 2. We have $r\in\left\{ 0,1,\ldots,n\right\} $, thus $n-r\in\left\{ 0,1,\ldots,n\right\} $. Also, from $tI_n +A=A+tI_n $, we obtain \begin{equation} \det\left( tI_n +A\right) =\det\left( A+tI_n \right) =\sum_{k=0} ^{n}\left( \sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) \right) t^{k} \end{equation} (by \eqref{darij.eq.t1.2}, applied to $\mathbb{K}=\mathbb{F}\left[ t\right] $ and $x=t$). Hence, for each $k\in\left\{ 0,1,\ldots,n\right\} $, we have \begin{align*} & \left( \text{the coefficient of }t^{k}\text{ in the polynomial } \det\left( tI_n +A\right) \right) \\ & =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-k}}\det\left( A_{P,P}\right) . \end{align*} We can apply this to $k=n-r$ (since $n-r\in\left\{ 0,1,\ldots,n\right\} $) and thus obtain \begin{align*} & \left( \text{the coefficient of }t^{n-r}\text{ in the polynomial } \det\left( tI_n +A\right) \right) \\ & =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =n-\left( n-r\right) }}\det\left( A_{P,P}\right) =\sum_{\substack{P\subseteq\left\{ 1,2,\ldots,n\right\} ;\\\left\vert P\right\vert =r}}\det\left( A_{P,P}\right) \qquad\left( \text{since }n-\left( n-r\right) =r\right) \\ & =\left( \text{the sum of all principal }r\times r\text{-minors of }A\right) \end{align*} (by the definition of principal minors). This proves Corollary 2. $\blacksquare$
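For readers who like to double-check such identities numerically: here is a small NumPy sketch of Corollary 2 (the sizes $n=5$, $r=2$ and the random seed are arbitrary choices for illustration). Note that `np.poly(-A)` returns the coefficients of the characteristic polynomial of $-A$, which is exactly $\det\left( tI_n +A\right) $, listed from the $t^{n}$ coefficient down to the constant term.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 2
A = rng.standard_normal((n, n))

# Coefficients of det(t*I + A), i.e., of the characteristic polynomial of -A,
# listed from the t^n coefficient down to the constant term.
coeffs = np.poly(-A)

# Sum of all principal r x r minors of A.
minor_sum = sum(np.linalg.det(A[np.ix_(P, P)])
                for P in itertools.combinations(range(n), r))

# The coefficient of t^(n-r) sits at index n - (n - r) = r.
assert np.isclose(coeffs[r], minor_sum)
```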
Lemma 3. Let $A\in\mathbb{F}^{n\times n}$ be an $n\times n$-matrix whose $n$ eigenvalues all belong to $\mathbb{F}$; let $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ be these eigenvalues (listed with multiplicities). Then, \begin{equation} \det\left( tI_n +A\right) =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) \end{equation} in the polynomial ring $\mathbb{F}\left[ t\right] $.

Proof of Lemma 3. The eigenvalues of $A$ are defined as the roots of the characteristic polynomial $\det\left( tI_n -A\right) $ of $A$. (You may be used to defining the characteristic polynomial of $A$ as $\det\left( A-tI_n \right) $ instead, but this makes no difference: The polynomials $\det\left( tI_n -A\right) $ and $\det\left( A-tI_n \right) $ differ only by a factor of $\left( -1\right) ^{n}$ (in fact, we have $\det\left( A-tI_n \right) =\left( -1\right) ^{n}\det\left( tI_n -A\right) $), and thus have the same roots.)
Also, the characteristic polynomial $\det\left( tI_n -A\right) $ of $A$ is a monic polynomial of degree $n$. And we know that its roots are the eigenvalues of $A$, which are exactly $\lambda_{1},\lambda_{2},\ldots ,\lambda_{n}$ (with multiplicities). Thus, $\det\left( tI_n -A\right) $ is a monic polynomial of degree $n$ and has roots $\lambda_{1},\lambda_{2} ,\ldots,\lambda_{n}$. Thus, \begin{equation} \det\left( tI_n -A\right) =\left( t-\lambda_{1}\right) \left( t-\lambda_{2}\right) \cdots\left( t-\lambda_{n}\right) \end{equation} (because the only monic polynomial of degree $n$ that has roots $\lambda _{1},\lambda_{2},\ldots,\lambda_{n}$ is $\left( t-\lambda_{1}\right) \left( t-\lambda_{2}\right) \cdots\left( t-\lambda_{n}\right) $). Substituting $-t$ for $t$ in this equality, we obtain \begin{align*} \det\left( \left( -t\right) I_n -A\right) & =\left( -t-\lambda _{1}\right) \left( -t-\lambda_{2}\right) \cdots\left( -t-\lambda _{n}\right) \\ & =\prod_{i=1}^{n}\underbrace{\left( -t-\lambda_{i}\right) }_{=-\left( t+\lambda_{i}\right) }=\prod_{i=1}^{n}\left( -\left( t+\lambda_{i}\right) \right) \\ & =\left( -1\right) ^{n}\underbrace{\prod_{i=1}^{n}\left( t+\lambda _{i}\right) }_{=\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) } \\ & = \left( -1\right) ^{n}\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) . \end{align*} Comparing this with \begin{equation} \det\left( \underbrace{\left( -t\right) I_n -A}_{=-\left( tI_n +A\right) }\right) =\det\left( -\left( tI_n +A\right) \right) =\left( -1\right) ^{n}\det\left( tI_n +A\right) , \end{equation} we obtain \begin{equation} \left( -1\right) ^{n}\det\left( tI_n +A\right) =\left( -1\right) ^{n}\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) . 
\end{equation} We can divide both sides of this equality by $\left( -1\right) ^{n}$, and thus obtain $\det\left( tI_n +A\right) =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) $. This proves Lemma 3. $\blacksquare$
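Lemma 3 can also be sanity-checked numerically (a sketch, not part of the proof; the symmetric matrix below is an arbitrary choice made so that all eigenvalues are real):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
A = A + A.T  # symmetric, hence all eigenvalues are real

lam = np.linalg.eigvalsh(A)

# Left side:  coefficients of det(t*I + A), the characteristic polynomial of -A.
lhs = np.poly(-A)

# Right side: coefficients of (t + lam_1)(t + lam_2)...(t + lam_n),
# i.e., of the monic polynomial with roots -lam_1, ..., -lam_n.
rhs = np.poly(-lam)

assert np.allclose(lhs, rhs)
```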
Let us also notice a completely trivial fact:

Lemma 4. Let $p\in\mathbb{F}\left[ t\right] $ be a polynomial, and let $m$ and $k$ be two nonnegative integers. Then, the coefficient of $t^{m+k}$ in the polynomial $p\cdot t^{k}$ equals the coefficient of $t^{m}$ in the polynomial $p$.

Proof of Lemma 4. The coefficients of the polynomial $p\cdot t^{k}$ are precisely the coefficients of $p$, shifted to the right by $k$ slots. This yields Lemma 4. $\blacksquare$
Now we can prove your claim:

Theorem 5. Let $A\in\mathbb{F}^{n\times n}$ be a diagonalizable $n\times n$-matrix, and let $r=\operatorname*{rank}A$. Then, the sum of all principal $r\times r$-minors of $A$ equals the product of all nonzero eigenvalues of $A$.

Proof of Theorem 5. First of all, all $n$ eigenvalues of $A$ belong to $\mathbb{F}$ (since $A$ is diagonalizable). Moreover, $r=\operatorname*{rank} A\in\left\{ 0,1,\ldots,n\right\} $ (since $A$ is an $n\times n$-matrix).
The matrix $A$ is diagonalizable; in other words, it is similar to a diagonal matrix $D\in\mathbb{F}^{n\times n}$. Consider this $D$. Of course, the diagonal entries of $D$ are the eigenvalues of $A$ (with multiplicities).
Since $A$ is similar to $D$, we have $\operatorname*{rank} A=\operatorname*{rank}D$. But $D$ is diagonal; thus, its rank $\operatorname*{rank}D$ equals the number of nonzero diagonal entries of $D$. In other words, $\operatorname*{rank}D$ equals the number of nonzero eigenvalues of $A$ (since the diagonal entries of $D$ are the eigenvalues of $A$). In other words, $r$ equals the number of nonzero eigenvalues of $A$ (since $r=\operatorname*{rank}A=\operatorname*{rank}D$). In other words, the matrix $A$ has exactly $r$ nonzero eigenvalues.
Label the eigenvalues of $A$ as $\lambda_{1},\lambda_{2},\ldots,\lambda_{n}$ (with multiplicities) in such a way that the first $r$ eigenvalues $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are nonzero, while the remaining $n-r$ eigenvalues $\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{n}$ are zero. (This is clearly possible, since $A$ has exactly $r$ nonzero eigenvalues.) Thus, $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$.
Lemma 3 yields \begin{align*} \det\left( tI_n +A\right) & =\left( t+\lambda_{1}\right) \left( t+\lambda_{2}\right) \cdots\left( t+\lambda_{n}\right) =\prod_{i=1} ^{n}\left( t+\lambda_{i}\right) \\ & =\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot\left( \prod_{i=r+1}^{n}\left( t+\underbrace{\lambda_{i}} _{\substack{=0\\\text{(since }\lambda_{r+1},\lambda_{r+2},\ldots,\lambda _{n}\text{ are zero)}}}\right) \right) \\ & =\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot\underbrace{\left( \prod_{i=r+1}^{n}t\right) }_{=t^{n-r}}=\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}. \end{align*} Now, Corollary 2 yields \begin{align*} & \left( \text{the sum of all principal }r\times r\text{-minors of }A\right) \\ & =\left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\underbrace{\det\left( tI_n +A\right) }_{=\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}}\right) \\ & =\left( \text{the coefficient of }t^{n-r}\text{ in the polynomial }\left( \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \cdot t^{n-r}\right) \\ & =\left( \text{the coefficient of }t^{0}\text{ in the polynomial } \prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\ & \qquad\left( \text{by Lemma 4, applied to }m=0\text{ and }k=n-r\text{ and }p=\prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\ & =\left( \text{the constant term of the polynomial }\prod_{i=1}^{r}\left( t+\lambda_{i}\right) \right) \\ & =\prod_{i=1}^{r}\lambda_{i}=\lambda_{1}\lambda_{2}\cdots\lambda_{r}\\ & =\left( \text{the product of all nonzero eigenvalues of }A\right) \end{align*} (since $\lambda_{1},\lambda_{2},\ldots,\lambda_{r}$ are exactly the nonzero eigenvalues of $A$). This proves Theorem 5. $\blacksquare$
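The full claim can be checked numerically as well (a NumPy sketch under arbitrary illustrative choices: $n=5$, $r=3$, eigenvalues drawn from $[1,2]$, and a random similarity transform):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 3

# Build a diagonalizable rank-r matrix A = S D S^{-1} with exactly r nonzero eigenvalues.
d = np.concatenate([rng.uniform(1.0, 2.0, size=r), np.zeros(n - r)])
S = rng.standard_normal((n, n))  # almost surely invertible
A = S @ np.diag(d) @ np.linalg.inv(S)

# Pseudo-determinant: the product of the nonzero eigenvalues.
pdet = np.prod(d[:r])

# Sum of all principal r x r minors of A.
minor_sum = sum(np.linalg.det(A[np.ix_(P, P)])
                for P in itertools.combinations(range(n), r))

assert np.isclose(pdet, minor_sum)
```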
Note that in the above proof of Theorem 5, the diagonalizability of $A$ was used only to guarantee that $A$ has exactly $r$ nonzero eigenvalues and that all $n$ eigenvalues of $A$ belong to $\mathbb{F}$.