I have a question regarding the pseudo-determinant of a rank-deficient matrix times a constant. Let's say matrix $K$ has dimension $n \times n$, but $\text{rank}(K) < n$. Does the following rule for a non-singular square matrix $A$,
$$ \det(cA) = c^{\text{rank}(A)}\det(A), $$
also hold for the pseudo-determinant $\det(cK)_{+}$
$$ \det(cK)_{+} = c^{\text{rank}(K)}\det(K)_{+}? $$
Moreover, are there rules for the pseudo-determinant of a rank-deficient square matrix in general? For a non-singular square matrix this is a common rule; for a discussion see here. I'm interested in this because I need to evaluate the (log-)density of a degenerate multivariate normal distribution (prior) in a Bayesian regression setup. Any insights are much appreciated.
The pseudo-determinant of $K$ is the product of its non-zero eigenvalues. Letting $m = \text{rank}(K)$, if $\lambda_1, \ldots, \lambda_m$ are the non-zero eigenvalues of $K$, then $c\lambda_1, \ldots, c\lambda_m$ are the non-zero eigenvalues of $cK$, so indeed $\det(cK)_+ = c^{\text{rank}(K)} \det(K)_+$.
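A quick numerical sanity check of the scaling rule, using a hypothetical helper `pdet` that computes the pseudo-determinant as the product of eigenvalues above a small tolerance (the matrix $K = BB^\top$ and the scalar $c$ below are arbitrary choices for illustration):

```python
import numpy as np

def pdet(M, tol=1e-10):
    """Pseudo-determinant: product of the eigenvalues of M
    whose magnitude exceeds tol (empty product = 1)."""
    eig = np.linalg.eigvals(M)
    return np.prod(eig[np.abs(eig) > tol]).real

rng = np.random.default_rng(0)
n, m = 5, 3                      # n x n matrix of rank m < n
B = rng.standard_normal((n, m))
K = B @ B.T                      # symmetric PSD, rank 3 (almost surely)

c = 2.5
print(np.isclose(pdet(c * K), c**m * pdet(K)))  # det(cK)_+ = c^rank(K) det(K)_+
```

The tolerance is needed because the "zero" eigenvalues of a rank-deficient matrix come out as tiny non-zero floats in floating-point arithmetic.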
Some properties are preserved, such as $\det(AB)_+ = \det(BA)_+$, but others are lost, for example, $\det(AB)_+ \neq \det(A)_+\det(B)_+$ in general. See Proposition 2 in this paper by Knill for more properties of the pseudo-determinant.
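A small counterexample illustrating both claims at once, again with an illustrative `pdet` helper: with the rank-one matrices $A$ and $B$ below, $\det(AB)_+ = \det(BA)_+ = 1$ while $\det(A)_+\det(B)_+ = 2$.

```python
import numpy as np

def pdet(M, tol=1e-10):
    """Pseudo-determinant: product of eigenvalues with |lambda| > tol."""
    eig = np.linalg.eigvals(M)
    return np.prod(eig[np.abs(eig) > tol]).real

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])       # rank 1, det(A)_+ = 1
B = np.array([[1.0, 1.0],
              [1.0, 1.0]])       # rank 1, det(B)_+ = 2

print(pdet(A @ B), pdet(B @ A))  # equal, as det(AB)_+ = det(BA)_+
print(pdet(A) * pdet(B))         # 2.0, so det(AB)_+ != det(A)_+ det(B)_+
```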