First, I want some verification for the validity of my approach for this det evaluation question:
If $A,B\in M_n(K)$, where $K$ is a number field (in the sense that $\Bbb Q$ is its smallest subfield), and $AB=BA$, prove the determinant identity $$ \begin{vmatrix} A & -B \\ B & A \end{vmatrix} =|A^2+B^2| $$
My approach:
Case 1: when $A=(a_{ij})$ is non-singular
Consider $$ \begin{bmatrix} I & O \\ -B & A \end{bmatrix} \begin{bmatrix} A & -B \\ B & A \end{bmatrix}= \begin{bmatrix} A & -B \\ O & A^2+B^2 \end{bmatrix}, $$ where the $(2,1)$ block vanishes precisely because $-BA+AB=O$. Taking determinants, $$ \det \begin{bmatrix} I & O \\ -B & A \end{bmatrix} \det \begin{bmatrix} A & -B \\ B & A \end{bmatrix}= \det \begin{bmatrix} A & -B \\ O & A^2+B^2 \end{bmatrix}. $$ Since both block-triangular matrices have determinants $|A|$ and $|A|\cdot|A^2+B^2|$ respectively (by Laplace expansion), $$ \begin{vmatrix} A & -B \\ B & A \end{vmatrix}=\frac1{|A|}\cdot|A|\cdot |A^2+B^2|=|A^2+B^2| $$
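Not part of the proof, but here is a quick numerical sanity check of Case 1 (a sketch in NumPy; taking $B$ to be a polynomial in $A$ is just one convenient way to force $AB=BA$, and the particular coefficients are arbitrary):

```python
import numpy as np

# Sanity check of det([[A, -B], [B, A]]) = det(A^2 + B^2) for commuting A, B.
# A polynomial in A automatically commutes with A.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = 2.0 * np.eye(n) + 3.0 * A - A @ A   # commutes with A by construction
assert np.allclose(A @ B, B @ A)

M = np.block([[A, -B], [B, A]])          # the 2n x 2n block matrix
lhs = np.linalg.det(M)
rhs = np.linalg.det(A @ A + B @ B)
rel_err = abs(lhs - rhs) / max(1.0, abs(rhs))
```

For a generic random $A$ here, $A$ is invertible, so this exercises exactly the non-singular case.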
Case 2: when $A$ is singular
Consider $\require{cancel} A(t)\xcancel{=(a_{ij}+t)}=A+tI$. Since $A$ is singular, $\det A(0)=\det A=0$; that is, $0$ is a root of $\det A(t)$, regarded as a polynomial in the parameter $t$ (which is non-zero, being monic of degree $n$). Since a non-zero polynomial has only finitely many roots in $\Bbb C$, there exists a $\delta>0$ such that $t\in (-\delta,0)\cup(0,\delta)$ implies $\det A(t)\ne 0$; otherwise there would be an infinite sequence of roots of $\det A(t)$ converging to $0$. So for any $t\in (-\delta,0)\cup(0,\delta)$ we may replace $A$ by $A(t)$ in Case 1 and obtain $$ \begin{vmatrix} A(t) & -B \\ B & A(t) \end{vmatrix}=|A^2(t)+B^2| $$ Both sides are polynomial functions of $t$ and therefore continuous at $t=0$, hence $$ \begin{vmatrix} A & -B \\ B & A \end{vmatrix}= \lim_{t\to 0}\begin{vmatrix} A(t) & -B \\ B & A(t) \end{vmatrix}= |A^2(0)+B^2|=|A^2+B^2| $$
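Again only a sanity check, not the proof: with a deliberately singular $A$ (the matrices below are my own illustrative choices), $\det(A+tI)$ is non-zero for small $t\ne 0$, and the identity still holds at $t=0$:

```python
import numpy as np

# A singular A and a B that commutes with it (illustrative choices).
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])          # det A = 0
B = np.array([[0.0, 2.0],
              [2.0, 0.0]])
assert np.allclose(A @ B, B @ A)    # the hypothesis AB = BA holds

# det(A + tI) = (1+t)^2 - 1 = t(t+2): non-zero for small t != 0.
for t in (-0.1, -0.01, 0.01, 0.1):
    assert abs(np.linalg.det(A + t * np.eye(2))) > 1e-12

# ... and the identity survives the limit t -> 0:
lhs = np.linalg.det(np.block([[A, -B], [B, A]]))
rhs = np.linalg.det(A @ A + B @ B)  # A^2 + B^2 = [[6, 2], [2, 6]], det = 32
```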
Second, if my previous deduction is sound, is it also proper to extend this convert-singular-to-non-singular trick elsewhere (at least when the underlying field $K$ is a number field in the usual sense; a finite field, say, would be a different story)? For instance, there is an exercise marked as "difficult" in my textbook:
For $n\times n$ matrices $A,B$, prove $$(AB)^*=B^*A^*$$ where $A^*$ denotes the adjoint matrix (transpose of the cofactor matrix) of $A$.
I know that it is marked as "difficult" because of the possibility that $A$ or $B$ is singular; if both are non-singular the proof is extremely easy, since $M^*=|M|\,M^{-1}$ for invertible $M$ gives $(AB)^*=|AB|\,(AB)^{-1}=|B|\,|A|\,B^{-1}A^{-1}=B^*A^*$. However, if I apply my previous trick here -- forming some $A(u),B(v)$, and noticing that every entry of $(AB)^*$ depends continuously on the entries of $A(u),B(v)$, which in turn depend continuously on $u$ and $v$ by construction -- won't it be easy to extend the "non-singular" case to the "singular" case and thus complete the whole proof?
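As a concrete check (a sketch only; the cofactor-based `adjugate` helper below is my own, not a library function), the identity indeed holds even when both factors are singular:

```python
import numpy as np
from itertools import product

def adjugate(M):
    """Adjugate via cofactors: adj(M)[j, i] = (-1)^(i+j) * det(minor_ij)."""
    n = M.shape[0]
    adj = np.empty((n, n))
    for i, j in product(range(n), repeat=2):
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        adj[j, i] = (-1) ** (i + j) * np.linalg.det(minor)  # note the transpose
    return adj

# Two singular matrices -- exactly the case the easy inverse-based proof misses.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # det A = 0
B = np.array([[0.0, 1.0],
              [0.0, 3.0]])   # det B = 0
lhs = adjugate(A @ B)
rhs = adjugate(B) @ adjugate(A)
```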
Further yet, if we have successfully proven a matrix/determinant identity for a bunch of non-singular matrices $A,B,C,\dots$, and this identity only involves quantities (like $\det$, but certainly not $\operatorname{rank}$) that depend continuously on the entries of these matrices, is it natural to extend the result to all cases, whether the involved matrices are singular or not?
Any rectification/inspiration/clarification on my thoughts is welcome. Best regards!
EDIT
@user1551 has pointed out that my approach breaks down if $\det A(t)\equiv 0$, and has suggested that this can be fixed by replacing my $A(t)$ with $A+tI$, whose determinant is in no case the zero polynomial.
EDIT
Or, are there any matrix identities (both sides involving only matrix addition and multiplication, but not inverses) for which invertibility makes all the difference?
Your attempt in case 2 is wrong, because you have mistakenly assumed that $\det(a_{ij}+t)$ is a nonzero polynomial. Consider, e.g. $A=\pmatrix{1&1\\ 1&1}$. Then $(a_{ij}+t)=\pmatrix{1+t&1+t\\ 1+t&1+t}$ is always singular, whatever $t$ is.
There is an easy fix to your proof, though: simply replace $(a_{ij}+t)$ by $A+tI$. When $t\ne0$ is sufficiently small, $A+tI$ is invertible.
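To make the difference between the two perturbations concrete, here is a quick numerical illustration (a sketch in NumPy):

```python
import numpy as np

# The counterexample above: A = [[1, 1], [1, 1]].
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
ts = (-0.5, -0.1, 0.1, 0.5)

# (a_ij + t) keeps both rows equal, hence stays singular for every t ...
singular_dets = [np.linalg.det(A + t * np.ones((2, 2))) for t in ts]
# ... while det(A + tI) = (1+t)^2 - 1 = t(t+2) is non-zero for these t.
shifted_dets = [np.linalg.det(A + t * np.eye(2)) for t in ts]
```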