I have a few determinant-related questions that I've been struggling with for at least a few days, and I couldn't find a similar question on here. So, here they are:
I wrote my own electromagnetics moment-method solver to calculate eigenvalues (cutoff frequencies) of a certain waveguide. It works fine, but now I want to take advantage of the physical symmetry of the structure I'm investigating. The matrix whose determinant is of interest is called the impedance matrix, usually denoted $Z,$ which is an $N\times N$ complex matrix.
What we're interested in is the smallest eigenvalue of $Z.$ The frequency at which the smallest eigenvalue is zero (or close to it) gives me the cutoff frequency (in other words, I'm trying to find where $\det(Z)=0$). By using one plane of symmetry, it can be shown that the impedance matrix can be written as:
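As a concrete picture of that root search, here is a toy sketch (only a sketch — the real $Z$ comes from the moment-method solver and depends on frequency in a complicated way). A hypothetical Hermitian test matrix stands in for $Z(f)$, and we sweep a parameter while tracking the smallest $|\text{eigenvalue}|$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
b = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
a = (b + b.conj().T) / 2  # Hermitian toy matrix, so its eigenvalues are real

def z_matrix(f):
    # hypothetical stand-in for the impedance matrix:
    # singular exactly when f hits an eigenvalue of `a`
    return a - f * np.eye(n)

# sweep and track the smallest |eigenvalue| of Z(f);
# dips toward zero flag the "cutoffs"
freqs = np.linspace(-3.0, 3.0, 601)
smallest = np.array([np.abs(np.linalg.eigvals(z_matrix(f))).min() for f in freqs])
f_est = freqs[smallest.argmin()]  # crude estimate of the nearest root of det(Z)
print(f_est, smallest.min())
```

In the toy case the dip locations are just the eigenvalues of `a`; in the real solver each dip would be a candidate cutoff frequency to refine.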
$$ Z = \left[\begin{array}{cc} Z_{1} & Z_{2}\\ Z_{2}^{R} & Z_{1}^{R} \end{array}\right] $$ where $Z_1^R$ and $Z_2^R$ are reversed square matrices, obtained from $Z_1$ and $Z_2,$ respectively, by reversing the order of both rows and columns. According to the reference I'm using ("Field Computation by Moment Methods" by Harrington), the determinant of $Z$ is given by $$ \det(Z) = \det(Z_1+Z_2)\times\det(Z_1-Z_2) $$ which I have confirmed numerically in my case. My first question: why is this true? I refreshed my linear algebra by going through a bunch of materials, online and offline, but have not yet come up with an explanation.
Side note 1: $Z_1 + Z_2$ is called the even-mode impedance matrix, $Z_1 - Z_2$ is called the odd-mode impedance matrix. Depending on whether the waveguide mode is even or odd, one or the other gives the eigenvalue that I'm interested in.
Side note 2: While this is probably not true in general, in the specific problem that I'm solving it so happens that $Z_1 = Z_1^R$ and $Z_2 = Z_2^R.$
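To make the identity concrete, here is a minimal numerical check (a sketch with random data, not the actual solver): random complex blocks forced to satisfy $Z_1 = Z_1^R$ and $Z_2 = Z_2^R$ as in Side note 2, assembled into the full block structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # half-block size; Z is 2n x 2n

def reverse(a):
    # A^R: reverse the order of both rows and columns
    return a[::-1, ::-1]

def random_block(n):
    # random complex matrix symmetrized so that A = A^R, as in Side note 2
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + reverse(a)) / 2

z1, z2 = random_block(n), random_block(n)
z = np.block([[z1, z2], [reverse(z2), reverse(z1)]])

lhs = np.linalg.det(z)
rhs = np.linalg.det(z1 + z2) * np.linalg.det(z1 - z2)
print(abs(lhs - rhs) / abs(lhs))  # relative error
```

With blocks satisfying the Side-note-2 condition the two sides agree to machine precision; whether the symmetrization step can be dropped is exactly what the first question is asking.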
Moving on: because my geometry has not one but two planes of symmetry, I can divide things up further to speed up my calculations. The impedance matrix can then be written as $$ Z = \left[\begin{array}{cc} \underset{Z_{1}}{\underbrace{\left[\begin{array}{cc} Z_{11} & Z_{12}\\ Z_{12}^{R} & Z_{11}^{R} \end{array}\right]}} & \underset{Z_{2}}{\underbrace{\left[\begin{array}{cc} Z_{21} & Z_{22}\\ Z_{22}^{R} & Z_{21}^{R} \end{array}\right]}}\\ Z_{2}^{R} & Z_{1}^{R} \end{array}\right] $$ which means I can potentially deal with $\frac{N}{4} \times \frac{N}{4}$ matrices. Note that in this case $Z_{ij} \neq Z_{ij}^R$ for $i,j=1,2.$
My second question: I'd like to find a combination of determinants of $Z_{11},\;Z_{12},\;Z_{21},\;Z_{22}$ (or of sums/differences thereof) whose minimization also minimizes the determinant of $Z.$ My hope is that if I figure out the answer to my first question, I can extend it to this case. For instance, when I calculate $$ \det(Z_{11}+Z_{21}+Z_{12}+Z_{22}) \times \det(Z_{11}+Z_{21}-Z_{12}-Z_{22}) $$ it comes out pretty close to $\det(Z_1+Z_2)$: the latter is $1.152-j0.2717,$ while the product above gives $1.131-j0.2724.$ Similarly for $\det(Z_1-Z_2).$ However, "pretty close" does not give me a whole lot of confidence.
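For what it's worth, this comparison can be scripted with random stand-ins for the quarter-size blocks (hypothetical data, not the actual moment-method matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3  # quarter-block size; the full Z would be 4n x 4n

def reverse(a):
    # A^R: reverse the order of both rows and columns
    return a[::-1, ::-1]

# random stand-ins for Z11, Z12, Z21, Z22 (note: Zij != Zij^R here)
z11, z12, z21, z22 = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
                      for _ in range(4)]

z1 = np.block([[z11, z12], [reverse(z12), reverse(z11)]])
z2 = np.block([[z21, z22], [reverse(z22), reverse(z21)]])

# the nested structure automatically gives Z1 = Z1^R and Z2 = Z2^R (cf. Side note 2)
assert np.allclose(z1, reverse(z1)) and np.allclose(z2, reverse(z2))

even = np.linalg.det(z1 + z2)
candidate = (np.linalg.det(z11 + z21 + z12 + z22)
             * np.linalg.det(z11 + z21 - z12 - z22))
print(even, candidate)
```

With fully random blocks the two numbers generally differ, so any near-agreement in the actual solver would have to come from additional structure in the physical $Z_{ij}$ blocks.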
Do you have any comments, suggestions, pointers to literature on this?
Thanks a lot,