This problem came up while I was working on my BA thesis: I have the following matrix equation $A\vec{x}=\vec{b}$
$$\begin{pmatrix} \dots & \dots & \dots \\ \dots & \dots & \dots \\ \dots & \dots & \dots \\ \dots & \dots & \dots \\ \end{pmatrix} \begin{pmatrix} \vdots \\ x_m \\ 0 \\ \vdots \\ \end{pmatrix} = \begin{pmatrix} \vdots \\ 0 \\ b_m \\ \vdots \\ \end{pmatrix} $$ where $A$ is just some matrix (the entries are Bessel functions). In principle the matrix and vectors are infinite-dimensional, but we would like to restrict the problem to finite dimensions and make a statement about whether or not solutions exist for any $x_m, b_m$. I do not need to find a solution; showing that there is/isn't one would be absolutely sufficient. Hence, we would like to compute the determinant (numerically). However, at the moment there are "free" variables $b_m$ in the vector on the right. Obviously, before computing the determinant I would have to somehow shift these variables into the $\vec{x}$ vector and transform the present equation into a homogeneous one.
Let's look at a $2\times2$ example to illustrate my points: $$\begin{pmatrix} A & B \\ C & D \\ \end{pmatrix} \begin{pmatrix} x_1 \\ 0 \\ \end{pmatrix} = \begin{pmatrix} 0 \\ b_1 \\ \end{pmatrix}$$
What we have come up with so far:
For starters, the equations represented by this matrix equation are $$ A x_1 = 0\\ C x_1 = b_1$$ If $C$ were non-zero, one could solve the second equation for $x_1 = C^{-1} b_1$ and plug it into the first one. This would yield: $$ A x_1 =0 \\ \frac{A}{C} b_1 = 0 $$ which could be written as a homogeneous matrix equation $$\begin{pmatrix} A & 0 \\ 0 & \frac{A}{C} \\ \end{pmatrix} \begin{pmatrix} x_1 \\ b_1 \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \end{pmatrix}$$
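As a sanity check, this $2\times2$ reformulation can be verified numerically. The sketch below uses plain NumPy with made-up numbers in place of $A$ and $C$ (they are not Bessel values): the homogeneous matrix is singular exactly when the original system is solvable for arbitrary $b_1$.

```python
import numpy as np

def to_homogeneous(A, C):
    """Rewrite A*x1 = 0, C*x1 = b1 (with C != 0) as the homogeneous
    system diag(A, A/C) @ (x1, b1)^T = 0 derived above."""
    return np.array([[A, 0.0], [0.0, A / C]])

# Case A = 0: the original system is solvable for every b1
# (take x1 = b1/C), and the homogeneous matrix is singular.
M = to_homogeneous(0.0, 2.0)
print(np.isclose(np.linalg.det(M), 0.0))   # True: nontrivial solutions exist

# Case A = 1: the first equation forces x1 = 0, hence b1 = 0 is forced;
# correspondingly, the homogeneous matrix is nonsingular.
M = to_homogeneous(1.0, 2.0)
print(np.isclose(np.linalg.det(M), 0.0))   # False: only the trivial solution
```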
I believe this method could be applied in arbitrary dimensions; the caveat is that inverting entries of my matrix is probably not a good idea, since Bessel functions have zeros, which would blow the whole thing up.
Another interesting observation is that the right half of the matrix is completely irrelevant, since its columns multiply the zero entries of $\vec{x}$. In the $2\times2$ example, the entries $B$ and $D$ do not contribute anything to the problem. In bigger cases, e.g. $4\times4$, one can always separate the matrix equation into a homogeneous one and an inhomogeneous one using the upper-left and lower-left blocks. This makes me believe there could be some trick to achieve what I want. (See the example below.) $$ \begin{pmatrix} A & B & C & D\\ E & F & G & H\\ M & N & O & P\\ Q & R & S & T\\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ 0 \\ 0 \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ b_1 \\ b_2 \\ \end{pmatrix} $$
is really just $$ \begin{pmatrix} A & B \\ E & F \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \end{pmatrix}$$ (homogeneous) and $$ \begin{pmatrix} M & N \\ Q & R \\ \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \end{pmatrix}$$ (inhomogeneous) Maybe one could use the homogeneous equation to put some constraint on the $x_i$ and then use the inhomogeneous one.
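For concreteness, this block split can be sketched in NumPy (the matrix entries here are placeholder numbers, not Bessel values): the right half of the matrix drops out, and the left half separates into the homogeneous and inhomogeneous $2\times2$ systems.

```python
import numpy as np

# Hypothetical 4x4 matrix standing in for the Bessel-function entries.
A = np.arange(1.0, 17.0).reshape(4, 4)

k = 2                        # number of nonzero x-components (x1, x2)
top    = A[:k, :k]           # homogeneous block     (A, B; E, F)
bottom = A[k:, :k]           # inhomogeneous block   (M, N; Q, R)

# The right half of A multiplies the zero entries of x, so it drops out:
x = np.array([3.0, -1.0])
full_x = np.concatenate([x, np.zeros(2)])
lhs = A @ full_x
assert np.allclose(lhs[:k], top @ x)      # rows 1-2: the homogeneous part
assert np.allclose(lhs[k:], bottom @ x)   # rows 3-4: this product is (b1, b2)
```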
This is where I am stuck; please share any ideas. Suggestions for a different way of deciding whether the equations are solvable in general are also welcome.
First of all, numerically computing the determinant is a terrible way to decide this, due to numerical instability.
You don't say anything about the need for speed, so I will assume you just want to get a correct ruling on feasibility and are not concerned with tricks to make the determination as fast as possible. You have a linear system of equations (system of linear equalities). Therefore, you can use a Linear Programming or other Optimization solver which accepts linear (equality) constraints to determine whether the problem is feasible.
Declare which variables are "optimization" (a.k.a. decision) variables, to include any b's. You can enter any objective function you want, including a trivial objective function such as zero. Many optimization front ends don't require you to enter an objective at all. The optimizer will either return that the problem is feasible or infeasible.
Here is an example of how such a feasibility check can be set up with a modeling front end such as CVX. You can adapt and fix up the form of the equation(s) and variable declarations as you see fit, but I have assumed the last $n$ components of the left-hand-side vector must be zero and the first $m$ components of the right-hand-side vector must be zero.
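Since CVX itself is MATLAB-based, here is a sketch of the same feasibility check in Python using `scipy.optimize.linprog` instead (a swap, not CVX code). The numbers are hypothetical stand-ins for the Bessel entries; for a concrete yes/no answer the given $b$ is treated as data here, but making $b$ a decision variable as described above works the same way, with extra columns in `A_eq`.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 4x4 matrix standing in for the Bessel-function entries,
# with right-hand side (0, 0, b1, b2). The 9s are the irrelevant right half.
A = np.array([[1., 2., 9., 9.],
              [2., 4., 9., 9.],
              [1., 0., 9., 9.],
              [0., 1., 9., 9.]])
m = 2                                    # number of nonzero x-components
b = np.array([3., -1.5])

A_eq = A[:, :m]                          # right half multiplies zeros, so it drops out
b_eq = np.concatenate([np.zeros(m), b])  # first m components of the RHS are zero

# Zero objective: we only ask whether the equality constraints are feasible.
res = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * m)
print("feasible" if res.status == 0 else "infeasible")   # feasible: x = (3, -1.5)

# Perturb one entry so the homogeneous rows contradict the forced x:
A_bad = A_eq.copy()
A_bad[0, 0] = 5.                         # row 1 now demands 5*x1 + 2*x2 = 0
res2 = linprog(c=np.zeros(m), A_eq=A_bad, b_eq=b_eq,
               bounds=[(None, None)] * m)
print("feasible" if res2.status == 0 else "infeasible")  # infeasible
```

Note that the bounds must be set explicitly: `linprog` defaults to $x \ge 0$, which would wrongly rule out sign-changing solutions.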