Context:
In circuit theory and graph theory, given a graph $G,$ the Laplacian (Kirchhoff) matrix $L$ is defined as follows:
$$ L = D-A \tag{1} $$ where $D$ is the degree matrix and $A$ the adjacency matrix. Alternatively, in terms of the incidence matrix $M$ (with columns indexed by edges, as defined in [*] below), it can also be expressed as:
$$ L = M M^{\mathsf T} \tag{2} $$
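As a quick sanity check, the two definitions can be compared numerically on a small graph (a path on 4 vertices, chosen here purely for illustration), using the footnote's convention that each column of $M$ has a $+1$ at the tail and a $-1$ at the head:

```python
import numpy as np

# Path graph on 4 vertices: 0-1-2-3 (a toy example, not from the question).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Incidence matrix M: one column per edge, +1 at the tail row, -1 at the head row.
M = np.zeros((n, len(edges)))
for j, (tail, head) in enumerate(edges):
    M[tail, j] = 1.0
    M[head, j] = -1.0

# Degree and adjacency matrices.
A = np.zeros((n, n))
for tail, head in edges:
    A[tail, head] = A[head, tail] = 1.0
D = np.diag(A.sum(axis=1))

L = D - A
# With columns indexing edges, the product is M @ M.T; the transposed
# convention (rows indexing edges) gives M.T @ M instead.
assert np.allclose(L, M @ M.T)
```

Note that the sign convention (which endpoint is tail and which is head) does not matter for $L$: flipping a column's signs leaves $M M^{\mathsf T}$ unchanged.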
I am interested in the case where the graph $G$ is connected. If we partition the vertices of $G$ into subsets whose induced subgraphs are themselves connected, the Laplacian can be written as a block matrix. Assume we partition the graph into two parts, the boundary nodes $B$ and the connection nodes $C,$ so that $|V(G)|=|V(G_B)|+|V(G_C)|.$ The corresponding Laplacian matrix then takes the block form:
$$ L = \begin{bmatrix} L_{BB} & L_{BC} \\ L_{CB} & L_{CC} \end{bmatrix} \tag{3} $$
One is often interested in reducing the size of the network/graph using a scheme such as Kron reduction, which relies on taking the Schur complement of $L$ (Laplacian matrices are M-matrices) with respect to one of its blocks. That is only possible if the chosen block is invertible, and in general this is assumed to follow from the connectedness of the graph.
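The reduction can be sketched numerically. The example below (a triangle on vertices $0,1,2$ plus a pendant vertex $3$; the graph and the choice of $B$ and $C$ are assumptions for illustration) eliminates the block $L_{CC}$ via its Schur complement $L_{BB} - L_{BC}L_{CC}^{-1}L_{CB}$:

```python
import numpy as np

# Laplacian of a small connected graph: triangle 0-1-2 plus a pendant
# vertex 3 attached to vertex 2 (a toy example, not from the question).
L = np.array([
    [ 2., -1., -1.,  0.],
    [-1.,  2., -1.,  0.],
    [-1., -1.,  3., -1.],
    [ 0.,  0., -1.,  1.],
])

B = [0, 1]  # boundary nodes to keep
C = [2, 3]  # connection nodes to eliminate

L_BB = L[np.ix_(B, B)]
L_BC = L[np.ix_(B, C)]
L_CB = L[np.ix_(C, B)]
L_CC = L[np.ix_(C, C)]

# For a connected graph the proper principal submatrix L_CC is nonsingular,
# so the Schur complement (Kron reduction) is well defined.
L_red = L_BB - L_BC @ np.linalg.solve(L_CC, L_CB)

# The reduced matrix is again a Laplacian: symmetric with zero row sums.
assert np.allclose(L_red, L_red.T)
assert np.allclose(L_red.sum(axis=1), 0)
```

Using `np.linalg.solve` rather than forming `inv(L_CC)` explicitly is the standard numerically safer choice.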
Questions:
- Does the fact that $G$ and its subcomponents ($B$ and $C$) are connected ensure that the diagonal submatrices of the block representation of the Laplacian $L$ are invertible (does this follow from a known theorem?), so that the Schur complements can be safely taken?
- In relation to connected graphs and their corresponding incidence matrices $M$ [*], is there a simple way to see that the kernel of $M^{\mathsf T}$ is given by $\ker M^{\mathsf T}=\operatorname{span}\mathbb{1}?$
[*]: The incidence matrix $M$ is usually defined with each column corresponding to an edge of $G,$ where then each column contains exactly one $1$ at the row corresponding to the tail-node of the edge and a $-1$ at the row corresponding to the head-node of the edge.
For your second question, if we think of $M^{\mathsf T}$ as a linear transformation, it takes an element of $\mathbb R^V$ (an assignment of a real scalar to every vertex) to an element of $\mathbb R^E$ (an assignment of a real scalar to every edge) by giving every edge the difference of the values on its endpoints.
So $\ker M^{\mathsf T}$ consists of all elements of $\mathbb R^V$ such that for every edge, the difference of values on its endpoints is $0$: the values on the endpoints are equal.
As a result, if we want an element of $\ker M^{\mathsf T}$, then once we pick a value on one vertex, that value propagates to every other vertex in the same connected component. (All neighbors of the starting vertex must have the same value, then all of their neighbors must have that value, and so on.) When the graph is connected, this means that each element of $\ker M^{\mathsf T}$ must be a multiple of $\mathbf 1_V$.
(In general, the same argument tells us that $\ker M^{\mathsf T}$ is generated by elements of $\mathbb R^V$ which are $1$ on a connected component of the graph, and $0$ elsewhere.)
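This is easy to confirm numerically; for a connected graph on $n$ vertices, $M^{\mathsf T}\mathbf 1 = 0$ and $\operatorname{rank} M^{\mathsf T} = n-1$, so the kernel is exactly one-dimensional. (The graph below, a square with one diagonal, is an arbitrary illustrative choice.)

```python
import numpy as np

# Connected graph: a 4-cycle with one diagonal (a toy example).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4
M = np.zeros((n, len(edges)))
for j, (tail, head) in enumerate(edges):
    M[tail, j] = 1.0
    M[head, j] = -1.0

# M.T maps R^V -> R^E by taking differences across edges.
# The all-ones vector gives difference 0 on every edge...
assert np.allclose(M.T @ np.ones(n), 0)
# ...and rank n-1 means it spans the whole kernel.
assert np.linalg.matrix_rank(M.T) == n - 1
```

For a disconnected graph the same check gives rank $n - c$, where $c$ is the number of connected components, matching the general statement above.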