A bipartite graph can be represented by an adjacency matrix or, more specifically, by a biadjacency matrix.
Formally, let $G = (U, V, E)$ be a bipartite graph with parts $U = \{u_1, \ldots, u_r\}$ and $V = \{v_1, \ldots, v_s\}$. The biadjacency matrix is the $r \times s$ $0$–$1$ matrix $B$ in which $b_{i,j} = 1 \iff (u_i, v_j) \in E$.
With such a biadjacency matrix for a bipartite graph, we can apply a variety of (algebraic) matrix-analysis techniques to it. For instance, we can perform matrix decompositions/factorizations.
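To make this concrete, here is a toy numpy sketch (the small bipartite graph and its biadjacency matrix below are made up purely for illustration):

```python
import numpy as np

# Biadjacency matrix of a small made-up bipartite graph with parts
# U = {u1, u2, u3} and V = {v1, v2}; b[i, j] = 1 iff (u_i, v_j) is an edge.
B = np.array([[1, 0],
              [1, 1],
              [0, 1]])

# One example of algebraic analysis: a singular value decomposition of B.
U_, s, Vt = np.linalg.svd(B)
print(s)  # singular values of the biadjacency matrix
```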
Now I am encountering tripartite graphs in applications (not necessarily complete tripartite graphs). I would like to know:
- Are there any ways of representing tripartite graphs algebraically, say, by matrices or something similar? (I have no idea what such representations would be.)
- What can we do on these representations?
- [ADDED 2016-05-04:] Are there any connections between tripartite graphs (or generally, multipartite graphs) and tensors, which can be represented as organized multidimensional arrays?
There are two kinds of matrices most often used to represent simple undirected graphs: adjacency matrices and incidence matrices. [Their variations for graphs that allow loops/self-edges, multi-edges, and/or directed edges are common, but we mostly avoid discussing them here.]
The adjacency matrix for a bipartite graph is mentioned already in the Question. If the parts of the bipartite graph are sets $V_1,V_2$ of nodes, with $|V_1| = n_1$ and $|V_2| = n_2$, then ordering the nodes of $V_1$ before those of $V_2$ puts the adjacency matrix $A$ into block matrix form:
$$ A = \begin{pmatrix} 0 & B \\ B^T & 0 \end{pmatrix} $$
where $B$ is a biadjacency matrix of size $n_1\times n_2$. In other words, $B$ only needs to encode the edges between $V_1$ and $V_2$, and the rest of the adjacency matrix $A$ can be reconstructed (by symmetry).
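As a minimal sketch of this reconstruction (the biadjacency matrix below is a made-up example), the full adjacency matrix can be assembled from $B$ alone:

```python
import numpy as np

# A made-up n1 x n2 biadjacency matrix B.
B = np.array([[1, 0],
              [1, 1],
              [0, 1]])
n1, n2 = B.shape

# Reconstruct the full (n1+n2) x (n1+n2) adjacency matrix from B alone.
A = np.block([[np.zeros((n1, n1), dtype=int), B],
              [B.T, np.zeros((n2, n2), dtype=int)]])
print(A)
```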
The incidence matrix of a graph has (according to one common convention) a row for each node and a column for each edge, with ones in the locations where an edge is incident to a node (i.e., the node is an endpoint of the edge) and zeros elsewhere. Thus for our bipartite graph, with the nodes of $V_1$ ordered before the nodes of $V_2$, the incidence matrix is of size $(n_1+n_2)\times m$, where $m = |E|$ is the number of edges, and each column has exactly two nonzero entries (ones): one in the upper $n_1$ rows and the other in the lower $n_2$ rows.
In other words, such an incidence matrix $C$ would consist of two blocks:
$$ C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix} $$
where each column of the $n_1 \times m$ matrix $C_1$ and each column of the $n_2 \times m$ matrix $C_2$ contains exactly one nonzero entry $1$.
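A small sketch of this construction, again with a made-up edge list (node counts and edges chosen only for illustration):

```python
import numpy as np

# Edges of a made-up bipartite graph, given as (index in V1, index in V2).
edges = [(0, 0), (1, 0), (1, 1), (2, 1)]
n1, n2 = 3, 2
m = len(edges)

# Incidence matrix: one row per node (V1 rows first, then V2), one column per edge.
C = np.zeros((n1 + n2, m), dtype=int)
for col, (i, j) in enumerate(edges):
    C[i, col] = 1          # endpoint in the upper n1 rows (block C1)
    C[n1 + j, col] = 1     # endpoint in the lower n2 rows (block C2)

assert (C.sum(axis=0) == 2).all()   # every column has exactly two ones
```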
Similar observations can be made about a graph having three parts, or more generally about a $k$-partite graph. After all, a $k$-partite graph is the edge-disjoint union of $\binom{k}{2}$ bipartite graphs (one for each pair of parts, some possibly empty).
In particular, by ordering the nodes of $V_1$ before those of $V_2$ before those of $V_3$, the adjacency matrix of a tripartite graph with parts $V_1,V_2,V_3$ of respective sizes $n_1,n_2,n_3$ can be assumed to take the block matrix form:
$$ A = \begin{pmatrix} 0 & B_{12} & B_{13} \\ B_{12}^T & 0 & B_{23} \\ B_{13}^T & B_{23}^T & 0 \end{pmatrix} $$
Note that $B_{ij}$ is the $n_i \times n_j$ biadjacency matrix encoding the edges between part $i$ and part $j$.
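For instance, assembling such a block adjacency matrix from three made-up biadjacency blocks (sizes chosen only for illustration) might look like:

```python
import numpy as np

# Made-up biadjacency blocks between the three parts (n1=2, n2=3, n3=2).
B12 = np.array([[1, 0, 1],
                [0, 1, 0]])
B13 = np.array([[1, 0],
                [0, 0]])
B23 = np.array([[0, 1],
                [1, 0],
                [1, 1]])
n1, n2, n3 = 2, 3, 2

def Z(a, b):
    return np.zeros((a, b), dtype=int)   # zero block of the given size

A = np.block([[Z(n1, n1), B12,       B13],
              [B12.T,     Z(n2, n2), B23],
              [B13.T,     B23.T,     Z(n3, n3)]])

assert (A == A.T).all()   # adjacency matrix of an undirected graph is symmetric
```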
The incidence matrix $C$ for a tripartite graph has a similar block structure if we again order the nodes (rows) as above and further order the columns so that the first $m_1$ represent edges from part $1$ to part $2$, the next $m_2$ represent edges from part $1$ to part $3$, and the last $m_3$ represent edges from part $2$ to part $3$. Then:
$$ C = \begin{pmatrix} C_1 & C_3 & 0 \\ C_2 & 0 & C_5 \\ 0 & C_4 & C_6 \end{pmatrix} $$
where each of the $m_1$ columns of $C_1,C_2$, each of the $m_2$ columns of $C_3,C_4$, and each of the $m_3$ columns of $C_5,C_6$ contains a single $1$ entry (and zeros otherwise).
Note that $C$ is of size $(n_1+n_2+n_3)\times(m_1+m_2+m_3)$, so not square in general despite the appearance of our block notation above.
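A sketch of building $C$ column-group by column-group from made-up edge lists (the particular sizes and edges are only illustrative):

```python
import numpy as np

# Edge lists of a made-up tripartite graph, grouped by the pair of parts they join.
n1, n2, n3 = 2, 3, 2
E12 = [(0, 0), (1, 2)]        # m1 edges between part 1 and part 2
E13 = [(0, 1)]                # m2 edges between part 1 and part 3
E23 = [(1, 0), (2, 1)]        # m3 edges between part 2 and part 3

m = len(E12) + len(E13) + len(E23)
C = np.zeros((n1 + n2 + n3, m), dtype=int)

col = 0
for i, j in E12:                           # first m1 columns
    C[i, col] = C[n1 + j, col] = 1
    col += 1
for i, k in E13:                           # next m2 columns
    C[i, col] = C[n1 + n2 + k, col] = 1
    col += 1
for j, k in E23:                           # last m3 columns
    C[n1 + j, col] = C[n1 + n2 + k, col] = 1
    col += 1

assert (C.sum(axis=0) == 2).all()          # each column records exactly one edge
```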
Are these matrices useful in proving graph properties algebraically? Indeed they are, though I've not seen a typical matrix "factorization" used in this connection. For one thing, the block matrix forms above certify that a graph is $k$-partite, and deciding whether an (uncolored) graph is $k$-partite is an NP-complete problem for $k\gt 2$. (The bipartite case $k=2$ is easy: it is equivalent to the absence of odd-length cycles.)
The adjacency matrix of a $k$-partite graph is symmetric, as for any undirected graph, and so it allows some study of graph invariants via the spectrum of the adjacency matrix. Graph isomorphisms correspond to similarity transformations on the corresponding adjacency matrices via permutation matrices.
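A quick numerical illustration of both points, on a small graph (a $4$-cycle, chosen only for illustration):

```python
import numpy as np

# Adjacency matrix of a 4-cycle (which happens to be bipartite).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

eigenvalues = np.linalg.eigvalsh(A)   # real spectrum, since A is symmetric
print(np.round(eigenvalues, 3))       # [-2.  0.  0.  2.] for the 4-cycle

# Relabeling the nodes corresponds to a permutation matrix P; the relabeled
# graph has adjacency matrix P A P^T, a similarity transformation, so the
# spectrum (a graph invariant) is unchanged.
P = np.eye(4, dtype=int)[[2, 0, 3, 1]]
assert np.allclose(np.linalg.eigvalsh(P @ A @ P.T), eigenvalues)
```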
A related Math.SE Question concerned the spectrum of a $k$-partite graph.
Incidence matrices are often employed for algebraic proof techniques in connection with specialized graphs (finite geometries and block designs). This sort of material can be found in almost any book on combinatorial design.
A "directed" version of an incidence matrix is closely related to the graph Laplacian and thus to many additional topics, e.g. community detection/clustering.
For an intriguing buzzword, "folksonomy," related to community detection in tripartite graphs, see the paper Identifying Overlapping Communities in Folksonomies or Tripartite Hypergraphs by Ghosh, Kane, and Ganguly (2012?).