Suppose I have a bounded lattice $L$, a subset $E\subset L$ of elements that are both meet-prime and join-prime, and every $x\in L$ can be represented as a finite expression over $\lor$, $\land$, and elements of $E$. $L$ is guaranteed not to be atomic, complemented, or finite.
If I have $\leq_E:E\times E\rightarrow\mathbb{2}$ to test whether $x\leq y$ for $x,y\in E$, is there an efficient algorithm to extend $\leq_E$ to all of $L$? I have an algorithm that works, but it has a combinatorial expansion that seems like it might have exponential (or worse) complexity in the worst case. (The computational cost of $\leq_E$ varies with $x$ and $y$, but I'm ignoring that for now.)
The current algorithm recursively expands the expressions on both sides to extend $\leq_E$ to $L$. It's straightforward except when dealing with $\land$ on the left and $\lor$ on the right. Only the first matching rule below is used. (I've transposed this from a Haskell function that uses pattern matching.)
| Expression Type | Expansion |
|---|---|
| $\mathbf{x\leq_L y}$ where $x,y\in E$ | $x\leq_E y$ |
| $\mathbf{\bigwedge X\leq_L \bigvee Y}$ | $\bigvee_{x\in X\cap E}\bigvee_{y\in Y\cap E}\left[x\leq_E y\right]~\lor \\ \bigvee_{x\in X\setminus E}\left[x\leq_L \bigvee Y\right]~\lor \\ \bigvee_{y\in Y\setminus E}\left[\bigwedge X\leq_L y\right]$ |
| $\mathbf{\bigvee X\leq_L y}$ (any $y$) | $\bigwedge_{x\in X}\left[x\leq_L y\right]$ |
| $\mathbf{x\leq_L \bigwedge Y}$ (any $x$) | $\bigwedge_{y\in Y}\left[x\leq_L y\right]$ |
| $\mathbf{\bigwedge X\leq_L y}$ ($y\in E$ by elimination) | $\bigvee_{x\in X}\left[x\leq_L y\right]$ |
| $\mathbf{x\leq_L \bigvee Y}$ ($x\in E$ by elimination) | $\bigvee_{y\in Y}\left[x\leq_L y\right]$ |
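Transcribed back into Haskell, the table above corresponds to something like the following sketch (the `Expr` representation and names are illustrative, not my actual code; clauses are tried top to bottom, mirroring the rule order):

```haskell
-- Illustrative representation: Prime wraps an element of E.
data Expr e = Prime e | Meet [Expr e] | Join [Expr e]
  deriving (Eq, Show)

-- leqL extends the oracle leqE to whole expressions.
leqL :: (e -> e -> Bool) -> Expr e -> Expr e -> Bool
leqL leqE = go
  where
    notPrime (Prime _) = False
    notPrime _         = True
    -- x ≤ y, both in E
    go (Prime x) (Prime y) = leqE x y
    -- ⋀X ≤ ⋁Y: the combinatorial case
    go (Meet xs) (Join ys) =
         or [ leqE x y | Prime x <- xs, Prime y <- ys ]
      || any (\x -> go x (Join ys)) (filter notPrime xs)
      || any (go (Meet xs)) (filter notPrime ys)
    -- ⋁X ≤ y (any y)
    go (Join xs) y = all (\x -> go x y) xs
    -- x ≤ ⋀Y (any x)
    go x (Meet ys) = all (go x) ys
    -- ⋀X ≤ y (y ∈ E by elimination)
    go (Meet xs) y = any (\x -> go x y) xs
    -- x ≤ ⋁Y (x ∈ E by elimination)
    go x (Join ys) = any (go x) ys
```

As a quick sanity check: in a chain every element is both meet-prime and join-prime, so `leqL (<=)` over `Int` atoms should behave like comparing a min to a max — e.g. `leqL (<=) (Meet [Prime 3, Prime 5]) (Join [Prime 2, Prime 4])` gives `True` since $\min(3,5)=3\leq 4=\max(2,4)$.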
I also preprocess the expressions to cut down on complexity:
- Each $\bigvee$ contains only distinct $\bigwedge$ and distinct elements of $E$.
- Each $\bigwedge$ contains only distinct $\bigvee$ and distinct elements of $E$.
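Concretely, the preprocessing amounts to a normalization pass like this (an illustrative sketch using the same kind of representation, repeated here so the snippet stands alone):

```haskell
import Data.List (nub)

-- Illustrative representation: Prime wraps an element of E.
data Expr e = Prime e | Meet [Expr e] | Join [Expr e]
  deriving (Eq, Show)

-- Flatten any ⋁ directly under a ⋁ (and ⋀ under a ⋀), then drop
-- duplicate children, so surviving nesting strictly alternates.
normalize :: Eq e => Expr e -> Expr e
normalize (Prime x) = Prime x
normalize (Join xs) = Join (nub (concatMap unJoin (map normalize xs)))
  where unJoin (Join ys) = ys
        unJoin e         = [e]
normalize (Meet xs) = Meet (nub (concatMap unMeet (map normalize xs)))
  where unMeet (Meet ys) = ys
        unMeet e         = [e]
```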
As far as I know, $\bigwedge X\leq_L \bigvee Y$ is the only case where combinatorial expansion is needed, but I'm not sure if the expansion above is overdoing it. It almost certainly duplicates work in some cases, but I'm not sure if I can transparently cache $\leq_L$, since I'm using a purely functional language.
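On the caching question, one purely functional pattern I could use is to thread a map of already-answered comparisons through the recursion with `State` (a sketch under the assumption that expressions get an `Ord` instance for structural keys; note it trades the short-circuiting of `||`/`&&` for reuse of repeated subproblems):

```haskell
import Control.Monad.State.Strict
import qualified Data.Map.Strict as M

-- Illustrative representation; Ord is needed for the cache keys.
data Expr e = Prime e | Meet [Expr e] | Join [Expr e]
  deriving (Eq, Ord, Show)

-- Memoized extension of leqE: pure at the boundary, cache inside.
leqMemo :: Ord e => (e -> e -> Bool) -> Expr e -> Expr e -> Bool
leqMemo leqE l0 r0 = evalState (go l0 r0) M.empty
  where
    go l r = do
      cached <- gets (M.lookup (l, r))
      case cached of
        Just b  -> pure b
        Nothing -> do
          b <- compute l r
          modify' (M.insert (l, r) b)
          pure b
    notPrime (Prime _) = False
    notPrime _         = True
    compute (Prime x) (Prime y) = pure (leqE x y)
    compute (Meet xs) (Join ys) = do
      ls <- mapM (\x -> go x (Join ys)) (filter notPrime xs)
      rs <- mapM (go (Meet xs)) (filter notPrime ys)
      pure (or [ leqE x y | Prime x <- xs, Prime y <- ys ] || or ls || or rs)
    compute (Join xs) y = and <$> mapM (\x -> go x y) xs
    compute x (Meet ys) = and <$> mapM (go x) ys
    compute (Meet xs) y = or  <$> mapM (\x -> go x y) xs
    compute x (Join ys) = or  <$> mapM (go x) ys
```

The cache only detects repeats up to structural equality, so it helps exactly where the expansion revisits the same subexpression pair; whether that pays for the `Ord` comparisons on keys would need measuring.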
Overall, this algorithm has been fine in practice, since the expressions are hand-written (they're ephemeral types in a programming language) and are meant to be comprehensible to humans. A hand-written expression with enough nesting to create a combinatorial explosion would be highly unusual, but such expressions might come up in the automatic type inference the type system performs.
After examining the algorithm a bit more, I think it's at worst cubic rather than exponential:
Without the second expansion, the algorithm is quadratic, because at worst it compares every element in the left expression to every element in the right expression.
The second expansion doubles the work done each time it's invoked, provided the left and right have the same number of nested expressions.
With expressions of size $n$, you can have no more than $\lceil\log_2 n\rceil$ levels of nesting. This means that the comparison count will be multiplied by at most $2^{\lceil\log_2 n\rceil}\in[n,2n)$ by the second expansion.
(Actually, with expression preprocessing, nesting strictly alternates between $\bigvee$ and $\bigwedge$, so only about half the levels can invoke the second expansion — at most $\left\lceil\frac{1}{2}\log_2 n\right\rceil$ of them — making it $O\left(n^{2.5}\right)$.)
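Taking the quadratic base cost and the bound on expansion depth at face value, the estimate works out as:

$$T(n) \;\le\; O\!\left(n^2\right)\cdot 2^{\left\lceil\frac{1}{2}\log_2 n\right\rceil} \;\le\; O\!\left(n^2\right)\cdot 2\sqrt{n} \;=\; O\!\left(n^{2.5}\right).$$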
I did some deterministic empirical tests, and it looks like the growth is between $\mathbf{O\left(n^{2.5}\right)}$ and $\mathbf{O\left(n^3\right)}$. I also did some fine-grained sampling over $n\in[1,256]$.
So, it looks like the algorithm is already reasonable for checking expressions that are hand-written, since even something with 5 alternating levels of nesting would be a huge pain to both write and interpret. (Keep in mind that preprocessing will only preserve nestings that alternate between $\bigvee$ and $\bigwedge$; everything else will be flattened.)