I'm working on constructing deterministic finite automata (DFAs) with a specific learning complexity under the L* algorithm developed by Dana Angluin. My goal is to create a DFA of size $n$ that forces the L* algorithm to make $n-1$ equivalence queries before it learns the automaton correctly.
The L* algorithm operates by building an observation table, using membership queries to fill in its entries. In each round, the algorithm proposes a hypothesis automaton based on the current table. If the hypothesis is incorrect, the equivalence query returns a counterexample, which is then used to extend the table for the next round.
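To make the round structure concrete, here is a minimal, self-contained sketch of that loop. It is not Angluin's algorithm verbatim: counterexamples are handled in the Maler–Pnueli style (all suffixes of the counterexample are added to the suffix set $E$, which sidesteps the consistency check), and the equivalence oracle is a stand-in that brute-forces all strings up to a fixed length. The `DFA` class and all names are my own illustration, not from any particular library.

```python
from itertools import product


class DFA:
    """A complete DFA: delta maps (state, symbol) -> state."""

    def __init__(self, states, alphabet, delta, start, accepting):
        self.states, self.alphabet = states, alphabet
        self.delta, self.start, self.accepting = delta, start, accepting

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q in self.accepting


def lstar(target, alphabet, max_len=8):
    """L* sketch: returns (hypothesis DFA, number of equivalence queries).

    S holds access strings (prefixes), E holds distinguishing suffixes;
    row(s) is s's signature under the current suffix set E.
    """
    member = target.accepts  # the membership oracle

    def row(s, E):
        return tuple(member(s + e) for e in E)

    S, E = [""], [""]
    equivalence_queries = 0
    while True:
        # Close the table: every one-symbol extension of S must have a
        # row that already occurs among the rows of S.
        changed = True
        while changed:
            changed = False
            rows = {row(s, E) for s in S}
            for s in S:
                for a in alphabet:
                    if row(s + a, E) not in rows:
                        S.append(s + a)
                        changed = True
                        break
                if changed:
                    break
        # Build the hypothesis: one state per distinct row, represented
        # by the first access string realizing that row.
        state_of = {}
        for s in S:
            state_of.setdefault(row(s, E), s)
        reps = list(state_of.values())
        delta = {(state_of[row(s, E)], a): state_of[row(s + a, E)]
                 for s in reps for a in alphabet}
        hyp = DFA(set(reps), alphabet, delta, state_of[row("", E)],
                  {s for s in reps if member(s)})
        # Equivalence query, simulated by brute force up to max_len.
        equivalence_queries += 1
        cex = None
        for length in range(max_len + 1):
            for w in product(alphabet, repeat=length):
                word = "".join(w)
                if hyp.accepts(word) != member(word):
                    cex = word
                    break
            if cex is not None:
                break
        if cex is None:
            return hyp, equivalence_queries
        # Counterexample handling (Maler-Pnueli variant): add all
        # suffixes of the counterexample to E.
        for i in range(len(cex)):
            if cex[i:] not in E:
                E.append(cex[i:])
```

On the 3-state DFA accepting words whose number of `a`s is divisible by 3, this sketch happens to use 2 equivalence queries, i.e. $n-1$ rounds for $n=3$; the question is how to force that behaviour for every $n$.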
I have been able to construct such DFAs by hand for small $n$ (up to $n = 7$), with each automaton requiring one round fewer than its number of states. However, I'm struggling to generalize this construction to arbitrary $n$. Here is the progression pattern I've observed for my DFAs so far, together with the observation table for the last hypothesis:
I am looking for a systematic method to create such DFAs for any given $n$, so that the L* algorithm is guaranteed to need $n-1$ rounds to learn the DFA. Is there a general pattern or formula that can be applied to construct these automata?