I'm reading Johnstone's Notes on Logic and Set Theory.
Here he shows that there is a very simple algorithm to distinguish gibberish $(M\setminus F_\Omega(X))$ from legitimate expressions $(F_\Omega(X))$:
I found this extremely clever because of the algorithm's simplicity, and I suspect there is a general theory about this distinction that applies to programming languages and other formal languages. Is there such a theory, or can this algorithm (somehow) be applied to all such decision problems?


It seems the language here is context-free, and the described algorithm is essentially a push-down automaton: the counter is realized by the automaton's stack (a stack over a one-symbol alphabet is exactly a counter). For more on the various classes of formal languages and algorithms for recognizing them, I recommend Michael Sipser's book, Introduction to the Theory of Computation.
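To make the counter idea concrete, here is a short sketch, assuming the algorithm in question is the familiar arity-counting test for terms written in Polish (prefix) notation: start a counter of "pending subterms" at 1; each symbol of arity $n$ consumes one pending slot and opens $n$ new ones, so the counter changes by $n-1$. A string is a well-formed term iff the counter never reaches 0 before the end and equals exactly 0 at the end. (The signature below is hypothetical, just for illustration.)

```python
def is_term(symbols, arity):
    """Decide whether `symbols` is a well-formed prefix term.

    symbols: sequence of symbols read left to right.
    arity:   dict mapping each symbol to its arity
             (variables/constants have arity 0).
    """
    pending = 1  # number of subterms still owed
    for s in symbols:
        if pending == 0:
            # We already completed a term but symbols remain: gibberish.
            return False
        # One pending slot is filled by s, which opens arity[s] new slots.
        pending += arity[s] - 1
    return pending == 0

# Hypothetical signature: binary +, unary -, variables x and y.
ar = {'+': 2, '-': 1, 'x': 0, 'y': 0}
print(is_term(['+', '-', 'x', 'y'], ar))  # +(-x)y is well-formed: True
print(is_term(['+', 'x'], ar))            # missing an argument: False
print(is_term(['x', 'y'], ar))            # trailing junk: False
```

Note that this runs in a single left-to-right pass with one integer of state, which is exactly the push-down automaton above specialized to a one-symbol stack.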