Note the following (not entirely exact) correspondence between natural and formal languages.
a. In a natural language we begin with an alphabet, i.e. a set of letters.
a'. In a first order language we begin with a set of symbols.
b. In a natural language we construct (meaningful/legitimate) words from letters using particular rules. So an arbitrary finite sequence of letters is not necessarily a meaningful word.
b'. In a first order language we construct (meaningful/legitimate) terms from symbols using particular rules. So an arbitrary finite sequence of symbols is not necessarily a meaningful term.
c. In a natural language we construct (meaningful/legitimate) sentences from words using particular rules. So an arbitrary finite sequence of words is not necessarily a meaningful sentence.
c'. In a first order language we construct (meaningful/legitimate) sentences (formulas) from terms using particular rules. So an arbitrary finite sequence of terms is not necessarily a meaningful sentence (formula).
d. In a natural language we construct (meaningful/legitimate) texts from sentences using particular rules. So an arbitrary finite sequence of sentences is not necessarily a meaningful text.
d'. In a first order language we construct (meaningful/legitimate) theories from sentences without any rules. So an arbitrary (finite or infinite) set of sentences is a theory.
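The rule-governed construction in b'/c' can be made concrete with a toy checker. The grammar below (variables `p`, `q`, `r`, negation `~`, parenthesized conjunction `&`) and the function name `is_formula` are invented for this sketch; they stand in for the formation rules of an actual first order language.

```python
# Toy formation rules for a propositional fragment:
#   1. a variable is a formula;
#   2. if F is a formula, so is ~F;
#   3. if F and G are formulas, so is (F&G).
# Anything not generated by these rules is rejected.

VARS = {"p", "q", "r"}

def is_formula(s: str) -> bool:
    """Return True iff s is generated by the toy formation rules."""
    if s in VARS:                      # rule 1: a bare variable
        return True
    if s.startswith("~"):              # rule 2: negation
        return is_formula(s[1:])
    if s.startswith("(") and s.endswith(")"):   # rule 3: conjunction
        body = s[1:-1]
        depth = 0
        for i, ch in enumerate(body):  # locate the top-level '&'
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "&" and depth == 0:
                return is_formula(body[:i]) and is_formula(body[i + 1:])
    return False

print(is_formula("(p&~q)"))   # True: built by the rules
print(is_formula("p&&q("))    # False: an arbitrary symbol string
```

As in b' above, most strings over the symbol set fail the check; only the rule-generated ones are legitimate formulas.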
Question 1: Why is the chain of producing new legitimate objects from earlier, simpler legitimate objects broken at the level of theories in first order logic?
Question 2: Are there logics with particular rules for producing legitimate theories from sentences?
Question 3: Is there a reasonable criterion to determine which sets of first order sentences are legitimate first order theories?
(This is not quite an answer, but you might still find it useful enough.)
You need to distinguish between syntax and semantics. While the sentence "The dog programmed a cat to force a power set" is syntactically correct (I hope), it is semantically meaningless.
In first order logic, the theory $\{p,\lnot p\}$ is formally a theory (it is a set of well-formed sentences, provided $p$ is one), but it is inconsistent and therefore semantically meaningless.
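A brute-force sketch makes the point about $\{p,\lnot p\}$ tangible at the propositional level. Here a theory is modeled as a list of Python predicates over a truth assignment, and `has_model` (a name invented for this sketch) searches all assignments for one satisfying every sentence:

```python
from itertools import product

def has_model(theory, variables):
    """Return True iff some truth assignment satisfies every formula in theory."""
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(phi(assignment) for phi in theory):
            return True
    return False

consistent   = [lambda a: a["p"]]                        # the theory {p}
inconsistent = [lambda a: a["p"], lambda a: not a["p"]]  # the theory {p, ¬p}

print(has_model(consistent, ["p"]))    # True: {p} has a model
print(has_model(inconsistent, ["p"]))  # False: no assignment satisfies both
```

Both inputs are perfectly well-formed sets of sentences; only the semantic check separates the meaningful theory from the meaningless one.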
We are not interested in every theory; we are interested in the theories which are not inconsistent, or at least do not exhibit obvious proofs of inconsistency¹ (assuming some reasonable foundational theory in the background, e.g. $\sf PA$ or $\sf ZFC$).
So your observation, while correct, misses the point. Meaningfulness is semantic consistency, and in first-order logic we have the completeness theorem, which tells us that a theory is consistent if and only if it has a model, i.e. a meaning.
It seems to me, therefore, that all your questions are about consistency: that we should allow creating theories only when we can ensure they are consistent (and indeed, in one model theory course that I took, a theory was assumed to be consistent as part of the definition).
For this the compactness theorem is wonderful. It tells us that a theory is consistent if and only if every finite fragment of it is consistent. This gives us a wonderful criterion for meaningful theories.
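Stated precisely (in a standard textbook formulation, not the answer's exact wording), the two theorems invoked above read:

```latex
% Completeness (Gödel): syntactic and semantic consequence coincide,
% so in particular a theory is consistent iff it is satisfiable.
\[
  T \text{ is consistent} \iff T \text{ has a model}.
\]
% Compactness: satisfiability of a (possibly infinite) theory
% reduces to the satisfiability of its finite fragments.
\[
  T \text{ has a model} \iff \text{every finite } T_0 \subseteq T \text{ has a model}.
\]
```

Together they turn "meaningful theory" into a criterion checkable fragment by fragment, which is what Question 3 asks for.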