I'm using a language $L$, and we'll say that an $L$-structure is minimal if it has no proper substructure.
I want to show that if a structure $A$ is a minimal $L$-structure then every element of $A$ is named by a term, by which I mean that for every $a ∈ A$ there is a closed $L$-term $λ$ such that $λ^A = a$, where $λ^A$ denotes the interpretation of $λ$ in $A$.
This is supposed to be an if and only if, and I've been able to do the other direction: assuming that every element $a ∈ A$ is the interpretation of some closed term, it's straightforward to show by induction that any substructure of $A$ contains the interpretation of every closed term (since closed terms are just constants or function symbols applied to other closed terms), so every element of $A$ must already lie in the substructure, and hence $A$ is minimal.
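For completeness, the induction in this direction can be written out as follows, for an arbitrary substructure $B \subseteq A$ and closed $L$-term $t$:

```latex
\begin{itemize}
  \item If $t$ is a constant symbol $c$, then $t^A = c^A = c^B \in B$.
  \item If $t = f(t_1,\dots,t_n)$ and inductively $t_1^A,\dots,t_n^A \in B$, then
    \[ t^A = f^A(t_1^A,\dots,t_n^A) = f^B(t_1^A,\dots,t_n^A) \in B, \]
    since $f^B$ is the restriction of $f^A$ to $B^n$.
\end{itemize}
```

So every closed-term interpretation lies in $B$; if these interpretations exhaust $A$, then $B = A$ and $A$ is minimal.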
I'm struggling, however, to see how to solve the other direction. I've just started a model theory course and would appreciate any help anyone could offer.
A minor quibble: the statement as posed is false under the usual definition of "structure," which requires that the structure be nonempty. For the statement to be true as written we need to allow empty structures as well: the point being that otherwise a one-element structure in a language with no constant symbols would be a minimal structure with a non-closed-term-definable element. But this isn't a huge issue - and in fact is a decent argument that we should allow empty structures in general after all.
Try proving the contrapositive instead: that if it's not the case that all elements of $A$ are named by closed terms, then $A$ is not minimal - that is, $A$ has a proper substructure.
To show that $A$ has a proper substructure (in any situation), we want to start by finding a natural proper subset of $A$ which we'll then prove is actually a substructure. Under the given hypothesis, can you think of any particular distinguished proper subset of $A$?
Now, do you see why this is in fact a substructure?
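Once you've worked out which subset to use, the key identity to check for the closure step (writing $T = \{λ^A : λ \text{ a closed } L\text{-term}\}$) is:

```latex
\[ f^A\bigl(λ_1^A, \dots, λ_n^A\bigr) = \bigl(f(λ_1, \dots, λ_n)\bigr)^A \in T \]
```

for each $n$-ary function symbol $f$, together with the observation that $c^A \in T$ for each constant symbol $c$, and that $T \subsetneq A$ by hypothesis.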
Incidentally, there's a natural generalization of this result:

> For every $L$-structure $A$ and every $X\subseteq A$, the smallest substructure of $A$ containing $X$ is exactly the set of elements of $A$ named by closed terms in the expanded language $L_X$ gotten by adding a new constant symbol for each element of $X$.

The proof is basically the same: the set of $L_X$-term-definable elements contains $X$, is closed under the interpretations of the function symbols, and is contained in every substructure of $A$ containing $X$.
But this slight generalization turns out to be incredibly useful: it leads to the (full) version of the downward Löwenheim–Skolem theorem, namely that for every language $L$, every $L$-structure $A$, and every $X\subseteq A$ there is an elementary substructure $B\preccurlyeq A$ with $X\subseteq B$ and $\vert B\vert=\max\{\vert X\vert, \vert L\vert, \aleph_0\}$. This gets used all over the place in model theory, and is really one of the two most fundamental theorems of the subject (the other being the compactness theorem).