In the lecture notes I have, the constraint set in the equality-constrained case is defined as $S \cap \{x\in \mathbb{R}^n : g(x)=0\}$, where $S \subseteq \mathbb{R}^n$ is open.
The inequality and equality-inequality cases are similarly defined.
Then the examples do not use the set $S$.
Why is this set $S$ used in the definition? I understand it is more general than the usual $\{x\in \mathbb{R}^n : g(x)=0\}$, but what is the real use? One idea that comes to mind is that $S$ may be the domain of the objective function. Any further thoughts?
You are right that one reason to use a set $S$ smaller than all of $\mathbb R^n$ is to allow for your objective function (or for that matter the constraints) to be undefined outside $S$.
Another effect of using a smaller $S$ is to declare that there are some constraints you don't want to deal with in the standard way, using $g(x)$. This changes the result of applying techniques like KKT duality, which I'm going to use as an example here (but pretty much any other technique you use will also be affected). Consider:
If we "move" a constraint $g_i(x) \ge 0$ to be a part of the definition of the domain $S$, the dual changes! It loses the variable $\lambda_i$, and the Lagrangian $L(x,\lambda)$ loses its $\lambda_i g_i(x)$ term. Then, in the definition of $h(\lambda)$, there is an extra restriction of $g_i(x) \ge 0$ on the set of $x$ that we're optimizing over. So we're making our dual simpler (it will have fewer variables) at the cost of making its objective function harder to understand.
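To make this concrete, here is a sketch for a minimization problem with inequality constraints $g_i(x) \ge 0$ (the sign conventions are my assumption; your notes may flip them). Before moving anything, the Lagrangian and dual function are

$$L(x,\lambda) = f(x) - \sum_{i} \lambda_i g_i(x), \qquad h(\lambda) = \inf_{x \in S} L(x,\lambda).$$

After moving constraint $j$ into the domain, $\lambda_j$ and its term disappear, and the restriction reappears in the infimum:

$$L(x,\lambda) = f(x) - \sum_{i \ne j} \lambda_i g_i(x), \qquad h(\lambda) = \inf_{\substack{x \in S \\ g_j(x) \ge 0}} L(x,\lambda).$$

The dual problem $\max_{\lambda \ge 0} h(\lambda)$ now has one fewer variable, but evaluating $h$ requires solving a constrained inner problem.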
In the special case of linear programming, we often want to make the nonnegativity constraints $x\ge 0$ be part of the domain constraints, rather than "official" inequality constraints, because it results in a more elegant dual. (I'm not sure how this squares with the requirement in your lecture notes that $S$ should be open, because $S = \{x \in \mathbb R^n : x \ge 0\}$ certainly isn't open. I have seen textbooks allow arbitrary $S$.)
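Here is a sketch of that LP calculation (again, sign conventions are mine). Take the primal

$$\min_{x \in S} \; c^\top x \quad \text{s.t. } Ax \ge b, \qquad S = \{x \in \mathbb R^n : x \ge 0\}.$$

With $x \ge 0$ kept in the domain, the dual function for $y \ge 0$ is

$$h(y) = \inf_{x \ge 0} \big( c^\top x - y^\top (Ax - b) \big) = \begin{cases} b^\top y & \text{if } A^\top y \le c, \\ -\infty & \text{otherwise}, \end{cases}$$

since the infimum of $(c - A^\top y)^\top x$ over $x \ge 0$ is $0$ when $c - A^\top y \ge 0$ and $-\infty$ otherwise. So the dual is simply $\max\{b^\top y : A^\top y \le c,\ y \ge 0\}$. If instead $x \ge 0$ is treated as $n$ official constraints, the dual carries $n$ extra multipliers $s \ge 0$ with $c - A^\top y - s = 0$; eliminating $s$ recovers the same condition $A^\top y \le c$, just less directly.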
In addition to KKT duality, another interesting example to look at is what happens when we try penalty methods. Here, moving some constraints to $S$ is equivalent to saying: "I don't want to relax this constraint to a penalty. I want to enforce it as an absolute constraint, even when I'm using a penalty method on everything else."
Here, the cost is that we lose a big advantage of penalty methods: they turn constrained optimization into unconstrained optimization, but if there's a "domain restriction" that $x \in S$, then our optimization is kind of still constrained. However, if $S$ is simple to describe, we might not suffer too much from this.
The advantage is what the penalty solution satisfies exactly. A solution to the penalty problem will generally not satisfy the constraints from $g(x)$ exactly (the usual guarantee is only that, as the penalty parameter grows, the violation shrinks, and even that comes with terms and conditions). However, it will satisfy $x \in S$ exactly.
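A minimal numerical sketch of this trade-off (the problem, penalty form, and step-size choice are all my own illustration, not from the notes): minimize $f(x) = (x-2)^2$ subject to $g(x) = 1 - x \ge 0$, with domain $S = \{x \ge 0\}$. The constraint $g$ is relaxed into a quadratic penalty, while membership in $S$ is enforced exactly by projection.

```python
def solve_penalty(mu, x0=0.0, iters=10000):
    """Projected gradient descent on f(x) + mu * max(0, -g(x))**2 over S = [0, inf).

    f(x) = (x - 2)^2,  g(x) = 1 - x  (so the penalty activates when x > 1).
    """
    step = 1.0 / (2.0 + 2.0 * mu)  # roughly 1/L for this smooth penalized objective
    x = x0
    for _ in range(iters):
        grad = 2.0 * (x - 2.0)            # gradient of f
        if x > 1.0:                       # penalty term active when g(x) < 0
            grad += 2.0 * mu * (x - 1.0)  # gradient of mu * (x - 1)^2
        x = max(0.0, x - step * grad)     # projection onto S: x in S holds exactly
    return x

for mu in (1.0, 10.0, 100.0):
    x = solve_penalty(mu)
    print(f"mu={mu:6.1f}  x={x:.4f}  violation of g: {max(0.0, x - 1.0):.4f}")
```

For each finite $\mu$ the minimizer is $x^\* = (2+\mu)/(1+\mu) > 1$, so $g(x) \ge 0$ is violated (by an amount shrinking like $1/\mu$), while $x \ge 0$ holds exactly at every iterate thanks to the projection.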