Wikipedia defines a random variable:
Let $(\Omega,F,P)$ be a probability space and $(E,S)$ a measurable space. Then an $(E,S)$-valued random variable is a measurable function $X:\Omega \to E$, which means that, for every $B \in S$, its pre-image $X^{-1}(B)\in F$, where $X^{-1}(B)= \{\omega: X(\omega)\in B\}$. This definition enables us to measure any set $B \in S$ in the target space by looking at its pre-image, which by assumption is measurable.
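On a finite space this definition can be checked by brute force. Here is a minimal sketch (the particular $\Omega$, $E$, and $X$ are my own illustrative choices, not from the definition above): both sigma-algebras are taken to be power sets, and measurability of $X$ is verified by testing that every pre-image lands in $F$.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, as frozensets (the discrete sigma-algebra)."""
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

# Illustrative finite spaces (not from the text)
Omega = {1, 2, 3, 4}            # sample space
F = power_set(Omega)            # sigma-algebra on Omega: the power set
E = {"even", "odd"}             # target space
S = power_set(E)                # sigma-algebra on E

X = lambda w: "even" if w % 2 == 0 else "odd"   # candidate random variable

def preimage(B):
    """X^{-1}(B) = {w in Omega : X(w) in B}."""
    return frozenset(w for w in Omega if X(w) in B)

# X is measurable iff every B in S has its pre-image in F
assert all(preimage(B) in F for B in S)
```

With power-set sigma-algebras every function is measurable, so the assertion always passes; the interesting cases arise when $F$ is strictly smaller than the power set.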
Question 1. Do all values in $E$ that the random variable maps to have to be contained in some event in $S$? Since $S$ is a sigma-algebra on $E$ in the measurable space $(E,S)$, it has to include all of $E$, right?
Question 2. $E$ itself is always an element of $S$ (since $S$ is a sigma-algebra on $E$), and likewise $\Omega \in F$. Does the random variable always map the entire $E$ back to the entire $\Omega$, i.e. is the probability of the entire $E$ always 1?
Question 3. Is it possible to have events in $S$ that have no probability, i.e. whose pre-image is not an event in $F$?
I realise the questions overlap somewhat.
You are confusing the image with the inverse image.
$E$ is certainly in $S$ and the inverse image of $E$ under any map from $\Omega $ to $E$ is always $\Omega$.
For every set $B \in S$, $P(X \in B)=P(X^{-1}(B))$ is certainly well defined.
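This can be made concrete on a finite probability space. The sketch below (my own example: a fair die with $X(\omega) = \omega \bmod 3$, so $E = \{0,1,2\}$) computes $P(X \in B)$ by summing $P$ over the pre-image $X^{-1}(B)$, and illustrates that the pre-image of all of $E$ is all of $\Omega$, so the entire $E$ always gets probability 1:

```python
from fractions import Fraction

Omega = {1, 2, 3, 4, 5, 6}                  # fair die
P = {w: Fraction(1, 6) for w in Omega}      # probability of each outcome
X = lambda w: w % 3                         # random variable into E = {0, 1, 2}

def prob_X_in(B):
    """P(X in B) = P(X^{-1}(B)): sum P over the pre-image of B."""
    return sum(P[w] for w in Omega if X(w) in B)

print(prob_X_in({0}))        # pre-image is {3, 6}, so 2/6 = 1/3
print(prob_X_in({0, 1, 2}))  # pre-image is all of Omega, so 1
```

The map $B \mapsto P(X^{-1}(B))$ is exactly the pushforward (distribution) of $X$, and it is defined for every $B \in S$ precisely because measurability guarantees each pre-image is in $F$.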