This question comes from reading the discussion here.
(1) Suppose one is given a probability measure $P : F \rightarrow [0,1]$ on a $\sigma$-algebra $F$ over an underlying space of outcomes $O$. For two "random variables" (or "probability distributions") $X: O \rightarrow S_1$ and $Y: O \rightarrow S_2$, mapping $O$ to some sets $S_1$ and $S_2$ respectively, we can define the "conditional probability" between the two random variables as the map
$$P(X\mid Y) : S_1 \times S_2 \rightarrow [0,1]$$ $$(s_1,s_2) \mapsto \frac{ P\big( X^{-1}(s_1) \cap Y^{-1}(s_2)\big) }{ P\big( Y^{-1}(s_2)\big)}$$
(2) But if $X$ and $Y$ were two "events", i.e. $X, Y \in F$, then it is equally possible to define a conditional probability by the "Kolmogorov definition" $$P (X \mid Y ) = \frac { P(X \cap Y) }{ P(Y) } $$
- Are these two different notions of conditional probability?
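On a finite sample space the two definitions can be compared directly. Here is a minimal Python sketch (the space, the uniform measure, and the variables $X$, $Y$ are made up purely for illustration): definition (1) evaluated at $(s_1, s_2)$ is exactly definition (2) applied to the preimage events $A = X^{-1}(s_1)$ and $B = Y^{-1}(s_2)$.

```python
from fractions import Fraction

# A finite sample space O with a uniform measure (a fair die, chosen
# only for illustration; both definitions need P(B) != 0).
O = range(6)
P = {w: Fraction(1, 6) for w in O}

def X(w): return w % 2        # random variable X: O -> {0, 1}
def Y(w): return w // 3       # random variable Y: O -> {0, 1}

def prob(event):
    """P of an event (a subset of O)."""
    return sum(P[w] for w in event)

def cond_prob_rv(s1, s2):
    """Definition (1): a map on pairs of values (s1, s2)."""
    A = {w for w in O if X(w) == s1}   # the event X^{-1}(s1)
    B = {w for w in O if Y(w) == s2}   # the event Y^{-1}(s2)
    return prob(A & B) / prob(B)

def cond_prob_events(A, B):
    """Definition (2): Kolmogorov's definition for events A, B in F."""
    return prob(A & B) / prob(B)

# On preimage events the two definitions agree pointwise.
A = {w for w in O if X(w) == 1}        # X^{-1}(1) = {1, 3, 5}
B = {w for w in O if Y(w) == 0}        # Y^{-1}(0) = {0, 1, 2}
assert cond_prob_rv(1, 0) == cond_prob_events(A, B)
print(cond_prob_rv(1, 0))              # -> 1/3, since A & B = {1}
```

So on events of the form $X^{-1}(s_1)$, $Y^{-1}(s_2)$, definition (1) is just definition (2) in disguise.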
There are two notions of conditional expectation; however, in your question, (1) and (2) are of the same kind.
Note that in (1) you defined the probability of the event $A = X^{-1}(s_1)$ given the event $B = Y^{-1}(s_2)$.
In (2) you did the same for the events $A = X$ and $B = Y$.
You can consider the conditional probability of events as in (1) or (2) only if $P(Y) \neq 0$ (respectively $P(Y^{-1}(s_2)) \neq 0$). What will you do if $P(Y) = 0$?
The way to a more general notion of conditional expectation is not immediate and requires several techniques. Conditional expectation given a $\sigma$-algebra is itself a random variable, and its construction requires Radon-Nikodym derivatives.
The connection between these two notions is again not immediate; it requires the principle of substitution and the notion of kernels (regular conditional probability distributions).
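The statement that conditional expectation given a $\sigma$-algebra is a random variable can at least be made concrete on a finite space, where no Radon-Nikodym machinery is needed: $E[X \mid \sigma(Y)]$ is the function on $O$ that is constant on each atom of $\sigma(Y)$ and averages $X$ there. A sketch (the measure and variables are invented for the example):

```python
from fractions import Fraction

# A finite sample space with a non-uniform measure (hypothetical numbers,
# chosen only to make the example concrete).
P = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}
O = list(P)

def X(w): return w * w    # the variable being averaged
def Y(w): return w % 2    # sigma(Y) has atoms {0, 2} and {1, 3}

def cond_exp(w):
    """Value of the random variable E[X | sigma(Y)] at the outcome w."""
    atom = [v for v in O if Y(v) == Y(w)]      # the atom of sigma(Y) containing w
    mass = sum(P[v] for v in atom)             # assumed nonzero here
    return sum(X(v) * P[v] for v in atom) / mass

# E[X | sigma(Y)] is a function on O, measurable w.r.t. sigma(Y):
# it takes the same value at outcomes lying in the same atom.
assert cond_exp(0) == cond_exp(2) and cond_exp(1) == cond_exp(3)

# Sanity check, the tower property: E[ E[X | sigma(Y)] ] = E[X].
assert sum(cond_exp(w) * P[w] for w in O) == sum(X(w) * P[w] for w in O)
```

On infinite spaces the atoms can all have probability zero, which is exactly why the general construction needs Radon-Nikodym derivatives rather than the ratio used above.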