This question is inspired by a problem I am working on, but in fact it's more about the general approach, so I won't go into details.
Let $A$ be a measurable set, defined by some complicated conditions on a random variable $X$. I know that if I assume some conditions $B$ on $X$, then $X\in A$ if and only if some far simpler conditions hold on $X$ (say, given by $C$).
Does it then follow that $P_X(A\mid B)=P_X(C\mid B)$, or even that $P_X(A)=P_X(B\cap C)$?
How can I use this to calculate $P_X(A)$ if I know $P_X(C)$?
I was thinking about using Bayes' theorem, but then I would still need to calculate $P_X(B\mid A)$, and I don't know how to do this other than by its definition (which would result in a circular argument).
Re: your first question, @sudeep5221 is right: what you describe is:
$$B \implies (A \iff C),$$
which is equivalent to:
$$A \cap B = C \cap B,$$
from which you can conclude $P(A\cap B) = P(C\cap B)$, and hence (dividing both sides by $P(B)$, assuming $P(B)>0$) $P(A\mid B) = P(C\mid B)$.
You cannot conclude $P(A) = P(C \cap B)$. If this were true, then you'd have $P(A) = P(A \cap B)$, which means $A \subseteq B$ up to a null set, and you have no reason to assume that.
Re: your second question: If you know how $A,C$ correlate conditioned on $B$, but have no further info on how $A,C$ correlate when conditioned on $\lnot B$ ($\lnot$ means NOT), then there is almost nothing you can say about how $A,C$ correlate when unconditioned.
The general insight is that conditioning creates its own probability law, so to speak. Maybe it's easier to think of Venn diagrams? You have a blob which represents $B$, and you know how $A$ and $C$ overlap (or not) inside that blob. But you have no info on how $A$ and $C$ overlap (or not) outside that blob. So you can say almost nothing about how $A$ and $C$ overlap (or not) in the overall picture. In particular it is possible that $P(A) >, =, $ or $< P(C)$... it all depends on whether $P(A \cap \lnot B) >, =, $ or $< P(C \cap \lnot B)$.
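To make the last point concrete, here is a small sketch on a toy finite probability space (the specific sets are my own illustrative choice, not from the question): $A$ and $C$ coincide inside $B$, so $P(A\mid B)=P(C\mid B)$, yet they differ outside $B$, so the unconditional probabilities disagree.

```python
from fractions import Fraction

# Toy finite probability space: six equally likely outcomes.
# These particular sets are a made-up example chosen so that
# A and C agree inside B but differ outside B.
omega = {1, 2, 3, 4, 5, 6}

def P(E):
    """Uniform probability of an event E, as an exact fraction."""
    return Fraction(len(E & omega), len(omega))

B = {1, 2, 3}
A = {1, 2, 5, 6}   # agrees with C inside B, differs outside B
C = {1, 2}

# Inside B the events coincide: A ∩ B = C ∩ B ...
assert A & B == C & B

# ... so the conditional probabilities agree:
P_A_given_B = P(A & B) / P(B)
P_C_given_B = P(C & B) / P(B)
assert P_A_given_B == P_C_given_B

# ... but the unconditional probabilities need not:
print(P(A), P(C))  # 2/3 vs 1/3, because A is larger outside B
```

Swapping which of $A$, $C$ is larger outside $B$ flips the inequality, and making them equal outside $B$ gives $P(A)=P(C)$, exactly the trichotomy described above.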