Assume I have two random variables $A$ and $B$, which are not independent. In my particular case they will be values of a stochastic process at two given points in time, where $A$ is observed at an earlier time.
Define $(B|A)$ to be a conditional random variable, i.e. a random variable defined by the conditional distribution of $B$ given $A$. Question: is $(B|A)$ independent of $A$? Why or why not? Under what conditions is it?
EXAMPLE: Let $A$ and $C$ be two independent Gaussian$(0, 1)$ random variables, and let $B=A+C$. Then $(B|A=a)$ is Gaussian$(a, 1)$ and seems to be independent of $A$, but I am not sure how to work formally with this kind of thing.
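To see this example numerically, here is a minimal simulation sketch (the window width $0.02$ is an arbitrary choice to approximate conditioning on the measure-zero event $A=a$):

```python
import random
import statistics

random.seed(0)
n = 1_000_000
a = 1.0
samples = []
for _ in range(n):
    A = random.gauss(0.0, 1.0)
    C = random.gauss(0.0, 1.0)
    B = A + C
    # P(A = a) = 0 exactly for a continuous variable, so we
    # approximate "A = a" by a small window around a.
    if abs(A - a) < 0.02:
        samples.append(B)

print(statistics.mean(samples))   # close to a = 1
print(statistics.stdev(samples))  # close to 1
```

The retained values of $B$ have sample mean near $a$ and sample standard deviation near $1$, matching the claimed Gaussian$(a,1)$ conditional distribution.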
EDIT: As Sebastian Andersson pointed out in the comments, $A$ and $(B|A)$ seem not to be independent. However, what if we condition on $A=a$, where $a$ is a constant? The intuition would be that we are first interested in the uncertain event $A$, and then, after it happens (and we know the outcome), we are interested in the event $(B|A=a)$. Does this make sense?
As Did says, there is no (currently widely used) definition of a random variable $(B|A)$. I actually once spent some time thinking about whether such a thing could be defined, and came to the conclusion that it could, but that it wasn't a terribly useful definition. So let us say no more about that.
However, as you say, we can certainly consider the probability distribution of $B$ when conditioned on a particular value of $A$, which we could write $(B\mathop{|}A=a)$. (This notation isn't standard, but I don't think there's anything wrong with it, so I'll use it. Just remember that this notation stands for a probability distribution, not a "random variable" in the usual sense of the word.) In this case it's true that $(B\mathop{|}A=a)$ is independent of $A$. This is simply because the conditional probability $p(B=b\mathop{|}A=a) = p(B=b,A=a)/p(A=a)$ assumes $A$ has the value $a$, regardless of its "actual" value. Thus if $A$ is measured and found to have the outcome $a'$, this does not change anything about $(B\mathop{|}A=a)$.
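The formula $p(B=b\mathop{|}A=a) = p(B=b,A=a)/p(A=a)$ is easy to check by hand in a discrete setting. As a sketch (using dice instead of Gaussians, purely for illustration): let $A$ be one fair die roll and $B$ the sum of two.

```python
from fractions import Fraction
from collections import defaultdict

# Joint distribution p(A = a, B = b) where A is a fair die
# and B = A + C for an independent fair die C.
joint = defaultdict(Fraction)
for A in range(1, 7):
    for C in range(1, 7):
        joint[(A, A + C)] += Fraction(1, 36)

p_A = Fraction(1, 6)  # marginal p(A = a) for any face a

# Conditional distribution p(B = b | A = a) = p(B = b, A = a) / p(A = a)
a = 3
cond = {b: p / p_A for (A_val, b), p in joint.items() if A_val == a}
print(cond)  # uniform over {4, ..., 9}, each with probability 1/6
```

Note that the resulting distribution is a fixed object determined by $a$; recomputing it after any later observation of $A$ gives the same answer, which is the independence claimed above.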
In the context of your stochastic process, the distribution $(B\mathop{|}A=a)$ has the interpretation "the probability distribution we would have at time $t_2$, if we observed the value $a$ at time $t_1$." You should be able to see intuitively that this doesn't depend on whether we've actually observed the system at time $t_1$, or what the result was if we did.