I am not able to extract the concept of a tail $\sigma$-algebra from its definition. So, given a probability space $(\Omega,\mathcal{A},P)$ and sub-$\sigma$-algebras $\mathcal{A}_1,\mathcal{A}_2,\dots$ (e.g. $\mathcal{A}_j:=\sigma(X_j)$ for random variables $X_1,X_2,\dots$ on $\Omega$) I call $$\mathcal{C}_\infty:=\bigcap_{j=1}^\infty\sigma\left(\bigcup_{k=j}^\infty \mathcal{A}_k\right)$$ the tail $\sigma$-algebra.
So how would I extract from this definition the property that allows people to say "this event is not part of $\mathcal{C}_\infty$ because it depends on $X_1$", as I often read?
There must be some assumptions for this statement to hold; e.g. taking $X_j=X_1$ for all $j\in\mathbb{N}$ would make the statement false (right?). I feel the $\mathcal{A}_1,\mathcal{A}_2,\dots$ must somehow "separate" the space?
Can someone give me a criterion, with proof, that allows me to talk conceptually about tail $\sigma$-algebras? Thank you very much. (I don't have trouble seeing why e.g. $\limsup$'s are in the tail $\sigma$-algebra; my trouble is rather with giving negative answers.)
Here is one result that captures the intuition of the tail algebra, at least for me.
Work on the sequence space with its coordinate $\sigma$-algebras. Define two sequences $x = (x_1, x_2, \ldots)$ and $y = (y_1, y_2, \ldots)$ to be tail equivalent if there exists an index $N$ such that for all $n \geq N$, $x_n = y_n$. Then for any set $S$ in the tail algebra, $x \in S$ if and only if $y \in S$. So from the point of view of the tail algebra, $x$ and $y$ are indistinguishable.
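As a toy numerical illustration (my own sketch, with made-up sequences and a finite-horizon stand-in for a genuine $\limsup$): a tail-type event such as $\{\limsup_n x_n > 0\}$ cannot distinguish tail-equivalent sequences, whereas an event depending on $x_1$ can.

```python
def limsup_exceeds(seq, level=0.0, horizon=1000):
    # Numerical stand-in for the tail event {limsup_n seq(n) > level}:
    # check only indices in the far half of a finite horizon, so the
    # answer ignores any finite initial segment. (An approximation for
    # illustration, not a proof.)
    return any(seq(n) > level for n in range(horizon // 2, horizon))

x = lambda n: 1.0                         # constant sequence 1, 1, 1, ...
y = lambda n: -5.0 if n < 10 else 1.0     # tail equivalent to x (agrees for n >= 10)

# The tail-type event agrees on the two tail-equivalent sequences ...
assert limsup_exceeds(x) == limsup_exceeds(y)
# ... but the event {x_1 > 0}, which depends on the first coordinate,
# separates them -- so it cannot be a tail event.
assert (x(1) > 0) and not (y(1) > 0)
```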
This follows from the following elementary result. Let $X$ and $Y$ be measurable spaces. Consider the projection $\pi: X \times Y \to Y$, $\pi((x, y)) = y$, and equip $X \times Y$ with the $\sigma$-algebra $\sigma(\pi)$ generated by $\pi$. Then for any measurable set $S$ and any $x, x' \in X$: $(x, y) \in S$ iff $(x', y) \in S$. (Indeed, every set in $\sigma(\pi)$ has the form $\pi^{-1}(T) = X \times T$ for some measurable $T \subseteq Y$, so membership in $S$ does not depend on the first coordinate.)
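A sketch of how this projection lemma yields the tail-equivalence claim (on sequence space, with $\pi_N$ denoting the projection onto coordinates from $N$ onward): for every $N$,
$$\mathcal{C}_\infty \subseteq \sigma\left(\bigcup_{k=N}^\infty \mathcal{A}_k\right) = \sigma(\pi_N), \qquad \pi_N(x) := (x_N, x_{N+1}, \dots),$$
so every tail set $S$ has the form $S = \pi_N^{-1}(T_N)$ for some measurable $T_N$, and membership in $S$ depends only on the coordinates from $N$ onward. If $x$ and $y$ agree from index $N$ on, then $\pi_N(x) = \pi_N(y)$, hence $x \in S \iff y \in S$.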