Let $X_1, X_2, ...$ be independent random variables.
Define $$\mathscr{T}_n = \sigma(X_{n+1}, X_{n+2}, \ldots)$$ and $$\mathscr{T} = \bigcap_{n} \mathscr{T}_n,$$ the tail σ-algebra of $(X_1, X_2, \ldots)$.
Are $\sigma(X_1), \sigma(X_2), ...$ independent of $\mathscr{T}$?
If so, why?
If not, why not, and what about the finite families $$\sigma(X_1), \sigma(X_2), \ldots, \sigma(X_k) \quad\text{for each } k \in \mathbb{N}\,?$$
All I have so far is that if $X_1, X_2, \ldots$ were events $A_1, A_2, \ldots$ instead of random variables, then $A_1, A_2, \ldots, A_k$ would, for every $k \in \mathbb{N}$, be independent of certain events in $\mathscr{T}$ such as $\limsup_n A_n$.
This is merely a slight extension of Nate's comment, but it should answer your question:
On $(\Omega,\mathcal{A},\mathbf{P})$ we have a sequence $(X_i)_{i\in \mathbb{N}}$ of independent random variables $X_i:(\Omega,\mathcal{A})\to(\Omega_i,\mathcal{A}_i)$ and the corresponding sequence $(\sigma(X_i))_{i\in \mathbb{N}}$ of independent sub-$\sigma$-algebras of $\mathcal{A}$, where $\sigma(X_i):=\{X_i^{-1}(A):A\in\mathcal{A}_i\}$.
Now we apply Kolmogorov's 0-1 law, which states that every event in the tail $\sigma$-algebra (also called the terminal $\sigma$-algebra) $\mathcal{T}$ is trivial, i.e. it happens almost surely or almost surely does not:
$$ \mathbf{P}(A)\in\{0,1\}\quad\text{for all }A\in\mathcal{T}. $$

This triviality already forces $\mathcal{T}$ to be independent of each $\sigma(X_i)$: for $A\in\mathcal{T}$ and $B\in\sigma(X_i)$ we claim
$$ \mathbf{P}(A\cap B)=\mathbf{P}(A)\,\mathbf{P}(B). \tag 1 $$

Indeed, either $\mathbf{P}(A)=0$, in which case $A\cap B\subseteq A$ gives $\mathbf{P}(A\cap B)=0=\mathbf{P}(A)\mathbf{P}(B)$, or $\mathbf{P}(A)=1$. In the latter case $\mathbf{P}(A\cup B)=1$ as well (since $A\subseteq A\cup B$), and inclusion-exclusion yields the stated equality $(1)$:
$$ \mathbf{P}(A\cap B)=\mathbf{P}(A)+\mathbf{P}(B)-\mathbf{P}(A\cup B)=1+\mathbf{P}(B)-1=\mathbf{P}(B)=\mathbf{P}(A)\,\mathbf{P}(B). $$

This proves that $\mathcal{T}$ is independent of each $\sigma(X_i)$. In fact, the argument used nothing about $B$ beyond $B\in\mathcal{A}$, so $\mathcal{T}$ is independent of every sub-$\sigma$-algebra of $\mathcal{A}$, in particular of $\sigma(X_1,\ldots,X_k)$ for every $k\in\mathbb{N}$, which answers the second part of the question as well.
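The pairwise computation above can be packaged as a small standalone lemma. Here is a sketch in LaTeX (the `lemma` environment name is my own choice, e.g. via `amsthm`):

```latex
\begin{lemma}
Let $(\Omega,\mathcal{A},\mathbf{P})$ be a probability space and let
$\mathcal{T}\subseteq\mathcal{A}$ be a $\mathbf{P}$-trivial $\sigma$-algebra,
i.e.\ $\mathbf{P}(A)\in\{0,1\}$ for all $A\in\mathcal{T}$. Then $\mathcal{T}$
is independent of every sub-$\sigma$-algebra $\mathcal{B}\subseteq\mathcal{A}$.
\end{lemma}

\begin{proof}
Fix $A\in\mathcal{T}$ and $B\in\mathcal{B}$. If $\mathbf{P}(A)=0$, then
$\mathbf{P}(A\cap B)\le\mathbf{P}(A)=0=\mathbf{P}(A)\mathbf{P}(B)$.
If $\mathbf{P}(A)=1$, then $\mathbf{P}(A^{c})=0$, hence
$\mathbf{P}(A\cap B)=\mathbf{P}(B)-\mathbf{P}(A^{c}\cap B)
=\mathbf{P}(B)=\mathbf{P}(A)\mathbf{P}(B)$.
\end{proof}
```

Kolmogorov's 0-1 law supplies the triviality hypothesis for $\mathcal{T}$, and the lemma then gives independence from every $\sigma(X_i)$, and from every $\sigma(X_1,\ldots,X_k)$, in one stroke.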
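For intuition, equation $(1)$ can be sanity-checked by simulation. The sketch below uses fair coin flips and a finite-$n$ proxy for a tail event (the true tail event $\{\lim_n \bar X_n = \tfrac12\}$ has probability $1$ by the strong law of large numbers, so its proxy should have empirical probability near $1$); the event names and thresholds here are my own illustrative choices:

```python
import random

random.seed(0)

N_TRIALS = 5000  # number of independent experiments
N_FLIPS = 1000   # coin flips per experiment

def trial():
    xs = [random.random() < 0.5 for _ in range(N_FLIPS)]
    mean = sum(xs) / N_FLIPS
    # "Tail-like" event A: the empirical mean is near 1/2.  This is only a
    # finite-n proxy for the genuine tail event {lim mean = 1/2}.
    A = abs(mean - 0.5) < 0.05
    # Event B in sigma(X_1): the first flip came up heads.
    B = xs[0]
    return A, B

cA = cB = cAB = 0
for _ in range(N_TRIALS):
    A, B = trial()
    cA += A
    cB += B
    cAB += A and B

pA, pB, pAB = cA / N_TRIALS, cB / N_TRIALS, cAB / N_TRIALS
# P(A ∩ B) should be close to P(A) P(B), with P(A) near 1 and P(B) near 1/2.
print(f"P(A)={pA:.3f}  P(B)={pB:.3f}  P(AB)={pAB:.3f}  P(A)P(B)={pA*pB:.3f}")
```

The printed product and intersection probabilities should agree up to Monte Carlo error, illustrating both the triviality of the (proxy) tail event and its independence from $\sigma(X_1)$.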