My question is somewhat open-ended, but I was thinking about independence of processes and stopping times.
Say we define two processes $X,Y$ on different probability spaces $(\Omega^i,\mathcal{F}^i,\mathbb{P}^i)$, $i=1,2$, and take the product space, denoted $(\Omega,\mathcal{F},\mathbb{P})$. We extend the processes to the product space by $X(\omega^1,\omega^2)=X(\omega^1)$ and $Y(\omega^1,\omega^2)=Y(\omega^2)$.
By the properties of the product measure, independence follows directly, using the following notion of independence for stochastic processes:
Two stochastic processes $X:(\Omega,\mathcal{F},\mathbb{P}) \rightarrow (\mathbb{R},\mathcal{B})$ and $Y:(\Omega,\mathcal{F},\mathbb{P}) \rightarrow (\mathbb{R},\mathcal{B})$ are independent if
\begin{align}\label{eq 1} \mathbb{P}\big(\{(X_{t_1},\dots,X_{t_n})\in A\} \cap \{(Y_{s_1},\dots,Y_{s_m}) \in B\}\big) = \mathbb{P}\big((X_{t_1},\dots,X_{t_n})\in A\big)\,\mathbb{P}\big((Y_{s_1},\dots,Y_{s_m}) \in B\big) \end{align} for all $m,n \in \mathbb{N}$, all $t_1,\dots,t_n,s_1,\dots,s_m \in [0,\infty)$, and all $A \in \mathcal{B}(\mathbb{R}^n)$, $B \in \mathcal{B}(\mathbb{R}^m)$.
We also define stopping times as hitting times of sets in the state space of $X$ and $Y$ (say $\mathbb{R}$). Denoting these sets by $A$ and $B$, the stopping times are $T_X=\inf\{t \geq 0 : X_t \in A\}$ and $T_Y=\inf\{t \geq 0 : Y_t \in B\}$.
By the product-space construction these are also independent. (Everything is clear up to this point.)
But now suppose $X_t$ and $Y_t$ are identically distributed, with a distribution that depends on a parameter $\alpha$. If $\alpha$ is deterministic there is no problem, but when $\alpha$ is random there could be: independence is no longer guaranteed, since all finite-dimensional distributions depend on $\alpha$. It would then be natural to assume that, given $\alpha$, we do have independence, i.e. that $T_X$ and $T_Y$ are conditionally independent given $\alpha$.
Building the model is easy: we add an auxiliary probability space to the product space, extend the definitions as before, and work on the enlarged space. But how should we now handle the conditional independence as a probability statement, e.g. $\mathbb{P}(A \cap B \mid \alpha)=\mathbb{P}(A \mid \alpha)\,\mathbb{P}(B \mid \alpha)$?
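For a concrete toy illustration (my own example, not part of the setup above): suppose that, given $\alpha = a$, the hitting times $T_X, T_Y$ are i.i.d. $\mathrm{Exp}(a)$. Then conditionally
$$\mathbb{P}(T_X > s,\ T_Y > t \mid \alpha = a) = e^{-as}e^{-at} = \mathbb{P}(T_X > s \mid \alpha = a)\,\mathbb{P}(T_Y > t \mid \alpha = a),$$
while unconditionally
$$\mathbb{P}(T_X > s,\ T_Y > t) = \mathbb{E}\big[e^{-\alpha(s+t)}\big] \geq \mathbb{E}\big[e^{-\alpha s}\big]\,\mathbb{E}\big[e^{-\alpha t}\big] = \mathbb{P}(T_X > s)\,\mathbb{P}(T_Y > t),$$
with strict inequality unless $\alpha$ is a.s. constant, since $e^{-\alpha s}$ and $e^{-\alpha t}$ are both decreasing functions of $\alpha$ and hence nonnegatively correlated. So conditional independence given $\alpha$ can hold while unconditional independence fails.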
To make my question more specific: how does conditioning on $\alpha$ work at the level of the pre-image $T_X^{-1}(A)$? If $\alpha$ is deterministic we can work without the auxiliary space, and we just get a set of the form $A \times \Omega^2$ as the pre-image. But if $\alpha$ is given, what form does this pre-image take?
Edit: I had the idea that if $\alpha$ is given, then it is known that $\alpha=a$ for some $a \in \operatorname{range}(\alpha)$. Hence the pre-image would be $A \times \Omega^2 \times \alpha^{-1}(\{a\})$?
This is a good question. I do not have a full response to it, but rather a partial answer.
One approach might be to use the conditional expectation $\mathbb{E}(T_X \mid \alpha)$, or more specifically its factorized form $\mathbb{E}(T_X \mid \alpha = a)$.
If $T_{X^a}, T_{Y^a}$ denote the first entry times into some sets $A_X, A_Y \in \mathcal{B}(\mathbb{R})$ of the processes starting in $a$, respectively, then they depend only on $\sigma(X_t,\ t \in [0,\infty))$ and $\sigma(Y_t,\ t \in [0,\infty))$, respectively, and if $\alpha$ is independent of both, one should be able to compute the factorized conditional probability. If $A,B \in \mathcal{B}(\mathbb{R})$ are fixed, it is known that $\mathbb{P}(T_X \in A \mid \alpha = a) = \mathbb{E}(\mathbb{1}_{\{T_X \in A\}} \mid \alpha = a) = \mathbb{P}(T_{X^a} \in A)$, hence one should expect to have $$ \mathbb{E}(\mathbb{1}_{\{T_X \in A\} \cap \{T_Y \in B\}} \mid \alpha = a) = \mathbb{E}(\mathbb{1}_{\{T_{X^a} \in A\} \cap \{T_{Y^a} \in B\}}) = \mathbb{E}(\mathbb{1}_{\{T_{X^a} \in A\}})\, \mathbb{E}(\mathbb{1}_{\{T_{Y^a} \in B\}}) = \mathbb{P}(T_{X^a} \in A)\, \mathbb{P}(T_{Y^a} \in B) = \mathbb{P}(T_X \in A \mid \alpha = a)\, \mathbb{P}(T_Y \in B \mid \alpha = a), $$ so the probabilities should factorize, at least when conditioned on $\alpha = a$.
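One can also check this factorization numerically in a simple special case. This is a sketch of my own, assuming that, conditionally on $\alpha = a$, the hitting times are i.i.d. $\mathrm{Exp}(a)$ (so $\alpha$ plays the role of a random rate rather than a starting point); all names below are illustrative:

```python
import numpy as np

# Monte Carlo sanity check: conditionally on alpha = a, the hitting
# times T_X and T_Y are assumed i.i.d. Exponential with rate a.
rng = np.random.default_rng(0)
N = 200_000

alpha = rng.choice([1.0, 2.0], size=N)       # random parameter
T_X = rng.exponential(scale=1.0 / alpha)     # Exp(rate = alpha) given alpha
T_Y = rng.exponential(scale=1.0 / alpha)     # independent copy given alpha

# Conditionally on {alpha = 1}, the joint probability factorizes:
mask = alpha == 1.0
cond_joint = np.mean((T_X[mask] > 1) & (T_Y[mask] > 1))   # ~ e^{-2}
cond_prod = np.mean(T_X[mask] > 1) * np.mean(T_Y[mask] > 1)

# Unconditionally it does not: E[e^{-2 alpha}] > (E[e^{-alpha}])^2.
uncond_joint = np.mean((T_X > 1) & (T_Y > 1))
uncond_prod = np.mean(T_X > 1) * np.mean(T_Y > 1)

print(cond_joint, cond_prod, uncond_joint, uncond_prod)
```

The conditional joint and product estimates should agree up to Monte Carlo noise, while the unconditional joint probability visibly exceeds the unconditional product, matching the discussion above.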