The general definition of the $\phi$-mixing coefficient is as follows:
$$\phi(\sigma_1,\sigma_2):= \sup\limits_{B \in \sigma_1, A \in \sigma_2, \mathbb{P}(B)>0}\bigg \vert \mathbb{P}(A \mid B)-\mathbb{P}(A)\bigg \vert$$
Now imagine I have some stationary process $\{R_t\}_{t \in \mathbb Z}$;
Then we define: $$\phi(n):=\phi(\mathcal{F}_{-\infty}^0,\mathcal{F}_{n}^\infty)$$
where $\mathcal{F}_{-\infty}^0$ denotes the sigma algebra generated by $R_0, R_{-1}, \ldots$ and $\mathcal{F}_{n}^\infty$ denotes the sigma algebra generated by $R_n, R_{n+1}, \ldots$;
Now the process is said to be $\phi$-mixing if $\phi(n) \to 0$ as $n \to \infty$; additionally we might quantify the speed of convergence and say e.g.
$$\phi(n)\leq n^{-1-\epsilon}$$ for some $\epsilon>0$.
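To make the definition concrete, here is a toy numerical check (my own example, not part of the question): for a stationary, irreducible finite-state Markov chain, the Markov property reduces the supremum over the past to conditioning on $X_0 = b$, and the supremum over $A$ is attained by a total-variation event, so $\phi(n) = \max_b \tfrac12 \sum_a \vert P^n(b,a) - \pi(a) \vert$.

```python
# Toy check of phi(n) for a 2-state stationary Markov chain: by the
# Markov property, phi(n) reduces to the worst-case total variation
# distance between the n-step law started at b and the stationary law pi:
#   phi(n) = max_b 0.5 * sum_a |P^n(b, a) - pi(a)|.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    # n-step transition matrix P^n, starting from the identity
    R = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        R = mat_mul(R, P)
    return R

def phi(P, pi, n):
    Pn = mat_pow(P, n)
    return max(0.5 * sum(abs(Pn[b][a] - pi[a]) for a in range(len(pi)))
               for b in range(len(pi)))

# Symmetric chain with second eigenvalue 1/2; stationary law (1/2, 1/2).
P = [[0.75, 0.25], [0.25, 0.75]]
pi = [0.5, 0.5]
print([round(phi(P, pi, n), 6) for n in range(1, 5)])
# geometric decay phi(n) = (1/2)^(n+1): [0.25, 0.125, 0.0625, 0.03125]
```

Here the decay is even geometric, i.e. much faster than the polynomial rate $n^{-1-\epsilon}$ above.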
Now I apply some transformation to it, say $\{\vert R_t-m \vert \}_{t \in \mathbb Z}$, where $m$ could be the median;
Also I might do some other transformations e.g. with indicator functions, etc.
Now what I want to show: the transformed process is still $\phi$-mixing and retains the convergence rate; this seems only logical, because through a transformation I can only lose information, not gain any (and losing information should also mean losing dependence);
But I'm not sure how to turn this intuition into a proper proof;
Any idea?
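The intuition can at least be written down as a $\sigma$-algebra inclusion (my attempt at a starting point, assuming the transformation acts coordinatewise, $R_t' = g(R_t)$ for some measurable $g$; I write $g$ to avoid a clash with the sample size $T$ below). The transformed process generates smaller $\sigma$-algebras,
$$\sigma(R_s' : s \le 0) \subseteq \mathcal{F}_{-\infty}^0, \qquad \sigma(R_s' : s \ge n) \subseteq \mathcal{F}_{n}^{\infty},$$
so the supremum defining the coefficient of the transformed process runs over fewer events $A$ and $B$, and monotonicity of the supremum would give
$$\phi'(n) := \phi\big(\sigma(R_s' : s \le 0),\, \sigma(R_s' : s \ge n)\big) \le \phi(\mathcal{F}_{-\infty}^0, \mathcal{F}_{n}^{\infty}) = \phi(n),$$
which, if correct, yields both the $\phi$-mixing and the rate $n^{-1-\epsilon}$ at once.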
With transformations I mean something like this:
For what you said, take $R_t'=g(R_t)$ for some measurable function $g$ (I write $g$ here to avoid a clash with the sample size $T$ below); but also imagine $X_1, \ldots, X_T$ is a given sample of the process and I want to bootstrap it; say we have 20 realizations, so $T=20$; now if I bootstrap it e.g. with block length 2, then I pick 10 random natural numbers $\{i_1,\ldots,i_{10}\}$ uniformly from $\{1,2,\ldots,20\}$ and my new sample is:
$X_{i_1}, X_{i_1+1}, X_{i_2}, X_{i_2+1}, ..., X_{i_{10}}, X_{i_{10}+1}$
If some index is above 20 I just calculate it modulo 20 (so $X_l=X_{l \pmod{20}}$)
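The toy procedure above can be sketched in code (my sketch of my own reading of the procedure; the function name and RNG are mine, indices are 0-based in Python):

```python
import random

# Sketch of the block bootstrap described above: T = 20, block length 2,
# 10 uniformly drawn start indices, wrap-around modulo 20.
def block_bootstrap(x, block_len, rng):
    T = len(x)
    sample = []
    for _ in range(T // block_len):
        i = rng.randrange(T)                       # uniform start in 0..T-1
        sample.extend(x[(i + j) % T] for j in range(block_len))
    return sample

x = list(range(1, 21))                             # stand-in for X_1, ..., X_20
boot = block_bootstrap(x, 2, random.Random(0))
print(len(boot))                                   # 20: ten blocks of length 2
```

Within each block, consecutive values stay adjacent (modulo the wrap-around), so the within-block dependence of the original sample is preserved while blocks are glued together independently.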
$T$ and the block length can of course vary; also here it seems logical that we must lose some dependence, and therefore the rate of convergence should stay the same;
But in neither of the two cases (measurable function or block bootstrap) am I able to show that the rate of convergence is maintained, or even that the result is still $\phi$-mixing; and actually I would need both results to hold.
Do you have an idea how to show either of the statements? I'd be happy about any input.
I'm describing the bootstrap procedure in a little more detail: let $T$ be the number of realizations; we choose the block length $l$ with $l \asymp T^{\epsilon}$ for some $\epsilon \in (0,1)$, so it depends on $T$; e.g. it could be $l:=\lfloor T^{0.2}\rfloor$;
and the number of blocks is then given by $m:=\lfloor T/l \rfloor$. The bootstrap sample might then have a size slightly less than $T$, but this shouldn't cause any problems, and we can assume that the bootstrap sample also has size $T$;
More important is the observation that both the number of blocks and the block length tend to infinity as $T\to\infty$;
The procedure is the same as described before: draw $m$ random numbers $\{i_1,\ldots,i_m\}$ from the set $\{1,\ldots,T\}$ with equal probability and set the bootstrap sample to:
$X_{i_1},X_{i_1+1}, ...,X_{i_1+(l-1)}, X_{i_2},..., X_{i_2+(l-1)},..., X_{i_m},..., X_{i_m +(l-1)}$;
For an index exceeding $T$ we just reduce it modulo $T$, as usual;
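This version with $T$-dependent block length can be sketched as follows (again my sketch; the exponent $0.2$ is just the example value from above):

```python
import math
import random

# Circular block bootstrap as described above: block length l = floor(T^0.2),
# m = floor(T/l) blocks, start indices uniform, indices wrapped modulo T.
def circular_block_bootstrap(x, rng, eps=0.2):
    T = len(x)
    l = max(1, math.floor(T ** eps))    # block length l ~ T^eps
    m = T // l                          # number of blocks
    out = []
    for _ in range(m):
        i = rng.randrange(T)            # uniform start index
        out.extend(x[(i + j) % T] for j in range(l))
    return out

x = list(range(1000))
boot = circular_block_bootstrap(x, random.Random(1))
# T = 1000 gives l = 3 and m = 333, so the bootstrap sample has size
# 999 < T -- matching the remark that the size can fall slightly short of T.
print(len(boot))                        # 999
```

Note that both `l` and `m` grow without bound in `T`, which is exactly the observation emphasized above.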
Also here it seems logical that the bootstrap sample should inherit the same convergence rate; maybe for my purposes it would even be enough to show that it is at least still $\phi$-mixing;
Can I just use this very simple argument? "Since the block length goes to infinity, and every block itself is $\phi$-mixing when considered as its own sample, the bootstrap sample must also be $\phi$-mixing." Is this sufficient to prove at least that the dependence goes to zero? I'm not really sure, to be honest...