Does convergence imply weak convergence?


Let $X_{n}$ be real-valued random variables converging to $0$ (in probability, or almost surely if necessary), and let $X_{(n\cdot)}$ be the piecewise-linear interpolation of the sequence for $t\in\left[0,1\right]$: $$ X_{(nt)}=\begin{cases} X_{k} & nt=k\in\mathbb{N},\\ \theta X_{k-1}+\left(1-\theta\right)X_{k} & \theta=\lceil nt\rceil-nt,\ k=\lceil nt\rceil. \end{cases} $$ Let $\varphi:C[0,1]\to[0,1]$ be bounded and continuous. Does it follow that $$ E\varphi(X_{(n\cdot)})\to0\,? $$ Any help would be much appreciated.
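The interpolation above can be evaluated numerically; here is a minimal Python sketch. The question does not define the path's value at $t=0$ (i.e. $X_0$), so that value is taken as a parameter here (an assumption of this sketch), and `X` holds $X_1,\dots,X_n$:

```python
import math

def linearize(X, t, X0=0.0):
    """Evaluate the piecewise-linear interpolation X_{(nt)} at t in [0, 1].

    X is the list [X_1, ..., X_n] (1-indexed in the math, 0-indexed here).
    X_0, the value at t = 0, is not specified in the question and is
    passed in as a parameter.
    """
    n = len(X)
    s = n * t
    k = math.ceil(s)                     # k = ceil(nt)
    if k == 0:                           # t == 0
        return X0
    theta = k - s                        # theta = ceil(nt) - nt; 0 when nt is an integer
    left = X0 if k == 1 else X[k - 2]    # X_{k-1}
    return theta * left + (1 - theta) * X[k - 1]
```

For example, with `X = [3, 0, 0, 0]` (so $n=4$), `linearize(X, 0.25)` returns $X_1=3$, while `linearize(X, 0.125)` returns the midpoint $1.5$ between $X_0=0$ and $X_1=3$.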


Two answers follow.

Accepted answer:

The answer is no; it even suffices to look at deterministic random variables: $$X_{1}=3,\ X_{2}=0,\ X_{3}=0,\ X_{4}=0,\dots$$ Consider the functional $$\varphi\left(f\right)=\sup_{t\in\left[0,1\right]}\left|f\left(t\right)\right|.$$ Since the interpolated path always attains the value $3$ (at $t=1/n$), we have for every $n$ $$E\left[\varphi\left(X_{(n\cdot)}\right)\right]=\varphi\left(X_{(n\cdot)}\right)=3.$$ (Strictly speaking, this $\varphi$ is neither bounded nor $[0,1]$-valued; replacing it by $\varphi(f)=\min\bigl(1,\sup_{t}|f(t)|\bigr)$ gives a bounded continuous counterexample with $E[\varphi(X_{(n\cdot)})]=1$ for all $n$.)
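A quick numerical check of this counterexample. This is a sketch: `linearize` is a hypothetical helper implementing the question's interpolation, with $X_0$ assumed to be $0$, and the sup is approximated on a grid that contains the peak at $t=1/n$:

```python
import math

def linearize(X, t, X0=0.0):
    # Piecewise-linear interpolation X_{(nt)} of [X_1, ..., X_n]; X_0 := X0 (assumed 0).
    n = len(X)
    s = n * t
    k = math.ceil(s)
    if k == 0:
        return X0
    theta = k - s
    left = X0 if k == 1 else X[k - 2]
    return theta * left + (1 - theta) * X[k - 1]

# phi(f) = sup_t |f(t)|, approximated on a grid containing t = 1/n,
# where the path attains its peak X_1 = 3.
sups = {}
for n in (2, 5, 50):
    X = [3.0] + [0.0] * (n - 1)
    grid = [i / (10 * n) for i in range(10 * n + 1)]
    sups[n] = max(abs(linearize(X, t)) for t in grid)
print(sups)  # the sup stays (approximately) 3 for every n
```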

Another answer:

Using the same notation, $$\min\bigl(X_k ,\ \theta X_{k-1} + (1-\theta) X_k\bigr) \leq X_{(nt)} \leq \max\bigl(X_k ,\ \theta X_{k-1} + (1-\theta) X_k\bigr).$$

Let $f$ be a bounded continuous function. Since $X_{(nt)}$ equals one of the two quantities inside the min and the max, $$ \min\bigl( E[f(X_k)] ,\ E[ f(\theta X_{k-1} + (1-\theta) X_k)] \bigr) \leq E[f(X_{(nt)})] \leq \max\bigl( E[f(X_k)] ,\ E[ f(\theta X_{k-1} + (1-\theta) X_k)] \bigr). $$ Both quantities being compared converge, because both $X_k$ and $\theta X_{k-1}+(1-\theta)X_k$ converge to $0$ in law (thanks to Slutsky). Hence $X_{(nt)}\to 0$ in law for each fixed $t>0$. Note that this only gives convergence of the one-dimensional marginals, not weak convergence in $C[0,1]$, which is why it does not contradict the counterexample above.
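The marginal convergence at a fixed $t>0$ can be illustrated with the deterministic sequence $X_k = 1/k \to 0$. This is a sketch; the helper is hypothetical and implements the question's interpolation with $X_0$ assumed to be $0$:

```python
import math

def linearize(X, t, X0=0.0):
    # Piecewise-linear interpolation X_{(nt)} of [X_1, ..., X_n]; X_0 := X0 (assumed 0).
    n = len(X)
    s = n * t
    k = math.ceil(s)
    if k == 0:
        return X0
    theta = k - s
    left = X0 if k == 1 else X[k - 2]
    return theta * left + (1 - theta) * X[k - 1]

# For fixed t > 0 the interpolation only involves X_{k-1} and X_k with
# k = ceil(nt) -> infinity, so X_{(nt)} inherits the convergence X_n -> 0.
t = 0.5
values = [linearize([1.0 / k for k in range(1, n + 1)], t) for n in (10, 100, 1000)]
print(values)  # [0.2, 0.02, 0.002]: the marginal at t tends to 0
```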