Using Markov's inequality


If a point is chosen uniformly at random from the unit ball in $\mathbb{R}^{n}$ (that is, the set $\{(x_1, \dots, x_n) : x_1^{2}+\cdots+x_n^{2} \leq 1\}$), and $L_n$ is the distance of the point from the origin, I found that $E(L_n) = \frac{n}{n+1}$.
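As a sanity check, I also verified this mean numerically. This is only a sketch assuming NumPy; `sample_ball` is a helper name I made up here, using the standard recipe of a normalized Gaussian direction scaled by a $U^{1/n}$ radius:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ball(n, m):
    """Draw m points uniformly from the unit ball in R^n.

    A normalized Gaussian gives a uniform direction, and a radius of
    U**(1/n) gives the correct radial law (CDF t^n on [0, 1]).
    """
    g = rng.standard_normal((m, n))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    r = rng.random(m) ** (1.0 / n)
    return g * r[:, None]

for n in (2, 10, 100):
    L = np.linalg.norm(sample_ball(n, 200_000), axis=1)
    print(f"n={n}: sample mean of L_n = {L.mean():.4f}, n/(n+1) = {n/(n+1):.4f}")
```

The sample means match $\frac{n}{n+1}$ closely, and already for $n=100$ the typical point sits at distance about $0.99$ from the origin.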

However, I want to show that if a point is chosen uniformly at random from a high-dimensional unit ball, it is likely to be very close to the boundary. Hence, I want to use Markov's inequality to show that $L_n \rightarrow 1$ in probability as $n \rightarrow \infty$.

I know that $P(X\geq t)\leq\frac{E(X)}{t}$ for any $t>0$, but I have difficulties proceeding after this.

There are 2 answers below.


$P(L_n \le t) = t^n$ for $0 \le t \le 1$ (the ratio of the volume of the ball of radius $t$ to the volume of the unit ball), hence the limit is $0$ if $t < 1$ and $1$ if $t \ge 1$. Since the limiting distribution is the constant $1$, convergence in distribution implies convergence in probability here.
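For completeness, differentiating the CDF $P(L_n \le t) = t^n$ gives the density $f_{L_n}(t) = n t^{n-1}$ on $[0,1]$, which also recovers the mean from the question:

$$E(L_n) = \int_0^1 t \cdot n t^{n-1}\, dt = \frac{n}{n+1}.$$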


Let $D_n$ be the distance from the boundary, so $D_n = 1 - L_n$ and $\mathbb E[D_n] = \frac{1}{n+1}$. Markov's inequality then gives $\mathbb P(D_n > t) \le \frac{\mathbb E[D_n]}{t} = \frac{1}{t(n+1)}$ for every $t > 0$.

Then, given any positive $\epsilon$ and $\delta$, for any $n > \frac{1}{\delta \epsilon} - 1$ (equivalently $n+1 > \frac{1}{\delta\epsilon}$) you have $\mathbb P(D_n > \epsilon) \le \frac{1}{\epsilon(n+1)} < \frac{1}{\epsilon \cdot \frac{1}{\delta\epsilon}} = \delta,$

and so $D_n \to 0$ in probability as $n \to \infty$, and thus $L_n \to 1$ in probability.
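A small numerical sketch of this bound, assuming NumPy. Since the radius of a uniform point in the ball has CDF $t^n$, it can be sampled directly by inverse transform as $U^{1/n}$, with no need to generate the full $n$-dimensional point:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, eps = 100, 100_000, 0.05

# Radius of a uniform point in the unit ball has CDF t^n on [0, 1],
# so inverse-transform sampling gives L = U**(1/n).
L = rng.random(m) ** (1.0 / n)
D = 1.0 - L  # distance to the boundary

empirical = (D > eps).mean()     # exact value is (1 - eps)^n
markov = 1.0 / (eps * (n + 1))   # the Markov bound derived above

print(f"P(D_n > {eps}) = {empirical:.4f} empirically; "
      f"exact {(1 - eps) ** n:.4f}; Markov bound {markov:.4f}")
```

The Markov bound is loose (here roughly $0.2$ against a true probability near $0.006$), but it is all that is needed: it tends to $0$ as $n$ grows for each fixed $\epsilon$, which is exactly the convergence-in-probability statement.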