Poisson process - 2D


Points are distributed over the 2-D plane, except within the disk of radius $r_0$ centered at the origin, according to a homogeneous Poisson point process with density $\lambda$. Prove that $$\exists c_0:\lim_{\lambda\to \infty}\mathbb{P}\left(\sum_{d_k>r_0}\frac{1}{d_k^\alpha}>\lambda c_0\right)=0,$$ where $d_k$ is the distance from the $k$-th point to the origin and $\alpha>2$.
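For intuition, here is a minimal Monte Carlo sketch of the quantity in question (Python; the parameters $r_0=1$, $\alpha=3$ and the truncation radius $R_{\max}=50$ are my own choices, the truncation being harmless since $\alpha>2$ makes the far tail's contribution negligible):

```python
import numpy as np

rng = np.random.default_rng(0)

def interference(lam, r0=1.0, alpha=3.0, R_max=50.0):
    """One sample of S = sum_{d_k > r0} d_k^(-alpha) for a homogeneous
    Poisson process of density lam, truncated to the disk of radius R_max."""
    area = np.pi * (R_max**2 - r0**2)
    n = rng.poisson(lam * area)                 # number of points in the annulus
    # Radial CDF on (r0, R_max) is (d^2 - r0^2)/(R_max^2 - r0^2); invert it.
    u = rng.uniform(size=n)
    d = np.sqrt(r0**2 + u * (R_max**2 - r0**2))
    return np.sum(d ** (-alpha))

for lam in [1.0, 10.0, 100.0]:
    print(f"lambda = {lam:5.0f}   S/lambda = {interference(lam) / lam:.4f}")
```

With these assumed parameters the ratio settles near $2\pi/\big((\alpha-2)r_0^{\alpha-2}\big) = 2\pi \approx 6.28$ as $\lambda$ grows, suggesting that any $c_0$ strictly above this constant should do.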

My effort:

\begin{align} \lim_{\lambda\to \infty}\mathbb{P}\left(\sum_{d_k>r_0}\frac{1}{d_k^\alpha}>\lambda c_0\right)\stackrel{(1)}{=}\mathbb{P}\left(\lim_{\lambda\to \infty} \frac{1}{\lambda }\sum_{d_k>r_0 }\frac{1}{d_k^\alpha}>c_0\right) &=\mathbb{P}\left(\lim_{\lambda\to \infty} \frac{1}{\lambda}\sum_{\{d_k>r_0\} \cap \{d_k<\lambda\} }\frac{1}{d_k^\alpha}> c_0\right)\\ &\stackrel{(2)}{=}\lim_{\lambda\to \infty} \mathbb{P}\left( \sum_{\{d_k>r_0\} \cap \{d_k<\lambda\} }\frac{1}{d_k^\alpha}>\lambda c_0\right)\\ \end{align}

How can I prove $(1)$ and $(2)$? If I can prove them, I can continue from the last step and complete the proof. Any helpful comments are welcome.



Accepted answer:

Your ideas seem to lead nowhere. Even leaving aside the correctness of (1), its right-hand side implicitly assumes that the Poisson processes are given on the same probability space (how exactly?).

So I will give some ideas, which you, hopefully, will be able to elaborate.

Denote by $\Pi_\lambda$ a Poisson point process in $\mathbb R^2$ with intensity $\lambda$, a random measure of the form $\Pi_\lambda = \sum_{n=1}^\infty \delta_{x_\lambda(n)}$, where $\{x_\lambda(n), n\ge 1\}$ are your Poisson points.

Theorem. For any integrable function $f$, $$ \lambda^{-1}\int_{\mathbb R^2} f(x) \Pi_\lambda(dx)\to \int_{\mathbb{R}^2}f(x) dx, \ \lambda\to \infty, \tag{1} $$ in probability.

Proof. The convergence $(1)$ is easy to see for indicators of bounded sets (e.g. by Chebyshev's inequality or the law of large numbers). By linearity, it extends to simple functions with bounded support.
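(A quick numerical illustration of this base case, with $f$ taken to be the indicator of the unit disk $B$, so that $\int f = \pi$; the setup is my own sketch:)

```python
import numpy as np

rng = np.random.default_rng(1)

# Base case of (1): f the indicator of the unit disk B, so the claim is
# Pi_lambda(B) / lambda -> |B| = pi.  Simulate on [-2, 2]^2, which contains B.
for lam in [10, 100, 1000, 10000]:
    n = rng.poisson(lam * 16.0)                     # 16 = area of the square
    pts = rng.uniform(-2.0, 2.0, size=(n, 2))
    in_B = np.count_nonzero(pts[:, 0]**2 + pts[:, 1]**2 < 1.0)
    print(f"lambda = {lam:6d}   Pi_lambda(B)/lambda = {in_B / lam:.4f}   (pi = {np.pi:.4f})")
```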

Now, for arbitrary integrable $f$, let $(f_n, n\ge 1)$ be a sequence of simple functions with bounded support such that $\int_{\mathbb{R}^2} |f_n(x) - f(x)|\,dx \to 0$, $n\to\infty$.

Now write, for any $\varepsilon>0$, $$ \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}f(x) \Pi_\lambda(dx) - \int_{\mathbb R^2}f_n(x)\,dx\right|>\varepsilon \right)\\ \le \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}f_n(x) \Pi_\lambda(dx) - \int_{\mathbb R^2}f_n(x)\,dx\right|>\frac\varepsilon2 \right)\\ + \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|>\frac\varepsilon2 \right). \tag{2} $$ Using Markov's inequality, estimate $$ \mathbb{P}\left(\left|\lambda^{-1}\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|>\frac\varepsilon2 \right) \\\le \frac2{\varepsilon\lambda}\mathbb{E}\left[\left|\int_{\mathbb R^2}\big(f(x)-f_n(x)\big) \Pi_\lambda(dx)\right|\right] \le \frac2\varepsilon \int_{\mathbb R^2}\big|f(x)-f_n(x)\big|\,dx. $$ Finally, first let $\lambda \to \infty$ in $(2)$ and then $n\to\infty$; since $\int_{\mathbb R^2} f_n(x)\,dx \to \int_{\mathbb R^2} f(x)\,dx$, this yields the desired statement.
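The Markov step above rests on Campbell's formula, $\mathbb{E}\big[\int_{\mathbb R^2} |g(x)|\,\Pi_\lambda(dx)\big] = \lambda\int_{\mathbb R^2} |g(x)|\,dx$. A quick numerical check of this identity (a sketch; the test function $g(x)=|x|^{-\alpha}$ on an annulus and all parameters are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Campbell's formula: E[ integral of g dPi_lambda ] = lambda * integral of g(x) dx.
# Checked here for the assumed test function g(x) = |x|^{-alpha} on r0 < |x| < R.
r0, R, alpha, lam, reps = 1.0, 10.0, 3.0, 20.0, 400
exact = lam * 2 * np.pi * (r0**(2 - alpha) - R**(2 - alpha)) / (alpha - 2)

samples = np.empty(reps)
for i in range(reps):
    n = rng.poisson(lam * np.pi * (R**2 - r0**2))    # points in the annulus
    d = np.sqrt(r0**2 + rng.uniform(size=n) * (R**2 - r0**2))
    samples[i] = np.sum(d ** (-alpha))
print(f"empirical mean = {samples.mean():.3f}   Campbell value = {exact:.3f}")
```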

Second answer:

A sketch of a rather informal approach:

Let the random variable $X_{R}$ count the points inside an annulus, at distances in the interval $[R, R+\Delta R)$ from the origin. Then $X_R$ is a Poisson random variable with $E[X_R]=2\pi \lambda R \, \Delta R=Var(X_R)$ (to first order in $\Delta R$).

Consider the random variable $$Z=\sum_{k\mid R_k>r_0}\frac{1}{ R_k^\alpha} \approx \sum_{R=r_0}^\infty \frac{X_{R}}{R^\alpha}$$

(The first sum is over all points; the second is over all annuli, the distances being partitioned into intervals of length $\Delta R$. The approximation tends to an equality as $\Delta R\to 0$; see the sketch below.)
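A numerical illustration of this discretization (a sketch; the parameters $r_0=1$, $\alpha=3$, $\lambda=50$ and the truncation radius $R_{\max}=20$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Compare the sum over points with the annulus-count sum for shrinking dR.
r0, alpha, R_max, lam = 1.0, 3.0, 20.0, 50.0

n = rng.poisson(lam * np.pi * (R_max**2 - r0**2))    # one realization
d = np.sqrt(r0**2 + rng.uniform(size=n) * (R_max**2 - r0**2))

Z_points = np.sum(d ** (-alpha))                     # sum over points
for dR in [1.0, 0.1, 0.01]:
    edges = np.arange(r0, R_max + dR, dR)            # annulus boundaries
    X_R, _ = np.histogram(d, bins=edges)             # annulus counts X_R
    Z_annuli = np.sum(X_R * edges[:-1] ** (-alpha))  # sum over annuli
    print(f"dR = {dR:5.2f}   annulus sum = {Z_annuli:.4f}   point sum = {Z_points:.4f}")
```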

Taking the expectation of the last sum:

$$E[Z]=\sum_{R=r_0}^\infty \frac{E[X_R]}{R^\alpha}\to \int_{r_0}^\infty 2\pi \lambda R^{1-\alpha} \, dR =\frac{2 \pi \lambda}{(\alpha-2)r_0^{\alpha-2}}$$

By a similar procedure we can calculate $\sigma_Z^2=Var(Z)$. In general, for independent variables $Y_k$, we have $Z=\sum a_k Y_k \implies \sigma_Z^2 =\sum a_k^2 \sigma_{Y_k}^2$; the counts $X_R$ over disjoint annuli are indeed independent, by the defining property of a Poisson point process. Hence in our scenario

$$ \sigma_Z^2 =\sum_{R=r_0}^\infty \frac{Var(X_R)}{R^{2\alpha}}\to \int_{r_0}^\infty 2\pi \lambda R^{1-2\alpha} \, dR=\frac{ \pi \lambda}{(\alpha-1)r_0^{2(\alpha-1)}}$$

Hence the normalized variable $W=\frac{Z}{\lambda}$ has mean $\mu_W=\frac{2\pi}{(\alpha-2)r_0^{\alpha-2}}$, which does not depend on $\lambda$, and vanishing variance: $\sigma^2_W = \sigma_Z^2/\lambda^2 = O(\lambda^{-1})$.
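Before the final step, a quick empirical check of these two moment formulas (a sketch; the truncation at an assumed $R_{\max}=50$ biases the empirical mean slightly downward):

```python
import numpy as np

rng = np.random.default_rng(4)

r0, alpha, lam, R_max, reps = 1.0, 3.0, 10.0, 50.0, 500
mean_th = 2 * np.pi * lam / ((alpha - 2) * r0 ** (alpha - 2))
var_th = np.pi * lam / ((alpha - 1) * r0 ** (2 * (alpha - 1)))

Z = np.empty(reps)
for i in range(reps):
    n = rng.poisson(lam * np.pi * (R_max**2 - r0**2))
    d = np.sqrt(r0**2 + rng.uniform(size=n) * (R_max**2 - r0**2))
    Z[i] = np.sum(d ** (-alpha))
print(f"E[Z]:   empirical {Z.mean():7.3f}   formula {mean_th:7.3f}")
print(f"Var(Z): empirical {Z.var():7.3f}   formula {var_th:7.3f}")
```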

Applying Chebyshev's inequality then gives, for any $c_0 > \mu_W$, $$P(W > c_0) \le P\big(|W - \mu_W| > c_0 - \mu_W\big) \le \frac{\sigma_W^2}{(c_0-\mu_W)^2} \to 0, \quad \lambda \to \infty.$$
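As a closing sanity check of this conclusion (a sketch; the choice $c_0 = 1.1\,\mu_W$ and the truncation radius $R_{\max}=20$ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)

r0, alpha, R_max, reps = 1.0, 3.0, 20.0, 500
mu_W = 2 * np.pi / ((alpha - 2) * r0 ** (alpha - 2))       # = 2*pi here
c0 = 1.1 * mu_W                                            # any c0 > mu_W works

for lam in [1.0, 10.0, 100.0]:
    W = np.empty(reps)
    for i in range(reps):
        n = rng.poisson(lam * np.pi * (R_max**2 - r0**2))
        d = np.sqrt(r0**2 + rng.uniform(size=n) * (R_max**2 - r0**2))
        W[i] = np.sum(d ** (-alpha)) / lam
    # Chebyshev bound: sigma_W^2 / (c0 - mu_W)^2
    bound = np.pi / ((alpha - 1) * lam * r0 ** (2 * (alpha - 1))) / (c0 - mu_W) ** 2
    print(f"lambda = {lam:5.0f}   P(W > c0) ~ {np.mean(W > c0):.3f}   "
          f"Chebyshev bound {bound:.3f}")
```

Both the empirical frequency and the Chebyshev bound (which decays like $1/\lambda$) tend to zero, in line with the argument above.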