Here the function $\log^+$ is defined by $\log^+ x = \max\{1, \log x\}$.


I was reading about the Marcinkiewicz–Zygmund (MZ) law of large numbers for random fields and came across the necessary and sufficient condition $E(|X|\log^+|X|)< \infty$ for the MZ-SLLN to hold. I have a question about the function $\log^+|X|$. Why isn't the condition stated without the max, that is, as $E(|X|\log|X|)< \infty$?

Best answer

@Shyam, you were right after all! It is unnecessary to state the condition with $\log^+$ instead of $\log$, since it can easily be shown that $$ \operatorname{E}[|X|\log^+|X|]<\infty \iff \operatorname{E}[|X|\log|X|]<\infty. $$

The reason is that the function $x\mapsto x\log x$ is bounded on $(0,1)$ and extends continuously to $[0,1]$. Let $Y:=|X|$; then note that $$ \operatorname{E}[Y\log Y]=\int_{[0,\infty )}y\log y \,P_Y(dy)=\int_{[0,e)}y\log y \,P_Y(dy)+\int_{[e,\infty )}y\log y\,P_Y(dy). $$
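As a quick numerical sanity check (not part of the proof), one can verify on a grid that $x\log x$ stays bounded on $(0,1)$, attaining its minimum $-1/e$ at $x=1/e$:

```python
import numpy as np

# Evaluate x*log(x) on a fine grid in (0, 1].
x = np.linspace(1e-9, 1.0, 1_000_000)
vals = x * np.log(x)

# The function is bounded: it lies in [-1/e, 0] on (0, 1],
# with the minimum -1/e attained at x = 1/e.
print(vals.min())   # ≈ -1/e ≈ -0.36788
print(vals.max())   # 0.0, attained at x = 1
```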

Since $P_Y$ is a probability measure, it follows that $$ \left| \int_{[0,e)}y\log y \,P_Y(dy) \right|\leqslant \sup_{y\in[0,e)}|y\log y|\,\Pr [0\leqslant Y< e]\leqslant e. $$ Therefore $\operatorname{E}[|X|\log |X|]<\infty $ if and only if $\int_{[e,\infty )}y\log y\,P_Y(dy)<\infty $. A similar argument (note that $\log^+ y=\log y$ for $y\geqslant e$, while $y\log^+ y=y<e$ on $[0,e)$) shows that $\operatorname{E}[|X|\log^+|X|]<\infty $ if and only if $\int_{[e,\infty )}y\log y\,P_Y(dy)<\infty $, so the two conditions are equivalent.
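To illustrate the equivalence numerically, here is a small Monte Carlo sketch with a hypothetical heavy-tailed example (a classical Pareto variable with shape $a=3$, chosen only so that both expectations are finite). The two expectations can differ only through the region $y<e$, where both integrands are bounded by $e$, so their gap is at most $2e$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: classical Pareto with shape a = 3 on [1, inf).
# (NumPy's pareto() samples a Lomax variable, so we shift by +1.)
y = rng.pareto(3.0, size=1_000_000) + 1.0

log_plus = np.maximum(1.0, np.log(y))   # the question's definition of log^+

e1 = np.mean(y * np.log(y))    # Monte Carlo estimate of E[Y log Y]
e2 = np.mean(y * log_plus)     # Monte Carlo estimate of E[Y log^+ Y]

# log^+ >= log pointwise, and the integrands agree for y >= e,
# so the two estimates are close and e2 dominates e1.
print(e1, e2)
```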

However, in probability theory we usually use $\log^+$ instead of $\log$ because $\log^+$ is positive and increasing, which makes it a nicer function for applying many measure-theoretic results such as the MCT, the DCT, and Fubini's or Tonelli's theorems. It is easier to prove that the MZ-SLLN holds using $\log^+$ than using $\log$.