Strangely, textbooks on probability theory do not define when two events $A$ and $B$ are maximally or minimally statistically dependent, let alone provide a measure of the dependence between them.
The following seems plausible for arbitrary given values of $P(A)$ and $P(B)$:
- $P(A\cap B)=\min(P(A);P(B))$, if $A$ and $B$ are maximally dependent
- $P(A\cap B)=P(A)\cdot P(B)$, if $A$ and $B$ are independent
- $P(A\cap B)=\max(0;P(A)+P(B)-1)$, if $A$ and $B$ are minimally dependent
Note that this is not a sufficient definition, as I wrote "if" instead of "if and only if". This is because the three formulas coincide when $P(A)=P(B)=1$, or when $P(A)=0$ or $P(B)=0$ (or both of the latter).
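For concreteness, the three formulas above are the Fréchet–Hoeffding bounds together with the independence value. A short numerical sketch (my illustration, not part of the question) checks that the independence value $P(A)\cdot P(B)$ always lies between the two extremes, and that all three formulas coincide in the degenerate cases:

```python
import random

def upper_bound(p, q):      # P(A ∩ B) under maximal dependence
    return min(p, q)

def independent(p, q):      # P(A ∩ B) under independence
    return p * q

def lower_bound(p, q):      # P(A ∩ B) under minimal dependence
    return max(0.0, p + q - 1.0)

random.seed(0)
for _ in range(10_000):
    p, q = random.random(), random.random()
    # pq - (p+q-1) = (1-p)(1-q) >= 0 and pq <= min(p, q), so:
    assert lower_bound(p, q) <= independent(p, q) <= upper_bound(p, q)

# Degenerate cases: all three formulas give the same value.
for p, q in [(1.0, 1.0), (0.0, 0.7), (0.3, 0.0), (0.0, 0.0)]:
    assert lower_bound(p, q) == independent(p, q) == upper_bound(p, q)
```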
What, then, would be a continuous measure $D(A;B)$ of the degree of probabilistic dependence between $A$ and $B$, where
- $D=1$ if $A$ and $B$ are maximally dependent
- $D>0$ if $A$ and $B$ are positively dependent
- $D=0$ if $A$ and $B$ are independent
- $D<0$ if $A$ and $B$ are negatively dependent
- $D=-1$ if $A$ and $B$ are minimally dependent
?
I have to leave unspecified what the value of $D$ should be in the degenerate cases $P(A)=P(B)=1$, $P(A)=0$, or $P(B)=0$.
Judging from the formulas above, $D$ should plausibly have the following properties:
- $D(A;B)=D(B;A)$ (symmetry)
- $D(A;B)=D(\overline{A};\overline{B})$
- $D(\overline{A};B)=D(A;\overline{B})$
- $D(A;B)=-D(\overline{A};B)$
(From these properties others can be derived, e.g. $D(A;\overline{B})=-D(A;B)$ or $D(A;\overline{B})=-D(\overline{A};\overline{B})$.)
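For what it's worth, here is a numerical sketch of one candidate (my own assumption, not an established answer): normalize the indicator covariance $P(A\cap B)-P(A)P(B)$ by the extreme it can attain given $P(A)$ and $P(B)$, i.e. divide by $\min(P(A);P(B))-P(A)P(B)$ when the covariance is positive and by $P(A)P(B)-\max(0;P(A)+P(B)-1)$ when it is negative. The check below represents each joint distribution of $(\mathbf{1}_A,\mathbf{1}_B)$ by its four cell probabilities and verifies the symmetry properties listed above on random examples:

```python
import random

def D(p11, p10, p01, p00):
    """Candidate measure from the joint cells: p11 = P(A∩B), p10 = P(A∩ comp(B)),
    p01 = P(comp(A)∩B), p00 = P(comp(A)∩ comp(B))."""
    pa, pb = p11 + p10, p11 + p01
    cov = p11 - pa * pb
    if cov > 0:   # normalize by distance to the maximal-dependence bound
        return cov / (min(pa, pb) - pa * pb)
    if cov < 0:   # normalize by distance to the minimal-dependence bound
        return cov / (pa * pb - max(0.0, pa + pb - 1.0))
    return 0.0

def complement_A(p11, p10, p01, p00):  # replace A by its complement
    return (p01, p00, p11, p10)

def complement_B(p11, p10, p01, p00):  # replace B by its complement
    return (p10, p11, p00, p01)

def swap(p11, p10, p01, p00):          # swap the roles of A and B
    return (p11, p01, p10, p00)

random.seed(1)
for _ in range(10_000):
    x = [random.random() for _ in range(4)]
    s = sum(x)
    j = tuple(v / s for v in x)        # a random non-degenerate joint distribution
    d = D(*j)
    assert abs(D(*swap(*j)) - d) < 1e-9                         # D(A;B) = D(B;A)
    assert abs(D(*complement_A(*complement_B(*j))) - d) < 1e-9  # D(A;B) = D(comp A; comp B)
    assert abs(D(*complement_A(*j)) + d) < 1e-9                 # D(A;B) = -D(comp A; B)
    assert abs(D(*complement_A(*j)) - D(*complement_B(*j))) < 1e-9  # D(comp A;B) = D(A;comp B)
```

This candidate also hits the endpoints: it is $1$ exactly when $P(A\cap B)$ attains the upper bound, $-1$ at the lower bound, and $0$ under independence. Whether it is the "right" measure is exactly my question.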
Thank you in advance for your help!