My book defines mutual independence as: events $\{A_1, A_2, \ldots, A_n\}$ are mutually independent if for any subset $\{A_1, A_2, \ldots, A_m\}$ (where $m \leq n$) of these events we have: $$P(A_1 \cap A_2 \cap \cdots \cap A_m) = P(A_1)P(A_2)\cdots P(A_m)$$ The book also defines discrete random variables $X_1, X_2, \ldots, X_n$ as mutually independent if $$P(X_1 = r_1, X_2 = r_2, \ldots, X_n = r_n) = P(X_1 = r_1)P(X_2 = r_2)\cdots P(X_n = r_n)$$ If $X_1, X_2, \ldots, X_n$ are continuous with CDFs $F_1(x_1), F_2(x_2), \ldots, F_n(x_n)$, then they are mutually independent if: $$F(x_1, x_2, \ldots, x_n) = F_1(x_1)F_2(x_2)\cdots F_n(x_n)$$ I'm trying to link these ideas together. Is $X_1 = r_1$ (or $X_1 \leq x_1$) etc. equivalent to an event in $\{A_1, A_2, \ldots, A_n\}$ in the first equation? If so, why is there not a requirement to check that all subsets of these events are independent (for example, that $$P(X_1 = r_1, X_2 = r_2, \ldots, X_m = r_m) = P(X_1 = r_1)P(X_2 = r_2)\cdots P(X_m = r_m)$$ for any $m \leq n$) per the first definition?
Applying the definition of mutual independence to outcomes of random variables
Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail) · 2026-03-27 · 208 views
1 Answer
The definition quoted from the OP's book is incorrect as stated: the product rule must hold for every subset of $\{A_1, A_2, \ldots, A_n\}$ (e.g. a subset such as $\{A_1, A_3, A_9\}$), not just the subsets $\{A_1, A_2, \ldots, A_m\}$ consisting of the first $m$ of the $n$ events. That is, mutual independence requires $P\big(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}\big) = P(A_{i_1})P(A_{i_2})\cdots P(A_{i_k})$ for every choice of indices $i_1 < i_2 < \cdots < i_k$.
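To make the subset requirement concrete, here is a small brute-force check in Python. It uses the classic two-coin example (my own illustration, not from the book): every pair of the events $A$, $B$, $C$ factorizes, yet the triple intersection does not, so checking only some subsets is not enough.

```python
from itertools import combinations

# Sample space: two fair coin flips, each outcome has probability 1/4.
omega = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

def prob(event):
    return sum(1 for w in omega if w in event) / len(omega)

# Three events (a standard counterexample, not from the book):
A = {w for w in omega if w[0] == "H"}    # first flip is heads
B = {w for w in omega if w[1] == "H"}    # second flip is heads
C = {w for w in omega if w[0] == w[1]}   # the two flips agree

events = {"A": A, "B": B, "C": C}

def factorizes(names):
    # Does P(intersection) equal the product of the P's for this subset?
    inter = set(omega)
    for n in names:
        inter &= events[n]
    rhs = 1.0
    for n in names:
        rhs *= prob(events[n])
    return abs(prob(inter) - rhs) < 1e-12

# Mutual independence requires EVERY subset of size >= 2 to factor.
for k in (2, 3):
    for names in combinations(events, k):
        print(names, factorizes(names))   # all pairs pass, the triple fails
```

Here every pairwise check succeeds ($P(A\cap C) = 1/4 = P(A)P(C)$, etc.), but $P(A\cap B\cap C) = 1/4 \neq 1/8 = P(A)P(B)P(C)$, so $A, B, C$ are pairwise independent yet not mutually independent.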
Note that if $\{i_1, i_2, \ldots, i_k\}$ is a subset of $\{1, 2, \ldots, n\}$, then $$\big(A_1 \cap A_2 \cap \cdots \cap A_n\big) ~{\subset}~ \big(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}\big),$$ which looks backwards at first glance to many people but is absolutely true: intersecting with more events can only shrink the set, so the intersection of all $n$ events is contained in the intersection of any subcollection of them.
An alternative definition of mutually independent events (logically equivalent to the standard one above) is that the following $2^n$ conditions hold: $$P\big(A_1^* \cap A_2^* \cap \cdots \cap A_n^*\big) = P(A_1^*)P(A_2^*)\cdots P(A_n^*)\tag{***}$$ where the $2^n$ equalities are obtained by replacing each $A_i^*$ with either $A_i$ or $A_i^c$; the choice is yours for each $i$, but whatever choice you make, you must use the same choice on both sides of $(***)$. With $n$ binary choices, you get $2^n$ equations. One of these $2^n$ equations is of course $(*)$, but note that another one is $$P\big(A_1 \cap A_2 \cap \cdots \cap A_{n-1}\cap A_n^c\big) = P(A_1)P(A_2)\cdots P(A_{n-1})P(A_n^c)$$ which, when added to $$P\big(A_1 \cap A_2 \cap \cdots \cap A_{n-1} \cap A_n\big) = P(A_1)P(A_2)\cdots P(A_{n-1})P(A_n)\tag{*}$$ allows us to deduce that $$P\big(A_1 \cap A_2 \cap \cdots \cap A_{n-1}\big) = P(A_1)P(A_2)\cdots P(A_{n-1}),$$ which is of the form $(**)$. Similarly, the more general $(**)$ with $k$ events in it is obtained by adding together $2^{n-k}$ of the equations $(***)$.
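The $2^n$ complement-form equations can be checked mechanically. The sketch below (an assumed three-fair-coin example; the events and tolerance are my choices, not from the answer) enumerates every pattern of $A_i$ versus $A_i^c$ and confirms that all $2^3 = 8$ equations hold for these mutually independent events.

```python
from itertools import product

# Sample space: three fair coin flips, each outcome has probability 1/8.
omega = list(product("HT", repeat=3))
full = set(omega)

def prob(event):
    return sum(1 for w in omega if w in event) / len(omega)

# A_i: flip i is heads. These events are mutually independent.
A = [{w for w in omega if w[i] == "H"} for i in range(3)]

# Check all 2^n equations P(A_1* ∩ ... ∩ A_n*) = P(A_1*)...P(A_n*),
# where each A_i* is A_i or its complement A_i^c.
ok = True
for pattern in product([False, True], repeat=3):  # True = use complement
    starred = [(full - A[i]) if c else A[i] for i, c in enumerate(pattern)]
    inter = set(omega)
    rhs = 1.0
    for s in starred:
        inter &= s
        rhs *= prob(s)
    ok = ok and abs(prob(inter) - rhs) < 1e-12
print(ok)  # True: all 8 complement-pattern equations hold
```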
=================================================
The requirement for independence of $n$ random variables $X_1, X_2, \ldots, X_n$ is that $$F_{X_1, X_2, \ldots, X_n}(x_1, x_2,\ldots, x_n) = \prod_{i=1}^n F_{X_i}(x_i)\tag{1}$$ holds for all choices of $x_1, x_2, \ldots, x_n$, that is, at every point in $n$-dimensional space the $n$-dimensional joint CDF factors into the product of the marginal CDFs. In particular, the equality continues to hold in the limit as some of the $x_i$ increase without bound. Since $$\lim_{x_i\to\infty} F_{X_i}(x_i) = 1,$$ the left side of $(1)$ converges to the joint CDF of those $X_j$ whose arguments retain their finite values, while the right side converges to the product of the corresponding $F_{X_j}(x_j)$'s, and we are left with the subset result $$F_{X_{i_1}, X_{i_2}, \ldots, X_{i_m}}(x_{i_1}, x_{i_2},\ldots, x_{i_m}) =\prod_{k=1}^m F_{X_{i_k}}(x_{i_k}),\tag{2}$$ that is, Eq. $(1)$ implies that every subset of the random variables also satisfies the factorization rule.
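The limiting argument can be illustrated numerically. Assuming three independent Exp(1) variables (my example, chosen because the CDFs are in closed form), sending $x_3 \to \infty$ drives the joint CDF to the two-variable product $F_1(x_1)F_2(x_2)$:

```python
import math

# Marginal CDF of an Exp(1) variable: F(x) = 1 - exp(-x) for x > 0.
def F_marginal(x):
    return 1.0 - math.exp(-x) if x > 0 else 0.0

# Joint CDF of three INDEPENDENT Exp(1) variables: the product of marginals.
def F_joint(x1, x2, x3):
    return F_marginal(x1) * F_marginal(x2) * F_marginal(x3)

x1, x2 = 0.7, 1.3
# As x3 grows, F_3(x3) -> 1, so the joint CDF converges to the
# two-dimensional joint CDF F_{X1,X2}(x1, x2) = F_1(x1) F_2(x2).
for x3 in (1.0, 5.0, 20.0, 50.0):
    print(x3, F_joint(x1, x2, x3), F_marginal(x1) * F_marginal(x2))
```

By $x_3 = 50$ the two columns agree to machine precision, mirroring how the limit in $(1)$ yields the subset result $(2)$.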
Why is the result "different" for random variables than for events, where the factorization $$P\big(A_1 \cap A_2 \cap \cdots \cap A_{n-1} \cap A_n\big)=P(A_1)P(A_2)\cdots P(A_n)\tag{*}$$ does not imply the factorization $$P\big(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}\big) = P(A_{i_1})P(A_{i_2})\cdots P(A_{i_k})?\tag{**}$$ Well, the answer is that $(1)$ is not just one instance where the factorization holds but infinitely many instances where the factorization is assumed to hold. Suppose that for one specific (fixed) vector of real numbers $(x_1, x_2, \ldots, x_n)$ it just so happens that the events $A_i = \{X_i \leq x_i\}$ are independent, so that $F_{X_1, X_2, \ldots, X_n}(x_1, x_2,\ldots, x_n)$ factors: \begin{align}F_{X_1, X_2, \ldots, X_n}(x_1, x_2,\ldots, x_n) &= \prod_{i=1}^n F_{X_i}(x_i)\\ \big\Downarrow~~~~~~~~~~~~~~~~ &~ ~~~~~~~~~~~~\big\Downarrow\\ P\big(A_1 \cap A_2 \cap \cdots \cap A_{n-1} \cap A_n\big) &= P(A_1)P(A_2)\cdots P(A_{n-1})P(A_n). \end{align} That equality does not imply that the events $B_i = \{X_i \leq x_i^\prime\}$ corresponding to some other vector $(x_1^\prime, x_2^\prime, \ldots, x_n^\prime)$ of real numbers are also independent. In short, $(1)$ involves much stronger assumptions than $(*)$ does; $(1)$ says that infinitely many collections of $n$ events are collections of $n$ mutually independent events, while $(*)$ is an assertion about just one of those collections. That is why $(1)$ implies $(2)$ while $(*)$ does not imply $(**)$.
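Here is a small discrete sketch of that last point (a toy pmf of my own construction, not from the answer): the joint CDF factors at the single point $(0,0)$, yet $X$ and $Y$ are not independent, because the factorization fails at other points.

```python
# Hypothetical joint pmf for (X, Y), X in {0, 1, 2}, Y in {0, 1};
# pmf[x][y] = P(X = x, Y = y).
pmf = [
    [0.25, 0.25],
    [0.25, 0.00],
    [0.00, 0.25],
]

def F_joint(a, b):
    # Joint CDF: P(X <= a, Y <= b)
    return sum(pmf[x][y] for x in range(a + 1) for y in range(b + 1))

def F_X(a):
    # Marginal CDF of X: P(X <= a)
    return sum(pmf[x][y] for x in range(a + 1) for y in (0, 1))

def F_Y(b):
    # Marginal CDF of Y: P(Y <= b)
    return sum(pmf[x][y] for x in (0, 1, 2) for y in range(b + 1))

# At the single point (0, 0) the joint CDF happens to factor...
print(F_joint(0, 0), F_X(0) * F_Y(0))   # 0.25 and 0.25: equal

# ...but at (1, 0) it does not, so X and Y are not independent.
print(F_joint(1, 0), F_X(1) * F_Y(0))   # 0.5 and 0.375: unequal
```

One point of agreement is exactly the situation described above: the events $\{X \leq 0\}$ and $\{Y \leq 0\}$ are independent, but independence of the random variables would require the factorization at every point.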