Why is $x_1$ independent of $x_2$ for a joint Gaussian with $0$ correlation?

I am reading Murphy's Machine Learning: A Probabilistic Perspective, and in section 4.3.2.1 it says that for two jointly Gaussian, zero-mean random variables, say $y$ and $x$, if the correlation is $0$ then they are independent. It goes on to say that if you know the value of $y$, the conditional pdf of $x$ given $y$ has the same variance as the marginal pdf of $x$. This implies that learning information about $y$ tells us nothing about $x$. However, when I draw a joint Gaussian and take a slice at some fixed $y$ value, the distribution over $x$ appears to get tighter even though the correlation is $0$. Why does our uncertainty about $x$ not decrease after learning something about $y$? And why are $x$ and $y$ independent in the first place? One potential answer is that after we normalize the slice of the Gaussian, the conditional pdf of $x$ given $y$ will look just like the marginal pdf of $x$.
There is 1 answer below.
A random vector $X=(X_1,\ldots,X_n)$ is, by definition, jointly Gaussian, written $X\sim\mathcal{N}(\mu,\Sigma)$ with mean vector $\mu$ and covariance matrix $\Sigma$, if and only if it has pdf $$f(x_1,\ldots,x_n)=\frac{\exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right)}{\sqrt{(2\pi)^n|\Sigma|}}.$$
In the case that $X_1,\ldots,X_n$ are uncorrelated, $\Sigma=\operatorname{diag}(\sigma_1^2,\ldots,\sigma_n^2)$ is a diagonal matrix, so the quadratic form in the exponent splits into a sum of $n$ one-dimensional terms and the pdf factors as $$f(x_1,\ldots,x_n)=\prod_{i=1}^n\frac{1}{\sqrt{2\pi\sigma_i^2}}\exp\left(-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}\right),$$ which is exactly the statement that the $X_i$ are independent.
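As a quick numerical sanity check (my own sketch, not part of the original answer), here is a short Python snippet, using SciPy and arbitrary illustrative variances $1$ and $4$, confirming that the joint pdf with a diagonal covariance equals the product of the univariate marginal pdfs:

```python
# Sketch: with a diagonal covariance, the joint Gaussian pdf should
# equal the product of the one-dimensional marginal pdfs.
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.zeros(2)                     # zero means, as in the question
Sigma = np.diag([1.0, 4.0])          # uncorrelated: off-diagonal entries are 0

rng = np.random.default_rng(0)
points = rng.uniform(-3, 3, size=(5, 2))   # a few arbitrary test points

joint = multivariate_normal(mean=mu, cov=Sigma).pdf(points)
product = norm(0, 1).pdf(points[:, 0]) * norm(0, 2).pdf(points[:, 1])

print(np.allclose(joint, product))   # True: the pdf splits into a product
```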
Your second question is really a general question about independent random variables. Let $X,Y$ be independent (for convenience, continuous) random variables with pdfs $f_X,f_Y$, respectively, so that the joint pdf is $f_{X,Y}(x,y)=f_X(x)f_Y(y)$. Then the conditional pdf of $X$ given $Y=y$ is $$f_{X|Y}(x|y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}=f_X(x).$$ Similarly, the marginal pdf of $X$ is just $$\int f_{X,Y}(x,y)\,dy = f_X(x)\int f_Y(y)\,dy=f_X(x).$$
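To see this numerically rather than by construction, one can take the two-dimensional pdf from the formula above, slice it at a fixed $y$, and divide by $f_Y(y)$; the result should coincide with the marginal $f_X$ for every $y$. A minimal sketch, again assuming the same made-up variances $1$ and $4$:

```python
# Sketch: the conditional pdf f_{X|Y}(x|y), computed as a slice of the
# 2-D pdf divided by f_Y(y), should coincide with the marginal f_X.
import numpy as np
from scipy.stats import multivariate_normal, norm

joint = multivariate_normal(mean=[0.0, 0.0], cov=np.diag([1.0, 4.0]))
f_X, f_Y = norm(0, 1), norm(0, 2)    # marginals (standard deviations 1 and 2)

x = np.linspace(-4, 4, 201)
for y in [0.0, 1.5, 3.0]:            # several fixed values of Y
    slice_y = joint.pdf(np.column_stack([x, np.full_like(x, y)]))
    conditional = slice_y / f_Y.pdf(y)              # normalize the slice
    print(y, np.allclose(conditional, f_X.pdf(x)))  # True for every y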
The $y$ in the conditional is assumed to lie in $\mathcal{Y}:=\{y\in \mathbb{R}:f_Y(y)>0\}$ so that we avoid dividing by zero. This isn't an issue in the Gaussian case, since $f_Y$ is strictly positive everywhere.
Your proposed explanation is exactly the correct one. The level sets of the joint pdf are ellipses, and they do get narrower in $x$ as you move out along the $y$ axis, which is why the slices look tighter. But because the pdf has the product form $f_X(x)f_Y(y)$, the slice at a fixed $y$ is just the curve $f_X(x)$ scaled by the constant $f_Y(y)$: it is shorter, not reshaped. Dividing (normalizing) by $f_Y(y)$ therefore turns every slice into exactly the same function, the marginal $f_X(x)$.
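A final illustration of that scaling argument (again my own sketch with the same assumed variances): the pointwise ratio of a raw slice to $f_X$ is constant in $x$ and equals $f_Y(y)$, so slices at different $y$ differ in height but never in shape.

```python
# Sketch: each slice of the uncorrelated joint pdf is f_X(x) times the
# constant f_Y(y), so slice / f_X should be flat in x and equal to f_Y(y).
import numpy as np
from scipy.stats import multivariate_normal, norm

joint = multivariate_normal(mean=[0.0, 0.0], cov=np.diag([1.0, 4.0]))
x = np.linspace(-4, 4, 201)

for y in [0.0, 1.5, 3.0]:
    slice_y = joint.pdf(np.column_stack([x, np.full_like(x, y)]))
    scale = slice_y / norm(0, 1).pdf(x)              # ratio slice / f_X, per x
    print(y, np.allclose(scale, norm(0, 2).pdf(y)))  # True: constant f_Y(y)
```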