For whitening, we need to compute the inverse (square root) of the covariance matrix, which is computationally expensive. We can reduce the cost by inverting the square root of the eigenvalue matrix (which is diagonal), but computing the full eigenvector matrix is still expensive when the input is very high dimensional. Now consider a scenario in which we would like to do the same whitening for streaming high-dimensional data. Is there a method to update the whitening matrix as new data arrive, once we have computed that matrix the first time?
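For reference, a minimal sketch of the batch computation being described (ZCA-style whitening via eigendecomposition; the data and variable names here are illustrative, not from the question):

```python
import numpy as np

# Illustrative data: rows are observations, columns are features.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5))

# Center the data and form the sample covariance.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigendecomposition cov = V diag(w) V^T; the (ZCA) whitening matrix is
# W = V diag(w^{-1/2}) V^T, i.e. the inverse square root of the covariance.
w, V = np.linalg.eigh(cov)
W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

Xw = Xc @ W  # whitened data: covariance is (numerically) the identity
```

The expensive steps are exactly the ones the question points at: the eigendecomposition (or equivalently the inversion) is cubic in the dimension, which motivates looking for an incremental update.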
How to update a whitening matrix online, for streaming data?
Asked by Bumbble Comm
1 Answer
You might try updating the inverse of your matrix. If $A$ is your matrix at time $t$, and it changes by a small amount $B$, then to first order $$ (A + B)^{-1} \approx A^{-1} - A^{-1} B A^{-1}$$ However, matrix inversion has the same asymptotic complexity as matrix multiplication, so this may not help unless $B$ is sparse.
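A quick numerical check of this first-order expansion (illustrative, not from the answer); note that the correct first-order term carries a minus sign, $(A+B)^{-1} \approx A^{-1} - A^{-1} B A^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned matrix
B = 1e-4 * rng.standard_normal((4, 4))             # small perturbation

A_inv = np.linalg.inv(A)
exact = np.linalg.inv(A + B)
approx = A_inv - A_inv @ B @ A_inv  # first-order (Neumann) approximation

# The error is O(||B||^2), far below ||B|| itself (~1e-4 here).
err = np.abs(exact - approx).max()
print(err)
```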
EDIT: It's probably better to update the covariance matrix and its inverse as each new data point arrives. Suppose your data are the column vectors $X_n$, $n = 1,2,3, \ldots$. If $\mu_n$ and $\Sigma_n$ are the mean vector and covariance matrix (in the version with denominator $n-1$) of the first $n$ data points, we have $$ \eqalign{\mu_{n+1} &= \dfrac{n \mu_n + X_{n+1}}{n+1}\cr \Sigma_{n+1} &= \dfrac{n-1}{n} \Sigma_n + \dfrac{1}{n+1} (X_{n+1} - \mu_n)(X_{n+1} - \mu_n)^T }$$ This is a rank-$1$ update of a multiple of $\Sigma_n$, so by the Sherman-Morrison formula $$ \Sigma_{n+1}^{-1} = \frac{n}{n-1} \Sigma_n^{-1} - \frac{n}{n-1} \frac{\Sigma_n^{-1} (X_{n+1} - \mu_n) (X_{n+1} - \mu_n)^T \Sigma_n^{-1}} {\dfrac{n^2-1}{n} + (X_{n+1}-\mu_n)^T \Sigma_n^{-1} (X_{n+1} - \mu_n)}$$ We can compute this as $$\eqalign{Y &= X_{n+1} - \mu_n\cr Z &= \Sigma_n^{-1} Y\cr \Sigma_{n+1}^{-1} &= \dfrac{n}{n-1} \Sigma_n^{-1} - \dfrac{n}{n-1} \dfrac{Z Z^T}{\dfrac{n^2-1}{n} + Y^T Z}}$$ Note that this involves only matrix-vector and vector-vector multiplications, so it is much faster than matrix-matrix multiplication.
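The recursions above can be sketched in NumPy as follows (a minimal sketch; function and variable names are illustrative, and data points are assumed to arrive as 1-D arrays). The streaming results are checked against a batch recomputation:

```python
import numpy as np

def update(mu, sigma, sigma_inv, x, n):
    """Incorporate point x as observation n+1, given statistics of the
    first n points (n >= 2; covariance uses denominator n-1)."""
    y = x - mu                                   # Y = X_{n+1} - mu_n
    mu_new = (n * mu + x) / (n + 1)
    sigma_new = (n - 1) / n * sigma + np.outer(y, y) / (n + 1)
    z = sigma_inv @ y                            # Z = Sigma_n^{-1} Y
    c = n / (n - 1)
    denom = (n * n - 1) / n + y @ z
    # Sherman-Morrison rank-1 update of the inverse: only matrix-vector
    # and vector-vector products, no matrix-matrix multiplication.
    sigma_inv_new = c * sigma_inv - c * np.outer(z, z) / denom
    return mu_new, sigma_new, sigma_inv_new

# Usage: initialize from an initial batch, then stream the rest.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
n = 10
mu = X[:n].mean(axis=0)
sigma = np.cov(X[:n], rowvar=False)              # denominator n-1
sigma_inv = np.linalg.inv(sigma)
for x in X[n:]:
    mu, sigma, sigma_inv = update(mu, sigma, sigma_inv, x, n)
    n += 1
```

Each update is $O(d^2)$ in the dimension $d$, versus $O(d^3)$ for recomputing the inverse from scratch.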