Expectation in spectral sparsification algorithms


I am new to random matrices. I am studying the sampling-based spectral sparsification algorithms of Spielman, Teng, and Srivastava.

They used the concept of graph sampling to obtain a good spectral sparsifier $H$ for every graph $G$.

The sampling procedure involves assigning a probability $p_{u,v}$ to each edge $(u,v)$ in $G$ and then selecting edge $(u,v)$ to be in graph $H$ with probability $p_{u,v}$. When edge $(u,v)$ is chosen to be in $H$, we multiply its weight by $1/p_{u,v}$.

Apparently this procedure guarantees that the expectation of the Laplacian of $H$ is $\mathbb{E}[L_H] = L_G$. I do not fully understand how they derived this expectation; please explain.

Best answer

To figure out the correct weights $\tilde{w}_e$ of the randomly sampled graph, it is useful to write the Laplacian of a weighted graph $G$ as a sum of outer products $$L_G = \sum_{e \in E} w_e b_e b_e^T,$$ where $w_e$ is the weight of the edge $e$ and $b_e$ is the vector such that $$b_e(v) = \begin{cases} 1 & v = \mathrm{head}(e)\\ -1 & v = \mathrm{tail}(e)\\ 0 & \text{otherwise}, \end{cases}$$ for some arbitrary orientation of the edge $e$. (Here $v$ ranges over the vertices; the choice of orientation does not matter, since negating $b_e$ leaves $b_e b_e^T$ unchanged.)
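As a sanity check, here is a small NumPy sketch of this identity (the 4-vertex example graph is made up for illustration): it builds $L_G$ as the sum $\sum_e w_e b_e b_e^T$ and verifies it matches the usual definition $L = D - A$.

```python
import numpy as np

# An illustrative weighted graph on 4 vertices; edges are (head, tail, weight).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5)]
n = 4

# Build L_G = sum_e w_e * b_e b_e^T, where b_e has +1 at the head of e,
# -1 at the tail, and 0 elsewhere (arbitrary orientation).
L = np.zeros((n, n))
for head, tail, w in edges:
    b = np.zeros(n)
    b[head], b[tail] = 1.0, -1.0
    L += w * np.outer(b, b)

# Cross-check against the standard definition L = D - A.
A = np.zeros((n, n))
for head, tail, w in edges:
    A[head, tail] = A[tail, head] = w
D = np.diag(A.sum(axis=1))
assert np.allclose(L, D - A)
```

Each outer product $w_e b_e b_e^T$ contributes $+w_e$ to the two diagonal entries of its endpoints and $-w_e$ to the two off-diagonal entries, which is exactly how $D - A$ is assembled edge by edge.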

Now if we sample each edge $e$ with probability $p_e$, we are left with a graph $\tilde{G}$ whose Laplacian is $$L_{\tilde{G}} = \sum_{e \in E} I_e \tilde{w}_e b_e b_e^T,$$ where $I_e \sim \textsf{Bern}(p_e)$ is an indicator random variable for whether or not $e$ was chosen. By linearity of expectation, $$\mathbb{E}[L_{\tilde{G}}] = \sum_{e \in E} \mathbb{E}[I_e \tilde{w}_e]\, b_e b_e^T,$$ so to make this equal $L_G$ we just need $\mathbb{E}[I_e \tilde{w}_e] = w_e$ for every edge. Since $\tilde{w}_e$ is a fixed number, $\mathbb{E}[I_e \tilde{w}_e] = \tilde{w}_e\, \mathbb{E}[I_e] = \tilde{w}_e\, p_e$, and so the right choice is $\tilde{w}_e = \frac{w_e}{p_e}$.
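The expectation claim can also be checked empirically. The sketch below (graph and sampling probabilities are again made up for illustration) samples many random sparsifiers with the $w_e/p_e$ reweighting and shows that the average of their Laplacians approaches $L_G$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weighted graph; edges are (head, tail, weight).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5)]
n = 4

def laplacian(edge_list, n):
    """Laplacian as a sum of weighted outer products w_e * b_e b_e^T."""
    L = np.zeros((n, n))
    for head, tail, w in edge_list:
        b = np.zeros(n)
        b[head], b[tail] = 1.0, -1.0
        L += w * np.outer(b, b)
    return L

L_G = laplacian(edges, n)

# Arbitrary sampling probabilities p_e in (0, 1], one per edge.
p = {(h, t): q for (h, t, _), q in zip(edges, [0.5, 0.9, 0.3, 0.7])}

# Sample many sparsifiers; each kept edge gets weight w_e / p_e.
trials = 20000
avg = np.zeros((n, n))
for _ in range(trials):
    sampled = [(h, t, w / p[(h, t)])
               for h, t, w in edges if rng.random() < p[(h, t)]]
    avg += laplacian(sampled, n)
avg /= trials

# The empirical mean is close to L_G, up to Monte Carlo error
# that shrinks like 1/sqrt(trials).
print(np.max(np.abs(avg - L_G)))
```

Note that this only verifies the expectation; the actual sparsification guarantee (that $L_{\tilde{G}}$ concentrates near $L_G$ spectrally) depends on choosing the $p_e$ carefully, e.g. proportional to effective resistances in the Spielman–Srivastava scheme.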