Let's assume we have a matrix $\boldsymbol{R} \in \mathbb{R}^{n \times n}$ with entries drawn uniformly from $[-1,1]$ and set $$\begin{aligned} \boldsymbol{A} & =\frac{1}{2+\|\boldsymbol{R}\|} \operatorname{diag}(\boldsymbol{R}) \\ \boldsymbol{W} & =\frac{1}{2+\|\boldsymbol{R}\|}(\boldsymbol{R}-\operatorname{diag}(\boldsymbol{R})),\end{aligned}$$ where $\operatorname{diag}(\boldsymbol{R})$ denotes the diagonal part of $\boldsymbol{R}$. Further, draw a random vector $\boldsymbol{b} \in \mathbb{R}^n$ with entries uniform in $[-1,1]$.
Let's further assume we have the set $\mathcal{D}=\{D_1,\dots,D_{2^n}\}$ of diagonal matrices of the form $D=\operatorname{diag}(1,0,1,0,0,0,1,\dots,1)$, i.e. all $2^n$ possibilities to distribute ones and zeros on the diagonal. These matrices can be thought of as indicating a quadrant (orthant) of $\mathbb{R}^n$.
Now we calculate the following quantity for all matrices in $\mathcal{D}$: $$z_i=\left(\mathbb{1}-(\boldsymbol{A}+\boldsymbol{W} D_i)\right)^{-1}\boldsymbol{b}, \quad i=1,\dots,2^n.$$
Empirically, one observes that all of the $z_i$ lie in only a few quadrants.
(Figure: empirical result for $n=15$.)
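For reference, the observation can be reproduced with a short NumPy sketch. Two assumptions on my part: I use $n=10$ instead of $n=15$ to keep the $2^n$ linear solves fast, and I take $\|\boldsymbol{R}\|$ to be the Frobenius norm, since the post does not say which norm is meant:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 10  # smaller than the n = 15 in the post, to keep the 2**n solves fast

R = rng.uniform(-1.0, 1.0, size=(n, n))
scale = 1.0 / (2.0 + np.linalg.norm(R))   # Frobenius norm (assumption)
A = scale * np.diag(np.diag(R))           # diagonal part of R, scaled
W = scale * (R - np.diag(np.diag(R)))     # off-diagonal part of R, scaled
b = rng.uniform(-1.0, 1.0, size=n)

I = np.eye(n)
quadrants = set()
for bits in itertools.product([0.0, 1.0], repeat=n):
    D = np.diag(bits)
    z = np.linalg.solve(I - (A + W @ D), b)
    quadrants.add(tuple(np.sign(z)))  # the sign pattern identifies the quadrant of z

print(f"{len(quadrants)} distinct quadrants hit, out of {2**n} possible")
```

With this scaled initialization the printed count is far below $2^n$.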
But when one draws $A$ and $W$ without the $\frac{1}{2+\|\boldsymbol{R}\|}$ scaling, i.e.
$$\begin{aligned} \boldsymbol{A} & = \operatorname{diag}(\boldsymbol{R}) \\ \boldsymbol{W} & =(\boldsymbol{R}-\operatorname{diag}(\boldsymbol{R}))\end{aligned}$$
this property no longer holds, and the $z_i$ are distributed quite uniformly over the quadrants.
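The contrast between the two initializations can be checked directly by counting the distinct sign patterns in both cases. Again a sketch under my own assumptions ($n=10$ rather than $15$, Frobenius norm for $\|\boldsymbol{R}\|$):

```python
import itertools
import numpy as np

def count_quadrants(n, scaled, seed=0):
    """Count distinct sign patterns (quadrants) of z_i over all 2**n choices of D."""
    rng = np.random.default_rng(seed)
    R = rng.uniform(-1.0, 1.0, size=(n, n))
    s = 1.0 / (2.0 + np.linalg.norm(R)) if scaled else 1.0  # Frobenius norm (assumption)
    A = s * np.diag(np.diag(R))           # diagonal part of R
    W = s * (R - np.diag(np.diag(R)))     # off-diagonal part of R
    b = rng.uniform(-1.0, 1.0, size=n)
    I = np.eye(n)
    return len({tuple(np.sign(np.linalg.solve(I - (A + W @ np.diag(bits)), b)))
                for bits in itertools.product([0.0, 1.0], repeat=n)})

n = 10
few = count_quadrants(n, scaled=True)
many = count_quadrants(n, scaled=False)
print(f"scaled: {few} quadrants, unscaled: {many} quadrants (of {2**n} possible)")
```

In my runs the scaled count stays small while the unscaled count is much larger, matching the description above.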
My question is: can somebody prove why the $z_i$'s with the scaled initialization end up in only a few quadrants?