Suppose $W_i=\frac{\delta_i}{\pi_i} S^\prime_i(y_i,\vec\theta)$, where the $\delta_i$ are independent Bernoulli random variables with $E(\delta_i)=\pi_i$, $y_i$ is a fixed quantity, and $\vec\theta$ is a vector of parameters. Here $S^\prime_i(y_i,\vec\theta)=\frac{\partial}{\partial\vec\theta}S_i(y_i,\vec\theta)$. Suppose the vector $$ S_i(y_i,\vec\theta) = \begin{bmatrix} y_i-\theta_1 \\ (y_i-\theta_1)^2-\theta^2_2 \end{bmatrix};$$ then
$$\frac{\partial}{\partial \vec \theta} S_{i}(y_i,\vec \theta) \text{ will be a } 2 \times 2 \text{ matrix of fixed quantities}.$$ Now the $W_i$ are independent but non-identically distributed random matrices, with $E(W_i) = E(\delta_i/\pi_i)\, S^\prime_{i} (y_i,\vec \theta)=S^\prime_{i}(y_i,\vec \theta)$, since $E\left(\frac{\delta_i}{\pi_i}\,\middle|\,\pi_i\right)=1$ and the $y_i$ are fixed. I need to show that $$ S_N=\frac{1}{N}\sum_{i=1}^{N} W_i \text{ converges in probability to } E(S_N) \text{ uniformly}. $$ Any quick help would be greatly appreciated.
If we have a sequence of matrices of fixed finite size, $(a_{ij}(n))$, such that $\lim_{n\rightarrow\infty} a_{ij}(n) = 0$ for every entry $(i,j)$, then the entries converge to $0$ uniformly over the finitely many entries. So you can reduce the problem to scalars.
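In symbols, the reduction just uses the fact that a maximum over finitely many vanishing sequences vanishes (here I write the matrices as $p\times q$, which covers the $2\times 2$ case in the question):

```latex
% Entrywise convergence of a fixed-size matrix implies convergence
% in the max-norm, uniformly over the finitely many entries:
\[
  \max_{1\le i\le p,\; 1\le j\le q} \lvert a_{ij}(n)\rvert
  \;\le\; \sum_{i=1}^{p}\sum_{j=1}^{q} \lvert a_{ij}(n)\rvert
  \;\xrightarrow[n\to\infty]{}\; 0,
\]
\[
  \text{so } \lVert A(n)\rVert_{\max}\to 0
  \text{ whenever each entry } a_{ij}(n)\to 0 .
\]
```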
If $\{X_i\}_{i=1}^{\infty}$ is a random process of mutually independent random variables, and if $\{a_i\}_{i=1}^{\infty}$ is a deterministic sequence of real numbers, then we can define $S_n = \frac{1}{n}\sum_{i=1}^n X_i a_i$. Then $S_n - E[S_n]$ will converge to 0 in probability (as $n\rightarrow \infty)$ under some mild assumptions on the random process $\{X_i\}$ and on the sequence $\{a_i\}$. Just compute the variance of $S_n - E[S_n]$ to derive the properties that you need.
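To make the "mild assumptions" concrete, here is one sufficient set (these conditions are my choice, not stated above): the $X_i$ have uniformly bounded variances, $\operatorname{Var}(X_i)\le\sigma^2$, and the weights are bounded, $\lvert a_i\rvert\le M$. Then the variance computation plus Chebyshev's inequality gives:

```latex
\[
  \operatorname{Var}(S_n)
  = \frac{1}{n^2}\sum_{i=1}^{n} a_i^2 \operatorname{Var}(X_i)
  \;\le\; \frac{\sigma^2}{n^2}\sum_{i=1}^{n} a_i^2
  \;\le\; \frac{\sigma^2 M^2}{n},
\]
\[
  P\bigl(\lvert S_n - E[S_n]\rvert > \epsilon\bigr)
  \;\le\; \frac{\operatorname{Var}(S_n)}{\epsilon^2}
  \;\le\; \frac{\sigma^2 M^2}{n\,\epsilon^2}
  \;\xrightarrow[n\to\infty]{}\; 0 .
\]
```

Weaker conditions also work (e.g. $\frac{1}{n^2}\sum_{i=1}^n a_i^2\operatorname{Var}(X_i)\to 0$ is all Chebyshev needs), but the bounded case shows the mechanism.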
Without such assumptions, the quantity $S_n - E[S_n]$ need not converge to 0 in any sense. For example, if $\{X_i\}$ is i.i.d. Bernoulli with $P[X_i=0]=P[X_i=1]=1/2$, and if $a_i = 2^i$, you can see that $S_n$ will jump whenever $X_n=1$, because the new term $2^n X_n$ dominates the entire preceding sum, so $S_n$ will never converge to anything.
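A quick simulation makes the counterexample visible; this is a minimal sketch (the function name `weighted_means` and the cutoff $n=60$ are my own choices):

```python
import random

random.seed(0)  # fix the sample path for reproducibility

def weighted_means(n_max):
    """Running values of S_n = (1/n) * sum_{i=1}^n X_i * 2**i
    for i.i.d. fair Bernoulli X_i."""
    total, out = 0.0, []
    for i in range(1, n_max + 1):
        total += random.randint(0, 1) * 2.0**i  # X_i * a_i with a_i = 2^i
        out.append(total / i)                    # S_i
    return out

s = weighted_means(60)
# Each time X_n = 1, the term 2**n dwarfs the running sum,
# so S_n jumps by roughly 2**n / n instead of settling down.
for n in (10, 20, 40, 60):
    print(n, s[n - 1])
```

Contrast this with bounded $a_i$, where the same simulation would show $S_n$ flattening out near its mean.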