Consider a sequence of i.i.d. random variables $\{\xi_j\}_{j=1}^J$.
Consider a random variable $X$ with support $\mathbb{R}$.
Consider the map $f: \mathbb{R}\rightarrow \mathbb{R}^J$. Let $f_j(X)$ be the $j$-th element of the $J\times 1$ random vector $f(X)$.
Assume that $E(\xi_j| X)=0$ $\forall j=1,...,J$.
I want to show that $$ \frac{1}{J} \sum_{j=1}^J f_j(X) \xi_j\rightarrow_p0 \text{ as $J\rightarrow \infty$} $$
The book I'm reading claims that this holds because:
The random variables in the sequence $\{f_j(X) \xi_j\}_{j=1}^J$ are i.i.d. conditional on $(f_j(X) \text{ }\forall j=1,...,J)$.
Hence, under the law of large numbers for triangular arrays, $ \frac{1}{J} \sum_{j=1}^J f_j(X) \xi_j\rightarrow_p0$ as $J\rightarrow \infty$
I'm struggling to understand this proof.
The law of large numbers for triangular arrays states:
Consider the triangular array $\{(Y_{J,j})_{j=1}^J\}_{J \in \mathbb{N}}$. Assume $Y_{J,1},\ldots,Y_{J,J}$ are i.i.d. random variables with mean $\mu_J$. Then, under some conditions [?], it holds that
$$
{1 \over J}\sum_{j=1}^J Y_{J,j}-\mu_J \to_p 0 \text{ as $J\rightarrow \infty$} $$
(see here for example).
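For intuition, the theorem can be checked numerically. The sketch below is purely illustrative: the exponential distribution and the drifting mean $\mu_J = 1 + 1/J$ are my own choices, not from any book, chosen only so that the row distribution genuinely changes with $J$ as the triangular-array setting allows.

```python
import numpy as np

rng = np.random.default_rng(0)

# For each J, draw one row Y_{J,1}, ..., Y_{J,J} of the triangular array.
# Illustrative assumption: Y_{J,j} ~ Exponential with mean mu_J = 1 + 1/J,
# so the row distribution (and its mean) drifts with J, as the theorem allows.
for J in [10, 100, 10_000]:
    mu_J = 1 + 1 / J
    Y = rng.exponential(scale=mu_J, size=J)
    print(J, abs(Y.mean() - mu_J))  # deviation tends to shrink as J grows
```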
But what is $ Y_{J,j}$ in my example? Could you help me to clarify?
My interpretation:
First of all, $f$ should really be $f^J$, since the codomain of $f$ is $\mathbb{R}^J$, so different $J$'s give different $f$'s.
Then the thing you want to prove becomes (edit in red):
$$\frac{1}{J} \sum_{j=1}^J f^\color{red}{J}_j(X) \xi_j\rightarrow_p0 \text{ as $J\rightarrow \infty$}$$
Then comparing to your theorem for triangular array, we identify $Y_{J,j} \equiv f^J_j(X) \xi_j$, and note that:
$$\mu_J = E[Y_{J,j}] = E[f^J_j(X) \xi_j] = E_X\!\left[ f^J_j(X)\, E(\xi_j \mid X)\right] = E_X [ f^J_j(X)\cdot 0] = 0$$
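With a well-behaved $f$ this identification does deliver convergence, which a quick simulation can illustrate. The choices below are illustrative assumptions of mine: $f_j(x) = \sin(jx)$ (bounded, so the summands have bounded variance) and $\xi_j \sim N(0,1)$ drawn independently of $X$, which guarantees $E(\xi_j \mid X) = 0$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative assumptions: f_j(x) = sin(j*x), which is bounded, and
# xi_j ~ N(0,1) independent of X, so E(xi_j | X) = 0 holds.
for J in [10, 100, 10_000]:
    X = rng.normal()                   # one draw of X
    j = np.arange(1, J + 1)
    xi = rng.normal(size=J)            # xi_1, ..., xi_J
    avg = np.mean(np.sin(j * X) * xi)  # (1/J) * sum_j f_j(X) * xi_j
    print(J, abs(avg))                 # tends toward 0 as J grows
```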
However, as pointed out by the comment of @AngelaRichardson, you might be missing some condition on $f$ (and/or on the moments of $Y_{J,j}$). E.g. consider:
$$f^J_j(X) = \begin{cases} JX, & j = J\\ 0, & j < J \end{cases} $$
Then only the $j = J$ term is nonzero, so:
$$\frac{1}{J} \sum_{j=1}^J f^J_j(X) \xi_j = \frac{1}{J} (JX) \xi_J = X \xi_J \not\to_p 0$$
Even if you additionally impose that $f_j^J = f_j^K$ for all $j, J, K$ in range (which rules out the example above), Angela's example $f_j(X) = 2^j X$ is still likely to give $\not\to_p 0$.
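The first counterexample is easy to confirm by simulation: since the average collapses to the single term $X \xi_J$, its distribution is the same for every $J$ and never concentrates at $0$. Below, $X$ and the $\xi_j$ are taken i.i.d. standard normal, an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 5_000  # Monte Carlo replications

# With f^J_j(X) = J*X for j = J and 0 otherwise, the whole average
# (1/J) * sum_j f^J_j(X) * xi_j collapses to the single term X * xi_J.
# Illustrative assumption: X and the xi_j are i.i.d. standard normal.
for J in [10, 100, 1_000]:
    X = rng.normal(size=reps)
    xi_J = rng.normal(size=reps)
    print(J, np.mean(np.abs(X * xi_J)))  # stays near 2/pi ~ 0.64, no decay
```

For independent standard normals, $E|X\xi_J| = E|X|\,E|\xi_J| = 2/\pi$, so the printed values hover around $0.64$ for every $J$ instead of decaying.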