Murray Rosenblatt in his 2000 book "Gaussian and Non-Gaussian Linear Time Series and Random Fields" introduces stationary linear random fields with $d$-dimensional vector indices thus: \begin{equation} x_\textbf{t} =\sum_{\textbf{j}} a_{\textbf{j}} \xi_{\textbf{t} - \textbf{j} } = (a * \xi)_{\textbf{t}} \end{equation} where
\begin{equation} \sum_{\textbf{j}}a_{\textbf{j}}^2 < \infty \end{equation} and where the subscripts $\textbf{t}$ and $\textbf{j}$ are $d$-vectors, the summation extends over all lattice points in $Z^d$, and the $\xi$'s are independent, identically distributed random variables with $E\xi \equiv 0$ and $0 < E \xi_{\textbf{j}}^2 < \infty$.
If the indices were one-dimensional, this would say that $x_t$ is a weighted sum of the values $\xi_{t-j}$, with each term weighted by $a_j$.
What does it mean when the indices are multidimensional?
If I think of it in two dimensions, then each $x_{\textbf{t}}$ is a point on a grid, and each of those points is determined by summing a set of points on another grid containing the $\xi$'s (with each $\xi_{\textbf{t}- {\textbf{j}} }$ weighted by $a_{\textbf{j}}$). This also means that which points on the $\xi$ grid are summed to determine each $x_{\textbf{t}}$ is determined by the set of $\textbf{j}$'s.
My (experimental) example:
If $\textbf{t} \in \{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \} $ and $\textbf{j} \in \{ \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 1 \end{bmatrix} \}$
then for $\textbf{t} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ we have \begin{equation} x_{\begin{bmatrix} 1 \\ 0 \end{bmatrix}} = a_{\begin{bmatrix} 0 \\ 0 \end{bmatrix}} \xi_{\begin{bmatrix} 1 \\ 0 \end{bmatrix}} + a_{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} \xi_{\begin{bmatrix} 1 \\ 0 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \end{bmatrix}} \end{equation}
\begin{equation} = a_{\begin{bmatrix} 0 \\ 0 \end{bmatrix}} \xi_{\begin{bmatrix} 1 \\ 0 \end{bmatrix}} + a_{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} \xi_{\begin{bmatrix} 0 \\ -1 \end{bmatrix} } \end{equation}
and for $\textbf{t} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ we have
\begin{equation} x_{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} = a_{\begin{bmatrix} 0 \\ 0 \end{bmatrix}} \xi_{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} + a_{\begin{bmatrix} 1 \\ 1 \end{bmatrix}} \xi_{\begin{bmatrix} 0 \\ 0 \end{bmatrix} } \end{equation}
Is this the idea, except that in applications we would sum over a larger set of index vectors? If so, what does it mean when the index on $\xi$ is not in the index set for $\textbf{j}$? (As is the case in my experimental example, where I get $\xi_{\begin{bmatrix} 0 \\ -1 \end{bmatrix}}$.)
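The two-term example above can be checked numerically. Here is a minimal sketch in Python; the weight values and noise values are made-up numbers, purely for illustration:

```python
# Hypothetical nonzero coefficients a_j, indexed by 2-vectors (as tuples)
a = {(0, 0): 0.5, (1, 1): 0.3}

# Hypothetical realized noise values xi at the lattice points we need
xi = {(1, 0): 2.0, (0, -1): -1.0, (1, 1): 4.0, (0, 0): 1.0}

def x(t, a, xi):
    """x_t = sum_j a_j * xi_{t - j}, summing over the nonzero a_j."""
    return sum(a_j * xi[(t[0] - j[0], t[1] - j[1])] for j, a_j in a.items())

# t = (1, 0): a_(0,0)*xi_(1,0) + a_(1,1)*xi_(0,-1) = 0.5*2.0 + 0.3*(-1.0) = 0.7
print(x((1, 0), a, xi))
# t = (1, 1): a_(0,0)*xi_(1,1) + a_(1,1)*xi_(0,0) = 0.5*4.0 + 0.3*1.0 = 2.3
print(x((1, 1), a, xi))
```

Note that the lookup for $t = (1,0)$ reaches the point $(0,-1)$ on the $\xi$ grid, exactly the situation asked about: the subtraction $\textbf{t} - \textbf{j}$ can land anywhere on the lattice, not only at points of the form $\textbf{j}$.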
In dimension one, linear time series are usually defined by $$ x_t:=\sum_{j\in\mathbb Z}a_j\xi_{t-j}, $$ where $\left(\xi_i\right)_{i\in\mathbb Z}$ is independent and identically distributed with $\xi_0$ centered and square integrable, and $\sum_{j\in\mathbb Z}a_j^2$ is finite. Under these conditions, the series converges.
In dimension two, this definition becomes $$ x_{t_1,t_2}=\sum_{j_1\in\mathbb Z}\sum_{j_2\in\mathbb Z}a_{j_1,j_2}\xi_{t_1-j_1,t_2-j_2}, $$ where $\left(\xi_{i_1,i_2}\right)_{i_1,i_2\in\mathbb Z}$ is independent and identically distributed with $\xi_{0,0}$ centered and square integrable, and $\sum_{j_1,j_2\in\mathbb Z}a_{j_1,j_2}^2$ is finite. Here, the infinite sum is understood as the limit in $\mathbb L^2$ of the sequence $\left(x_{t_1,t_2}^{(m)}\right)_{m\geqslant 1}$ where $$x_{t_1,t_2}^{(m)}=\sum_{j_1=-m}^m\sum_{j_2=-m}^ma_{j_1,j_2}\xi_{t_1-j_1,t_2-j_2}.$$ The same idea applies to dimension $d$.
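A short numerical sketch of this truncation: the coefficient formula, grid size, and seed below are hypothetical choices made only to show the partial sums $x^{(m)}_{t_1,t_2}$ stabilizing as $m$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# A square-summable coefficient array (hypothetical choice)
def a(j1, j2):
    return 1.0 / (((1 + abs(j1)) * (1 + abs(j2))) ** 2)

# iid centered noise on a finite grid; index (i1, i2) is stored at [i1 + N, i2 + N]
N = 60
xi = rng.standard_normal((2 * N + 1, 2 * N + 1))

def x_m(t1, t2, m):
    """Truncated sum x^{(m)}_{t1,t2} = sum_{|j1|<=m, |j2|<=m} a_{j1,j2} xi_{t1-j1, t2-j2}."""
    s = 0.0
    for j1 in range(-m, m + 1):
        for j2 in range(-m, m + 1):
            s += a(j1, j2) * xi[t1 - j1 + N, t2 - j2 + N]
    return s

# Successive truncations at the origin; the values settle down as m grows,
# illustrating (on one sample path) the L^2 convergence described above.
for m in (1, 5, 20, 40):
    print(m, x_m(0, 0, m))
```

With square-summable coefficients, the tail terms contribute less and less, which is what makes the $\mathbb L^2$ limit exist.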