A System of Infinite Linear Equations


Suppose that the sequence $\{a_{i}\}_{i=-\infty}^{\infty}$, with $\sum_{i=-\infty}^\infty a_{i} \lt \infty$, is known, and that $\{b_i\}_{i=-\infty}^{\infty}$ is such that

$$\sum_{i=-\infty}^\infty a_{i}b_{-i} =1,$$

and that, for all $k \in \mathbb Z\setminus\{0\}$,

$$\sum_{i=-\infty}^\infty a_{i}b_{-i+k} =0.$$

Is it possible to solve for $\{b_{i}\}_{i=-\infty}^{\infty}$ as a function of $\{a_{i}\}_{i=-\infty}^{\infty}$?

I just can't figure out where to start.

There are 3 answers below.

Accepted answer:

An analytical solution is not available in general, and a numerical method is required. To obtain an exact solution, the problem definition must be modified. The following is a restatement of the original question:

Problem: Take $\{a_{i}\}$ and $\{b_{i}\}$ as above, and add the new condition that $b_{n+k}=b_{-n+k}$. Then solve the following system of equations:

$$ \lim_{n \to \infty}\sum_{i=-n}^n a_{i}b_{-i+k}= \begin{cases} 1 & (k=0) \\ \\ 0 & (\text{otherwise}) \end{cases} $$

With this condition, a circular convolution appears on the left-hand side. To approximate an infinite convolution of non-periodic signals, one uses the periodic summation of one of them; this operation is very important in signal processing. The equation can also be expressed in linear-algebra form, with a matrix and vectors:

$$ \mathsf{A} \ \boldsymbol{b}=\boldsymbol{v} $$

where

$$ \mathsf{A}= \begin{bmatrix} a_{-n} & a_{n} & a_{n-1} & \cdots & a_{1-n} \\ a_{1-n} & a_{-n} & a_{n} & \cdots & a_{2-n} \\ a_{2-n} & a_{1-n} & a_{-n} & \cdots & a_{3-n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n} & a_{n-1} & a_{n-2} & \cdots & a_{-n} \end{bmatrix} $$

$$ \boldsymbol{b}= \begin{bmatrix} b_{-n} & b_{-n+1} & \cdots & b_{n-1} & b_{n} \end{bmatrix} ^{\text{T}} $$

and $\boldsymbol{v}$ is the vector whose entries are all zero except the last, which is one:

$$ \boldsymbol{v}= \begin{bmatrix} 0 & 0 & \cdots & \cdots & 0 & 1 \end{bmatrix} ^{\text{T}} $$

In particular, the matrix $\mathsf{A}$ is a circulant matrix, a special kind of Toeplitz matrix, with side length $N=2n+1$. It can therefore be diagonalized to a diagonal matrix $\mathsf{D}$ by the DFT matrix $\mathsf{W}$:

$$ \mathsf{W}=\cfrac{1}{\sqrt{N}} \begin{bmatrix} \omega_{0,0} & \omega_{1,0} & \omega_{2,0} & \cdots & \omega_{N-1,0} \\ \omega_{0,1} & \omega_{1,1} & \omega_{2,1} & \cdots & \omega_{N-1,1} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \omega_{j,k} & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ \omega_{0,N-1} & \cdots & \cdots & \cdots & \omega_{N-1,N-1} \end{bmatrix} $$

where,

$$ \omega_{j,k}=\exp\biggl(\cfrac{2\pi i}{N}jk\biggr) \ \ \ \ (j,k=0,1,\cdots,N-1). $$

Therefore the explicit solution for $\boldsymbol{b}$ is as follows (since $\mathsf{W}$ is unitary, $\mathsf{W}^{-1}=\mathsf{W}^{*}$, its conjugate transpose):

$$ \boldsymbol{b}= \mathsf{W}^{-1}\mathsf{D}^{-1}\mathsf{W} \ \boldsymbol{v} $$

where the entries of $\mathsf{D}$ are

$$ d_{j,k}= \begin{cases} \displaystyle \sum_{l=-n}^{n}a_{-l}\,\omega_{j,l} & (j=k) \\ \\ 0 & (\text{otherwise}) \end{cases} $$

Each diagonal element is an eigenvalue of $\mathsf{A}$. Furthermore, $b_{j}$ can be written in expanded form as follows:

$$ b_{j+1}=\lim_{n\to \infty} \Biggl[ \, \cfrac{1}{N} \displaystyle \sum_{l=0}^{2n} \cfrac{\omega_{j,l}\,\omega_{2n,l} }{\displaystyle\sum_{k=-n}^{n}a_{-k}\,\omega_{l,k}} \, \Biggr] $$
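The construction above can be checked numerically. The following Python sketch (with an assumed example sequence $a_i = 0.3^{|i|}$, not taken from the answer, and with the delta of $\boldsymbol{v}$ placed at index $0$ rather than at the last entry, which only permutes the indexing) builds the circulant matrix for a finite $n$ and solves $\mathsf{A}\,\boldsymbol{b}=\boldsymbol{v}$ through the FFT, using the fact that the eigenvalues of a circulant matrix are the DFT of its first column.

```python
import numpy as np

# Assumed example coefficients a_{-n}, ..., a_n: a_i = 0.3**|i| (the dominant
# a_0 keeps every eigenvalue away from zero, so the circulant system is solvable).
n = 4
N = 2 * n + 1
a = 0.3 ** np.abs(np.arange(-n, n + 1))   # a[i] holds a_{i-n}

# First column [a_0, a_1, ..., a_n, a_{-n}, ..., a_{-1}] of the circulant
# matrix A, whose entries are A[k, j] = a_{(k-j) mod N}, i.e. (A b)_k is the
# circular convolution sum_j a_{k-j} b_j.
c = np.roll(a, -n)
A = np.array([[c[(k - j) % N] for j in range(N)] for k in range(N)])

v = np.zeros(N)
v[0] = 1.0                                # discrete delta

# The eigenvalues of a circulant matrix are the DFT of its first column, so
# A b = v is solved by pointwise division in the frequency domain.
b = np.fft.ifft(np.fft.fft(v) / np.fft.fft(c)).real
```

For growing $n$, this periodic solution approximates the deconvolving sequence of the original non-periodic problem, which is the point of the periodic-summation reformulation above.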

Second answer:

Note: This did not help, but was fun:

Let us try this for a finite sum: $$ 1 = \begin{pmatrix} a_{-1} & a_0 & a_1 \end{pmatrix} \begin{pmatrix} b_1 \\ b_0 \\ b_{-1} \end{pmatrix} = \begin{pmatrix} a_{-1} & a_0 & a_1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} b_{-1} \\ b_0 \\ b_1 \end{pmatrix} $$ For $k=1$: $$ 0 = \begin{pmatrix} a_{-1} & a_0 & a_1 \end{pmatrix} \begin{pmatrix} 0 \\ b_1 \\ b_0 \end{pmatrix} = \begin{pmatrix} a_{-1} & a_0 & a_1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} b_{-1} \\ b_0 \\ b_1 \end{pmatrix} $$ This leads to the equations $$ a^\top D^{(k)} b = \delta_{k0} $$ for a matrix $D^{(k)}$ with components $$ d_{ij}^{(k)} = \delta_{(k-i)j} $$ where we also use negative indices for the matrix elements, thus having the center element at $d_{00}^{(k)}$.

This leads to a matrix equation $$ A b = y $$ with $$ A = \begin{pmatrix} & \vdots & \vdots & \vdots \\ \dotsb & a_0 & a_{-1} & a_{-2} & \dotsb \\ \dotsb & a_1 & a_0 & a_{-1} & \dotsb \\ \dotsb & a_2 & a_1 & a_{0} & \dotsb \\ & \vdots & \vdots & \vdots \\ \end{pmatrix} \quad y= \begin{pmatrix} \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \end{pmatrix} $$ and the formal solution $$ b = A^{-1} y $$ which is the central column of $A^{-1}$.
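As a numerical illustration of this formal solution, one can truncate the doubly infinite system to $|i| \le n$ and take the central column of $A^{-1}$. The sketch below uses an assumed example family $a_i = r^{|i|}$ with $|r| < 1$ (not from the answer), for which the exact deconvolving sequence happens to be finitely supported: $b_0 = (1+r^2)/(1-r^2)$, $b_{\pm 1} = -r/(1-r^2)$, and $b_i = 0$ otherwise.

```python
import numpy as np

# Assumed example: a_i = r**|i|, |r| < 1 (a Kac-Murdock-Szego Toeplitz matrix).
r = 0.4
n = 30                                   # truncation half-width
idx = np.arange(-n, n + 1)

# Truncated Toeplitz matrix A[k, j] = a_{k-j} for |k|, |j| <= n.
A = r ** np.abs(idx[:, None] - idx[None, :])

y = np.zeros(2 * n + 1)
y[n] = 1.0                               # delta at the central index 0

b = np.linalg.solve(A, y)                # central column of A^{-1}

# Known closed-form inverse sequence for this family.
exact = np.zeros(2 * n + 1)
exact[n] = (1 + r**2) / (1 - r**2)
exact[n - 1] = exact[n + 1] = -r / (1 - r**2)
```

Here the truncated solve already reproduces the exact $b$, because that $b$ is supported on $\{-1,0,1\}$; for a general summable $a$ one would instead study the behaviour of the central column as $n \to \infty$.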

Third answer:

Let's start by considering the case in which the sequences have finite support, with indices running within $\left[ {0,h } \right]$, i.e. let's translate the problem as:

$$ \left\{ \begin{gathered} a_{\;j} = 0,\;b_{\;j} = 0\quad \left| {\;j < 0\; \vee \;h < j} \right. \hfill \\ \sum\limits_{0\, \leqslant \,j\, \leqslant \,h} {a_{\;j} } < \infty \hfill \\ \sum\limits_{0\, \leqslant \,j\, \leqslant \,h} {a_{\;j} \,b_{\;k - j} } = \delta _{\;0,\,k} \quad \left| {\;0\, \leqslant \,k,\,j\, \leqslant \,h} \right. \hfill \\ \end{gathered} \right. $$

which means that $\delta$ is the convolution of $a$ with $b$, and this can be solved iteratively. $$ \left\{ \begin{aligned} & a_{0} \,b_{0} = 1 \\ & a_{0} \,b_{1} + a_{1} \,b_{0} = 0 \\ & \quad \vdots \\ & a_{0} \,b_{h} + a_{1} \,b_{h - 1} + \cdots + a_{h} \,b_{0} = 0 \end{aligned} \right. \quad \Rightarrow \quad \left\{ \begin{aligned} & b_{0} = 1/a_{0} \\ & b_{1} = - \frac{a_{1}}{a_{0}}\,b_{0} = - \frac{a_{1}}{a_{0}^{2}} \\ & b_{2} = - \frac{a_{1}}{a_{0}}\,b_{1} - \frac{a_{2}}{a_{0}}\,b_{0} = \frac{1}{a_{0}^{2}}\left( \frac{a_{1}^{2}}{a_{0}} - a_{2} \right) \\ & \quad \vdots \\ & a_{0}\,b_{n + 1} = -\left(a_{1}\,b_{n} + \cdots + a_{n + 1}\,b_{0}\right) \end{aligned} \right. $$
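The recursion above is straightforward to run. A minimal Python sketch (the coefficients are an assumed example, not from the answer):

```python
# Compute the convolution inverse b of a one-sided sequence a_0, a_1, ...
def convolution_inverse(a, terms):
    """Return b_0..b_{terms-1} with b_0 = 1/a_0 and
    a_0*b_m = -(a_1*b_{m-1} + ... + a_m*b_0)."""
    b = [1.0 / a[0]]
    for m in range(1, terms):
        s = sum(a[i] * b[m - i] for i in range(1, min(m, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

a = [2.0, 1.0, 0.5]                 # assumed example coefficients
b = convolution_inverse(a, 8)

# Check: the convolution of a and b is the delta sequence (1, 0, 0, ...).
conv = [sum(a[i] * b[k - i] for i in range(len(a)) if 0 <= k - i < len(b))
        for k in range(8)]
```

Each new $b_m$ uses only previously computed terms, matching the triangular structure of the system.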

But being a convolution, it also means that it is the product of (formal) unilateral power series, i.e. unilateral z-transforms. $$ \left\{ \matrix{ A(x) = \sum\limits_{0\, \le \,k\,\left( { \le \,h} \right)} {a_{\;k} \,x^{\,k} } \hfill \cr B(x) = \sum\limits_{0\, \le \,k\,\left( { \le \,h} \right)} {b_{\;k} \,x^{\,k} } \hfill \cr A(1) = \sum\limits_{0\, \le \,k\,\left( { \le \,h} \right)} {a_{\;k} } < \infty \hfill \cr 1 = A(x)B(x) \hfill \cr} \right. $$

so that if $A(x)$ is a known polynomial/function, $B(x)$ will be its reciprocal, which may turn out to be an infinite sequence.

If the sequences are bilateral, as you propose, then we move to bilateral z-transforms.

Finally, besides the z-transform, one may also consider working with the Discrete and/or Discrete-Time Fourier transforms.
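As a sketch of the transform route: on the unit circle the convolution equation becomes $\hat A(\theta)\,\hat B(\theta) = 1$, where $\hat A(\theta)=\sum_i a_i e^{-\mathrm{i}\theta i}$ is the discrete-time Fourier transform, so $b_k$ is the inverse transform of $1/\hat A$. The Python sketch below approximates that inverse transform with an FFT, using the assumed example $a_i = r^{|i|}$, whose deconvolving sequence is known in closed form ($b_0=(1+r^2)/(1-r^2)$, $b_{\pm1}=-r/(1-r^2)$).

```python
import numpy as np

r = 0.3        # assumed example a_i = r**|i|
N = 256        # number of frequency samples
n = 60         # where to truncate the (rapidly decaying) tails of a
i = np.arange(-n, n + 1)
theta = 2 * np.pi * np.arange(N) / N

# Sample the DTFT A_hat(theta) = sum_i a_i exp(-1j*theta*i) on the grid.
A_hat = ((r ** np.abs(i))[:, None] * np.exp(-1j * np.outer(i, theta))).sum(axis=0)

# b_k is the inverse DTFT of 1/A_hat; the inverse FFT approximates the
# integral (1/2pi) * int exp(1j*k*theta) / A_hat(theta) dtheta.
b_grid = np.fft.ifft(1.0 / A_hat)        # index k = 0..N-1; negative k wraps around
b0, b1, bm1 = b_grid[0].real, b_grid[1].real, b_grid[-1].real
```

Division by $\hat A$ is only legitimate when $\hat A$ has no zeros on the unit circle; for this example $\hat A(\theta) = (1-r^2)/(1-2r\cos\theta+r^2) > 0$, so the inversion is safe.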