Let $T : a \mapsto b$ be a transformation of a sequence $a$ to a sequence $b$ of the form
$$ T(a)_m = b_m = \sum_{k=1}^{\infty} a_k e^{-i 2 \pi m / k } $$
Question. How would you go about determining whether this transformation is invertible? If it is, can the inverse be written in the same form (as a linear combination)?
Note that this is not the Fourier transform of a sequence, which results in a continuous function on $\mathbb{C}$, and it is not a restriction of that map either: notice that we have $m / k$, not the opposite, which would indeed make it the usual Fourier transform.
I'm new to functional analysis, so I have no idea where to begin, but don't be afraid to use high-level terminology, especially high-level topological language, since I know some of that.
Thanks.
I don't know whether this question is too old for an answer to still make sense this late. But in case it does: I'd try to attack the problem using the LDU-decomposition of the matrix $E$ made of the coefficients $e^{-i 2 \pi m /k}$. I'll rewrite the variable names as $r$ and $c$ to make unambiguous (and memorable) which is the row and which the column, and that they begin at index $1$, so I assume $$ E_{r,c} = e^{-i2 \pi {r \over c}} \qquad \qquad \text { for } r,c \ge 1.$$ Now the LDU-decomposition of finitely truncated versions of $E$ gives triangular matrices $L$ (lower) and $U$ (upper) and a diagonal matrix $D$ whose entries are always the same, independently of the truncation size, so we may assume that they are also valid truncations of the infinite case.
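The truncation-invariance claim is easy to probe numerically. Below is a sketch in Python/NumPy; `E_matrix` and the Doolittle-style `ldu` routine are my own helpers (not library functions), and they assume no pivot in the elimination happens to vanish:

```python
import numpy as np

def E_matrix(n):
    """Top-left n-by-n truncation of E_{r,c} = exp(-2*pi*i*r/c), r,c >= 1."""
    idx = np.arange(1, n + 1)
    return np.exp(-2j * np.pi * np.outer(idx, 1.0 / idx))

def ldu(M):
    """Doolittle LU without pivoting, then split U into D * U1 (unit diagonals)."""
    n = M.shape[0]
    L = np.eye(n, dtype=complex)
    U = M.astype(complex).copy()
    for j in range(n - 1):
        L[j + 1:, j] = U[j + 1:, j] / U[j, j]
        U[j + 1:, :] -= np.outer(L[j + 1:, j], U[j, :])
    d = np.diag(U).copy()
    return L, np.diag(d), U / d[:, None]

L8, D8, U8 = ldu(E_matrix(8))
L16, D16, U16 = ldu(E_matrix(16))

# the factors of the 8x8 truncation reappear as the top-left blocks
# of the factors of the 16x16 truncation
print(np.allclose(L8, L16[:8, :8]), np.allclose(U8, U16[:8, :8]))
```

This nesting is the general behaviour of pivot-free LDU on leading principal submatrices, which is what justifies speaking of factors of the infinite matrix at all.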
Inverting $E$ would now mean forming the product of the inverses $$E^{-1} = U^{-1} D^{-1} L^{-1}.$$ I'm not sure at the moment, but I think that by construction $L$ and $U$ are always invertible (thus also in the infinite case), since they are triangular with units on the diagonal; only for $D$ must we show that no zero occurs on the diagonal.
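The no-zero-on-the-diagonal condition can at least be probed numerically for growing truncations. This is a self-contained sketch (the `pivots` helper is hypothetical, just running pivot-free Gaussian elimination and reading off the diagonal of $D$):

```python
import numpy as np

def E_matrix(n):
    """Top-left n-by-n truncation of E_{r,c} = exp(-2*pi*i*r/c)."""
    idx = np.arange(1, n + 1)
    return np.exp(-2j * np.pi * np.outer(idx, 1.0 / idx))

def pivots(M):
    """Diagonal of D in the pivot-free LDU decomposition of M."""
    n = M.shape[0]
    U = M.astype(complex).copy()
    for j in range(n - 1):
        U[j + 1:, :] -= np.outer(U[j + 1:, j] / U[j, j], U[j, :])
    return np.diag(U).copy()

for n in (8, 16, 32):
    d = pivots(E_matrix(n))
    print(n, np.abs(d).min())   # smallest |D_jj| seen in this truncation
```

This only rules out zeros up to the tested size (and up to double-precision rounding); whether $|D_{jj}|$ stays bounded away from zero as $j \to \infty$ would need an actual argument.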
Then your initial transformation, with the column vectors $A$ (containing the coefficients $a_k$) and $B$ (containing the coefficients $b_k$), looks like $$ E \cdot A = B $$ or $$ L D U \cdot A = B. $$ Because $L^{-1}$ is lower triangular, we can write the partially inverted equation $$ U \cdot A = D^{-1} L^{-1} B $$ for any truncation, and because the entries of the matrices are invariant under truncation we may assume that this also holds in the case of infinite size. Because $L$ and $D$ are row-finite, the entries on the rhs are all finitely determinable and thus well defined.

The problem is now the premultiplication by $U^{-1}$. The dot products of rows and columns are now infinite series, and you must find arguments for whether all those dot products converge (or are at least summable) to make sure you can invert your transformation. Let's write the symbol $*_\infty$ for that (questionable/unsafe) infinite matrix product, so that we have $$ A = U^{-1} \underset{\infty}* (D^{-1} L^{-1} B). $$ Your question asks for a "fixed" inverse $F$ of $E$ such that $$F = E^{-1} = U^{-1} \underset{\infty}* D^{-1}L^{-1} = U^{-1} \cdot D^{-1}L^{-1},$$ and it might be that such an inverse does not exist because of divergence of the dot products in that matrix multiplication. Possibly you have a general expression for the entries of the rows/columns of the inverses of $U$ and $L$ and can conclude whether the dot products converge or are at least summable.
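At a fixed truncation, where all three steps are finite, the scheme does recover a finitely supported $A$ from $B = E \cdot A$. A self-contained sketch (the `ldu_solve` helper is my own, and I use `np.linalg.solve` in place of hand-written forward/back substitutions):

```python
import numpy as np

def E_matrix(n):
    """Top-left n-by-n truncation of E_{r,c} = exp(-2*pi*i*r/c)."""
    idx = np.arange(1, n + 1)
    return np.exp(-2j * np.pi * np.outer(idx, 1.0 / idx))

def ldu_solve(E, B):
    """Solve E x = B via L D U: forward-solve L, scale by D^{-1}, back-solve U."""
    n = len(B)
    L = np.eye(n, dtype=complex)
    U = E.astype(complex).copy()
    for j in range(n - 1):
        L[j + 1:, j] = U[j + 1:, j] / U[j, j]
        U[j + 1:, :] -= np.outer(L[j + 1:, j], U[j, :])
    d = np.diag(U).copy()
    y = np.linalg.solve(L, B) / d          # rhs of  U1 * A = D^{-1} L^{-1} B
    return np.linalg.solve(U / d[:, None], y)

n = 8
A = np.zeros(n, dtype=complex)
A[:3] = [1.0, 2.0, 3.0]                    # a finitely supported test sequence a_k
B = E_matrix(n) @ A
A_rec = ldu_solve(E_matrix(n), B)
print(np.allclose(A_rec, A))
```

Of course this says nothing yet about the infinite case; it only confirms the algebra of the three-step solve at a finite size.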
However, even if divergences occur here, a solution might still exist for some sequences $B$: it is possible that the column vector of the partial product $$ G = D^{-1} L^{-1} B $$ becomes column-finite (only a finite number of entries are nonzero), and the premultiplication by $U^{-1}$ is then possible. This effect can be seen, for example, if $ D^{-1} L^{-1} $ equals the inverse of the lower triangular Pascal matrix and $B$ contains the consecutive integers raised to some power, i.e. the terms of a Dirichlet series with negative integer exponent: then $G$ reduces to a vector with only finitely many leading nonzero entries. (This example occurs, for instance, in H. Hasse's proof of his summability procedure for the Riemann zeta function.)
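The Pascal-matrix effect is easy to reproduce: applying the inverse of the lower triangular Pascal matrix to $b_j = (j+1)^p$ amounts to taking iterated finite differences of a degree-$p$ polynomial, so every entry beyond index $p$ vanishes. A small sketch (indices here start at $0$, and I use the closed form $(P^{-1})_{i,j} = (-1)^{i-j}\binom{i}{j}$):

```python
import numpy as np
from math import comb

n, p = 10, 3
# inverse of the lower triangular Pascal matrix P_{i,j} = C(i,j)
Pinv = np.array([[(-1) ** (i - j) * comb(i, j) for j in range(n)]
                 for i in range(n)], dtype=float)
b = np.array([(j + 1) ** p for j in range(n)], dtype=float)  # 1^p, 2^p, 3^p, ...
g = Pinv @ b
print(g)   # entry i is the i-th finite difference of x^p; zero for all i > p
```

For $p = 3$ this gives $G = (1, 7, 12, 6, 0, 0, \dots)$, i.e. column-finite, so a subsequent premultiplication by an infinite upper triangular matrix needs only finitely many terms per row.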
In your example, when I evaluate the dot products of one row of $U^{-1}$ with the columns of $ D^{-1} L^{-1} $ up to truncation size 64x64, it seems that the first few rows might give convergent series (possibly the first row gives a divergent one) and the later rows are inconclusive; but judging from the tendency as I increase the size from 16x16 through 32x32 and 48x48 to 64x64, it seems that convergence might occur/become visible only after more than 64 terms.
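That probe can be scripted as follows; this is my own reconstruction of the experiment, not the original code, and in plain double precision the pivot-free elimination may lose accuracy at larger sizes (high-precision arithmetic would be safer). The idea: watch one entry of $U^{-1} \cdot (D^{-1} L^{-1})$ across truncation sizes and see whether it stabilizes:

```python
import numpy as np

def E_matrix(n):
    """Top-left n-by-n truncation of E_{r,c} = exp(-2*pi*i*r/c)."""
    idx = np.arange(1, n + 1)
    return np.exp(-2j * np.pi * np.outer(idx, 1.0 / idx))

def factors(M):
    """Pivot-free LDU: returns L, diag of D, and unit-diagonal U1."""
    n = M.shape[0]
    L = np.eye(n, dtype=complex)
    U = M.astype(complex).copy()
    for j in range(n - 1):
        L[j + 1:, j] = U[j + 1:, j] / U[j, j]
        U[j + 1:, :] -= np.outer(L[j + 1:, j], U[j, :])
    d = np.diag(U).copy()
    return L, d, U / d[:, None]

# if an entry keeps drifting as n grows, the corresponding
# dot product has not (yet) converged at that truncation size
for n in (16, 32, 48):
    L, d, U1 = factors(E_matrix(n))
    M = np.linalg.inv(U1) @ (np.linalg.inv(L) / d[:, None])
    print(n, M[1, 1])
```

I make no claim about what the printed values do; the point is only that the drift (or stabilization) of such entries is exactly the convergence question posed above.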
For reference: here is the top-left segment of $U^{-1}$:
and here the top-left segment of $D^{-1} L^{-1}$: