In a strict mathematical sense, can a convolution/deconvolution be equivalent to a coding/decoding process? A reviewer just remarked that the two are strictly different, which surprised me a little.
To explain the process, let's work with two filters (codes?) $H_1(f)$ and $H_2(f)$ that are orthogonal, such that $H_1(f)\, H_2(f)^* = 0$.
I'm compressing two signals $X_1(f)$ and $X_2(f)$ with these filters: $Y(f) = H_1(f)\, X_1(f) + H_2(f)\, X_2(f)$
My time-domain convolutions are written as simple products in the frequency domain. Thus, my signals are projected onto orthogonal bases and summed.
Then I can estimate my signals (let's try with $X_1(f)$) by deconvolving the sum:
$X_1(f)_{est} = Y(f)\, \frac{H_1(f)^*}{\lvert H_1(f) \rvert^2 + \alpha}$
(here, $\alpha$ is a small regularization constant that prevents division by values of $\lvert H_1(f) \rvert^2$ close to zero)
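The whole scheme can be sketched numerically. This is a minimal illustration under assumed parameters (ideal brick-wall filters with disjoint supports, random test signals, $\alpha = 10^{-6}$), not a model of any particular system:

```python
import numpy as np

n = 256
f = np.fft.fftfreq(n)

# Two filters with disjoint frequency supports, so H1(f) * conj(H2(f)) = 0
H1 = (np.abs(f) < 0.25).astype(float)   # low-band filter
H2 = (np.abs(f) >= 0.25).astype(float)  # high-band filter, orthogonal to H1
assert np.all(H1 * np.conj(H2) == 0)

rng = np.random.default_rng(0)
X1 = np.fft.fft(rng.standard_normal(n))
X2 = np.fft.fft(rng.standard_normal(n))

# Combine: Y(f) = H1(f) X1(f) + H2(f) X2(f)
Y = H1 * X1 + H2 * X2

# Regularized deconvolution: alpha keeps the division well-behaved
# where |H1(f)| is (near) zero
alpha = 1e-6
X1_est = Y * np.conj(H1) / (np.abs(H1) ** 2 + alpha)

# Inside the passband of H1 the estimate matches X1 up to an
# alpha-sized relative error; the H2-branch is rejected entirely
band = H1 > 0
print(np.max(np.abs(X1_est[band] - X1[band])))
```

Outside the passband of $H_1$ the estimate is simply zero, which is where the information loss of this scheme lives.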
So, finally, can this method be considered a coding/decoding process, given this projection onto orthogonal subspaces?
In communication/information theory, the terms coding and decoding are used in two contexts:
1) Source coding: In the source coding problem, we are looking for a compact representation of the data generated by a source, subject to a fidelity criterion. Consider a source that periodically generates symbols from an alphabet $\mathcal{X}$, according to some probabilistic rule (e.g., i.i.d.). A source code is a map from the set of length-$n$ strings of symbols to the set of length-$nR$ binary strings, which are the compressed representations of each block of symbols: $\mathcal{X}^n \to \{ 0,1\}^{nR}$. $R$ is called the coding rate in bits. A decoder, on the other hand, is a map $\{ 0,1\}^{nR} \to \hat{\mathcal{X}}^n$, where $\hat{\mathcal{X}}$ is the reconstruction alphabet. For a good source code, the "distortion" between the reconstructed sequence and the original one is small, with respect to the specified fidelity criterion.
2) Channel coding: In the channel coding problem, we add redundancy to the compressed data, so that it can be reliably reconstructed at the receiver, when transmitted over a noisy channel. A channel code is a map from the length-$nR$ binary strings to a set of length-$n$ channel symbols: $\{ 0,1\}^{nR} \to \mathcal{C}^n$, where $\mathcal{C}$ is the alphabet of symbols accepted by the channel. $R$ is similarly called the coding rate in bits. Assuming the channel generates symbols (probabilistically, based on the input symbols) over the alphabet $\mathcal{Y}$, a decoder is any map $\mathcal{Y}^n \to \{ 0,1\}^{nR}$. A good channel encoder/decoder pair is one such that the original binary string is the same as the decoded one with high probability.
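This definition can likewise be illustrated with the simplest possible channel code, a rate-$1/3$ repetition code over a binary symmetric channel. The parameters ($k$, repetition factor, crossover probability $p$) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
k, rep, p = 1000, 3, 0.05

# The compressed binary string to be protected
msg = rng.integers(0, 2, size=k)

# Encoder: {0,1}^{nR} -> {0,1}^n, rate R = 1/3 (repeat each bit 3 times)
codeword = np.repeat(msg, rep)

# Binary symmetric channel: each bit flipped independently w.p. p
flips = (rng.random(k * rep) < p).astype(int)
received = codeword ^ flips

# Decoder: majority vote over each group of 3 received bits
votes = received.reshape(k, rep).sum(axis=1)
decoded = (votes > rep // 2).astype(int)

# Residual bit error rate; theoretically 3p^2(1-p) + p^3, much less
# than the raw channel error rate p
ber = np.mean(decoded != msg)
print(ber)
```

The redundancy added by the encoder is what buys the reliability; again, nothing in the question's scheme plays this role.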
Your example does not match either of these cases, so the terms coding/decoding would not be appropriate here. What you are doing is frequency-division multiplexing, one of the techniques used when multiple transmissions must share the same medium without interfering with each other.