How do you understand the reconstruction alphabet?


If $X$ is an information source with source alphabet $\mathcal{X}$, then when we reconstruct it at the receiver, there is a corresponding reconstruction alphabet $\hat{\mathcal{X}}$.

But today, while reading a paper, I found that it defined the reconstruction alphabet $\hat{\mathcal{X}}$ as the set of all probability mass functions on $\mathcal{X}$.

It also said that

"$\hat{X}$ designates a probability distribution on $\mathcal{X}$ and $\hat{x}(x)$ is the value of this distribution evaluated for the outcome $x$ $\in \mathcal{X}$ "

I am really confused: why can the reconstruction alphabet $\hat{\mathcal{X}}$ be a set of probability distributions?

In my opinion, with for example $\mathcal{X} = \{0, 1, \dots, n\}$ and $\hat{\mathcal{X}} = \{0, 1, \dots, m\}$, $\hat{X}$ should be a reconstructed symbol. Why does it represent a distribution instead?

There is 1 answer below.


There is no canonical form of the reconstruction alphabet (just as there is no canonical notion of what decoding means). At the end of the day, it is just a listing of the sorts of decisions we may try to make about the variable $X$. This may well be the set $\mathcal{X}$ itself, but this is not necessary.

For instance (as is likely happening in the example you give), we may want to provide a soft reconstruction of $X$, in the sense of determining a posterior law $p(X = x|\mathrm{Obs})$, where $\mathrm{Obs}$ are our observations. Clearly this is a notion of reconstruction - we're taking an unknown $X$, and using our observations to refine the probabilistic knowledge we have of its value. In this case, the reconstruction is some law $\hat{p}$, and the reconstruction alphabet is the set of all laws on $\mathcal{X}.$ (I guess the paper you're reading denotes this $\hat{p}$ as $\hat{x}$. I wouldn't do this because I semantically associate $x$s and $\hat{x}$s with single points, but mathematically it makes no difference).
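To make the soft-reconstruction idea concrete, here is a small sketch (my own toy numbers, not from any paper): $X \in \{0, 1\}$ is observed through a binary symmetric channel, and the decoder's "reconstruction" is the posterior pmf on $\mathcal{X}$, i.e. one element of the set-of-distributions reconstruction alphabet.

```python
from fractions import Fraction

# Toy soft reconstruction: X in {0, 1} with a uniform prior, observed through
# a binary symmetric channel with crossover probability 1/4 (all numbers are
# illustrative assumptions).
prior = {0: Fraction(1, 2), 1: Fraction(1, 2)}        # p(X = x)
likelihood = {                                        # p(obs | X = x), keyed by (obs, x)
    (0, 0): Fraction(3, 4), (0, 1): Fraction(1, 4),
    (1, 0): Fraction(1, 4), (1, 1): Fraction(3, 4),
}

def soft_reconstruct(obs):
    """Return the posterior p(X = x | obs) as a dict. This dict is the
    'reconstruction' x_hat: a pmf on the source alphabet, and x_hat[x] is
    that distribution evaluated at the outcome x."""
    joint = {x: prior[x] * likelihood[(obs, x)] for x in prior}
    total = sum(joint.values())
    return {x: joint[x] / total for x in joint}

x_hat = soft_reconstruct(0)   # the reconstruction is a distribution, not a symbol
print(x_hat)                  # {0: Fraction(3, 4), 1: Fraction(1, 4)}
print(x_hat[0])               # x_hat(0): the distribution evaluated at x = 0
```

Note that the decoder never commits to a single symbol; the whole posterior is the output.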

Another possibility is list decoding, where a decoder might give a list of $K$ symbols, with the goal of ensuring that $X$ lies in this list. In this case the reconstruction alphabet might be the set of subsets of $\mathcal{X}$ of size $K$.
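A list-decoding reconstruction alphabet can be sketched the same way (toy alphabet and list size of my own choosing): the decoder's outputs are size-$K$ subsets of $\mathcal{X}$ rather than single symbols.

```python
from itertools import combinations

# Toy list-decoding setup: source alphabet {0, ..., 5}, list size K = 2
# (both are illustrative assumptions).
X_alphabet = range(6)
K = 2

# The reconstruction alphabet: all size-K subsets of the source alphabet.
reconstruction_alphabet = [frozenset(s) for s in combinations(X_alphabet, K)]

def list_loss(x, L):
    """Loss 1{X not in L}: zero iff the true symbol made it into the list."""
    return 0 if x in L else 1

L = frozenset({1, 4})                    # a hypothetical decoder output
print(len(reconstruction_alphabet))      # C(6, 2) = 15 possible reconstructions
print(list_loss(1, L), list_loss(2, L))  # 0 1
```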

Alternately you can think of a hard decoder that is given an option to give up, in the sense that it can declare that it cannot pull enough reliable information about $X$ from the observations. In this case the reconstruction alphabet might be $\mathcal{X} \cup \{\bot\},$ where $\bot$ denotes giving up.
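A decoder with a give-up option might look like the following sketch (the confidence threshold and posteriors are illustrative assumptions): it outputs the most likely symbol when confident enough, and the extra symbol $\bot$ otherwise, so its outputs range over $\mathcal{X} \cup \{\bot\}$.

```python
# Toy hard decoder over the reconstruction alphabet X ∪ {⊥}.
BOT = "⊥"  # the "give up" symbol

def hard_decode_with_erasure(posterior, threshold=0.9):
    """Output the most likely symbol if its posterior probability clears the
    (assumed) confidence threshold; otherwise declare an erasure (⊥)."""
    x_star = max(posterior, key=posterior.get)
    return x_star if posterior[x_star] >= threshold else BOT

print(hard_decode_with_erasure({0: 0.95, 1: 0.05}))  # 0  (confident)
print(hard_decode_with_erasure({0: 0.60, 1: 0.40}))  # ⊥  (gives up)
```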

One thing to keep in mind is that the reconstruction alphabet is only part of the story (it only defines what sort of output you produce); it must be complemented by a notion of loss to provide a target for the reconstruction process. The common situation is that you want to produce some $\hat{X} \in \mathcal{X}$ and the loss is $\mathbf{1}\{\hat{X} \neq X\}$, but in the various examples I discussed, other notions of loss make sense. For instance, in the giving-up situation you might have $\mathbf{1}\{\hat{X} \not\in \{\bot, X\}\}$; in the list-decoding situation you might have $\mathbf{1}\{X \not\in L\}$ (where $L$ is the list being output); and in the probabilistic situation you might have a loss of $1-\hat{p}(X)$, or perhaps some divergence between $\hat{p}$ and a posterior. Together these determine the sort of reconstruction you're doing: the alphabet is a proxy for the structure of the decoding, and the loss is a proxy for what one is trying to achieve. Of course, these together affect the decoder design.
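The four losses above can be written out as one-liners (a sketch with toy inputs of my own; the point is that each loss is paired with a different reconstruction alphabet):

```python
# Each loss takes the true symbol x plus one element of the matching
# reconstruction alphabet (a symbol, a symbol-or-⊥, a list, or a pmf).

def hard_loss(x, x_hat):
    """1{x_hat != x}: x_hat is a single symbol from X."""
    return int(x_hat != x)

def erasure_loss(x, x_hat, bot="⊥"):
    """1{x_hat not in {⊥, x}}: x_hat is drawn from X ∪ {⊥}."""
    return int(x_hat not in (bot, x))

def list_loss(x, L):
    """1{x not in L}: L is a size-K subset of X."""
    return int(x not in L)

def soft_loss(x, p_hat):
    """1 - p_hat(x): p_hat is a pmf on X."""
    return 1 - p_hat[x]

print(hard_loss(0, 1))                    # 1: wrong symbol
print(erasure_loss(0, "⊥"))               # 0: giving up is not penalized here
print(list_loss(0, {0, 2}))               # 0: true symbol made the list
print(soft_loss(0, {0: 0.75, 1: 0.25}))   # 0.25: mass placed off the truth
```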