I am reading Information Theory, Inference and Learning Algorithms by David MacKay. It is available free of charge online: http://www.inference.org.uk/mackay/itila/book.html (official site, not pirated copy).
My question concerns Exercise 25.1. It is marked as one of the easier ones, so I think I'm just missing some fundamental understanding. Assistance would be enormously appreciated!
I quote the relevant section, slightly paraphrased for brevity. It is from Section 25.1 on Page 324 (Page 336 of my pdf).
A codeword $t$ is selected from a code whose codewords form a subset of $\{0,1\}^N$ and transmitted over a noisy channel; $y$ is received. We have $$ P(t \mid y) = P(y \mid t) P(t) / P(y) $$ by Bayes's theorem. Assume that the channel is memoryless. Thus $$ P(y \mid t) = \prod_{n=1}^N P(y_n \mid t_n). $$
For example, if the channel is a Gaussian channel with transmissions $\pm x$ and additive noise of standard deviation $\sigma$, then the probability densities of the received signal $y_n$ in the two cases $t_n \in \{0,1\}$ are given as follows: $$ \begin{aligned} P(y_n \mid t_n = 1) &= (2 \pi \sigma^2)^{-1/2} \exp\bigl( - (y_n - x)^2 / (2 \sigma^2) \bigr); \\ P(y_n \mid t_n = 0) &= (2 \pi \sigma^2)^{-1/2} \exp\bigl( - (y_n + x)^2 / (2 \sigma^2) \bigr). \end{aligned} $$ [Here we are interpreting $t_n = 0$ as "send $-x$" and $t_n = 1$ as "send $+x$" (plus noise).] From the point of view of decoding, all that matters is the likelihood ratio: $$ \frac {P(y_n \mid t_n = 1)} {P(y_n \mid t_n = 0)} = \exp\biggl( \frac{2 x y_n}{\sigma^2} \biggr). $$
Exercise. Show that from the point of view of decoding a Gaussian channel is equivalent to a time-varying binary symmetric channel with a known noise level $f_n$ which depends on $n$.
For further reference, Section 11.2 (Page 179 of the book, 191 of my pdf) discusses inferring the input to a Gaussian channel. It's not so helpful here, though: I do not see what is time-varying about this channel.
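For what it's worth, here is a quick numerical sanity check of the quoted likelihood ratio (my own snippet, not from the book; the values of `x`, `sigma` and `y_n` are arbitrary):

```python
import math

def gaussian_density(y, mean, sigma):
    """Density of a Gaussian with the given mean and standard deviation, at y."""
    return math.exp(-(y - mean) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

x, sigma, y_n = 1.0, 0.8, 0.3  # arbitrary example values

p1 = gaussian_density(y_n, +x, sigma)  # P(y_n | t_n = 1), i.e. "+x sent"
p0 = gaussian_density(y_n, -x, sigma)  # P(y_n | t_n = 0), i.e. "-x sent"

ratio = p1 / p0
closed_form = math.exp(2 * x * y_n / sigma ** 2)
print(ratio, closed_form)  # the two agree
```

The normalising constants $(2\pi\sigma^2)^{-1/2}$ cancel in the ratio, which is why only $\exp(2 x y_n / \sigma^2)$ survives.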
PS Multiple different SE networks seemed appropriate for this: maths, cstheory, stats. I picked this one but if a mod wants to move it, no objection from me.
Arash presents one possibility. Here is another, which leads to a completely different conclusion. MacKay's text appears ambiguous as to which is intended.
Although it looks like a variable, here $y = (y_n)_{n \in [N]}$ is a single given realisation---the expression $$ \frac {P(y_n \mid t_n = 1)} {P(y_n \mid t_n = 0)} = \exp\biggl( \frac{2 x y_n}{\sigma^2} \biggr) $$ is not to be thought of as expressing a function of $y_n$. Rather, let $q_n(b) := P(y_n \mid t_n = b)$---this is a function of the bit $b \in \{0,1\}$. The likelihood ratio is saying that $$ \frac{q_n(1)}{q_n(0)} = \exp( 2 x y_n / \sigma^2 ). $$

Thus it is a binary symmetric channel with a time-varying flip probability. Converting this into the flip parameter $f_n$ requires knowledge of the prior. For example, if the prior is uniform, then I think $$ \frac{1 - f_n}{f_n} = \frac{q_n(1)}{q_n(0)}, \quad\text{i.e.}\quad f_n = \frac1{1 + q_n(1)/q_n(0)}. $$ I could be mistaken with this final claim, however.
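To illustrate that final step (under the same uniform-prior assumption; the function name and example values are mine):

```python
import math

def flip_probability(y_n, x, sigma):
    """Flip parameter f_n = 1 / (1 + q_n(1)/q_n(0)) of the equivalent BSC,
    assuming a uniform prior, with q_n(1)/q_n(0) = exp(2 x y_n / sigma^2)."""
    ratio = math.exp(2 * x * y_n / sigma ** 2)
    return 1.0 / (1.0 + ratio)

x, sigma = 1.0, 0.8  # arbitrary example values
for y_n in (-1.2, 0.0, 0.5, 2.0):
    f_n = flip_probability(y_n, x, sigma)
    # check: (1 - f_n)/f_n reproduces the likelihood ratio q_n(1)/q_n(0)
    assert abs((1 - f_n) / f_n - math.exp(2 * x * y_n / sigma ** 2)) < 1e-9
    print(y_n, f_n)
```

Note that $f_n$ is a logistic sigmoid of $-2 x y_n / \sigma^2$: it equals $1/2$ exactly when $y_n = 0$ (the received value carries no information about the bit) and moves away from $1/2$ as $|y_n|$ grows.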